# EEGDash — full markdown corpus

Concatenation of every Sphinx-rendered markdown page on eegdash.org, produced at build time for LLM ingestion. See the curated index for navigation; this file is the raw corpus.

EEGDash: the Python library for 700+ BIDS-first EEG/MEG datasets.

Install with `pip install eegdash`, then load, preprocess, and train PyTorch models on open EEG/MEG data in minutes. Works hand-in-hand with MNE-Python and braindecode.

[Browse datasets](dataset_summary.md) [Get started](install/install.md)

## Quickstart

### Install

```bash
pip install eegdash
```

### First search

```python
from eegdash import EEGDash

eegdash = EEGDash()
records = eegdash.find(dataset="ds002718")
print(f"Found {len(records)} records.")
```

Works with Python 3.10+. BIDS-first. Runs locally.

[Run your first search](user_guide.md) [Read the Docs](api/api.md)

At a glance

Search-first discovery with reproducible pipelines and standardized metadata.

[![Test Status](https://github.com/eegdash/EEGDash/actions/workflows/tests.yml/badge.svg)](https://github.com/eegdash/EEGDash/actions/workflows/tests.yml) [![Doc Status](https://github.com/eegdash/EEGDash/actions/workflows/doc.yaml/badge.svg)](https://github.com/eegdash/EEGDash/actions/workflows/doc.yaml) [![PyPI](https://img.shields.io/pypi/v/eegdash?color=blue&style=flat-square)](https://pypi.org/project/eegdash/) [![Python Versions](https://img.shields.io/pypi/pyversions/eegdash?style=flat-square)](https://pypi.org/project/eegdash/) [![Downloads](https://pepy.tech/badge/eegdash)](https://pepy.tech/project/eegdash) [![Code Coverage](https://codecov.io/gh/eegdash/EEGDash/branch/main/graph/badge.svg)](https://codecov.io/gh/eegdash/EEGDash) [![License](https://img.shields.io/pypi/l/eegdash?style=flat-square)](https://github.com/eegdash/EEGDash/blob/main/LICENSE) [![GitHub Stars](https://img.shields.io/github/stars/eegdash/eegdash?style=flat-square)](https://github.com/eegdash/EEGDash)

- **700+** datasets: curated and standardized metadata ready to explore.
- **5** modalities: EEG, MEG, fNIRS, EMG, and iEEG coverage.
- **BIDS**: interoperability and reproducibility baked in.
- **GitHub**: community-driven datasets, pipelines, and benchmarks.

**Build with the community.** Share datasets, contribute pipelines, and help define open standards for EEG and MEG. [GitHub](https://github.com/eegdash/EEGDash) · [Join Discord](https://discord.gg/8jd7nVKwsc)

**Supporting institutions:** ![UCSD](_static/logos/ucsd_dark.svg) ![Ben-Gurion University of the Negev (BGU)](_static/logos/bgu_dark.svg)

**Funders:** ![National Science Foundation (NSF)](_static/logos/nsf_logo.png) and the AWS Open Data Sponsorship Program.

# Installation

EEGDash requires Python 3.11 or higher. The package is on [PyPI](https://pypi.org/project/eegdash), and the source lives on [GitHub](https://github.com/eegdash/eegdash).
Two install paths, depending on what you need:

## Install via `pip` (for beginners)

```shell
pip install eegdash
```

![EEGDash installer with pip](_static/eegdash_install.gif)

[Installing from PyPI](install_pip.md#install-pip)

## Building from source (for advanced users)

![Terminal window](https://mne.tools/stable/_images/mne_installer_console.png)

For Python users who want the development version: follow the setup instructions for building from GitHub.

[From Source Code](install_source.md#install-source)

# Installing from PyPI

Install EEGDash from [PyPI](https://pypi.org/project/eegdash) with pip:

```shell
pip install eegdash
```

This pulls in the `eegdash` package and its dependencies.

#### NOTE
Use a recent `pip`. Older versions may resolve dependencies poorly.

# Installing from sources

This page covers installing EEGDash from source, which is what you want for contributing or for trying features that have not yet been released. For an overview of contributor workflows and project internals, see [Developer Notes](../developer_notes.md).

#### NOTE
If you only want to install a released version, see [Installing from PyPI](install_pip.md).

## Install a pre-release from PyPI

```shell
pip install --pre eegdash
```

This installs the in-development version of `eegdash` from the main branch. It may not be stable.

## Install directly from GitHub

Clone the repository and change into it:

```shell
git clone https://github.com/eegdash/EEGDash && cd EEGDash
```

## Install with pip

For a one-off install straight from GitHub:

```shell
pip install git+https://github.com/eegdash/EEGDash.git
```

From a local clone, install in editable mode so source edits are picked up without reinstalling:

```shell
pip install -e .
```

Optional extras let you pull in test and documentation dependencies:

```shell
pip install -e .[test,docs,dev]
```

Or install everything, which is what you want for contributing:

```shell
pip install -e .[all]
```

# Verifying the installation

```shell
python -c "import eegdash; print(eegdash.__version__)"
```

# User Guide

This guide walks through the `eegdash` library and its main data access object, `EEGDashDataset`. You will see how to find, access, and manage EEG data for research and analysis.

## The EEGDash Object

`EEGDashDataset` is the main tool for loading data for machine learning. For direct access to the metadata database, use the lower-level `EEGDash` object. It is the right choice for exploring what is available, running ad-hoc queries, or managing records.

### Initializing EEGDash

Create a client that connects to the public database:

```python
from eegdash import EEGDash

# Connect to the public database
eegdash = EEGDash()
```

### Finding Records

Use `find()` to query the database for records matching specific criteria. Pass keyword arguments for simple filters, or a full MongoDB query dictionary for more advanced searches.

```python
# Find records for a specific dataset and subject
records = eegdash.find(dataset="ds002718", subject="012")
print(f"Found {len(records)} records.")

# You can also use more complex queries
query = {"dataset": "ds002718", "subject": {"$in": ["012", "013"]}}
records_advanced = eegdash.find(query)
print(f"Found {len(records_advanced)} records with advanced query.")
```

### EEGDash vs. EEGDashDataset

These two objects do different jobs:

- `EEGDash`: query and manage metadata. Returns a list of dictionaries, one per record.
- `EEGDashDataset`: load EEG data for analysis or machine learning. Returns a PyTorch-compatible dataset where each item can load the underlying EEG signal.

For most data loading work, use `EEGDashDataset`.
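As a sketch of the difference, a record returned by `EEGDash.find()` can be handled as a plain dictionary before any signal is loaded. The field names below (`dataset`, `subject`, `task`) mirror the query keys used above; the exact record shape is an assumption for illustration.

```python
# Hypothetical record shaped like the query keys used above;
# real records may carry additional metadata fields.
records = [
    {"dataset": "ds002718", "subject": "012", "task": "RestingState"},
    {"dataset": "ds002718", "subject": "013", "task": "RestingState"},
]

# Plain-dict metadata work: group datasets per subject without
# touching any EEG files.
by_subject = {}
for r in records:
    by_subject.setdefault(r["subject"], []).append(r["dataset"])

print(by_subject)  # {'012': ['ds002718'], '013': ['ds002718']}
```

This kind of lightweight bookkeeping is what `EEGDash` is for; switch to `EEGDashDataset` once you actually need the signals.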
## The EEGDashDataset Object

`EEGDashDataset` is the main entry point for working with EEG recordings in `eegdash`. It is a high-level interface that queries the metadata database and loads matching EEG data, either from a remote source or from a local cache.

### Initializing EEGDashDataset

Create an instance of `EEGDashDataset`. The two main parameters are `cache_dir` and `dataset`.

- `cache_dir`: local directory where `eegdash` stores downloaded data.
- `dataset`: identifier of the dataset (e.g., `"ds002718"`).

A basic example:

```python
from eegdash import EEGDashDataset

# Initialize the dataset for ds002718
dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
)
print(f"Found {len(dataset)} recordings in the dataset.")
```

The resulting object holds every recording in `ds002718`. Files are downloaded to `./eeg_data/ds002718/` on first access.

## Querying for Specific Data

`EEGDashDataset` lets you select a subset of recordings by task, subject, session, or run.

### Filtering by Task

You can select recordings tied to a specific experimental task. For example, to get all resting-state recordings:

```python
# Filter by a single task
resting_state_dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
    task="RestingState"
)
print(f"Found {len(resting_state_dataset)} resting-state recordings.")
```

### Filtering by Subject

Filter by one or more subjects:

```python
# Filter by a single subject
subject_dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
    subject="012"
)
print(f"Found {len(subject_dataset)} recordings for subject 012.")

# Filter by a list of subjects
multi_subject_dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
    subject=["012", "013", "014"]
)
print(f"Found {len(multi_subject_dataset)} recordings for subjects 012, 013, and 014.")
```

### Combining Filters

Combine filters for narrower queries.
For example, to get resting-state recordings from a specific set of subjects:

```python
# Combine subject and task filters
combined_filter_dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
    subject=["012", "013"],
    task="RestingState"
)
print(f"Found {len(combined_filter_dataset)} recordings matching the criteria.")
```

### Advanced Querying with MongoDB Syntax

For more complex queries, pass a MongoDB-style query dictionary directly to the `query` parameter. Operators such as `$in` work here.

```python
# Use a MongoDB-style query
query = {
    "dataset": "ds002718",
    "subject": {"$in": ["012", "013"]},
    "task": "RestingState"
}
advanced_dataset = EEGDashDataset(cache_dir="./eeg_data", query=query)
print(f"Found {len(advanced_dataset)} recordings using an advanced query.")
```

## Working with Local Data (Offline Mode)

`eegdash` also works with local data you have already downloaded or manage yourself. Pass `download=False` and `EEGDashDataset` reads BIDS-compliant files from disk instead of hitting the database or remote storage.

The data must follow a BIDS-like layout inside your `cache_dir`. If `cache_dir` is `./eeg_data` and the dataset is `ds002718`, the files belong under `./eeg_data/ds002718/`.

Offline mode in practice:

```python
# Initialize in offline mode
local_dataset = EEGDashDataset(
    cache_dir="./eeg_data",
    dataset="ds002718",
    download=False
)
print(f"Found {len(local_dataset)} local recordings.")
```

With `download=False`, `eegdash` scans `cache_dir` for EEG files and builds the dataset from the local file system. Use this for offline environments, air-gapped machines, or your own curated datasets.

## Accessing Data from the Dataset

The `EEGDashDataset` object behaves like a list: index into it to access individual recordings. Each item is an `EEGDashBaseDataset` that carries the metadata and loads the EEG data on demand.
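Before switching to offline mode, it can help to sanity-check that the expected `cache_dir/dataset/` layout exists on disk. A minimal sketch, where `has_local_dataset` is an illustrative helper and not part of `eegdash`:

```python
from pathlib import Path


def has_local_dataset(cache_dir: str, dataset: str) -> bool:
    """Illustrative check: does cache_dir contain a non-empty dataset folder?"""
    dataset_dir = Path(cache_dir) / dataset
    return dataset_dir.is_dir() and any(dataset_dir.iterdir())


# Example: create the layout ./eeg_data/ds002718/ with one BIDS file,
# then verify it before using download=False.
root = Path("./eeg_data/ds002718")
root.mkdir(parents=True, exist_ok=True)
(root / "dataset_description.json").write_text("{}")

print(has_local_dataset("./eeg_data", "ds002718"))  # True
```

If the check fails, fetch the data first (or fix the layout) rather than constructing the dataset with `download=False`.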
```python
if len(dataset) > 0:
    # Get the first recording
    recording = dataset[0]
    print(f"Loaded recording for subject: {recording.description['subject']}")
```

This is how `eegdash` plugs into a data analysis pipeline, whether the data is remote or local. For contributor resources, see [Developer Notes](developer_notes.md).

## API Configuration

By default, `eegdash` connects to the public REST API at `https://data.eegdash.org`. Override it through environment variables:

```bash
# Override the default API URL (e.g., for testing)
export EEGDASH_API_URL="https://data.eegdash.org"

# Admin write operations (required for dataset ingestion)
export EEGDASH_API_TOKEN="your-admin-token"
```

Public endpoints are rate-limited to 100 requests per minute per IP. Service status is available at `/health`, and every response carries an `X-Request-ID` header you can use for debugging. For more on the API architecture, see [API Core](api/api_core.md).

#### SEE ALSO
[Developer Notes](developer_notes.md) captures contributor workflows for the core package.

# Datasets Catalog

EEG-DaSh is a data-sharing archive for MEEG (EEG, MEG) recordings contributed by collaborating labs. It preserves publicly funded research data and exposes it in a form that machine learning and deep learning workflows can use directly.

**736** · **40,361** · **85,298** · **5**

Browse the catalog interactively at [eegdash.org/dataset_summary.html](https://eegdash.org/dataset_summary.html). The same data is available programmatically in Python:

```python
from eegdash import EEGDashDataset

# List every dataset
records = EEGDashDataset.list_datasets()

# Filter by task, modality, subject count, …
rest_datasets = EEGDashDataset.list_datasets(task="rest")
```

Or via the HTTP API at `https://data.eegdash.org` (see the `/docs` Swagger UI and the [api catalog](/.well-known/api-catalog)). The archive is still in beta testing mode, so be kind.
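The environment-variable override described in the API configuration section can be mirrored in user code. A minimal sketch of resolving the API base URL (the fallback constant is the documented public default; how `eegdash` itself resolves it internally is an assumption):

```python
import os

# Documented public default for the EEGDash REST API.
DEFAULT_API_URL = "https://data.eegdash.org"

# EEGDASH_API_URL wins when set; otherwise fall back to the default.
api_url = os.environ.get("EEGDASH_API_URL", DEFAULT_API_URL)

# An admin token is only needed for write operations such as ingestion;
# read-only clients can leave it unset.
api_token = os.environ.get("EEGDASH_API_TOKEN")  # None for read-only use

print(api_url)
```

Setting `EEGDASH_API_URL` to a staging server in CI, for example, redirects all requests without touching code.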

### Dataset map: subjects × records × duration

Figure: Dataset map. Each bubble represents a dataset: the x-axis shows the number of subjects, the y-axis the number of records, bubble size encodes recording duration per subject, and color indicates experiment modality. Hover for details, click to open a dataset page, and use the legend to filter.

### Clinical breakdown by recording modality

Figure: Breakdown of datasets by clinical status and experimental modality. Use the toggle buttons to switch between the number of studies and the number of subjects.

### Cumulative growth of EEG-DaSh datasets

Figure: Cumulative growth of open datasets indexed by EEGDash over time. Use the toggle buttons to switch between cumulative datasets and cumulative subjects.
### Distribution of Sample Sizes Varies by Experimental Modality

Figure: Participant distribution by modality. Kernel density estimates summarize how many participants are available for each experimental modality on a logarithmic scale. Individual points show dataset-level counts.
### Subject Distribution Bubble Plot

Figure: Circle-packing overview of 700+ datasets (35,000+ subjects) catalogued in EEGDash. Each small circle represents one subject, grouped by dataset and colored by recording modality. Size encodes per-subject recording duration (log-scaled minutes); opacity encodes session count (fewer sessions = more opaque). Interactive: hover to inspect, scroll to zoom, click to navigate, search to filter.

### Dataset flow by population, modality, and task

Figure: Dataset flow across population, modality, and cognitive domain. Link thickness is proportional to the total number of subjects, and the tooltip reports both subject and dataset counts. Hover and click legend entries to explore specific segments.
### EEG Datasets Table

EEG-DaSh aggregates M/EEG recordings (EEG, MEG, and combined setups) from both healthy participants and clinical cohorts. Disease-bearing datasets span [epilepsy](https://meshb.nlm.nih.gov/record/ui?ui=D004827), [Parkinson disease](https://meshb.nlm.nih.gov/record/ui?ui=D010300), [dementia](https://meshb.nlm.nih.gov/record/ui?ui=D003704), [depressive disorder](https://meshb.nlm.nih.gov/record/ui?ui=D003866), [schizophrenia](https://meshb.nlm.nih.gov/record/ui?ui=D012559) and related [psychotic disorders](https://meshb.nlm.nih.gov/record/ui?ui=D011618), [traumatic brain injury](https://meshb.nlm.nih.gov/record/ui?ui=D000070642), [alcohol use disorder](https://meshb.nlm.nih.gov/record/ui?ui=D000437), [dyslexia](https://meshb.nlm.nih.gov/record/ui?ui=D004410), and [obesity](https://meshb.nlm.nih.gov/record/ui?ui=D009765), alongside neurodevelopmental and surgical cohorts. The catalog also covers resting state, sleep, and a range of cognitive, sensory, and motor tasks. Disease labels link to the NLM [MeSH](https://www.nlm.nih.gov/mesh/meshhome.html) vocabulary for formal definitions. A large share of the archive is converted from [NEMAR](https://nemar.org/), which contributes BIDS-formatted M/EEG datasets to the catalog.
Dataset | Author (year) | Canonical Source | Source | Recording | Pathology | Modality | Type | Records | Subjects | Tasks | Sessions | Channels | Sampling rate (Hz) | Size
EEG2025R1 Shirazi2024_R1_bdf HBN_r1_bdf nemar EEG Development Visual Clinical Intervention 1,342 136 10 0 129 100 20.6 GB
EEG2025R10 Shirazi2025_R10_bdf HBN_r10_bdf nemar EEG Development Visual Clinical Intervention 2,516 533 8 0 129 100 32.1 GB
EEG2025R10MINI Shirazi2025_R10_bdf_mini HBN_r10_bdf_mini nemar EEG Development Visual Clinical Intervention 220 20 8 0 129 100 2.8 GB
EEG2025R11 Shirazi2025_R11_bdf HBN_r11_bdf nemar EEG Development Visual Clinical Intervention 3,397 430 8 0 129 100 43.8 GB
EEG2025R11MINI Shirazi2025_R11_bdf_mini HBN_r11_bdf_mini nemar EEG Development Visual Clinical Intervention 220 20 8 0 129 100 2.8 GB
EEG2025R1MINI Shirazi2024_R1_bdf_mini HBN_r1_bdf_mini nemar EEG Development Visual Clinical Intervention 239 20 10 0 129 100 3.7 GB
EEG2025R2 Shirazi2024_R2_bdf HBN_r2_bdf nemar EEG Development Visual Clinical Intervention 1,405 150 10 0 129 100 22.4 GB
EEG2025R2MINI Shirazi2024_R2_bdf_mini HBN_r2_bdf_mini nemar EEG Development Visual Clinical Intervention 240 20 10 0 129 100 3.8 GB
EEG2025R3 Shirazi2024_R3_bdf HBN_r3_bdf nemar EEG Development Visual Clinical Intervention 1,812 184 10 0 129 100 27.9 GB
EEG2025R3MINI Shirazi2024_R3_bdf_mini HBN_r3_bdf_mini nemar EEG Development Visual Clinical Intervention 240 20 10 0 129 100 3.7 GB
EEG2025R4 Shirazi2024_R4_bdf HBN_r4_bdf nemar EEG Development Visual Clinical Intervention 3,342 324 10 0 129 100 46.0 GB
EEG2025R4MINI Shirazi2024_R4_bdf_mini HBN_r4_bdf_mini nemar EEG Development Visual Clinical Intervention 240 20 10 0 129 100 3.3 GB
EEG2025R5 Shirazi2024_R5_bdf HBN_r5_bdf nemar EEG Development Visual Clinical Intervention 3,326 330 10 0 129 100 44.8 GB
EEG2025R5MINI Shirazi2024_R5_bdf_mini HBN_r5_bdf_mini nemar EEG Development Visual Clinical Intervention 240 20 10 0 129 100 3.2 GB
EEG2025R6 Shirazi2024_R6_bdf HBN_r6_bdf nemar EEG Development Visual Clinical Intervention 1,227 135 10 0 129 100 18.2 GB
EEG2025R6MINI Shirazi2024_R6_bdf_mini HBN_r6_bdf_mini nemar EEG Development Visual Clinical Intervention 237 20 10 0 129 100 3.5 GB
EEG2025R7 Shirazi2024_R7_bdf HBN_r7_bdf nemar EEG Development Visual Clinical Intervention 3,100 381 10 0 129\* 100
EEG2025R7MINI Shirazi2024_R7_bdf_mini HBN_r7_bdf_mini nemar EEG Development Visual Clinical Intervention 239 20 10 0 129 100
EEG2025R8 Shirazi2024_R8_bdf HBN_r8_bdf nemar EEG Development Visual Clinical Intervention 2,320 257 10 0 129 100 31.4 GB
EEG2025R8MINI Shirazi2024_R8_bdf_mini HBN_r8_bdf_mini nemar EEG Development Visual Clinical Intervention 238 20 10 0 129 100 3.2 GB
EEG2025R9 Shirazi2024_R9_bdf HBN_r9_bdf nemar EEG Development Visual Clinical Intervention 2,885 295 10 0 129 100 37.0 GB
EEG2025R9MINI Shirazi2024_R9_bdf_mini HBN_r9_bdf_mini nemar EEG Development Visual Clinical Intervention 237 20 10 0 129 100 3.0 GB
DS000117 Wakeman2018 Wakeman2015, WakemanHenson openneuro MEG Healthy Visual Perception 104 17 2 9 394 1100 87.6 GB
DS000246 Bock2018 openneuro MEG Healthy Auditory Perception 3 2 2 0 340\* 2400 2.3 GB
DS000247 Niso2018 OMEGA openneuro MEG Healthy Resting State Resting-state 10 6 2 6 297\* 2400 10.3 GB
DS000248 Gramfort2018 MNE_Sample_Data openneuro MEG Healthy Multisensory Attention 3 2 2 1 376\* 600 177.6 MB
DS001785 Pereira2019_Evidence openneuro EEG Healthy Tactile Perception 54 18 3 1 71 1024\* 24.9 GB
DS001787 Delorme2019 openneuro EEG Healthy Auditory Attention 40 24 1 3 79 256 5.7 GB
DS001810 Reteig2019 openneuro EEG Healthy Visual Attention 263 47 1 6 73 512 45.7 GB
DS001849 Freedberg2019 openneuro EEG Healthy Multisensory Clinical Intervention 120 20 1 0 30 5000 44.5 GB
DS001971 Wagner2019 openneuro EEG Healthy Auditory Motor 273 20 1 0 115\* 512 32.0 GB
DS002001 Mendola2019 Mendola2020 openneuro MEG Healthy Visual Perception 69 11 2 7 2400 81.7 GB
DS002034 Schneider2019 openneuro EEG Healthy Visual Attention 167 14 4 3 81 512 10.1 GB
DS002094 DS2094_Single_pulse openneuro EEG Other Clinical Intervention 43 20 3 0 30 5000 39.4 GB
DS002158 Pereira2019_Disentangling openneuro EEG Healthy Visual Affect 117 20 1 1 64 5000 76.5 GB
DS002181 Xie2019 openneuro EEG Development Visual Resting-state 226 226 1 0 125 500 150.9 MB
DS002218 Comstock2019 openneuro EEG Healthy Multisensory Perception 18 18 1 0 32 256 1.9 GB
DS002312 Brooks2019 OcularLDT, ocular_ldt openneuro MEG Healthy Visual Perception 23 19 1 0 257 1000 34.1 GB
DS002336 Lioi2019_multi openneuro EEG Healthy Visual Motor 54 10 6 0 64 5000 16.8 GB
DS002338 Lioi2019_multi_modal openneuro EEG Healthy Visual Motor 85 17 4 0 64 5000 24.2 GB
DS002550 Quentin2020 openneuro MEG Healthy Visual Memory 377 22 2 2 308\* 1200\* 167.5 GB
DS002578 Delorme2020_Visual_Oddball_256 openneuro EEG Healthy Visual Attention 2 2 1 0 256 256 1.3 GB
DS002680 Delorme2020_Go_nogo_categorization openneuro EEG Healthy Visual Motor 350 14 1 2 31 1000 9.2 GB
DS002691 Delorme2020_Internal_attention openneuro EEG Healthy Visual Attention 20 20 1 0 32 250 776.7 MB
DS002712 Aurtenetxe2020 openneuro MEG Healthy Visual Perception 82 25 1 0 312\* 1000 101.8 GB
DS002718 Wakeman2020 Wakeman2015, WakemanHenson_EEG_MEG openneuro EEG Healthy Visual Perception 18 18 1 0 74 250 4.3 GB
DS002720 Daly2020_recorded openneuro EEG Healthy Auditory Affect 165 18 0 0 19 1000 2.4 GB
DS002721 Daly2020_recorded_affective openneuro EEG Healthy Auditory Affect 185 31 0 0 19 1000 3.4 GB
DS002722 Daly2020_recorded_development openneuro EEG Healthy Auditory Affect 94 19 0 0 37 1000 6.1 GB
DS002723 Daly2020_session openneuro EEG Healthy Auditory Affect 44 8 0 0 37 1000 2.6 GB
DS002724 Daly2020_sessions openneuro EEG Healthy Auditory Affect 96 10 0 3 37 1000 8.5 GB
DS002725 Daly2020_joint openneuro EEG Healthy Auditory Affect 105 21 5 0 46 1000 15.3 GB
DS002761 Wimmer2020 openneuro MEG Healthy Visual Memory 249 25 2 0 306 600 1.7 MB
DS002778 Rockhill2020 openneuro EEG Parkinson's Resting State Resting-state 46 31 1 3 41 512 545.0 MB
DS002791 Mheich2020_DataSet1 Mheich2020 openneuro EEG Healthy 92 23 0 2 256\* 1000 47.1 GB
DS002799 Thompson2024 openneuro iEEG Epilepsy Other Clinical Intervention 16,824 27 2 2 2\* 18.6 GB
DS002814 Ebrahiminia2020 openneuro EEG Healthy Visual Perception 168 21 1 1 72 1200 27.7 GB
DS002833 Mheich2020_DataSet2 Mheich2024 openneuro EEG Healthy Visual Other 80 20 1 4 257 1000 39.8 GB
DS002885 Kandemir2020 openneuro MEG Other Other Other 7 2 4 0 306\* 19200\* 20.1 GB
DS002893 Westerfield2022 openneuro EEG Healthy Multisensory Attention 52 49 1 0 36 250 7.7 GB
DS002908 Bogacz2020 Bogacz2024 openneuro MEG Attention 53 13 1 6 299 2400 59.8 GB
DS003004 Onton2020 openneuro EEG Healthy Auditory Affect 34 34 1 0 219\* 256 36.0 GB
DS003029 Li2020 openneuro iEEG Epilepsy Other Clinical Intervention 106 35 1 1 129\* 1000\* 10.3 GB
DS003039 Jacobsen2020 openneuro EEG Healthy Motor Motor 19 19 1 0 67 500 7.8 GB
DS003061 Delorme2020_auditory_oddball Delorme openneuro EEG Healthy Auditory Attention 39 13 1 0 79 256 2.3 GB
DS003078 DOMENECH2020 openneuro iEEG Surgery 72 6 1 0 130 11.0 GB
DS003082 Cote2020 Cote2015 openneuro MEG Healthy Auditory Perception 3 2 2 2 300\* 12000\* 13.2 GB
DS003104 Parkkonen2020 MNESomato, Somato, MNESomatoData openneuro MEG Healthy Tactile Perception 1 1 1 0 316 300 333.7 MB
DS003190 MendozaMontoya2020 openneuro EEG Healthy Visual Attention 384 19 2 3 9\* 256 1.0 GB
DS003194 Vega2020_Neuroepo openneuro EEG Parkinson's Resting State Clinical Intervention 29 15 2 2 19\* 200 189.1 MB
DS003195 Vega2020_Placebo openneuro EEG Parkinson's Resting State Clinical Intervention 20 10 2 2 19 200 121.1 MB
DS003343 Schneider2020 openneuro EEG Healthy Tactile Perception 59 20 1 1 20 500 663.4 MB
DS003352 Hermann2020 Hermann2021 openneuro MEG Healthy Visual Perception 138 18 1 2 323 1000 214.3 GB
DS003374 Fedele2020 openneuro iEEG Epilepsy Visual Affect 18 9 1 1 4\* 2000 167.3 MB
DS003392 Zilber2020 openneuro MEG Healthy Visual Perception 33 12 2 11 320 2000 10.1 GB
DS003420 Mheich2020_HD openneuro EEG Healthy Visual Other 92 23 0 2 256\* 1000 47.1 GB
DS003421 Mheich2020_HD_EEGtask openneuro EEG Healthy Multisensory Decision-making 80 20 1 4 257 1000 39.6 GB
DS003458 Cavanagh2021_Three openneuro EEG Healthy Visual Affect 23 23 1 0 66\* 500 4.7 GB
DS003474 Cavanagh2021_Probabilistic openneuro EEG Healthy Visual Decision-making 122 122 1 0 66\* 500 16.6 GB
DS003478 Cavanagh2021_Depression openneuro EEG Healthy Resting State Resting-state 243 122 1 0 66\* 500 10.6 GB
DS003483 Cognitive2021 Maestu2021 openneuro MEG Healthy Decision-making 41 21 2 1 320 1000 24.5 GB
DS003490 Cavanagh2021_3 openneuro EEG Parkinson's Auditory Attention 75 50 1 2 67 500 5.8 GB
DS003498 Fedele2021 openneuro iEEG Epilepsy Resting State Clinical Intervention 385 20 0 1 64\* 2000 44.7 GB
DS003505 Pascucci2021 VEPCON openneuro EEG Healthy Visual Perception 37 19 2 0 128 2048 29.0 GB
DS003506 Cavanagh2021_Reinforcement openneuro EEG Parkinson's Visual Decision-making 84 56 1 2 67 500 16.2 GB
DS003509 Cavanagh2021_Simon openneuro EEG Parkinson's Visual Learning 84 56 1 2 67 500 22.3 GB
DS003516 Holtze2021 openneuro EEG Healthy Auditory Attention 25 25 1 0 49 500 7.6 GB
DS003517 Cavanagh2021_Continuous openneuro EEG Healthy Visual Learning 34 17 1 0 65 500 5.8 GB
DS003518 Cavanagh2021_Simon_Conflict openneuro EEG Healthy Visual Clinical Intervention 137 110 1 2 64 500 39.5 GB
DS003519 Cavanagh2021_Visual openneuro EEG Healthy Visual Clinical Intervention 54 27 1 2 64 500 9.0 GB
DS003522 Cavanagh2021_Three_Stim openneuro EEG TBI Auditory Decision-making 200 96 1 3 65\* 500 25.4 GB
DS003523 Cavanagh2021_Visual_Working openneuro EEG TBI Visual Memory 221 91 1 3 65\* 500 37.5 GB
DS003555 Cserpan2021 openneuro EEG Epilepsy Resting State Clinical Intervention 30 30 1 1 23\* 1024 15.1 GB
DS003568 Liuzzi2021 openneuro MEG Healthy Visual Affect 118 51 2 0 340\* 1200 123.4 GB
DS003570 Goldman2021 openneuro EEG Healthy Auditory Decision-making 40 40 1 0 64 2048 47.6 GB
DS003574 Sterpenich2021 openneuro EEG Healthy Visual Affect 18 18 1 0 69 500 18.0 GB
DS003602 Korucuoglu2021 openneuro EEG Other Visual Decision-making 699 118 6 0 35 1000 73.2 GB
DS003620 Liebherr2021 Runabout openneuro EEG Healthy Auditory Attention 100 44 1 0 35 500 17.0 GB
DS003626 Nieto2021 openneuro EEG Healthy Visual Motor 30 10 1 3 137 18.3 GB
DS003633 Liu2021 ForrestGump_MEG openneuro MEG Healthy Multisensory Perception 96 12 2 8 409\* 600\* 73.5 GB
DS003638 Cavanagh2021_Electrophysiological openneuro EEG Healthy Visual Decision-making 57 57 1 0 72 512 15.3 GB
DS003645 Wakeman2021 openneuro EEG MEG Healthy Visual Perception 224 19 2 8 404\* 1100 106.3 GB
DS003655 Pavlov2021_VerbalWorkingMemory openneuro EEG Healthy Visual Memory 156 156 1 0 21 500 20.3 GB
DS003670 Gebodh2021 openneuro EEG Healthy Visual Clinical Intervention 62 25 1 6 35 2000 72.2 GB
DS003682 Wise2021 openneuro MEG Healthy Learning 336 28 1 1 414 1200 211.6 GB
DS003688 Berezutskaya2021 openneuro iEEG Epilepsy Multisensory Perception 107 51 2 1 74\* 512\* 15.2 GB
DS003690 Ribeiro2021 openneuro EEG Healthy Auditory Decision-making 375 75 3 0 66\* 500 21.5 GB
DS003694 Griffiths2021 MEGMEM openneuro MEG Memory 132 28 1 0 327\* 1000 218.5 GB
DS003702 Gregory2021 openneuro EEG Healthy Visual Memory 47 47 1 0 59 500 17.5 GB
DS003703 Kalenkovich2021 Kalenkovich2019 openneuro MEG Healthy Auditory Perception 102 34 2 0 314 1000 92.3 GB
DS003708 Hermes2021 Miller2021 openneuro iEEG Other Clinical Intervention 1 1 1 1 89 2048 620.1 MB
DS003710 Williams2021 APPLESEED openneuro EEG Healthy Multisensory Perception 48 13 1 4 32 5000 10.2 GB
DS003739 Peterson2021_Perturbed_beam_walking openneuro EEG Healthy Motor Perception 120 30 4 4 149 256 10.9 GB
DS003751 Mishra2021 DENS openneuro EEG Healthy Multisensory Affect 38 38 1 0 131 250 4.7 GB
DS003753 Brown2021_Probabilistic openneuro EEG Healthy Visual Learning 25 25 1 0 66 500 4.6 GB
DS003766 Chen2021 openneuro EEG Healthy Visual Decision-making 124 31 4 0 129 1000 71.3 GB
DS003768 Gu2021 openneuro EEG Healthy Sleep Sleep 255 33 2 0 32 5000 86.6 GB
DS003774 Miyapuram2021 MUSING openneuro EEG Healthy Auditory Affect 240 20 1 12 129 1000\* 10.1 GB
DS003775 HatlestadHall2021 openneuro EEG Healthy Resting State Resting-state 153 111 1 2 64 1024 4.5 GB
DS003800 Lahijanian2021_Auditory openneuro EEG Dementia Auditory Clinical Intervention 24 13 2 0 19 250 189.3 MB
DS003801 Straetmans2021 openneuro EEG Healthy Auditory Attention 20 20 1 0 24 250 1.1 GB
DS003805 Lahijanian2021_Multisensory openneuro EEG Healthy Multisensory Learning 1 1 1 0 19 500 8.8 MB
DS003810 Peterson2021_Motor_Imagery_vs openneuro EEG Healthy Motor Clinical Intervention 50 10 1 0 15 125 69.0 MB
DS003816 Sun2024 openneuro EEG Healthy Other Affect 1,077 48 8 11 128 1000 54.0 GB
DS003822 Brown2021_Probabilistic_Learning openneuro EEG Healthy Visual Affect 25 25 1 0 66 500 5.8 GB
DS003825 Grootswagers2021 THINGS, THINGS_EEG openneuro EEG Healthy Visual Perception 50 50 1 0 63\* 1000 41.2 GB
DS003838 Pavlov2021_pupillometry openneuro EEG Healthy Auditory Memory 130 65 2 0 63 1000 100.2 GB
DS003844 Zweiphenning2021 RESPect_intraop openneuro iEEG Epilepsy Resting State Clinical Intervention 38 6 1 18 33\* 2048\* 2.6 GB
DS003846 Gehrke2021 openneuro EEG Healthy Multisensory Decision-making 50 19 1 5 64 500 9.8 GB
DS003848 Blooijs2021 RESPect_longterm openneuro iEEG Epilepsy Other Clinical Intervention 22 6 6 1 133\* 2048\* 65.0 GB
DS003876 Gunnarsdottir2021 openneuro iEEG Epilepsy Resting State Clinical Intervention 54 39 3 1 128\* 1000\* 5.0 GB
DS003885 Shatek2021_E1 openneuro EEG Healthy Visual Perception 24 24 1 0 128 1000 46.1 GB
DS003887 Shatek2021_E2 openneuro EEG Healthy Visual Perception 24 24 1 0 128 1000 45.7 GB
DS003922 Lerousseau2021 openneuro MEG Healthy Multisensory Perception 164 14 3 9 342\* 1000 75.7 GB
DS003944 Salisbury2021_First openneuro EEG Schizophrenia Psychosis Resting State Clinical Intervention 82 82 1 0 64 1000\* 6.2 GB
DS003947 Salisbury2021_First_Episode openneuro EEG Schizophrenia Psychosis Resting State Clinical Intervention 61 61 1 0 64 3000\* 12.5 GB
DS003969 Delorme2021 openneuro EEG Healthy Auditory Attention 392 98 4 0 79\* 1024\* 54.5 GB
DS003987 Cavanagh2022_Amphetamine_trials_5 openneuro EEG Healthy Visual Attention 69 23 1 3 71 500 25.6 GB
DS004000 Padee2022 openneuro EEG Schizophrenia Psychosis Multisensory Decision-making 86 43 2 0 132 2048 22.5 GB
DS004010 Waschke2022 MAVIS openneuro EEG Healthy Multisensory Attention 24 24 1 0 64 1000 23.1 GB
DS004011 Teichmann2022 openneuro MEG Healthy Visual Perception 132 22 1 0 309 1200 198.1 GB
DS004012 Rani2022 Rani2019 openneuro MEG Healthy 294 30 10 0 383 1000 78.3 GB
DS004015 Holtze2022_Attended openneuro EEG Healthy Auditory Attention 36 36 1 0 18 500 6.0 GB
DS004017 Damsgaard2022 openneuro EEG Healthy Visual Learning 63 21 0 3 65 2048 20.9 GB
DS004018 Grootswagers2022_RSVP openneuro EEG Healthy Visual Learning 32 16 1 0 63 1000 10.6 GB
DS004019 AlatorreCruz2022_Effect openneuro EEG Obese Visual Other 62 62 1 0 128 500 17.3 GB
DS004022 Lee2022 openneuro EEG Other Visual Motor 21 7 1 0 18\* 500 616.6 MB
DS004024 Pavon2022 openneuro EEG Healthy Visual Clinical Intervention 497 13 3 3 69 20000 1021.2 GB
DS004033 Scanlon2022 openneuro EEG Healthy Auditory Attention 36 18 2 2 67 500 19.8 GB
DS004040 Cannard2022 openneuro EEG Healthy Auditory Other 26 13 1 2 64 512 11.6 GB
DS004043 Moerel2022_time openneuro EEG Healthy Visual Attention 20 20 1 0 63 1000 15.4 GB
DS004067 Yoder2022 openneuro EEG Healthy Visual Affect 84 80 1 0 63 2000 100.8 GB
DS004075 Boncz2022 openneuro EEG 116 29 4 0 64 1000 7.4 GB
DS004078 Wang2022_StudyBRAIN openneuro MEG Healthy Auditory Other 720 12 1 0 328 1000 631.1 GB
DS004080 Blooijs2023_CCEP_ECoG RESPect_CCEP openneuro iEEG Epilepsy Other Clinical Intervention 117 74 1 2 133\* 2048\* 269.1 GB
DS004100 Bernabei2022 HUPiEEG openneuro iEEG Epilepsy Other Clinical Intervention 319 57 2 1 122\* 512\* 13.2 GB
DS004105 Garcia2022 BCIT_Auditory_Cueing openneuro EEG Healthy Multisensory Attention 34 17 1 1 74 1024 20.4 GB
DS004106 Touryan2022 BCITAdvancedGuardDuty openneuro EEG Healthy Visual Attention 29 27 1 1 262 1024 67.6 GB
DS004107 Weisend2022 Weisend2007 openneuro MEG Healthy Multisensory Other 89 9 6 15 318\* 1792\* 77.2 GB
DS004117 Onton2022 openneuro EEG Healthy Visual Memory 85 23 1 1 71 250\* 5.8 GB
DS004118 Touryan2022_BCIT_Calibration Touryan1999 openneuro EEG Healthy Visual Attention 247 156 1 7 266\* 1024\* 124.3 GB
DS004119 Touryan2022_BCIT_Basic BCIT openneuro EEG Healthy Visual Attention 22 21 1 1 262 1024 55.1 GB
DS004120 Touryan2022_BCIT_Baseline BCITBaselineDriving openneuro EEG Healthy Visual Attention 131 109 1 3 266\* 1024\* 302.5 GB
DS004121 Touryan2022_BCIT_Mind BCITMindWandering openneuro EEG Healthy Multisensory Attention 60 21 1 1 74 1024 23.9 GB
DS004122 Touryan2022_BCIT_Speed openneuro EEG Healthy Visual Attention 63 32 1 1 74 1024 36.2 GB
DS004123 Touryan2022_BCIT_Traffic BCIT_Traffic_Complexity openneuro EEG Healthy Visual Attention 30 29 1 1 74 1024 17.5 GB
DS004127 Abrego2022 openneuro iEEG Other Tactile Other 73 8 11 0 128\* 20000 187.5 GB
DS004147 Hassall2022_Average openneuro EEG Healthy Visual Learning 12 12 1 0 31 1000 4.0 GB
DS004148 Wang2022_test_retest_resting openneuro EEG Healthy Other Other 900 60 5 3 61 500 30.7 GB
DS004151 AlatorreCruz2022_Effect_obesity openneuro EEG Obese Visual Attention 57 57 1 0 128 500 23.1 GB
DS004152 Hassall2022_Drum openneuro EEG Healthy Multisensory Learning 21 21 1 0 31 1000 4.8 GB
DS004166 Li2022 openneuro EEG Healthy Visual Learning 213 71 1 3 77.4 GB
DS004194 Groen2022 openneuro iEEG Epilepsy Visual Perception 209 14 7 5 265\* 512\* 7.8 GB
DS004196 Liwicki2022 openneuro EEG Healthy Visual Clinical Intervention 4 4 1 1 64 512 9.3 GB
DS004200 Hassall2022_Temporal openneuro EEG Healthy Multisensory Attention 20 20 1 0 37 1000 7.2 GB
DS004212 Hebart2022 THINGS_MEG, THINGSMEG openneuro MEG Healthy Visual Perception 500 5 1 28 310 1200 237.7 GB
DS004229 Mittag2022 openneuro MEG Dyslexia Auditory Perception 3 2 2 1 332 1200 1.8 GB
DS004252 Moerel2022_Rotation openneuro EEG Healthy Visual Perception 1 1 1 0 127 1000 1.3 GB
DS004256 Bialas2022 openneuro EEG Healthy Auditory Perception 53 53 2 0 64 500 18.2 GB
DS004262 Hassall2022_Continuous openneuro EEG Healthy Visual Learning 21 21 1 0 31 1000 3.5 GB
DS004264 Hassall2022_Steer openneuro EEG Healthy Visual Learning 21 21 1 0 31 1000 3.3 GB
DS004276 Gaston2022 openneuro MEG Healthy Auditory Perception 19 19 2 1 193 1000 11.6 GB
DS004278 Kidder2022 Kidder2024 openneuro MEG Healthy Memory 30 30 1 0 306 1200 76.7 GB
DS004279 Araya2022 openneuro EEG Healthy Auditory Perception 60 56 1 4 69 1000 25.2 GB
DS004284 Veillette2022 openneuro EEG Healthy Visual Decision-making 18 18 1 0 129 1000 16.4 GB
DS004295 Stolz2022 openneuro EEG Healthy Multisensory Learning 26 26 1 0 66 1024\* 31.5 GB
DS004306 Wilson2022 openneuro EEG Healthy Multisensory Perception 15 12 1 3 128 1024 654.9 MB
DS004315 Cavanagh2022_E1 openneuro EEG Healthy Multisensory Affect 50 50 1 0 66 500 9.8 GB
DS004317 Cavanagh2022_E2 openneuro EEG Healthy Multisensory Affect 50 50 1 0 66 500 18.3 GB
DS004324 Chacon2022 ToonFaces openneuro EEG Healthy Multisensory Affect 26 26 1 1 38 500 2.5 GB
DS004330 Singer2022 openneuro MEG Healthy Visual Perception 270 30 1 1 310 1000 153.7 GB
DS004346 Ferrante2022 FLUX openneuro MEG Healthy Attention 3 1 1 1 343 1000 3.6 GB
DS004347 Makin2022 openneuro EEG Healthy Visual Perception 24 24 1 0 72 512 2.4 GB
DS004348 Mikkelsen2022 EESM17 openneuro EEG Healthy Sleep Sleep 18 9 2 1 34 200 8.2 GB
DS004350 Delorme2022 openneuro EEG Healthy Visual Memory 240 24 5 2 64 256 9.4 GB
DS004356 Shan2022 openneuro EEG Healthy Auditory Perception 24 22 1 0 34 10000 213.1 GB
DS004357 Grootswagers2022_EEG openneuro EEG Healthy Visual Perception 16 16 1 0 63 1000 19.3 GB
DS004362 Schalk2022 PhysionetMI, EEGMotorMovementImagery openneuro EEG Healthy Visual Motor 1,526 109 1 0 64 160\* 7.8 GB
DS004367 Rouy2022_Meta openneuro EEG Schizophrenia Psychosis Visual Perception 40 40 1 0 68 1200 28.0 GB
DS004368 Rouy2022_Meta_rdk openneuro EEG Schizophrenia Psychosis Visual Perception 40 39 1 2 63 128 997.1 MB
DS004369 Holtze2022_Blink openneuro EEG Healthy Auditory Perception 41 41 1 0 7 500 2.0 GB
DS004370 Blooijs2022_PRIOS PRIOS openneuro iEEG Surgery Anesthesia Clinical Intervention 15 7 2 1 133\* 2048 27.6 GB
DS004381 Selmin2022 openneuro EEG Surgery Other Other 437 18 1 4 4\* 20000 7.7 GB
DS004388 Nierula2023_Somatosensory openneuro EEG Healthy Tactile Perception 399 40 3 0 115\* 10000 682.5 GB
DS004389 Nierula2023_Somatosensory_evoked openneuro EEG Healthy Tactile Perception 260 26 4 0 90 10000 376.5 GB
DS004395 Kahana2023 PEERS openneuro EEG Healthy Visual Memory 6,483 364 3 24 129\* 500\* 8.7 TB
DS004398 Wimmer2023 Wimmer2024 openneuro MEG Visual 1 1 1 0 305 600 1.3 GB
DS004408 Liberto2023 openneuro EEG Healthy Auditory Other 380 19 1 0 128 512 18.7 GB
DS004444 Iwama2023_D1 BMI_HDEEG_D1 openneuro EEG Healthy Visual Motor 465 30 1 16 129 1000 48.6 GB
DS004446 Iwama2023_D2 BMI_HDEEG_D2 openneuro EEG Healthy Visual Motor 237 30 1 8 129 1000 29.2 GB
DS004447 Iwama2023_D3 BMI_HDEEG_D3 openneuro EEG Healthy Visual Motor 418 22 1 20 129 1000 20.7 GB
DS004448 Iwama2023_D4 BMI_HDEEG_D4 openneuro EEG Healthy Visual Motor 280 56 1 5 129 1000 38.2 GB
DS004457 Huang2023 Huang2022 openneuro iEEG Surgery Other Clinical Intervention 5 5 1 1 206\* 2048 10.9 GB
DS004460 Gramann2023 openneuro EEG Healthy Visual Perception 40 20 1 2 160 1000 59.1 GB
DS004473 Rockhill2023 Rockhill2022 openneuro iEEG Epilepsy Visual Motor 8 8 1 0 129 999 6.3 GB
DS004475 Jacobsen2023 openneuro EEG Healthy Motor Motor 30 30 1 0 260\* 512 48.5 GB
DS004477 Papastylianou2023 openneuro EEG Healthy Multisensory Decision-making 9 9 1 0 80 2048 22.3 GB
DS004483 Planton2023 ABSeqMEG openneuro MEG Healthy Auditory Memory 282 19 1 0 396 250 23.4 GB
DS004502 Penalver2023 Penalver2024 openneuro EEG Healthy Attention 48 48 1 0 63\* 1000\* 59.4 GB
DS004504 Miltiadous2023 openneuro EEG Dementia Resting State Clinical Intervention 88 88 1 0 19 500 2.6 GB
DS004505 Studnicki2023 openneuro EEG Healthy Motor Motor 25 25 1 0 313\* 250 34.6 GB
DS004511 Makowski2023_Deception openneuro EEG Healthy Visual Decision-making 134 45 3 1 139 3000 202.3 GB
DS004514 Rybar2023_Simultaneous openneuro EEG fNIRS Healthy Multisensory Other 24 12 2 0 80\* 2048\* 24.1 GB
DS004515 Singh2023 openneuro EEG Other Visual Affect 54 54 1 0 66 500 9.5 GB
DS004517 Rybar2023_semantic openneuro EEG Healthy Visual Other 7 7 1 0 80 2048 12.7 GB
DS004519 Ester2023_Internal Ester2022 openneuro EEG Healthy Visual Attention 40 40 1 0 62 250 12.6 GB
DS004520 Ester2023_Changes Ester2024_E2 openneuro EEG Healthy Visual Memory 33 33 1 0 62 250 10.4 GB
DS004521 Ester2023_Changes_behavioral Ester2024_E1 openneuro EEG Healthy Memory 34 34 1 0 62 250 10.7 GB
DS004532 Cavanagh2023 openneuro EEG Healthy Visual Learning 137 110 1 2 64 500 21.8 GB
DS004541 Ferron2023 Ferron2019 openneuro EEG fNIRS Surgery Anesthesia Clinical Intervention 18 8 1 2 59\* 1000\* 2.9 GB
DS004551 Sakakura2023_children_slow_wave Sakakura2025 openneuro iEEG Epilepsy Sleep Sleep 125 114 1 3 128\* 1000 68.9 GB
DS004554 Volpert2023 openneuro EEG Healthy Visual Decision-making 16 16 1 0 99 1000 8.8 GB
DS004561 Veillette2023 openneuro EEG Healthy Motor Perception 23 23 1 0 64 10000 97.7 GB
DS004563 Smit2023 openneuro EEG Other Multisensory Perception 119 40 1 3 64 2048 100.9 GB
DS004572 Kekecs2023 Kekecs2024 openneuro EEG Healthy Auditory Other 516 52 10 1 61 1000 43.6 GB
DS004574 Singh2023_Cross_modal openneuro EEG Parkinson's Multisensory Clinical Intervention 146 146 1 0 63\* 500 13.5 GB
DS004577 Unit2023 openneuro EEG Healthy Sleep Clinical Intervention 130 103 1 4 19\* 200 652.7 MB
DS004579 Singh2023_Interval_Timing openneuro EEG Parkinson's Visual Decision-making 139 139 1 0 63\* 500 24.1 GB
DS004580 Singh2023_Simon_conflict openneuro EEG Parkinson's Visual Decision-making 147 147 1 0 63\* 500 15.8 GB
DS004582 Makowski2023_FakeFaceEmo openneuro EEG Healthy Visual Affect 73 73 1 1 64 10000 294.2 GB
DS004584 Singh2023_Rest_eyes openneuro EEG Parkinson's Resting State Clinical Intervention 149 149 1 0 63\* 500 2.9 GB
DS004587 Makowski2023_IllusionGameEEG openneuro EEG Healthy Visual Decision-making 114 103 1 1 64 10000 219.3 GB
DS004588 Georgiadis2023 Neuma openneuro EEG Healthy Visual Decision-making 42 42 1 0 24 300 534.1 MB
DS004595 Campbell2023 openneuro EEG Other Visual Decision-making 53 53 1 0 66 500 7.8 GB
DS004598 Faraz2023 Moradi2024 openneuro EEG Dementia Motor Memory 20 9 1 3 16 10000 9.9 GB
DS004602 Clayson2023_Registered openneuro EEG Healthy Visual Perception 546 182 3 0 129 500\* 73.9 GB
DS004603 Lowe2023 VisualContextTrajectory openneuro EEG Healthy Visual Perception 37 37 1 0 65 1024 27.4 GB
DS004621 Patrycja2023_Nencki NenckiSymfonia openneuro EEG Healthy Visual Decision-making 167 42 4 0 127 1000 77.4 GB
DS004624 Mivalt2025 Mivalt2024, BCI2000_Intracranial openneuro iEEG Surgery Multisensory Clinical Intervention 614 3 28 7 36\* 1000 19.3 GB
DS004625 Liu2023 openneuro EEG Healthy Motor Motor 543 32 9 0 284\* 500 62.5 GB
DS004626 Maka2023 openneuro EEG Other Visual Attention 52 52 1 0 68 1000 19.9 GB
DS004635 Bagdasarov2023 openneuro EEG Healthy Multisensory Attention 48 48 1 0 129 1000 26.1 GB
DS004642 Dimakopoulos2023_Intraoperative openneuro iEEG Surgery Other Other 10 10 1 1 8\* 20000 1.2 GB
DS004657 Metcalfe2023_Driving TX20 openneuro EEG Healthy Visual Decision-making 119 24 1 6 74 1024\* 43.1 GB
DS004660 Johnson2023_TNO TNO openneuro EEG Healthy Multisensory Attention 42 21 1 0 38 512\* 7.2 GB
DS004661 Johnson2023_ANDI ANDI openneuro EEG Healthy Multisensory Attention 17 17 1 0 64 128 1.4 GB
DS004696 Valencia2023 openneuro iEEG Epilepsy Other Clinical Intervention 8 8 1 1 226\* 2048 14.2 GB
DS004703 Mai2023 openneuro iEEG Surgery Auditory Memory 11 10 1 2 148\* 1024\* 12.4 GB
DS004706 Rudoler2023 openneuro EEG Healthy Visual Memory 298 34 2 9 137 2048 1.3 TB
DS004718 Momenian2023 openneuro EEG Healthy Auditory Learning 51 51 1 0 64 1000 37.0 GB
DS004738 Bahners2023 openneuro MEG Other Other Other 25 4 2 2 323\* 5000 6.1 GB
DS004745 Kumaravel2023 openneuro EEG Healthy Visual Other 6 6 1 0 8 1000 242.1 MB
DS004752 Dimakopoulos2023_intracranial openneuro EEG iEEG Epilepsy Auditory Memory 136 15 1 8 64\* 2000\* 10.2 GB
DS004770 Ueda2023 openneuro iEEG Epilepsy Visual Memory 22 10 1 2 128\* 1000 8.7 GB
DS004771 Kuo2023 openneuro EEG Healthy Visual Decision-making 61 61 1 0 34 256 1.4 GB
DS004774 Boom2023 ERDetect, ER_Detect openneuro iEEG Epilepsy Other Clinical Intervention 14 14 2 3 133\* 2048 24.8 GB
DS004784 Downey2023 openneuro EEG Healthy Motor Attention 6 1 6 0 264 512 1.0 GB
DS004785 Boebinger2023 openneuro EEG Healthy Motor Motor 17 17 1 0 32 500 351.2 MB
DS004789 Herrema2023_Delayed_Free_Recall openneuro iEEG Epilepsy Visual Memory 983 273 1 12 126\* 1000\* 576.3 GB
DS004796 Patrycja2023_Polish PEARLNeuro openneuro EEG Other Visual Resting State Memory Resting-state 235 79 3 0 127 1000 240.2 GB
DS004802 Bathelt2023 openneuro EEG Other Visual Affect 79 39 1 0 69 512\* 10.1 GB
DS004809 Herrema2023_Categorized_Free_Recall catFR_Categorized_Free_Recall, CatFR openneuro iEEG Epilepsy Visual Memory 889 252 1 10 126\* 1000\* 477.2 GB
DS004816 Grootswagers2023_E1 openneuro EEG Healthy Visual Attention 20 20 1 0 63 1000 9.1 GB
DS004817 Grootswagers2023_E2 openneuro EEG Healthy Visual Attention 20 20 1 0 63 1000 10.1 GB
DS004819 Lee2023 openneuro iEEG Surgery Other Clinical Intervention 8 1 1 1 64 30000 688.7 MB
DS004830 Ning2023 Ning2024 openneuro fNIRS Healthy Visual Attention 14 12 1 0 72\* 50 1.2 GB
DS004837 LopezCaballero2023 openneuro MEG Schizophrenia Psychosis Auditory Perception 106 60 1 1 3000\* 119.9 GB
DS004840 CordobaSilva2023 openneuro EEG Other Auditory Clinical Intervention 51 9 3 2 10\* 1024\* 599.5 MB
DS004841 Larkin2023_TX14 TX14 openneuro EEG Healthy Visual Attention 147 20 1 2 70 256 7.3 GB
DS004842 Larkin2023_TX15 TX15 openneuro EEG Multisensory Attention 102 14 1 2 70\* 256 5.2 GB
DS004843 Johnson2023_T16 openneuro EEG Healthy Visual Attention 92 14 1 0 70 256 7.7 GB
DS004844 Metcalfe2023_T22 openneuro EEG Healthy Visual Decision-making 68 17 1 4 72 1024 22.3 GB
DS004849 Johnson2023_STRONG STRONG openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004850 Johnson2023_ODE Johnson2024 openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004851 Johnson2023_HID HID openneuro EEG 66 66 1 0 72 2048 55.9 GB
DS004852 Johnson2023_InsurgentCivilian Johnson2025 openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004853 Johnson2023_TX17 openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004854 Johnson2023_TX18 TX18 openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004855 Johnson2023_FT openneuro EEG Memory 1 1 1 0 64 128 79.2 MB
DS004859 Sakakura2023_children_Stroop Sakakura2024 openneuro iEEG Visual Attention 9 7 1 2 128\* 1000 2.3 GB
DS004860 Schwartz2023 openneuro EEG Healthy Auditory Decision-making 31 31 1 0 36 512\* 3.8 GB
DS004865 Herrema2023_pyFR_Delayed_Free pyFR openneuro iEEG Surgery Visual Memory 172 42 1 5 100\* 1000\* 97.8 GB
DS004883 Clayson2023_Registerd openneuro EEG Healthy Visual Decision-making 516 172 3 0 129 500 122.8 GB
DS004902 Xiang2023 openneuro EEG Healthy Resting State Resting-state 218 71 2 2 61 500\* 8.3 GB
DS004917 FigueroaVargas2024 openneuro EEG Healthy Multisensory Decision-making 24 24 1 0 66 5000 37.5 GB
DS004929 Gao2024 openneuro fNIRS Motor Motor 36 12 1 0 200 8 302.4 MB
DS004940 Toffolo2024 openneuro EEG Healthy Auditory Attention 48 22 2 0 128 512 118.5 GB
DS004942 Kieffaber2024 openneuro EEG Healthy Visual Memory 62 62 1 0 65 1000 25.1 GB
DS004944 Costa2024 BCI2000_intraop openneuro iEEG Epilepsy Other Clinical Intervention 44 22 1 2 3\* 2000 451.1 MB
DS004951 Haupt2024_Braille Haupt2025 openneuro EEG Other Tactile Learning 23 11 1 2 64\* 1000 22.0 GB
DS004952 Mou2024 openneuro EEG Healthy Visual Attention 245 10 1 2 128 1000 207.1 GB
DS004973 Zhang2024_driving_risk_cognition openneuro fNIRS Healthy Visual Attention 222 20 12 0 16 50 2.3 GB
DS004977 Huang2024 CARLA openneuro iEEG Epilepsy Other Other 6 4 1 1 273\* 4800 1.5 GB
DS004980 Wang2024_architectural_affordances openneuro EEG Healthy Visual Perception 17 17 1 0 64 499\* 15.8 GB
DS004993 Hamilton2024 WIRED_ICM openneuro iEEG Epilepsy Auditory Perception 3 3 3 1 148\* 512\* 305.1 MB
DS004995 Moerel2024 Moerel2023 openneuro EEG Healthy Visual Perception 20 20 1 0 127 1000 27.6 GB
DS004998 Rassoulou2024 openneuro MEG Parkinson's Motor Motor 145 20 6 1 323\* 2000 161.8 GB
DS005007 Kitazawa2024 Kitazawa2025 openneuro iEEG Healthy Auditory Other 42 40 1 2 100\* 1000 8.3 GB
DS005021 Williams2024 openneuro EEG Healthy Visual Attention 36 36 1 0 72 1024 47.5 GB
DS005028 Chandravadia2024 Chandravadia2022 openneuro EEG Visual Attention 105 11 3 2 32 422.1 MB
DS005034 Pavlov2024_effect_theta_tACS openneuro EEG Healthy Visual Memory 100 25 2 2 129 1000 61.4 GB
DS005048 Lahijanian2024 openneuro EEG Dementia Auditory Attention 35 35 1 0 19 250 355.9 MB
DS005059 Herrema2024_Paired PAL openneuro iEEG Epilepsy Visual Memory 282 69 1 7 112\* 1000\* 167.3 GB
DS005065 Russek2024 openneuro MEG Healthy Visual Decision-making 275 21 1 0 415\* 1200 425.8 GB
DS005079 Cohen2024 openneuro EEG Healthy Multisensory Affect 60 1 15 12 65 500 1.7 GB
DS005083 Yang2024 openneuro iEEG Surgery Clinical Intervention 1,357 61 3 2 105\* 281 KB
DS005087 Robinson2024_rapid openneuro EEG Healthy Visual Perception 60 20 3 0 63 1000 12.2 GB
DS005089 AguadoLopez2024 openneuro EEG Healthy Visual Attention 36 36 1 0 63 1000 68.0 GB
DS005095 Zhozhikashvili2024 openneuro EEG Healthy Visual Memory 48 48 1 1 63 1000 14.3 GB
DS005106 Grootswagers2024 openneuro EEG Healthy Visual Attention 42 42 1 0 33 500 1.2 GB
DS005107 Xu2024_DEC openneuro MEG Healthy Visual Perception 350 21 1 2 65 1000 27.6 GB
DS005114 Cavanagh2024 openneuro EEG TBI Visual Attention 223 91 1 3 65\* 500 55.9 GB
DS005121 Siefert2024 openneuro EEG Healthy Sleep Memory 39 34 1 0 65 512 9.0 GB
DS005131 Bialas2024 openneuro EEG Healthy Auditory Attention Memory 63 58 2 2 64 500 22.3 GB
DS005169 Barborica2024 openneuro iEEG Epilepsy Other Clinical Intervention 112 20 1 16 82\* 4096 4.0 GB
DS005170 Zhang2024_Chisco Chisco openneuro EEG Healthy Visual Motor 225 5 1 6 134 90.7 GB
DS005178 Tabar2024 EESM23 openneuro EEG Healthy Sleep Sleep 140 10 1 12 4\* 250 25.7 GB
DS005185 Mikkelsen2024_Ear_Sleep_Monitoring EESM19 openneuro EEG Healthy Sleep Sleep 356 20 3 16 25 500 267.6 GB
DS005189 Helbing2024 openneuro EEG Healthy Visual Memory 30 30 1 0 62 1000 16.1 GB
DS005207 Mikkelsen2024_Surrey_cEEGrid_sleep Surrey_cEEGrid_sleep openneuro EEG Healthy Sleep Sleep 39 20 1 1 13\* 128\* 28.5 GB
DS005241 Rodriguez2024 NeuroMorph, neuromorph openneuro MEG Healthy Other 117 24 2 3 256 1000 140.5 GB
DS005261 Todorovic2024 Todorovic2023 openneuro MEG Healthy Learning 128 17 2 0 248\* 2034 137.2 GB
DS005262 Metwalli2024 ArEEG openneuro EEG Healthy Visual Other 186 12 1 21 8 250 688.8 MB
DS005273 Esteban2024 openneuro EEG Healthy Visual Decision-making 33 33 1 0 63 1000 44.4 GB
DS005274 Ito2024 openneuro EEG Healthy 22 22 1 0 6 500 71.9 MB
DS005279 Wei2024 openneuro MEG Healthy Multisensory Other 90 30 0 1 1200 58.9 GB
DS005280 Xiangyue2024_223_BP openneuro EEG Healthy Tactile Perception 669 223 1 3 64 1000 42.4 GB
DS005284 Xiangyue2024_26_Biosemi openneuro EEG Healthy 26 26 1 0 64 1024\* 1.7 GB
DS005285 Xiangyue2024_29_ANT openneuro EEG Healthy Tactile Perception 116 29 1 4 32 1000 11.8 GB
DS005286 Xiangyue2024_30_ANT openneuro EEG Healthy Tactile Perception 30 30 1 0 32 1000 9.4 GB
DS005289 Xiangyue2024_39_BP openneuro EEG 195 39 1 5 64 1000 7.1 GB
DS005291 Xiangyue2024_65_ANT openneuro EEG Healthy Tactile Perception 65 65 1 0 32 1000 20.5 GB
DS005292 Xiangyue2024_142_Biosemi openneuro EEG Healthy Tactile Perception 426 142 1 3 64 1024\* 50.9 GB
DS005293 Xiangyue2024_95_BP openneuro EEG Healthy Tactile Perception 570 95 1 6 60 1000 98.9 GB
DS005296 Emmorey2024 openneuro EEG Healthy Multisensory Decision-making 62 62 1 0 32 500 8.5 GB
DS005305 Quentin2024 openneuro EEG Healthy Visual Decision-making 165 165 1 0 64 512\* 6.4 GB
DS005307 Nierula2024 Nierula2019 openneuro EEG Healthy Tactile Perception 73 7 1 0 77\* 10000 18.1 GB
DS005340 Polonenko2024_Fundamental openneuro EEG Healthy Auditory Perception 15 15 1 0 2 10000 9.5 GB
DS005342 TrianaGuzman2024 openneuro EEG Healthy Visual Motor 32 32 1 0 17 250 2.0 GB
DS005343 Bagdasarov2024 openneuro EEG Development Multisensory Perception 43 43 1 0 129 1000 22.7 GB
DS005345 Ma2024 LPP openneuro EEG Healthy Auditory Attention 26 26 1 0 64 500 162.5 GB
DS005346 Li2024_Naturalistic_fMRI_viewing openneuro MEG Healthy Multisensory Memory 90 30 3 0 66\* 1000 38.9 GB
DS005356 DS5356_MajorDepression openneuro MEG Depression Visual Learning 116 85 1 1 396\* 1000 161.6 GB
DS005363 Haupt2024_Object ORHA openneuro EEG Healthy Visual Perception 43 43 1 1 64 1000 17.7 GB
DS005383 Bai2024 TMNRED openneuro EEG Healthy Visual Perception 240 30 1 8 31 200 358.2 MB
DS005385 Wascher2024 openneuro EEG Healthy Resting State Resting-state 3,264 608 2 2 64 1000 74.1 GB
DS005397 Hilton2024 openneuro EEG Healthy Visual Affect 26 26 1 0 64 500 12.0 GB
DS005398 Zhang2024_Open_Pediatric_Wayne openneuro iEEG Epilepsy Sleep Clinical Intervention 185 185 1 1 128\* 1000\* 102.2 GB
DS005403 Veillette2024 Veillette2019 openneuro EEG Healthy Auditory Motor 32 32 1 0 66 10000 118.5 GB
DS005406 Formica2024 Formica2025 openneuro EEG Healthy Visual Perception 29 29 1 0 64 1000 13.3 GB
DS005407 Polonenko2024_effect openneuro EEG Healthy Auditory Perception 29 25 1 0 2 10000 37.8 GB
DS005408 Polonenko2024_effect_speech openneuro EEG Healthy Auditory Perception 29 25 1 0 2 10000 15.3 GB
DS005410 Pavlov2024_Semantic_conditioning openneuro EEG Healthy Visual Affect 81 81 1 0 63 1000 19.8 GB
DS005411 Herrema2024_Free openneuro iEEG Epilepsy Visual Memory 193 47 1 7 120\* 1000\* 157.4 GB
DS005415 Rockhill2024 openneuro iEEG Epilepsy Multisensory Perception 13 13 1 0 182\* 1000\* 7.5 GB
DS005416 Wu2024 openneuro EEG Healthy Visual Resting-state 23 23 1 0 64 1000 21.3 GB
DS005420 Gama2024 Gama2019 openneuro EEG Healthy Resting State Resting-state 72 37 2 0 20 500 372.1 MB
DS005429 Rutiku2024 openneuro EEG Healthy Auditory Attention 61 15 3 3 64 2500\* 16.5 GB
DS005448 Jelsma2024 STReEF openneuro iEEG Epilepsy Other Clinical Intervention 18 13 1 1 133\* 2048 44.7 GB
DS005473 Xiangyue2024_29_BP Zhao2024 openneuro EEG Healthy 58 29 1 2 64 1000 6.2 GB
DS005486 Chowdhury2024 openneuro EEG Resting State Resting-state 445 159 1 5 66 5000\* 371.0 GB
DS005489 Herrema2024_Free_Recall openneuro iEEG Visual Memory 154 37 1 7 100\* 500\* 64.9 GB
DS005491 Herrema2024_Categorized catFR_open_loop, RAM_catFR, catFR_stim openneuro iEEG Visual Clinical Intervention 51 19 1 3 64\* 500\* 22.5 GB
DS005494 Herrema2024_Cued Herrema2024 openneuro iEEG Visual Clinical Intervention 51 20 1 3 100\* 500\* 26.3 GB
DS005505 Shirazi2024_R1 HBN_r1 openneuro EEG Development Visual Clinical Intervention 1,342 136 10 0 129 500 103.1 GB
DS005506 Shirazi2024_R2 HBN_r2 openneuro EEG Development Visual Clinical Intervention 1,405 150 10 0 129 500 111.9 GB
DS005507 Shirazi2024_R3 HBN_r3 openneuro EEG Development Visual Clinical Intervention 1,812 184 10 0 129 500 139.4 GB
DS005508 Shirazi2024_R4 HBN_r4 openneuro EEG Development Visual Clinical Intervention 3,342 324 10 0 129 500 229.8 GB
DS005509 Shirazi2024_R5 HBN_r5 openneuro EEG Development Visual Clinical Intervention 3,326 330 10 0 129 500 224.2 GB
DS005510 Shirazi2024_R6 HBN_r6 openneuro EEG Development Visual Clinical Intervention 1,227 135 10 0 129 500 90.8 GB
DS005512 Shirazi2024_R8 HBN_r8 openneuro EEG Development Multisensory Clinical Intervention 2,320 257 10 0 129 500 157.2 GB
DS005514 Shirazi2024_R9 HBN_r9 openneuro EEG Development Visual Clinical Intervention 2,885 295 10 0 129 500 185.0 GB
DS005515 Shirazi2024_R10 HBN_r10 openneuro EEG Development Visual Clinical Intervention 2,516 533 8 0 129 500 160.5 GB
DS005516 Shirazi2024_R11 HBN_r11 openneuro EEG Development Visual Clinical Intervention 3,397 430 8 0 129 500 219.2 GB
DS005520 Li2024_Research_supporting_playing openneuro EEG Healthy Visual Other 69 23 3 0 67 1000 43.9 GB
DS005522 Herrema2024_Spatial openneuro iEEG Visual Memory 176 55 1 6 133\* 1000\* 107.5 GB
DS005523 Herrema2024_Spatial_Memory openneuro iEEG Surgery Visual Memory 102 21 1 7 166\* 1000\* 69.7 GB
DS005530 Greco2024 openneuro EEG Healthy Multisensory Sleep 21 17 1 0 10 500 6.5 GB
DS005540 Xin2024 openneuro EEG Healthy Visual Affect 103 59 1 0 68 600\* 47.3 GB
DS005545 Kanno2024 Kanno2025 openneuro iEEG Surgery Auditory Other 336 106 1 5 128\* 1000 40.0 GB
DS005555 LopezLarraz2024 BOAS openneuro EEG Healthy Sleep Sleep 256 128 1 0 9\* 256 33.5 GB
DS005557 Herrema2024_Classifier openneuro iEEG Other Visual Memory 58 16 1 5 110\* 1000 34.7 GB
DS005558 Herrema2024_Categorized_Free catFR_closed_loop openneuro iEEG Surgery Visual Memory 22 7 1 3 126\* 1000 12.2 GB
DS005565 Lee2024_StudyWITH openneuro EEG Healthy Visual Memory 24 24 1 0 32 500 2.6 GB
DS005571 MartinezMolina2024 openneuro EEG Healthy Attention 45 24 2 0 66\* 5000 63.3 GB
DS005574 Zada2024 Podcast openneuro iEEG Auditory Other 9 9 1 0 178\* 512\* 3.2 GB
DS005586 Baykan2024 openneuro EEG Healthy Visual Perception 23 23 1 0 63 1000 28.3 GB
DS005594 Taylor2024 openneuro EEG Healthy Visual Perception 16 16 1 0 66 1000 10.9 GB
DS005620 Bajwa2024 openneuro EEG Healthy Anesthesia Clinical Intervention 202 21 3 0 64\* 5000 77.3 GB
DS005624 DS5624_ColorChangeDetection openneuro iEEG Visual Memory 35 24 1 4 74\* 512\* 13.8 GB
DS005628 RosadoAiza2024 openneuro EEG Healthy Multisensory Attention 306 102 1 0 8 250 633.7 MB
DS005642 Robinson2024_illusory openneuro EEG Healthy Visual Perception 21 21 1 0 68 1024 13.8 GB
DS005648 Kidder2024 openneuro EEG Healthy Visual Perception 21 21 1 0 64 2048 15.5 GB
DS005662 Smit2024 openneuro EEG Healthy Visual Perception 80 80 1 0 65 2048 107.8 GB
DS005670 Xu2024_SEEG_Resting_State openneuro iEEG Epilepsy Resting State Resting-state 2 2 1 0 186\* 2000 708.8 MB
DS005672 Zhiyuan2024 openneuro EEG Healthy Visual Memory 3 3 1 0 69\* 1000 4.2 GB
DS005688 Tan2024 openneuro EEG Healthy Visual Clinical Intervention 89 20 5 1 5\* 10000\* 8.4 GB
DS005691 Stenner2024_SpinalExpect openneuro iEEG Other Multisensory Attention 8 8 1 0 7\* 2500 723.0 MB
DS005692 Stenner2024_SpinalExpect_NonInvasive openneuro EEG Healthy Multisensory Attention 59 30 1 2 25 5000 92.8 GB
DS005697 Li2024_PerceiveImagine PerceiveImagine openneuro EEG Healthy Visual Memory 51 51 1 0 65\* 1000 66.6 GB
DS005752 Nugent2024 openneuro MEG Healthy Multisensory Other 1,055 123 10 1 305\* 1200\* 662.7 GB
DS005776 Yucel2025_Electrical Yucel2015 openneuro fNIRS Healthy Tactile Motor 46 11 5 1 102 50 1.2 GB
DS005777 Peng2025 Peng2018 openneuro fNIRS Tactile Perception 113 14 2 2 66 25 864.8 MB
DS005779 Khatri2025 openneuro EEG Healthy Other Clinical Intervention 250 19 16 0 67\* 5000 88.7 GB
DS005795 Stadler2025 openneuro EEG Healthy Auditory Learning 39 34 2 0 72 500 6.4 GB
DS005810 Zhang2025_MEG NOD_MEG openneuro MEG Healthy Visual Perception 305 31 2 24 409\* 1200 178.6 GB
DS005811 Zhang2025_EEG NOD_EEG openneuro EEG Healthy Visual Perception 448 19 1 4 64\* 500\* 16.2 GB
DS005815 Chang2025 openneuro EEG Healthy Multisensory Perception 103 20 3 2 31 1000 7.6 GB
DS005841 Karakashevska2025 openneuro EEG Healthy Visual Perception 288 48 6 0 73 512 7.3 GB
DS005857 Broitman2025 Broitman2019 openneuro EEG Visual Memory 110 29 1 6 137 2048 284.4 GB
DS005863 Isbell2025_Cognitive openneuro EEG Healthy Multisensory Other 357 127 4 0 30 500 10.6 GB
DS005866 TerhuneCotter2025_NEAR Flankers_NEAR openneuro EEG Healthy Visual Attention 60 60 1 0 32 500 3.6 GB
DS005868 TerhuneCotter2025_FAR Flankers_FAR openneuro EEG Healthy Visual Attention 48 48 1 0 32 500 2.9 GB
DS005872 Plomecka2025 EEGEyeNet openneuro EEG Healthy Visual Attention 1 1 1 1 129 500 39.9 MB
DS005873 Bhagubai2025 SeizeIT2 openneuro EEG EMG Epilepsy Other Clinical Intervention 5,654 125 1 1 2\* 256 44.4 GB
DS005876 Girard2025 openneuro EEG Healthy Auditory Memory 29 29 1 0 32 1000 7.1 GB
DS005907 Campbell2025 openneuro EEG Alcohol Visual Learning 53 53 1 0 58\* 500 5.6 GB
DS005929 MotionYucel2014 Yucel2014, Motion_Yucel2014 openneuro fNIRS Healthy Motor Motor 7 7 1 1 28 50 68.5 MB
DS005930 Gao2023 openneuro fNIRS Motor Motor 36 12 1 0 200 8 304.2 MB
DS005931 Ueda2025 openneuro iEEG Epilepsy Visual Motor 16 8 1 2 128\* 1000 817.7 MB
DS005932 Holcomb2025 PWIe openneuro EEG Healthy Visual Other 29 29 1 0 32 500 2.3 GB
DS005935 Li2025 openneuro fNIRS Visual Motor 64 21 1 1 120 25 738.6 MB
DS005946 Frau2025 PROMENADE openneuro EEG Healthy Visual Perception 39 39 1 0 60 1000 14.8 GB
DS005953 Winawer2025 openneuro iEEG Surgery Visual Perception 3 2 1 1 96\* 1525\* 577.3 MB
DS005960 Pena2025 openneuro EEG Healthy Visual Attention 41 41 1 0 63 1000 57.7 GB
DS005963 Mesquita2025 Mesquita2019 openneuro fNIRS Motor 40 10 1 4 136 8 233.4 MB
DS005964 Luke2025 Luke2019 openneuro fNIRS Auditory Perception 17 17 1 1 66 5 62.4 MB
DS006012 SableMeyer2025 openneuro MEG Healthy Visual Perception 193 21 2 15 336\* 1000 71.1 GB
DS006018 Isbell2025_Adulthood openneuro EEG Healthy Multisensory Other 357 127 4 0 30 500 10.6 GB
DS006033 Liwicki2025 openneuro EEG Healthy Visual Other 5 3 1 2 66 5000 15.3 GB
DS006035 Lin2025 Lin2019 openneuro MEG Healthy Tactile Motor 15 5 1 1 388\* 1004 3.1 GB
DS006036 Ntetska2025 openneuro EEG Dementia Visual Clinical Intervention 88 88 1 0 19 500 1.0 GB
DS006040 Cha2025 openneuro EEG Healthy Visual Other 392 28 10 0 64 5000 172.5 GB
DS006065 Kragel2025 openneuro iEEG Surgery Other Clinical Intervention 45 7 10 0 168\* 500 9.6 GB
DS006095 Liu2025_Mind_Motion_Older openneuro EEG Healthy Motor Motor 1,182 71 9 0 284\* 500 129.8 GB
DS006104 Moreira2025 openneuro EEG Healthy Auditory Perception 56 24 3 2 61\* 2000 43.0 GB
DS006107 Kuroda2025 Kuroda2024 openneuro iEEG Sleep Sleep 167 166 1 2 128\* 1000 11.9 GB
DS006126 Mensah2025 openneuro EEG Healthy Motor Motor 90 5 6 3 3 5000 1.1 GB
DS006136 Omelyusik2025 Omelyusik2026 openneuro iEEG Epilepsy Visual Memory 14 13 1 2 8\* 1000 285.9 MB
DS006142 MatranFernandez2025 openneuro EEG Healthy Visual Memory 27 27 1 0 65 2048 24.3 GB
DS006159 LeganesFonteneau2025 LeganesFonteneau2024 openneuro EEG Healthy Learning 61 61 1 0 73 1024 14.3 GB
DS006171 Melcon2025 Melcon2024 openneuro EEG Healthy Visual Attention 104 36 3 0 144 1024 67.8 GB
DS006222 Attokaren2025 openneuro EEG Healthy Multisensory Attention 70 69 1 2 40 512 14.7 GB
DS006233 Kochi2025_Picture_naming openneuro iEEG Surgery Visual Other 347 108 1 5 128\* 1000 17.3 GB
DS006234 Kochi2025_Auditory_naming openneuro iEEG Surgery Auditory Other 378 119 1 6 128\* 1000 43.9 GB
DS006253 Goueytes2024 MetaRDK openneuro iEEG Epilepsy Visual Decision-making 201 23 4 1 122\* 656 KB
DS006260 CoronaGonzalez2025 openneuro EEG Development Visual Clinical Intervention 366 76 1 2 32 256 2.7 GB
DS006269 Pritchard2025 openneuro EEG Other Resting State Resting-state 40 24 2 2 33 1000 106.7 GB
DS006317 Zhang2025_Chisco_2_0 Chisco2_0, Chisco20, CHISCO20 openneuro EEG Healthy Motor 64 2 2 4 127 1000 52.9 GB
DS006334 Biau2025 openneuro MEG Healthy Multisensory Memory 128 30 1 0 331\* 1000 166.2 GB
DS006366 Rose2025 MSSV openneuro EEG Healthy Sleep Sleep 148 92 1 0 3\* 128 6.1 GB
DS006367 DS6367_Memory_Reactivation openneuro EEG Healthy Visual Memory 52 52 1 0 30 1000 27.8 GB
DS006370 DS6370_Memory_Reactivation openneuro EEG Healthy Visual Memory 56 56 1 0 30 1000 40.1 GB
DS006374 Pohle2025 Pohle2019 openneuro EEG Healthy Tactile Perception 358 36 2 0 35 2000 31.1 GB
DS006377 Yucel2025_InclusionStudy openneuro fNIRS Motor Motor 690 115 6 0 52 10 1.4 GB
DS006386 Yu2025 Yu2019 openneuro EEG Healthy Motor Other 180 30 1 0 59 1000 23.0 GB
DS006392 Attia2025 Hermes2024 openneuro iEEG Visual Perception 1 1 1 1 166 512 32.0 MB
DS006394 Leong2025 openneuro EEG Healthy Multisensory Attention 60 33 2 0 16 125 534.8 MB
DS006434 Stoll2025 openneuro EEG Healthy Auditory Attention 118 66 5 0 32\* 10000\* 103.0 GB
DS006437 DS6437_LIGHT_Hypnotherapy openneuro EEG Healthy Auditory Clinical Intervention 63 9 5 4 64 256 4.3 GB
DS006446 Kinley2025 Kinley2019 openneuro EEG Healthy Visual Decision-making 29 29 1 0 65 2048 16.1 GB
DS006459 Anderson2025_Sparse openneuro fNIRS Healthy Visual Attention 17 17 1 1 120 24 168.8 MB
DS006460 Anderson2025_HD openneuro fNIRS Healthy Visual Attention 17 17 1 1 428 17 459.7 MB
DS006465 Ma2025 CPSEED_3M, CPSEED openneuro EEG Healthy Visual Motor 80 20 1 4 32\* 500 8.2 GB
DS006466 Kim2025_HeartBEAM_Older_Adult HeartBEAM openneuro EEG Healthy Auditory Attention 1,257 66 6 2 65 1000 117.5 GB
DS006468 Habersetzer2025 MEG_SCANS openneuro MEG Healthy Auditory Perception 189 24 4 0 341\* 1000 101.2 GB
DS006480 Kim2025_Young_Adult_Resting openneuro EEG Healthy Auditory Attention 68 68 1 0 65 1000 64.1 GB
DS006502 Bonstrup2025 openneuro MEG Healthy Visual Learning 380 31 4 3 307\* 600 95.8 GB
DS006519 Barborica2025 openneuro iEEG Epilepsy Other Clinical Intervention 35 21 1 5 35\* 4096\* 1.0 GB
DS006525 Neuroimaging2025 openneuro EEG Resting State Resting-state 34 34 1 0 128\* 250 3.0 GB
DS006545 ReliabilityDubois2024 Dubois2024 openneuro fNIRS Auditory 98 49 1 2 6180\* 3 46.7 GB
DS006547 Ghaffari2025 Ghaffari2024 openneuro EEG Healthy Visual Perception 31 31 1 1 64 500 17.6 GB
DS006554 Su2025 openneuro EEG 47 47 1 0 64 500 12.1 GB
DS006563 Gramann2025 openneuro EEG Healthy Visual Attention 12 12 1 0 64 500 5.6 GB
DS006576 McDevitt2025 openneuro EEG Healthy Sleep Sleep 57 57 1 0 73 512 553.9 GB
DS006593 Celik2025 openneuro EEG Healthy Visual Attention 21 21 1 1 19 300 441.9 MB
DS006629 Chanoine2025 SINGSING openneuro MEG Healthy Auditory Perception 38 19 2 0 339 250 11.2 GB
DS006647 Chaudhuri2025_D2 openneuro EEG Healthy Visual Affect 4 4 1 0 70 512 4.3 GB
DS006648 Chaudhuri2025_D1 openneuro EEG Healthy Visual Affect 47 47 1 0 70 512 45.4 GB
DS006673 Carlton2025 openneuro fNIRS Healthy Motor Motor 67 17 2 0 2440\* 7.8 GB
DS006695 Onton2025 Onton2024 openneuro EEG Healthy Sleep Sleep 19 19 1 0 3 500 9.4 GB
DS006720 Herbst2025 openneuro MEG Healthy Auditory Memory 246 24 3 16 328\* 1000\* 136.5 GB
DS006735 Shan2025 openneuro EEG Healthy Auditory Perception 27 27 1 0 36\* 10000 175.9 GB
DS006761 Moerel2025_Neural openneuro EEG Healthy Visual Decision-making 31 31 1 0 64 2048 78.0 GB
DS006768 Lowe2025 openneuro EEG Healthy Visual Attention 210 30 1 0 64 1000 6.5 GB
DS006801 Alves2025 openneuro EEG Healthy Resting State Learning 42 21 1 2 31 500 1.3 GB
DS006802 Moerel2025_Collaborative openneuro EEG Healthy Visual Learning 24 24 1 0 64 2048 62.2 GB
DS006803 PechCanul2025 openneuro EEG Healthy Visual Learning 126 63 1 2 8 250 1.4 GB
DS006817 Lowe2025 VisualContextTrajectory_v2 openneuro EEG 34 34 1 0 65 1024 9.7 GB
DS006839 Gonzales2025 openneuro EEG Healthy Multisensory Attention 144 36 4 0 29 1000 10.4 GB
DS006840 Cai2025 IACKD openneuro EEG Healthy Motor Motor 128 15 1 0 29\* 1024 6.0 GB
DS006848 Kosachenko2025 openneuro EEG Healthy Visual Memory 52 30 2 0 65 1000 41.4 GB
DS006850 Zaehme2025 openneuro EEG Healthy Visual Affect 126 63 1 2 66 500 34.7 GB
DS006861 Maka2025_Targeted openneuro EEG Healthy Visual Affect 239 120 1 2 37 1000 52.1 GB
DS006866 Maka2025_Discrepancy openneuro EEG Healthy Visual Affect 148 148 1 0 69 1000 116.2 GB
DS006890 Yang2025_Longitudinal openneuro iEEG Healthy Multisensory Motor 870 2 5 251 50\* 1000 41.2 GB
DS006902 Geisler2025 openneuro fNIRS Healthy Motor Perception 42 42 1 0 112 7 5.5 GB
DS006903 here2025 openneuro fNIRS Healthy Motor Motor 67 17 2 0 1134\* 4\* 5.4 GB
DS006910 Kochi2025_Auditory_Naming_EC openneuro iEEG Auditory Other 384 121 1 6 128\* 1000 44.6 GB
DS006914 Kochi2025_Visual_Naming_EC openneuro iEEG Epilepsy Visual Other 353 110 1 5 128\* 1000 17.5 GB
DS006921 Ramne2025 openneuro EEG Other Resting State Clinical Intervention 152 38 2 5 128\* 2400 64.4 GB
DS006923 Polo2025 openneuro EEG Other Resting State Clinical Intervention 280 140 1 0 128 128 8.1 GB
DS006940 Sarkar2025_StudyOF openneuro EEG Healthy Motor Motor 935 7 15 9 64 100 3.6 GB
DS006945 Sarkar2025_T1_Weighted_Structural openneuro EEG Healthy Visual Motor 14 5 3 1 64 5000 5.4 GB
DS006963 Ozdemir2025 openneuro EEG Healthy Visual Memory 32 32 1 0 64 1000 52.8 GB
DS006979 Ramzaoui2025 Ramzaoui2024 openneuro EEG Healthy Visual Memory 56 53 3 0 69\* 512\* 38.5 GB
DS007006 Wu2025 openneuro EEG Healthy Multisensory Affect 50 10 5 0 64 256 918.7 MB
DS007020 Jamshidi2025 openneuro EEG Parkinson's Resting State Clinical Intervention 94 94 1 1 63\* 500 1.7 GB
DS007028 Kajikawa2025 Kajikawa2000 openneuro EEG Other Auditory Perception 3 3 1 1 64 20000 13.9 GB
DS007052 Couperus2025_N400 Couperus2021_N400 openneuro EEG Healthy Visual Memory 288 288 1 0 32 500 9.0 GB
DS007056 Couperus2025_P300 Couperus2021_P300 openneuro EEG Healthy Visual Attention 286 286 1 0 32 500 7.8 GB
DS007069 Couperus2025_MMN Couperus2021_MMN openneuro EEG Healthy Auditory Perception 281 281 1 0 32 500 12.4 GB
DS007081 Ylmaz2025 openneuro EEG Healthy Visual Memory 41 41 1 0 32 1000 11.3 GB
DS007095 Feng2025 openneuro iEEG Epilepsy Other Clinical Intervention 6,019 8 1 68 2 200 497.8 MB
DS007096 Couperus2025_PURSUE_N170_Face Couperus2017 openneuro EEG Healthy Visual Perception 292 292 1 0 32 500 11.6 GB
DS007118 Hatano2025_part1 Hatano openneuro iEEG Sleep Sleep 82 65 1 1 128\* 1000 33.8 GB
DS007119 Hatano2025_part3 openneuro iEEG Sleep Sleep 106 103 1 1 128\* 1000 32.6 GB
DS007120 Hatano2025_part2 openneuro iEEG Epilepsy Sleep Sleep 70 65 1 1 128\* 1000 33.0 GB
DS007137 Couperus2025_N2PC Couperus2021_N2pc openneuro EEG Healthy Visual Attention 294 294 1 0 32 500 12.2 GB
DS007139 Couperus2025_LRP Couperus2021_LRP openneuro EEG Healthy Visual Attention 292 292 1 0 32 500 14.5 GB
DS007162 DS7162_VisualRecognition openneuro EEG Healthy Visual Perception 69 34 1 1 63 1000 60.9 GB
DS007169 Barras2026_Multimodal Barras2021 openneuro EEG Healthy Visual Memory 18 18 1 0 24 250 421.7 MB
DS007172 Reinke2026 EEGAsymmetries openneuro EEG Healthy Visual Attention 501 100 6 0 32\* 500\* 11.0 GB
DS007175 DS7175_FFR_ActiveListening openneuro EEG Healthy Auditory Perception 41 41 1 0 65 5000 200.4 GB
DS007176 Isaza2026_Longitudinal openneuro EEG Healthy Resting State Resting-state 300 45 2 5 60 1000 21.1 GB
DS007180 FuentesGuerra2026 FuentesGuerra2024 openneuro EEG Healthy 25 25 1 0 63 500 14.7 GB
DS007181 Li2026 openneuro EEG Other Sleep Clinical Intervention 59 59 1 0 24 1024 59.2 GB
DS007216 Kucyi2026 Kucyi2024 openneuro EEG Healthy Visual Attention 187 24 2 2 36 5000 104.7 GB
DS007221 Xinwei2026 openneuro EEG Healthy Visual Motor 1,265 84 4 2 69\* 1000 124.8 GB
DS007262 Barras2026_Cognitive Barras2025 openneuro EEG Healthy Attention 18 18 1 0 24 250 378.9 MB
DS007314 Martzoukou2026_tACS Martzoukou2024_Post openneuro EEG Other Visual Clinical Intervention 14 2 1 7 32 500 1.1 GB
DS007315 Martzoukou2026_tACS_Patients Martzoukou2024_Post_A openneuro EEG Other Visual Clinical Intervention 14 2 1 7 32 500 1.1 GB
DS007322 Mishra2026 Mishra2024 openneuro EEG Healthy Auditory Attention 57 57 1 0 64\* 1000 42.5 GB
DS007338 Plomecka2026 EEGEyeNet_v2, EEGEYENET openneuro EEG Healthy Visual Perception 1 1 1 1 129 500 39.9 MB
DS007347 Elias2026 openneuro EEG Cancer Resting State Clinical Intervention 10 5 1 3 50\* 256\* 1.6 GB
DS007353 Zhang2026 HAD_MEEG, HADMEEG openneuro EEG MEG Healthy Visual Perception 473 32 2 11 409\* 1200\* 180.6 GB
DS007358 Vianney2026 Vianney2025 openneuro EEG Healthy Resting State Resting-state 6,000 2,000 3 0 62\* 128\* 16.1 GB
DS007406 Edit2026 Edit2024 openneuro EEG Healthy Multisensory Affect 10 10 1 0 14 256 25.8 MB
DS007420 Gao2026_Light_Weight_Multi Gao2024 openneuro fNIRS Healthy Motor Motor 60 12 4 3 200 8\* 560.7 MB
DS007427 Isaza2026_Comprehensive HenaoIsaza2026 openneuro EEG Dementia Resting State Clinical Intervention 44 44 1 1 60 1000 3.1 GB
DS007431 Ataseven2026 Ataseven2024 openneuro EEG Healthy Visual Memory 47 47 1 0 66 1000 144.6 GB
DS007445 Panchavati2026 openneuro iEEG Epilepsy Other Clinical Intervention 66 19 1 14 140\* 200\* 50.5 GB
DS007454 DS7454_TimePerception openneuro EEG Healthy Visual Perception 42 42 1 0 64 1000 29.6 GB
DS007463 Fogarty2026_Very Fogarty2025 openneuro fNIRS Healthy Visual Perception 88 8 14 2 19086\* 7 69.3 GB
DS007471 Zhou2026 Zhou2024 openneuro EEG Healthy Auditory Other 31 31 1 0 64 1000 8.1 GB
DS007473 Fogarty2026_High Tripathy2024 openneuro fNIRS Healthy Multisensory Perception 189 5 19 8 6782\* 10 36.3 GB
DS007477 Niu2026 openneuro fNIRS Other 36 18 1 2 1 10 9 KB
DS007521 Moerel2026 Moerel2025 openneuro EEG Healthy Visual Attention 46 23 1 2 64 100 29.0 GB
DS007523 Bel2026 Dascoli2025 openneuro MEG Healthy Auditory Perception 579 58 1 1 346\* 1000 444.8 GB
DS007524 Pallier2025 LittlePrince openneuro MEG Healthy Visual Other 500 50 1 1 346\* 1000 298.6 GB
DS007526 Katzir2026 PD_EEG, PDEEG openneuro EEG Parkinson's Motor Clinical Intervention 277 144 2 0 65 250 4.3 GB
DS007554 Ajra2026 openneuro EEG fNIRS Healthy Other 1,034 30 7 3 32 10\* 4.2 GB
DS007558 Qi2026 openneuro EEG Resting State Clinical Intervention 121 67 1 2 19\* 200 686.4 MB
DS007591 Sato2026_Delineating Sato2025 openneuro EEG Healthy Motor 21 3 3 6 139 256 1.6 GB
DS007602 Sato2026_Speech Sato2024 openneuro EEG Healthy Visual Motor 113 3 1 15 134 1200 49.6 GB
DS007609 Shalamberidze2026 Shalamberidze2025 openneuro EEG Healthy Resting State Affect 51 51 1 0 256 500 7.0 GB
DS007615 Normannseth2026 openneuro EEG Healthy Auditory Perception 192 69 2 0 68 2048 34.6 GB
NM000103 Shirazi2017 HealthyBrainNetwork, HBN_EEG_NC, HBN_NoCommercial nemar EEG 3,522 447 10 0 129 500 250.3 GB
NM000104 Sivakumar2024 emg2qwerty nemar EMG 1,136 108 1 1135 32 2000 223.3 GB
NM000105 Kaifosh2025 FRL_DiscreteGestures nemar EMG 100 100 1 1 16 2000 20.6 GB
NM000106 Kaifosh2025_106 FRL_Handwriting nemar EMG 807 100 1 26 16 2000 45.3 GB
NM000107 Kaifosh2025_107 FRL_WristControl nemar EMG 182 100 1 2 16 2000 24.9 GB
NM000108 Jiang2021 HySER, Hyser nemar EMG 1,514 20 38 2 256 108.2 GB
NM000109 Zyma2019 nemar EEG 72 36 2 0 21 500 174.5 MB
NM000110 Connolly2010 CHBMIT, CHB_MIT nemar EEG 686 24 1 0 23\* 256 42.6 GB
NM000112 Liu2024_112 FACED nemar EEG 123 123 1 0 32 1000\* 31.4 GB
NM000113 Lee2020 nemar EEG 45 15 1 0 64 256 585.2 MB
NM000114 Mumtaz2017 nemar EEG 181 64 3 0 22\* 256 812.8 MB
NM000115 Zhou2016 nemar EEG 24 4 1 3 14 250 152.1 MB
NM000118 Nakanishi2015 nemar EEG Healthy Visual Perception 9 9 1 1 8 256 65.4 MB
NM000119 Oikonomou2016_MAMEM1 Oikonomou2016 nemar EEG Healthy Visual Perception 47 11 1 1 256 250 5.4 GB
NM000120 Oikonomou2016_MAMEM2 MAMEM2, SSVEPMAMEM2, MAMEM2_SSVEP nemar EEG Healthy Visual Attention 55 11 1 1 256 250 4.4 GB
NM000121 Oikonomou2016_MAMEM3 MAMEM3, SSVEP_MAMEM3 nemar EEG Healthy Visual Perception 110 11 1 1 14 128 120.2 MB
NM000122 Chen2017 nemar EEG Healthy Visual Perception 12 12 1 1 32 512 741.9 MB
NM000123 Kalunga2016 nemar EEG Healthy Visual Perception 30 12 1 1 8 256 78.2 MB
NM000124 Han2024 nemar EEG Healthy Visual Perception 48 24 1 2 64 1000 17.0 GB
NM000125 Lee2021_SSVEP nemar EEG Healthy Visual Perception 85 23 1 4 73\* 100 1.3 GB
NM000126 Wang2016 nemar EEG Healthy Visual Perception 34 34 1 1 64 250 3.1 GB
NM000127 Kim2025_SSVEP Kim2025 nemar EEG Healthy Visual Perception 240 40 1 6 31 1024 8.1 GB
NM000128 Dong2023 nemar EEG Healthy Visual Perception 59 59 1 1 8 250 397.1 MB
NM000129 Liu2020 BetaSSVEP, BETA_SSVEP, BETA nemar EEG Healthy Visual Perception 70 70 1 1 64 250 2.8 GB
NM000130 Liu2022 EldBETA, eldBETA, Liu2022EldBETA nemar EEG Healthy Visual Perception 700 100 1 7 64 1000 17.4 GB
NM000131 Wang2021 nemar EEG Healthy Visual Attention 22 8 1 1 31 1000 2.6 GB
NM000132 Kappenman2021 ERPCORE, ERP_CORE nemar EEG 240 40 6 0 33 1024 17.5 GB
NM000133 Xu2024 Alljoined1, Alljoined nemar EEG 13 8 1 2 64 512 7.6 GB
NM000134 Xu2025 Alljoined16M, Alljoined_16M, Alljoined1p6M nemar EEG 1,525 20 1 5 32 256 8.2 GB
NM000135 Leeb2014 BNCI2014004 nemar EEG Healthy Visual Motor 5 1 1 5 3 250 22.6 MB
NM000136 GuttmannFlury2025 nemar EEG Healthy Visual Attention 63 31 1 3 65 1000 7.3 GB
NM000137 Kaya2018 nemar EEG Healthy Visual Motor 17 7 1 3 19 200 623.4 MB
NM000138 Barachant2012 AlexMI, AlexMotorImagery, AlexandreMotorImagery nemar EEG Healthy Visual Motor 8 8 1 1 16 512 99.7 MB
NM000139 Tangermann2014 BNCI2014001, BCICIV1, BCICompIV1 nemar EEG Healthy Multisensory Motor 108 9 1 2 22 250 672.8 MB
NM000140 Faller2015 BNCI2015, BNCI2015001 nemar EEG Healthy Visual Motor 28 12 1 3 13 512 1.1 GB
NM000141 Wairagkar2018 nemar EEG Healthy Visual Motor 14 14 1 1 19 1024 571.7 MB
NM000142 Wu2020 nemar EEG Healthy Visual Motor 13 6 1 1 122 1000 4.9 GB
NM000143 BNCI2003 BCICIII_IVa, BCICompIII_IVa, BNCI2003_IVa nemar EEG Healthy Visual Motor 5 5 1 1 118 100 492.7 MB
NM000144 Scherer2015 BNCI2015 nemar EEG Other Visual Motor 18 9 1 2 30 256 1.1 GB
NM000145 GrosseWentrup2009 nemar EEG Healthy Visual Motor 10 10 1 1 128 500 5.4 GB
NM000146 Yi2014 Weibo2014 nemar EEG Healthy Visual Motor 10 10 1 1 60 200 1.6 GB
NM000147 RomaniBF2025 Romani2025 nemar EEG Healthy Visual Learning 120 22 1 6 8 250 134.3 MB
NM000148 Rozado2015 nemar EEG Healthy Auditory Motor 60 30 1 1 32 512 975.3 MB
NM000149 Ofner2019 nemar EEG Other Visual Motor 90 10 1 1 61 256 1.2 GB
NM000150 Liu2025_NEMAR nemar EEG 0 0
NM000151 Tavakolan2017 nemar EEG Healthy Visual Motor 46 12 1 4 32 1000 3.2 GB
NM000152 Zhang2017 nemar EEG Healthy Visual Motor 180 12 1 1 17 1000 1.6 GB
NM000155 Caillet2023 nemar EMG 11 6 2 0 259 2048 448.3 MB
NM000157 Mainsah2025 nemar EEG 544 19 1 8 16 256 1.2 GB
NM000158 Liu2024 nemar EEG Other Multisensory Motor 50 50 1 1 29 500 673.7 MB
NM000159 Avrillon2024 nemar EMG 124 16 8 0 258 2048 5.5 GB
NM000160 Yi2025 nemar EEG Healthy Visual Motor 141 18 1 1 62 1000 20.3 GB
NM000161 Crell2024 nemar EEG Healthy Visual Motor 40 20 1 1 60 500 10.2 GB
NM000162 Srisrisawang2025 BNCI2025 nemar EEG Healthy Visual Motor 20 20 1 1 67 500 15.0 GB
NM000163 Castillos2023_VEP nemar EEG Healthy Visual Attention 12 12 1 1 32 500 160.1 MB
NM000165 Grison2025 nemar EMG 10 1 10 0 131 10240 1.3 GB
NM000166 Huang2018 nemar EEG 2,469 95 13 2 64 250 21.6 GB
NM000167 Ma2020 nemar EEG Healthy Visual Motor 375 25 1 15 64\* 1000 22.4 GB
NM000168 Chavarriaga2015 Chavarriaga2010 nemar EEG Healthy Visual Attention 120 6 1 20 64 512 2.0 GB
NM000169 Riccio2014 BNCI2014008 nemar EEG Other Visual Attention 8 8 1 1 8 256 75.9 MB
NM000170 Pulferer2025 BNCI2025 nemar EEG Other Visual Motor 90 10 1 3 60 200 3.4 GB
NM000171 Steyrl2014 BNCI2014002 nemar EEG Healthy Visual Motor 112 14 1 1 15 512 554.3 MB
NM000172 Schirrmeister2017 nemar EEG Healthy Visual Motor 28 14 1 1 128 500 18.5 GB
NM000173 Ofner2017 nemar EEG Healthy Visual Motor 300 15 1 2 61 512 8.5 GB
NM000175 Luke2024 nemar fNIRS 5 5 1 0 56 7 47.5 MB
NM000176 Mainsah2025_BigP3BCI BigP3BCI_StudyK, BigP3BCI_K nemar EEG Healthy Visual Perception 128 5 1 2 16 256 168.3 MB
NM000179 Babayan2018 LEMON nemar EEG 215 215 1 0 62 2500\* 126.9 GB
NM000180 Brennan2019 nemar EEG 45 45 1 0 62 500 3.8 GB
NM000181 Khan2019 nemar EEG 2,417 2,417 1 0 21 200 13.8 GB
NM000185 Kemp2000 SleepEDF, SleepEDFExpanded nemar EEG 197 100 1 2 7\* 100 8.1 GB
NM000186 Mainsah2025_BigP3BCI_E BigP3BCI_StudyE, BigP3BCI_E nemar EEG Healthy Visual Attention 88 8 1 1 16 256 104.7 MB
NM000187 Mainsah2025_BigP3BCI_N BigP3BCI_StudyN nemar EEG Other Visual Attention 160 8 1 2 16 256 353.2 MB
NM000188 Arico2014 BNCI2014_009_P300 nemar EEG Healthy Visual Attention 30 10 1 3 16 256 70.9 MB
NM000189 Schreuder2015_P300 BNCI2015_P300, BNCI2015_003_P300, BNCI2015_003_AMUSE nemar EEG Healthy Auditory Attention 20 10 1 1 8 256 21.8 MB
NM000190 Hohne2015 BNCI2015 nemar EEG Healthy Auditory Attention 20 10 1 1 63 250 2.2 GB
NM000191 Mainsah2025_BigP3BCI_F BigP3BCI_StudyF, BigP3BCI_F nemar EEG Other Visual Attention 270 10 1 3 16 256 551.9 MB
NM000192 Treder2015_BNCI_006_Music BNCI2015_BNCI_006_Music, BNCI_2015_006_Music, BNCI2015_006_MusicBCI nemar EEG Healthy Auditory Attention 11 11 1 1 64 200 4.4 GB
NM000193 Kojima2024A_P300 nemar EEG Healthy Auditory Attention 66 11 1 1 64 1000 3.7 GB
NM000194 Acqualagna2015 BNCI2015 nemar EEG Healthy Visual Attention 24 12 1 1 63\* 200 2.1 GB
NM000195 Hubner2018 Huebner2018 nemar EEG Healthy Visual Attention 360 12 1 3 31 1000 4.8 GB
NM000196 Thielen2015 nemar EEG Healthy Visual Attention 36 12 1 1 64 2048 3.5 GB
NM000197 Mainsah2025_BigP3BCI_M BigP3BCI_StudyM, BigP3BCI_M nemar EEG Other Visual Attention 420 21 1 1 16 256 491.6 MB
NM000198 Treder2015_P300 BNCI2015_P300, BNCI2015_008_P300, BNCI2015_008_CenterSpeller nemar EEG Healthy Visual Attention 26 13 1 1 63 250 3.1 GB
NM000199 Hubner2017 Huebner2017 nemar EEG Healthy Visual Attention 342 13 1 3 31 1000 5.1 GB
NM000200 Mainsah2025_BigP3BCI_I BigP3BCI_StudyI, BigP3BCI_I nemar EEG Healthy Visual Attention 265 13 1 1 16 256 324.4 MB
NM000201 Lee2021_ERP nemar EEG Healthy Visual Attention 113 24 1 5 48\* 500\* 5.2 GB
NM000204 Lee2024_Bluetooth_speaker_14 nemar EEG Healthy Visual Attention 420 14 1 1 31 500 323.0 MB
NM000205 Zheng2020 nemar EEG Healthy Visual Attention 84 14 1 2 62 1000 5.3 GB
NM000206 Hinss2021_Neuroergonomic Hinss2021 nemar EEG Healthy Visual Attention 30 15 1 2 61 500 1.2 GB
NM000207 Kojima2024B_P300 nemar EEG Healthy Auditory Attention 180 15 1 1 64 1000 13.9 GB
NM000208 Lee2024_Door_lock_control nemar EEG Healthy Visual Attention 434 14 1 1 31 500 609.6 MB
NM000209 Forenzo2023 nemar EEG Healthy Visual Motor 150 25 1 2 64 1000 4.9 GB
NM000210 Simoes2020 BCIAUTP300, BCIAUT_P300, BCIAUT nemar EEG Development Visual Clinical Intervention 210 15 1 7 8 250 3.8 GB
NM000211 Zhang2025_RSVP Zhang2025 nemar EEG Healthy Visual Attention 240 15 1 4 57 1000 8.7 GB
NM000212 Schaeff2015 BNCI2015 nemar EEG Healthy Visual Attention 32 16 1 1 63 100 1.3 GB
NM000213 Lee2024_Television_control_30 nemar EEG Healthy Visual Attention 2,300 30 1 1 31 500 1.4 GB
NM000214 Thielen2021 nemar EEG Healthy Visual Perception 150 30 1 1 8 512 1.5 GB
NM000215 Korczowski2014_P300 BrainInvaders2014b, BI2014b, BrainInvadersBI2014b nemar EEG Healthy Visual Attention 38 38 1 1 32 512 401.8 MB
NM000216 Korczowski2015_P300 BrainInvaders2015a, BI2015a nemar EEG Healthy Visual Perception 129 43 1 3 32 512 1.9 GB
NM000217 Korczowski2015_P300_BI2015b BrainInvaders2015b, BI2015b nemar EEG Healthy Visual Attention 176 44 1 1 32 512 4.3 GB
NM000218 Mainsah2025_BigP3BCI_H BigP3BCI_StudyH, BigP3BCI_H nemar EEG Healthy Visual Attention 372 16 1 1 16 256 326.5 MB
NM000219 Reichert2020 BNCI2020, BNCI2020_002_AttentionShift, BNCI2020_002_CovertSpatialAttention nemar EEG Healthy Visual Attention 18 18 1 1 30 250 1023.6 MB
NM000221 Cattan2017 Alphawaves, Rodrigues2017, AlphaWaves nemar EEG Healthy Resting State Resting-state 19 19 1 1 16 512 81.7 MB
NM000222 Lee2024_Air_conditioner_control nemar EEG Healthy Visual Attention 305 10 1 1 25 500 415.3 MB
NM000223 Lee2024_Electric_light_control nemar EEG Healthy Visual Attention 465 15 1 1 31 500 632.4 MB
NM000225 Ghassemi2018 nemar EEG 1,983 1,983 1 2 13 200 401.1 GB
NM000226 Zhou2016_226 Zhou2016_NEMAR nemar EEG 24 4 1 3 14 100 528.3 MB
NM000227 GuttmannFlury2025_Eye GuttmannFlury2025_ME nemar EEG Healthy Visual Motor 63 31 1 3 66 1000 4.7 GB
NM000228 Nieuwland2018 nemar EEG 397 356 2 0 66\* 500\* 102.7 GB
NM000229 Gwilliams2023 MASC_MEG, MEG_MASC nemar EEG 1,360 29 79 2 208 1000
NM000230 Zuo2025 nemar EEG Other Visual Motor 118 30 1 5 30 500 5.8 GB
NM000231 Hoffmann2008 EPFLP300, EPFL_P300, EPFLP300Dataset nemar EEG Other Visual Attention 192 8 1 4 32 2048 1.9 GB
NM000232 Gifford2019 nemar EEG 638 10 5 4 63 1000 203.9 GB
NM000234 Schreuder2015_ERP BNCI2015_ERP nemar EEG Healthy Auditory Attention 42 21 1 1 60 250 4.6 GB
NM000235 GuttmannFlury2025_Eye_BCI GuttmannFlury2025_MIME nemar EEG Healthy Visual Motor 63 31 1 3 66 1000 4.6 GB
NM000236 Cattan2019_P300 nemar EEG Healthy Visual Attention 2,520 21 1 2 16 512 373.3 MB
NM000237 Zhou2021 nemar EEG Healthy Visual Motor 833 20 1 7 41\* 500 16.0 GB
NM000238 Accou2024 nemar EEG 4,088 87 366 11 64 8192
NM000239 MartinezCagigal2023 nemar EEG Healthy Visual Perception 640 16 1 5 16 256\* 783.0 MB
NM000240 FernandezRodriguez2025 FernandezRodriguez2023 nemar EEG Healthy Visual Perception 383 16 1 8 16 256 637.7 MB
NM000241 Zhang2019 nemar iEEG 18 2 9 0 158\* 200 1.9 GB
NM000242 Gao2026_Visual_imagery_et Gao2026 nemar EEG Healthy Visual Other 125 22 1 2 32 1000 31.7 GB
NM000243 Haufe2016 BNCI2016, BNCI2016002 nemar EEG Healthy Visual Motor 15 15 1 1 59 200 4.0 GB
NM000244 Korczowski2014_P300_BI2014a BrainInvaders2014a, BI2014a nemar EEG Healthy Visual Attention 64 64 1 1 16 512 1.0 GB
NM000245 Cho2017 nemar EEG Healthy Visual Motor 52 52 1 1 64 512 6.7 GB
NM000246 Yang2025_Multi WBCIC_SHU, WBCICSHU nemar EEG Healthy Visual Motor 153 51 1 3 59 1000 58.4 GB
NM000247 Mainsah2025_BigP3BCI_S1 BigP3BCI_StudyS1, BigP3BCI_S1 nemar EEG Healthy Visual Attention 120 10 1 1 32 256 477.9 MB
NM000248 Mainsah2025_BigP3BCI_L nemar EEG Other Visual Attention 330 11 1 1 16 256 780.5 MB
NM000249 Jao2022 Jao2020 nemar EEG Healthy Visual Attention 13 13 1 1 64 256 3.0 GB
NM000250 Dreyer2023 nemar EEG Healthy Visual Motor 520 87 1 1 27 512 8.8 GB
NM000251 He2025 nemar iEEG 6 1 3 0 110 1000 1.9 GB
NM000253 Wang2024_et_al_Brain BrainTreeBank nemar iEEG 26 10 1 0 164\* 2048 257.3 GB
NM000254 Telesford2024 nemar EEG 942 22 12 3 64 5000 256.0 GB
NM000255 Madsen2024_E2 nemar EEG 291 30 5 2 64 128 5.3 GB
NM000256 Madsen2024_E3 nemar EEG 332 29 6 2 64 128 7.5 GB
NM000259 Speier2017 nemar EEG Healthy Visual Attention 60 10 1 2 32 256 290.2 MB
NM000260 BrainInvaders2012 BI2012, BrainInvaders nemar EEG Healthy Visual Attention 46 23 1 1 17 128 164.8 MB
NM000264 BrainInvaders2013 BrainInvaders2013a, BI2013a nemar EEG Healthy Visual Attention 292 24 1 8 16 512 1.7 GB
NM000265 GuttmannFlury2025_MI nemar EEG Healthy Visual Motor 126 31 1 3 65 1000 9.2 GB
NM000266 Sosulski2019 nemar EEG Healthy Auditory Attention 1,060 13 1 841 37 1000 3.7 GB
NM000267 Shin2017_Shin2017A Shin2017A nemar EEG Healthy Visual Motor 174 29 1 6 32 200 1.9 GB
NM000268 Shin2017_Shin2017B Shin2017B nemar EEG Healthy Visual Memory 174 29 1 6 32 200 1.9 GB
NM000270 Liu2025 nemar EEG Motor 797 27 3 1 64 1000
NM000271 Chang2025_2 Chang2025 nemar EEG Visual Motor 1,245 28 3 6 59 1000
NM000272 Romani2025_BF_ERP Romani2025_erp nemar EEG Visual Attention 1,022 22 3 6 8 250
NM000277 Mainsah2025_G BigP3BCI_G, BigP3BCI_StudyG nemar EEG Healthy Visual Attention 320 20 1 1 16 256 333.2 MB
NM000301 Mainsah2025_D nemar EEG Healthy Visual Attention 307 17 1 1 32 256 738.1 MB
NM000303 Mainsah2025_O nemar EEG Other Visual Perception 347 18 1 2 32 256 992.2 MB
NM000310 GuttmannFlury2025_SSVEP nemar EEG Healthy Visual Perception 26 11 1 3 65 1000 2.1 GB
NM000311 Jeong2020 nemar EEG Healthy Visual Motor 213 25 1 3 71 1000 88.6 GB
NM000313 Mainsah2025_S2 nemar EEG Healthy Visual Perception 288 24 1 1 32 256 1.1 GB
NM000321 Mainsah2025_Q nemar EEG Other Visual Clinical Intervention 360 36 1 1 32 256 1.1 GB
NM000323 Lee2019_ERP OpenBMI_ERP, OpenBMI_P300 nemar EEG Healthy Visual Attention 216 54 1 2 66 1000 38.6 GB
NM000326 Mainsah2025_C nemar EEG Healthy Visual Attention 341 19 1 1 32 256 1.2 GB
NM000329 Brandl2020 nemar EEG Healthy Auditory Motor 112 16 1 1 63 1000 61.6 GB
NM000336 Mainsah2025_R nemar EEG Other Visual Attention 480 20 1 2 32 256 2.0 GB
NM000338 Lee2019_MI OpenBMI_MI nemar EEG Healthy Visual Motor 216 54 1 2 66 1000 60.8 GB
NM000339 Stieger2021 nemar EEG Healthy Visual Learning 598 62 1 11 60 1000 371.5 GB
NM000340 Mainsah2025_J nemar EEG Healthy Visual Attention 502 20 1 1 16 256 433.6 MB
NM000341 Cattan2019_PHMD nemar EEG Healthy Auditory Resting-state 12 12 1 1 16 512 231.3 MB
NM000342 Castillos2023_CastillosCVEP40 CastillosCVEP40 nemar EEG Healthy Visual Attention 12 12 1 1 32 500 145.3 MB
NM000343 Hinss2021 Hinss2021_v2 nemar EEG Healthy Visual Attention 30 15 1 2 61 500 1.2 GB
NM000344 Castillos2023_CastillosBurstVEP100 nemar EEG Healthy Visual Attention 12 12 1 1 32 500 150.0 MB
NM000345 Castillos2023_CastillosBurstVEP40 nemar EEG Healthy Visual Attention 12 12 1 1 32 500 144.2 MB
NM000346 Castillos2023_CastillosCVEP100 nemar EEG Healthy Visual Attention 12 12 1 1 32 500 150.6 MB
NM000347 HefmiIch2025 HEFMI_ICH, HEFMIICH nemar EEG Other Multisensory Motor 98 37 1 6 32 256 2.6 GB
NM000348 Yang2025 nemar EEG Healthy Visual Motor 153 51 1 3 64 1000 63.4 GB
NM000351 Mainsah2025_P nemar EEG Other Visual Attention 228 19 1 2 32 256 1.5 GB
Total 735 datasets 222,745 40,360 2033 3690 39.9 TB
Sortable catalogue of EEGDash datasets. Click any column header to sort, use the Filters chip to slice by recording, pathology, modality, type, or source, and use the Columns chip to reveal hidden metadata (Author, Canonical name, Source, Sessions). The Total row stays pinned at the bottom across filters.
A trailing \* in the Channels or Sampling rate column marks a median across multiple recordings; an em-dash means the metadata hasn't been extracted yet.
Pathology, modality, and dataset type appear as color-coded tags, so the table is quick to scan.
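The trailing-asterisk and em-dash conventions above can be handled mechanically when post-processing an exported copy of the catalogue. A minimal sketch, assuming the Channels or Sampling rate cells are available as strings (the helper name is illustrative, not part of the EEGDash API):

```python
def parse_catalogue_value(cell: str):
    """Parse a Channels / Sampling rate cell from the dataset catalogue.

    Returns (value, is_median): a trailing '*' marks a median across
    multiple recordings, and an em-dash means the metadata has not
    been extracted yet (returned as None).
    """
    cell = cell.strip()
    if cell in ("", "\u2014"):  # em-dash: not extracted yet
        return None, False
    if cell.endswith("*"):      # median across multiple recordings
        return int(cell[:-1].replace(",", "")), True
    return int(cell.replace(",", "")), False
```

For example, `parse_catalogue_value("128*")` yields the median channel count 128 with the median flag set, while a plain `"500"` is an exact value.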

Dataset treemap

EEGDash v0.6.0
Figure: Treemap of EEGDash datasets. The top level groups datasets by population type, the second level breaks them down by experimental modality, and the leaves list individual datasets. Tile area encodes the total number of subjects; hover to view aggregated hours (or record counts when hours are unavailable).
# Developer Notes

This guide is for project maintainers and contributors who need to work on the EEGDash package, manage the data ingestion pipeline, or administer supporting services.

## Package Overview

EEGDash (`eegdash`) is a single Python interface for EEG datasets spread across multiple public archives. The package breaks down as follows.

**Core Modules**

| Module | Purpose |
|---------------------------|-------------------------------------------------------------------------------------|
| `eegdash.api` | `EEGDash` client for querying metadata via REST API and coordinating downloads |
| `eegdash.dataset` | `EEGDashDataset`, `EEGChallengeDataset`, and dynamically registered dataset classes |
| `eegdash.schemas` | Schema definitions for `Dataset` and `Record` TypedDicts |
| `eegdash.http_api_client` | HTTP connection management for the EEGDash API gateway |
| `eegdash.downloader` | S3 and HTTPS download utilities with progress tracking |
| `eegdash.features` | Feature extraction utilities for EEG analysis |

**Configuration**

Configuration defaults live in `eegdash.const`. Key environment variables:

- `EEGDASH_API_URL` - Override the API endpoint (default: `https://data.eegdash.org`)
- `EEGDASH_ADMIN_TOKEN` - Admin token for write operations

## Local Development

**Setup**

```bash
# Clone and install in editable mode
git clone https://github.com/eegdash/EEGDash.git
cd EEGDash
pip install -e .[dev,digestion]

# Verify installation
python -c "from eegdash import EEGDash; print(EEGDash)"
```

**Code Quality**

```bash
pip install pre-commit
pre-commit install
pre-commit run -a
```

The pre-commit suite runs Ruff for linting/formatting and Codespell for spelling.

**Running Tests**

```bash
pytest tests/ -v
```

## Database Architecture

EEGDash uses MongoDB with a **two-level schema** optimized for different query patterns:

**1. Datasets Collection** (discovery & filtering)

One document per dataset containing metadata for browsing and filtering:

```json
{
  "dataset_id": "ds002718",
  "name": "A multi-subject EEG dataset",
  "source": "openneuro",
  "recording_modality": "eeg",
  "modalities": ["eeg"],
  "bids_version": "1.6.0",
  "license": "CC0",
  "tasks": ["RestingState", "GoNoGo"],
  "sessions": ["01", "02"],
  "demographics": {
    "subjects_count": 32,
    "age_mean": 28.5,
    "sex_distribution": {"m": 16, "f": 16}
  },
  "external_links": {
    "source_url": "https://openneuro.org/datasets/ds002718"
  },
  "timestamps": {
    "digested_at": "2024-01-15T10:30:00Z"
  }
}
```

**2. Records Collection** (fast file loading)

One document per EEG file with storage information for direct loading:

```json
{
  "dataset": "ds002718",
  "data_name": "ds002718_sub-012_task-RestingState_eeg.set",
  "bids_relpath": "sub-012/eeg/sub-012_task-RestingState_eeg.set",
  "datatype": "eeg",
  "suffix": "eeg",
  "extension": ".set",
  "entities": {
    "subject": "012",
    "task": "RestingState",
    "session": "01"
  },
  "entities_mne": {
    "subject": "012",
    "task": "RestingState",
    "session": "01"
  },
  "storage": {
    "backend": "s3",
    "base": "s3://openneuro.org/ds002718",
    "raw_key": "sub-012/eeg/sub-012_task-RestingState_eeg.set",
    "dep_keys": [
      "sub-012/eeg/sub-012_task-RestingState_events.tsv",
      "sub-012/eeg/sub-012_task-RestingState_eeg.fdt"
    ]
  },
  "digested_at": "2024-01-15T10:30:00Z"
}
```

**Note on `dep_keys`**: The digester automatically detects companion files required for loading:

- `.fdt` files for EEGLAB `.set` format
- `.vmrk` and `.eeg` files for BrainVision `.vhdr` format
- BIDS sidecar files (`_events.tsv`, `_channels.tsv`, `_electrodes.tsv`, `_coordsystem.json`)

## Data Ingestion Pipeline

The ingestion pipeline fetches BIDS datasets from 8 sources and transforms them into MongoDB documents. All scripts are in `scripts/ingestions/`.
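As a rough illustration of how a records-collection document is consumed, the sketch below resolves a record's `storage` block into the full list of object URIs to fetch, that is, the raw file plus its `dep_keys` companions. The helper is hypothetical; in EEGDash itself this logic lives in the download machinery (`eegdash.downloader`):

```python
def storage_uris(record: dict) -> list[str]:
    """Resolve a records-collection document into the object URIs
    to fetch: the raw file first, then its companion files."""
    storage = record["storage"]
    base = storage["base"].rstrip("/")
    keys = [storage["raw_key"], *storage.get("dep_keys", [])]
    return [f"{base}/{key}" for key in keys]


# Minimal record document, shaped like the schema above.
record = {
    "storage": {
        "backend": "s3",
        "base": "s3://openneuro.org/ds002718",
        "raw_key": "sub-012/eeg/sub-012_task-RestingState_eeg.set",
        "dep_keys": [
            "sub-012/eeg/sub-012_task-RestingState_events.tsv",
            "sub-012/eeg/sub-012_task-RestingState_eeg.fdt",
        ],
    }
}

uris = storage_uris(record)  # three URIs: .set, _events.tsv, .fdt
```

Fetching the companions alongside the raw file matters here because an EEGLAB `.set` header is unreadable without its `.fdt` payload.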
**Supported Sources**

| Source | Storage | Fetch Method | Clone Strategy |
|-------------|-----------|-------------------|---------------------------------------------|
| OpenNeuro | S3 | GraphQL API | Git shallow clone (`GIT_LFS_SKIP_SMUDGE=1`) |
| NEMAR | HTTPS | GitHub API | Git shallow clone |
| EEGManyLabs | HTTPS | GIN API | Git shallow clone |
| Figshare | HTTPS | REST API | API manifest (no clone) |
| Zenodo | HTTPS | REST API | API manifest (no clone) |
| OSF | HTTPS | REST API | Recursive folder traversal |
| ScienceDB | HTTPS | Query Service API | Metadata only (auth required for files) |
| data.ru.nl | HTTPS | REST API | WebDAV PROPFIND |

**Pipeline Scripts**

The pipeline runs as a sequence of numbered scripts, plus an optional validation pass:

```text
1_fetch_sources/   → consolidated/*.json (dataset listings)
        ↓
2_clone.py         → data/cloned/*/ (shallow clones / manifests)
        ↓
3_digest.py        → digestion_output/*/ (Dataset + Records JSON)
        ↓
validate_output.py → validation report (optional but recommended)
        ↓
4_inject.py        → MongoDB (datasets + records collections)
```

**Step 1: Fetch** - Retrieve dataset listings from each source:

```bash
# Fetch OpenNeuro datasets
python scripts/ingestions/1_fetch_sources/openneuro.py \
    --output consolidated/openneuro_datasets.json

# Available scripts: openneuro.py, nemar.py, eegmanylabs.py,
# figshare.py, zenodo.py, osf.py, scidb.py, datarn.py
```

**Step 2: Clone** - Shallow clone without downloading raw data:

```bash
# Clone all datasets from consolidated files
python scripts/ingestions/2_clone.py \
    --input consolidated \
    --output data/cloned \
    --workers 4

# Clone specific sources
python scripts/ingestions/2_clone.py \
    --input consolidated \
    --output data/cloned \
    --sources openneuro nemar
```

The clone script uses source-specific strategies:

- **Git sources**: Shallow clone with `GIT_LFS_SKIP_SMUDGE=1` (~300KB per dataset)
- **API sources**: REST API manifest fetching (no files downloaded)
- **WebDAV**: PROPFIND recursive directory listing

**Note on Git-Annex**: OpenNeuro and other git sources create **broken symlinks** (pointers to `.git/annex/objects/`) rather than actual files. The digester handles these correctly, using `Path.is_symlink()` to detect such files and extract metadata without requiring actual file content.

**Step 3: Digest** - Extract BIDS metadata and generate documents:

```bash
python scripts/ingestions/3_digest.py \
    --input data/cloned \
    --output digestion_output \
    --workers 4
```

Output structure:

```text
digestion_output/
├── ds001785/
│   ├── ds001785_dataset.json   # Dataset document
│   ├── ds001785_records.json   # Records array
│   └── ds001785_summary.json   # Processing stats
├── ds002718/
│   └── ...
└── BATCH_SUMMARY.json
```

**Step 4: Validate** (optional but recommended):

```bash
python scripts/ingestions/validate_output.py
```

Checks for missing mandatory fields, invalid storage URLs, empty datasets, and ZIP placeholders.

**Step 5: Inject** - Upload to MongoDB:

```bash
# Dry run (validate without uploading)
python scripts/ingestions/4_inject.py \
    --input digestion_output \
    --database eegdash_staging \
    --dry-run

# Actual injection
python scripts/ingestions/4_inject.py \
    --input digestion_output \
    --database eegdash

# Inject only datasets or records
python scripts/ingestions/4_inject.py \
    --input digestion_output \
    --database eegdash \
    --only-datasets
```

## CI/CD Workflows

Automated GitHub Actions workflows handle the full pipeline.

**Fetch Workflows** (`1-fetch-*.yml`)

Run weekly on Monday to update dataset listings:

- `1-fetch-openneuro.yml`, `1-fetch-nemar.yml`, etc.
- `1-fetch-all.yml` - Orchestrates all sources

**Digest Workflows** (`2-digest-*.yml`)

Triggered automatically after fetch completes:

- `2-digest-openneuro.yml`, `2-digest-nemar.yml`, etc.
- Uses `2-clone-digest.yml` reusable workflow **Inject Workflow** (`3-inject-all.yml`) Runs weekly on Tuesday to upload digested data: - Injects to `eegdash_staging` by default (dry run) - Manual trigger to inject to production `eegdash` **Full Pipeline** (`full-pipeline.yml`) Manual workflow for end-to-end processing: ```yaml # Trigger via GitHub Actions UI with options: # - sources: all / openneuro / nemar / ... # - database: eegdashstaging / eegdash # - dry_run: true / false # - max_datasets: 0 (all) or limit ``` Data is stored in the `eegdash-dataset-listings` repository: ```text eegdash-dataset-listings/ ├── consolidated/ # Fetched dataset listings │ ├── openneuro_datasets.json │ ├── nemar_datasets.json │ └── ... ├── cloned/ # Shallow clones / manifests │ ├── ds001785/ │ └── ... └── digested/ # MongoDB-ready documents ├── ds001785/ └── ... ``` ## API Server The API server (`mongodb-eegdash-server/`) is a FastAPI application: **Environment Configuration** Create `.env` in `mongodb-eegdash-server/api/`: ```bash MONGO_URI=mongodb://user:password@host:27017 MONGO_DB=eegdash ADMIN_TOKEN=your-secure-token # Optional REDIS_URL=redis://localhost:6379/0 ENABLE_METRICS=true ``` **API Endpoints** ```text GET / - API info GET /health - Health check GET /metrics - Prometheus metrics GET /api/{db}/records - Query records GET /api/{db}/count - Count records GET /api/{db}/datasets - List dataset IDs GET /api/{db}/metadata/{dataset_id} - Get dataset metadata POST /admin/{db}/records - Insert records (auth required) POST /admin/{db}/records/bulk - Bulk insert (auth required) POST /admin/{db}/datasets - Insert datasets (auth required) ``` **Rate Limiting**: 100 requests/minute per IP on public endpoints. ## Release Process 1. Update version in `pyproject.toml` 2. Update `CHANGELOG.md` 3. Build and upload: ```bash python -m build python -m twine upload dist/* ``` 4. 
Create GitHub release with tag `v{version}`

## Documentation

Build documentation locally:

```bash
cd docs
pip install -r requirements.txt
make html-noplot  # Fast build (no examples)
make html         # Full build with examples
```

Documentation is auto-deployed to [https://eegdash.org](https://eegdash.org) via GitHub Pages.

# Unused API Entries

No unused API entries found (not computed for this build).

# Computation times

**12:59.512** total execution time for 20 files **from all galleries**:

| Example | Time | Mem (MB) |
|---------|------|----------|
| [Challenge 2: Predicting the p-factor from EEG](generated/auto_examples/eeg2025/tutorial_challenge_2.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-challenge-2-py) (`../../examples/eeg2025/tutorial_challenge_2.py`) | 06:19.254 | 0 |
| [Challenge 1: Cross-Task Transfer Learning!](generated/auto_examples/eeg2025/tutorial_challenge_1.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-challenge-1-py) (`../../examples/eeg2025/tutorial_challenge_1.py`) | 04:00.429 | 0 |
| [Minimal Tutorial](generated/auto_examples/core/tutorial_minimal.md#sphx-glr-generated-auto-examples-core-tutorial-minimal-py) (`../../examples/core/tutorial_minimal.py`) | 01:48.298 | 0 |
| [Eyes Open vs. Closed Classification](generated/auto_examples/hpc/tutorial_eoec.md#sphx-glr-generated-auto-examples-hpc-tutorial-eoec-py) (`../../examples/hpc/tutorial_eoec.py`) | 00:31.978 | 0 |
| [Working Offline with EEGDash](generated/auto_examples/eeg2025/tutorial_eegdash_offline.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-eegdash-offline-py) (`../../examples/eeg2025/tutorial_eegdash_offline.py`) | 00:09.670 | 0 |
| [Eyes Open vs. Closed Classification](generated/auto_examples/core/tutorial_eoec.md#sphx-glr-generated-auto-examples-core-tutorial-eoec-py) (`../../examples/core/tutorial_eoec.py`) | 00:06.323 | 0 |
| [EEGDash Feature Extractor](generated/auto_examples/core/tutorial_feature_extractor_open_close_eye.md#sphx-glr-generated-auto-examples-core-tutorial-feature-extractor-open-close-eye-py) (`../../examples/core/tutorial_feature_extractor_open_close_eye.py`) | 00:02.698 | 0 |
| [EEGDash API Tutorial](generated/auto_examples/tutorials/tutorial_api.md#sphx-glr-generated-auto-examples-tutorials-tutorial-api-py) (`../../examples/tutorials/tutorial_api.py`) | 00:00.543 | 0 |
| [Transfer Learning with EEGDash](generated/auto_examples/tutorials/tutorial_transfer_learning.md#sphx-glr-generated-auto-examples-tutorials-tutorial-transfer-learning-py) (`../../examples/tutorials/tutorial_transfer_learning.py`) | 00:00.318 | 0 |
| [Clinical Dataset Summary](generated/auto_examples/tutorials/plot_clinical_summary.md#sphx-glr-generated-auto-examples-tutorials-plot-clinical-summary-py) (`../../examples/tutorials/plot_clinical_summary.py`) | 00:00.002 | 0 |
| [EEG P3 Transfer Learning with AS-MMD](generated/auto_examples/core/p300_transfer_learning.md#sphx-glr-generated-auto-examples-core-p300-transfer-learning-py) (`../../examples/core/p300_transfer_learning.py`) | 00:00.000 | 0 |
| [Exploring Braindecode’s BIDSDataset](generated/auto_examples/dev_scripts/debug_pybids_braindecode.md#sphx-glr-generated-auto-examples-dev-scripts-debug-pybids-braindecode-py) (`../../examples/dev_scripts/debug_pybids_braindecode.py`) | 00:00.000 | 0 |
| [Age Prediction from EEG](generated/auto_examples/tutorials/noplot_tutorial_age_prediction.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-age-prediction-py) (`../../examples/tutorials/noplot_tutorial_age_prediction.py`) | 00:00.000 | 0 |
| [Oddball Classification](generated/auto_examples/tutorials/noplot_tutorial_audi_oddball.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-audi-oddball-py) (`../../examples/tutorials/noplot_tutorial_audi_oddball.py`) | 00:00.000 | 0 |
| [EEG Features for Sex Classification](generated/auto_examples/tutorials/noplot_tutorial_feature_extraction.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-feature-extraction-py) (`../../examples/tutorials/noplot_tutorial_feature_extraction.py`) | 00:00.000 | 0 |
| [Eyes Open vs. Closed Features](generated/auto_examples/tutorials/noplot_tutorial_features_eoec.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-features-eoec-py) (`../../examples/tutorials/noplot_tutorial_features_eoec.py`) | 00:00.000 | 0 |
| [P3 Visual Oddball Classification](generated/auto_examples/tutorials/noplot_tutorial_p3_oddball.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-p3-oddball-py) (`../../examples/tutorials/noplot_tutorial_p3_oddball.py`) | 00:00.000 | 0 |
| [Predicting p-factor from EEG](generated/auto_examples/tutorials/noplot_tutorial_pfactor_features.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-pfactor-features-py) (`../../examples/tutorials/noplot_tutorial_pfactor_features.py`) | 00:00.000 | 0 |
| [P-Factor Regression Tutorial](generated/auto_examples/tutorials/noplot_tutorial_pfactor_regression.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-pfactor-regression-py) (`../../examples/tutorials/noplot_tutorial_pfactor_regression.py`) | 00:00.000 | 0 |
| [Sex Classification Tutorial](generated/auto_examples/tutorials/noplot_tutorial_sex_classification_cnn.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-sex-classification-cnn-py) (`../../examples/tutorials/noplot_tutorial_sex_classification_cnn.py`) | 00:00.000 | 0 |

# API Reference

The EEGDash API reference collects everything you need to integrate, extend, and automate EEGDash, from core dataset helpers to feature
extraction and rich dataset metadata. The focus is interoperability, extensibility, and ease of use.

What's inside EEGDash

Everything you need to discover, prepare, and benchmark EEG and MEG data.

Search metadata, modalities, tasks, and cohorts with unified filters. One-command pipelines with EEGPrep, MNE, and BIDS alignment. Export model-ready features and compare baselines across datasets. ![BIDS](_static/bids_logo_black.svg) Keep metadata consistent and portable across teams and tools. The API is organized into three main components: **Core API** Build, query, and manage EEGDash datasets and utilities. [→ Explore Core API](api_core.md) **Feature engineering** Extract statistical, spectral, and machine-learning-ready features. [→ Explore Feature Engineering](api_features.md) **Dataset catalog** Browse dynamically generated dataset classes with rich metadata. [→ Explore the Dataset API](dataset/api_dataset.md) ## REST API Endpoints The EEGDash metadata server exposes a FastAPI REST interface for discovery and querying. Base URL: [https://data.eegdash.org](https://data.eegdash.org). Below is a concise map of the main entrypoints and their purpose. ### Meta Endpoints - `GET /` Returns API name, version, and available databases. - `GET /health` Returns API health and MongoDB connection status. - `GET /metrics` Prometheus metrics (if enabled). ### Public Data Endpoints - `GET /api/{database}/records` Query records (files) with filter and pagination. - `GET /api/{database}/count` Count records matching a filter. - `GET /api/{database}/datasets/names` List unique dataset names from records. - `GET /api/{database}/metadata/{dataset}` Get metadata for a single dataset (from records). - `GET /api/{database}/datasets/summary` Get summary statistics and metadata for all datasets (with pagination, filtering). Query params: `limit` (1-1000), `skip`, `modality` (eeg/meg/ieeg), `source` (openneuro/nemar/zenodo/etc.). Response includes aggregate totals for datasets, subjects, files, and size. - `GET /api/{database}/datasets/summary/{dataset_id}` Get detailed summary for a specific dataset. `dataset_id` may be the dataset ID or dataset name. 
- `GET /api/{database}/datasets/{dataset_id}` Get a specific dataset document by ID. - `GET /api/{database}/datasets` List dataset documents (with filtering and pagination). - `GET /api/{database}/datasets/stats/records` Get aggregated `nchans` and `sampling_frequency` counts for all datasets. Used to generate summary tables efficiently. ### Admin Endpoints (require Bearer token) - `POST /admin/{database}/records` Insert a single record (file document). - `POST /admin/{database}/records/bulk` Insert multiple records (max 1000 per request). - `POST /admin/{database}/datasets` Insert or update a single dataset document (upsert by `dataset_id`). - `POST /admin/{database}/datasets/bulk` Insert or update multiple dataset documents (max 500 per request). - `PATCH /admin/{database}/records` Update records matching a filter (only `$set` allowed). - `GET /admin/security/blocked` List blocked IPs and offense counts. - `POST /admin/security/unblock` Unblock a specific IP. ## Related Guides - [Tutorial gallery](../generated/auto_examples/index.md) - [Dataset summary](../dataset_summary.md) - [Installation guide](../install/install.md) # Core API EEGDash provides a comprehensive interface for accessing and processing EEG data through a three-tier architecture that combines metadata management, cloud storage, and standardized data organization. 
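The public read endpoints above can be exercised with any HTTP client. A minimal sketch, assuming the `requests` package and using only the documented `limit` and `modality` query parameters (the helper name is illustrative):

```python
from urllib.parse import urlencode

BASE_URL = "https://data.eegdash.org"  # documented public base URL

def summary_url(database: str = "eegdash", limit: int = 50, modality: str = "eeg") -> str:
    """Build the URL for GET /api/{database}/datasets/summary."""
    params = urlencode({"limit": limit, "modality": modality})
    return f"{BASE_URL}/api/{database}/datasets/summary?{params}"

# With `requests` installed, the call itself would look like:
#   import requests
#   summary = requests.get(summary_url(), timeout=30).json()
```

The response includes aggregate totals for datasets, subjects, files, and size, as noted above.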
## Architecture Overview

The EEGDash core API is built around a REST API gateway that provides secure, scalable access to the underlying data infrastructure:

```text
        +-----------------+
        |    REST API     |
        | (FastAPI+Redis) |
        +-----------------+
                |
                v
+-----------------+      +-----------------+
|     MongoDB     |      |      Redis      |
|   (Metadata)    |      |  (Rate Limit)   |
+-----------------+      +-----------------+
        |
        v
+-----------------------+      +-----------------+
|        eegdash        |<---->|  S3 Filesystem  |
|       Interface       |      |   (Raw Data)    |
+-----------------------+      +-----------------+
        |
        v
+-----------------------+
|      BIDS Parser      |
+-----------------------+
```

**REST API Gateway** : A FastAPI-based REST API (`https://data.eegdash.org`) provides secure access to the metadata database. Features include:
- Rate limiting (100 requests/minute for public endpoints)
- Redis-backed distributed rate limiting for scalability
- Prometheus metrics for monitoring (`/metrics`)
- Request tracing with `X-Request-ID` headers
- Health checks (`/health`) for service monitoring

**MongoDB Metadata Layer** : Centralized NoSQL database storing EEG dataset metadata, including subject information, session details, task parameters, and experimental conditions. Enables fast querying and filtering of large-scale datasets.

**File Cloud Storage** : Scalable object storage for raw EEG data files. Provides reliable access to large datasets with on-demand downloading, reducing local storage requirements. At the moment, AWS S3 is the only supported storage backend.

**BIDS Standardization** : A Brain Imaging Data Structure (BIDS) parser that ensures consistent data organization and interpretation across datasets and experiments. It digests BIDS datasets and extracts the relevant metadata for the MongoDB database.
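Since public endpoints are rate limited (100 requests/minute), bulk clients should back off on HTTP 429 responses. A client-side sketch of an exponential backoff schedule (this helper is not part of eegdash):

```python
def backoff_delays(n_retries: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule (in seconds) for retrying HTTP 429 responses."""
    return [min(cap, base * 2.0 ** i) for i in range(n_retries)]

# A retry loop would sleep for each delay in turn, preferring any
# Retry-After header the server sends over the computed delay.
```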
## API Endpoints The REST API provides the following endpoints: **Public Endpoints (Rate Limited)** | Method | Endpoint | Description | |----------|--------------------------------------|-----------------------------------------| | GET | `/` | API information and available endpoints | | GET | `/health` | Health check with service status | | GET | `/metrics` | Prometheus-compatible metrics | | GET | `/api/{database}/records` | Query records with filters | | GET | `/api/{database}/count` | Count documents matching filter | | GET | `/api/{database}/datasets` | List all unique dataset names | | GET | `/api/{database}/metadata/{dataset}` | Get metadata for specific dataset | **Admin Endpoints (Token Required)** | Method | Endpoint | Description | |----------|-----------------------------------|-------------------------------------| | POST | `/admin/{database}/records` | Insert single record | | POST | `/admin/{database}/records/bulk` | Insert multiple records (max 1000) | | POST | `/admin/{database}/datasets` | Insert dataset metadata | | POST | `/admin/{database}/datasets/bulk` | Insert multiple datasets (max 1000) | ## Database Schema EEGDash uses a two-level MongoDB schema optimized for different query patterns: **Datasets Collection** (for discovery/filtering) One document per dataset containing metadata for browsing: ```python from eegdash.schemas import Dataset, create_dataset dataset = create_dataset( dataset_id="ds002718", name="A multi-subject EEG dataset", source="openneuro", recording_modality="eeg", tasks=["RestingState"], subjects_count=32, ) ``` **Records Collection** (for fast file loading) One document per EEG file with storage location: ```python from eegdash.schemas import Record, create_record record = create_record( dataset="ds002718", storage_base="s3://openneuro.org/ds002718", bids_relpath="sub-012/eeg/sub-012_task-RestingState_eeg.set", subject="012", task="RestingState", ) ``` ## Core Modules The API is organized into focused modules that handle 
specific aspects of EEG data processing:

* `api` - Main `EEGDash` client for data access and querying
* `schemas` - Schema definitions (`Dataset`, `Record` TypedDicts)
* `http_api_client` - HTTP REST API client for database operations
* `downloader` - S3 and HTTPS download utilities
* `bids_metadata` - BIDS-compliant metadata handling
* `const` - Constants and configuration defaults
* `paths` - File system and storage path management
* [Datasets API](dataset/api_dataset.md) - Dataset classes and registry

## Configuration

The API URL can be configured via environment variables:

```bash
# Override the default API URL
export EEGDASH_API_URL="https://data.eegdash.org"
# For admin write operations
export EEGDASH_API_TOKEN="your-admin-token"
```

## API Reference

| Module | Description |
|--------|-------------|
| `api` | High-level interface to the EEGDash metadata database. |
| `schemas` | EEGDash Data Schemas |
| `http_api_client` | HTTP API client for EEGDash REST API. |
| `downloader` | File downloading utilities for EEG data from cloud storage. |
| `bids_metadata` | BIDS metadata processing and query building utilities. |
| `const` | Configuration constants and mappings for EEGDash. |
| `logging` | Logging configuration for EEGDash. |
| `paths` | Path utilities and cache directory management. |
| `hbn` | Healthy Brain Network (HBN) specific utilities and preprocessing. |

# Feature API

* [Feature Package Overview](features_overview.md)

| Module | Description |
|--------|-------------|
| `eegdash.features.base_utils` | Basic Feature Extraction Utilities |
| `eegdash.features.datasets` | Datasets for Feature Management. |
| `eegdash.features.decorators` | Feature Metadata Decorators. |
| `eegdash.features.extractors` | Core Feature Extraction Orchestration. |
| `eegdash.features.inspect` | Feature Bank Inspection and Discovery. |
| `eegdash.features.kinds` | Feature Channel-processing Kinds. |
| `eegdash.features.serialization` | Serialization Utilities for Feature Datasets. |
| `eegdash.features.output_types` | Core Output Types. |
| `eegdash.features.trainable` | Core Trainable Feature Interface. |
| `eegdash.features.utils` | Feature Extraction Utilities. |
| `eegdash.features.feature_bank.complexity` | Complexity Feature Extraction |
| `eegdash.features.feature_bank.connectivity` | Connectivity Feature Extraction |
| `eegdash.features.feature_bank.csp` | Common Spatial Pattern (CSP) feature extraction for signal classification. |
| `eegdash.features.feature_bank.dimensionality` | Dimensionality Features Extraction |
| `eegdash.features.feature_bank.pick` | Channel-picking feature preprocessors |
| `eegdash.features.feature_bank.signal` | Signal-Level Feature Extraction |
| `eegdash.features.feature_bank.spectral` | Spectral Feature Extraction |
| `eegdash.features.feature_bank.utils` | Feature Extraction Utilities |

# Datasets API

The `eegdash.dataset` package exposes dataset classes that are registered dynamically at import time. See [eegdash.dataset package](eegdash.dataset.md) for the module-level API, including [`EEGChallengeDataset`](eegdash.dataset.EEGChallengeDataset.md#eegdash.dataset.EEGChallengeDataset) and helper utilities.

## What’s in the registry

EEGDash exposes **700+ OpenNeuro EEG datasets**, registered dynamically from MongoDB. The table below summarises the breakdown by experimental type (1106 datasets in this build).

## Base Dataset API

* [EEGDashDataset](eegdash.EEGDashDataset.md)
* [EEGChallengeDataset](eegdash.dataset.EEGChallengeDataset.md)

#### Dataset counts by experimental type

| Experimental Type | Datasets |
|-----------------------|----------|
| nan | 207 |
| Attention | 116 |
| Perception | 97 |
| Clinical/Intervention | 82 |
| Motor | 70 |
| Memory | 49 |
| Other | 36 |
| Learning | 16 |
| Unknown | 15 |
| Affect | 14 |
| Resting-state | 14 |
| Sleep | 12 |
| Decision-making | 8 |

## All Datasets

## Individual Datasets

* [Nemar Datasets](source_nemar.md)
* [Openneuro Datasets](source_openneuro.md)
* [Other Datasets](source_other.md)

# EEGDashDataset

### *class* eegdash.EEGDashDataset(cache_dir: str | Path, query: dict[str, Any] = None, description_fields: list[str] | None = None, s3_bucket: str | None = None, records: list[dict] | None = None, download: bool = True, n_jobs: int = -1, eeg_dash_instance: Any = None, database: str | None = None, auth_token: str | None = None, on_error: str = 'raise', \*\*kwargs)

Bases:
`BaseConcatDataset`

Create a new EEGDashDataset from a given query or a local BIDS dataset directory and dataset name. An EEGDashDataset is a pooled collection of EEGDashBaseDataset instances (individual recordings) and is a subclass of braindecode’s BaseConcatDataset.

### Examples

Basic usage with dataset and subject filtering:

```pycon
>>> from eegdash import EEGDashDataset
>>> dataset = EEGDashDataset(
...     cache_dir="./data",
...     dataset="ds002718",
...     subject="012"
... )
>>> print(f"Number of recordings: {len(dataset)}")
```

Filter by multiple subjects and a specific task:

```pycon
>>> subjects = ["012", "013", "014"]
>>> dataset = EEGDashDataset(
...     cache_dir="./data",
...     dataset="ds002718",
...     subject=subjects,
...     task="RestingState"
... )
```

Load and inspect EEG data from recordings:

```pycon
>>> if len(dataset) > 0:
...     recording = dataset[0]
...     raw = recording.load()
...     print(f"Sampling rate: {raw.info['sfreq']} Hz")
...     print(f"Number of channels: {len(raw.ch_names)}")
...     print(f"Duration: {raw.times[-1]:.1f} seconds")
```

Advanced filtering with raw MongoDB queries:

```pycon
>>> from eegdash import EEGDashDataset
>>> query = {
...     "dataset": "ds002718",
...     "subject": {"$in": ["012", "013"]},
...     "task": "RestingState"
... }
>>> dataset = EEGDashDataset(cache_dir="./data", query=query)
```

Working with dataset collections and braindecode integration:

```pycon
>>> # EEGDashDataset is a braindecode BaseConcatDataset
>>> for i, recording in enumerate(dataset):
...     if i >= 2:  # limit output
...         break
...     print(f"Recording {i}: {recording.description}")
...     raw = recording.load()
...     print(f"  Channels: {len(raw.ch_names)}, Duration: {raw.times[-1]:.1f}s")
```

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Raw MongoDB query to filter records. If provided, it is merged with keyword filtering arguments (see `**kwargs`) using logical AND.
You must provide at least a `dataset` (either in `query` or as a keyword argument). Only fields in `ALLOWED_QUERY_FIELDS` are considered for filtering.
  * **dataset** (*str*) – Dataset identifier (e.g., `"ds002718"`). Required if `query` does not already specify a dataset.
  * **task** (*str* *|* *list* *[**str* *]*) – Task name(s) to filter by (e.g., `"RestingState"`).
  * **subject** (*str* *|* *list* *[**str* *]*) – Subject identifier(s) to filter by (e.g., `"NDARCA153NKE"`).
  * **session** (*str* *|* *list* *[**str* *]*) – Session identifier(s) to filter by (e.g., `"1"`).
  * **run** (*str* *|* *list* *[**str* *]*) – Run identifier(s) to filter by (e.g., `"1"`).
  * **description_fields** (*list* *[**str* *]*) – Fields to extract from each record and include in dataset descriptions (e.g., “subject”, “session”, “run”, “task”).
  * **s3_bucket** (*str* *|* *None*) – Optional S3 bucket URI (e.g., “s3://mybucket”) to use instead of the default OpenNeuro bucket when downloading data files.
  * **records** (*list* *[**dict* *]* *|* *None*) – Pre-fetched metadata records. If provided, the dataset is constructed directly from these records and no MongoDB query is performed.
  * **download** (*bool* *,* *default True*) – If False, load from local BIDS files only. Local data are expected under `cache_dir / dataset`; no DB or S3 access is attempted.
  * **n_jobs** (*int*) – Number of parallel jobs to use where applicable (-1 uses all cores).
  * **eeg_dash_instance** (*EEGDash* *|* *None*) – Optional existing EEGDash client to reuse for DB queries. If None, a new client is created on demand; no client is used when `download=False`.
  * **database** (*str* *|* *None*) – Database name to use (e.g., “eegdash”, “eegdash_staging”). If None, uses the default database.
  * **auth_token** (*str* *|* *None*) – Authentication token for accessing protected databases. Required for staging or admin operations.
* **on_error** (*str* *,* *default "raise"*) – How to handle `DataIntegrityError` when accessing `.raw` on individual recordings: - `"raise"` (default): propagate the exception. - `"warn"`: log the error as a warning and set `.raw` to `None`. - `"skip"`: silently set `.raw` to `None`. Use [`drop_bad()`](#eegdash.EEGDashDataset.drop_bad) after iteration to remove skipped recordings. * **\*\*kwargs** (*dict*) – Additional keyword arguments serving two purposes: - Filtering: any keys present in `ALLOWED_QUERY_FIELDS` are treated as query filters (e.g., `dataset`, `subject`, `task`, …). - Dataset options: remaining keys are forwarded to `EEGDashRaw`. #### drop_bad() → list[dict] Remove skipped datasets and return their records. Call after accessing `.raw` on all datasets (e.g. after iteration or preprocessing) to clean up the dataset list. * **Returns:** Records that were removed because loading failed. * **Return type:** list of dict #### drop_short(min_samples: int) → list[dict] Remove recordings shorter than *min_samples* and return their records. This is useful when downstream processing (e.g., fixed-length windowing) requires a minimum number of samples per recording. Recordings whose `.raw` is `None` (failed to load) are also dropped. * **Parameters:** **min_samples** (*int*) – Minimum number of time-domain samples a recording must have to be kept. * **Returns:** Records that were removed. * **Return type:** list of dict #### *property* cumulative_sizes *: list[int]* Recompute cumulative sizes from current dataset lengths. Overrides the cached version from BaseConcatDataset because individual dataset lengths can change after lazy raw loading (estimated ntimes from JSON metadata may differ from actual n_times in the raw file). #### download_all(n_jobs: int | None = None) → None Download missing remote files in parallel. * **Parameters:** **n_jobs** (*int* *|* *None*) – Number of parallel workers to use. If None, defaults to `self.n_jobs`. 
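As a worked example for `drop_short`, the minimum sample count is usually derived from the window length and sampling rate used downstream (a small arithmetic sketch; the helper name is hypothetical, not part of eegdash):

```python
import math

def min_samples_for_windows(window_seconds: float, sfreq: float, n_windows: int = 1) -> int:
    """Samples required to cut `n_windows` non-overlapping windows of `window_seconds` each."""
    return math.ceil(window_seconds * sfreq) * n_windows

# A single 2 s window at 500 Hz needs at least 1000 samples, so one could call:
#   dataset.drop_short(min_samples_for_windows(2.0, 500.0))
```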
#### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None #### *property* cummulative_sizes #### *static* cumsum(sequence) #### *property* description *: DataFrame* #### get_metadata() → DataFrame Concatenate the metadata and description of the wrapped Epochs. * **Returns:** **metadata** – DataFrame containing as many rows as there are windows in the BaseConcatDataset, with the metadata and description information for each window. * **Return type:** pd.DataFrame #### *classmethod* pull_from_hub(repo_id: str, preload: bool = True, token: str | None = None, cache_dir: str | Path | None = None, force_download: bool = False, \*\*kwargs) Load a dataset from the Hugging Face Hub. * **Parameters:** * **repo_id** (*str*) – Repository ID on the Hugging Face Hub (e.g., “username/dataset-name”). * **preload** (*bool* *,* *default=True*) – Whether to preload the data into memory. If False, uses lazy loading (when supported by the format). * **token** (*str* *|* *None*) – Hugging Face API token. If None, uses cached token. * **cache_dir** (*str* *|* *Path* *|* *None*) – Directory to cache the downloaded dataset. If None, uses default cache directory (~/.cache/huggingface/datasets). * **force_download** (*bool* *,* *default=False*) – Whether to force re-download even if cached. * **\*\*kwargs** – Additional arguments (currently unused). * **Returns:** The loaded dataset. * **Return type:** BaseConcatDataset * **Raises:** * **ImportError** – If huggingface-hub is not installed. * **FileNotFoundError** – If the repository or dataset files are not found. 
### Examples ```pycon >>> from braindecode.datasets import BaseConcatDataset >>> dataset = BaseConcatDataset.pull_from_hub("username/nmt-dataset") >>> print(f"Loaded {len(dataset)} windows") >>> >>> # Use with PyTorch >>> from torch.utils.data import DataLoader >>> loader = DataLoader(dataset, batch_size=32, shuffle=True) ``` #### push_to_hub(repo_id: str, private: bool = False, token: str | None = None, compression: str = 'blosc', compression_level: int = 5, pipeline_name: str = 'braindecode', chunk_size: int = 5000000, local_cache_dir: str | Path | None = None, \*\*kwargs) → str Upload the dataset to the Hugging Face Hub in BIDS-like Zarr format. The dataset is converted to Zarr format with blosc compression, which provides optimal random access performance for PyTorch training. The data is stored in a BIDS sourcedata-like structure with events.tsv, channels.tsv, and participants.tsv sidecar files. * **Parameters:** * **repo_id** (*str*) – Repository ID on the Hugging Face Hub (e.g., “username/dataset-name”). * **private** (*bool* *,* *default=False*) – Whether to create a private repository. * **token** (*str* *|* *None*) – Hugging Face API token. If None, uses cached token. * **compression** (*str* *,* *default="blosc"*) – Compression algorithm for Zarr. Options: “blosc”, “zstd”, “gzip”, None. * **compression_level** (*int* *,* *default=5*) – Compression level (0-9). Level 5 provides optimal balance. * **pipeline_name** (*str* *,* *default="braindecode"*) – Name of the processing pipeline for BIDS sourcedata. * **chunk_size** (*int* *,* *default=5_000_000*) – Number of samples per chunk in Zarr along the time/window dimension. Larger chunk sizes create fewer but larger chunks/files. This parameter is used for both continuous data (e.g., RawDataset, EEGWindowsDataset) and pre-cut windows (WindowsDataset). For WindowsDataset, multiple windows may be stored in a single chunk depending on their duration and the chosen `chunk_size`. 
* **local_cache_dir** (*str* *|* *Path* *|* *None*) – Local directory to use for temporary files during upload. If None, uses the system temp directory and cleans it up after upload. If provided, the directory is used as a persistent cache: - If the directory is empty (or does not exist), the cache is built there and a lock file (`format_info.json`) is written once the cache is complete, before the upload starts. The file contains the zarr conversion parameters as JSON. - If the lock file is present and its JSON parameters match the current call, cache creation is skipped and the upload resumes directly (useful for retrying interrupted uploads). - If the lock file is present but its JSON parameters differ from the current call, a `ValueError` is raised. - If the directory is non-empty but the lock file is absent, a `ValueError` is raised listing the files found. * **\*\*kwargs** – Additional arguments passed to huggingface_hub.upload_large_folder(). * **Returns:** URL of the uploaded dataset on the Hub. * **Return type:** str * **Raises:** * **ImportError** – If huggingface-hub is not installed. * **ValueError** – If the dataset is empty or format is invalid. ### Examples ```pycon >>> dataset = NMT(path=path, preload=True) >>> # Upload with BIDS-like structure >>> url = dataset.push_to_hub( ... repo_id="myusername/nmt-dataset", ... ) ``` #### set_description(description: dict | DataFrame, overwrite: bool = False) Update (add or overwrite) the dataset description. * **Parameters:** * **description** (*dict* *|* *pd.DataFrame*) – Description in the form key: value where the length of the value has to match the number of datasets. * **overwrite** (*bool*) – Has to be True if a key in description already exists in the dataset description. 
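To get a feel for the `chunk_size` parameter of `push_to_hub`, here is a back-of-envelope estimate of how many fixed-length windows share one Zarr chunk (an illustrative sketch, not the library’s exact packing logic):

```python
def windows_per_chunk(chunk_size: int, window_samples: int) -> int:
    """Approximate number of fixed-length windows stored per Zarr chunk along time."""
    return max(1, chunk_size // window_samples)

# With the default chunk_size of 5_000_000 and 1000-sample windows
# (2 s at 500 Hz), roughly 5000 windows end up in a single chunk.
```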
#### split(by: str | list[int] | list[list[int]] | dict[str, list[int]] | None = None, property: str | None = None, split_ids: list[int] | list[list[int]] | dict[str, list[int]] | None = None) → dict[str, BaseConcatDataset] Split the dataset based on information listed in its description. The format could be based on a DataFrame or based on indices. * **Parameters:** * **by** (*str* *|* *list* *|* *dict*) – If `by` is a string, splitting is performed based on the description DataFrame column with this name. If `by` is a (list of) list of integers, the position in the first list corresponds to the split id and the integers to the datapoints of that split. If a dict then each key will be used in the returned splits dict and each value should be a list of int. * **property** (*str*) – Deprecated Some property which is listed in the info DataFrame. * **split_ids** (*list* *|* *dict*) – Deprecated List of indices to be combined in a subset. It can be a list of int or a list of list of int. * **Returns:** **splits** – A dictionary with the name of the split (a string) as key and the dataset as value. * **Return type:** dict #### *property* target_transform #### to_epochs_dataset() → BaseConcatDataset[WindowsDataset] Converts this `BaseConcatDataset` such that all datasets are `WindowsDataset` with `mne.Epochs`. In Braindecode, the data can either be stored as `mne.io.Raw` (in `EEGWindowsDataset`) or as `mne.Epochs` (in `WindowsDataset`). This function converts all the underlying datasets to `WindowsDataset` with `mne.Epochs`. This can be useful for reducing disk space when you want to save a dataset. * **Returns:** A new `BaseConcatDataset` where all datasets are `WindowsDataset` with `mne.Epochs`. * **Return type:** BaseConcatDataset[WindowsDataset] * **Raises:** **ValueError** – If any of the underlying datasets is a `RawDataset` or any other type that is not `EEGWindowsDataset` or `WindowsDataset`, as they cannot be converted to epochs. 
#### *property* transform

#### datasets

## Usage Example

```python
from eegdash import EEGDashDataset

dataset = EEGDashDataset(cache_dir="./data", dataset="ds002718")
print(f"Number of recordings: {len(dataset)}")
```

## See Also

* `eegdash.dataset`
* [`eegdash.dataset.EEGChallengeDataset`](eegdash.dataset.EEGChallengeDataset.md#eegdash.dataset.EEGChallengeDataset)

# eegdash.api module

High-level interface to the EEGDash metadata database. This module provides the main EEGDash class, which serves as the primary entry point for interacting with the EEGDash ecosystem. It offers methods to query, insert, and update metadata records stored in the EEGDash database via REST API.

### *class* eegdash.api.EEGDash(\*, database: str = 'eegdash', api_url: str | None = None, auth_token: str | None = None)

Bases: `object`

High-level interface to the EEGDash metadata database. Provides methods to query, insert, and update metadata records stored in the EEGDash database via the REST API gateway. For working with collections of recordings as PyTorch datasets, prefer `EEGDashDataset`. Create a new EEGDash client.

* **Parameters:**
  * **database** (*str* *,* *default "eegdash"*) – Name of the MongoDB database to connect to. Common values: `"eegdash"` (production), `"eegdash_staging"` (staging), `"eegdash_v1"` (legacy archive).
  * **api_url** (*str* *,* *optional*) – Override the default API URL. If not provided, uses the default public endpoint or the `EEGDASH_API_URL` environment variable.
  * **auth_token** (*str* *,* *optional*) – Authentication token for admin write operations. Not required for public read operations.

### Examples

```pycon
>>> eegdash = EEGDash()                            # production
>>> eegdash = EEGDash(database="eegdash_staging")  # staging
>>> records = eegdash.find({"dataset": "ds002718"})
```

#### count(query: dict[str, Any] = None, /, \*\*kwargs) → int

Count documents matching the query.

* **Parameters:**
  * **query** (*dict* *,* *optional*) – Complete query dictionary.
This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of matching documents. * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash() >>> count = eeg.count({})  # count all >>> count = eeg.count(dataset="ds002718")  # count by dataset ``` #### exists(query: dict[str, Any] = None, /, \*\*kwargs) → bool Check if at least one record matches the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** True if at least one matching record exists; False otherwise. * **Return type:** bool ### Examples ```pycon >>> eeg = EEGDash() >>> eeg.exists(dataset="ds002718")  # check by dataset >>> eeg.exists({"data_name": "ds002718_sub-001_eeg.set"})  # check by data_name ``` #### find(query: dict[str, Any] = None, /, \*\*kwargs) → list[Mapping[str, Any]] Find records in the collection. ### Examples ```pycon >>> from eegdash import EEGDash >>> eegdash = EEGDash() >>> eegdash.find({"dataset": "ds002718", "subject": {"$in": ["012", "013"]}})  # pre-built query >>> eegdash.find(dataset="ds002718", subject="012")  # keyword filters >>> eegdash.find(dataset="ds002718", subject=["012", "013"])  # sequence -> $in >>> eegdash.find({})  # fetch all (use with care) >>> eegdash.find({"dataset": "ds002718"}, subject=["012", "013"])  # combine query + kwargs (AND) ``` * **Parameters:** * **query** (*dict* *,* *optional*) – Complete MongoDB query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters that are converted to a MongoDB query. Values can be scalars (e.g., `"sub-01"`) or sequences (translated to `$in` queries). Special parameters: `limit` (int) and `skip` (int) for pagination. * **Returns:** DB records that match the query.
* **Return type:** list of dict #### find_datasets(query: dict[str, Any] | None = None, limit: int = 1000) → list[Mapping[str, Any]] Find datasets matching the query. * **Parameters:** * **query** (*dict*) – Filter query. * **limit** (*int*) – Max number of datasets to return. * **Returns:** List of dataset metadata documents. * **Return type:** list of dict #### find_one(query: dict[str, Any] = None, /, \*\*kwargs) → Mapping[str, Any] | None Find a single record matching the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** The first matching record, or None if no match. * **Return type:** dict or None ### Examples ```pycon >>> eeg = EEGDash() >>> record = eeg.find_one(data_name="ds002718_sub-001_eeg.set") ``` #### get_dataset(dataset_id: str) → Mapping[str, Any] | None Fetch metadata for a specific dataset. * **Parameters:** **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **Returns:** The dataset metadata document, or None if not found. * **Return type:** dict or None #### insert(records: dict[str, Any] | list[dict[str, Any]]) → int Insert one or more records (requires auth_token). * **Parameters:** **records** (*dict* *or* *list* *of* *dict*) – A single record or list of records to insert. * **Returns:** Number of records inserted. * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.insert({"dataset": "ds001", "subject": "01", ...})  # single >>> eeg.insert([record1, record2, record3])  # batch ``` #### update_dataset(dataset_id: str, update: dict[str, Any]) → int Update metadata for a specific dataset (requires auth_token). * **Parameters:** * **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **update** (*dict*) – Dictionary of fields to update. * **Returns:** Number of documents modified (0 or 1).
* **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.update_dataset("ds002718", {"clinical.is_clinical": True}) ``` #### update_field(query: dict[str, Any] = None, /, \*, update: dict[str, Any], \*\*kwargs) → tuple[int, int] Update fields on records matching the query (requires auth_token). Use this to add or modify fields across matching records, e.g., after re-extracting entities with an improved algorithm. * **Parameters:** * **query** (*dict* *,* *optional*) – Filter query to match records. This is a positional-only argument. * **update** (*dict*) – Fields to update. Keys are field names, values are new values. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of records matched and actually modified. * **Return type:** tuple of (matched_count, modified_count) ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> # Update entities for all records in a dataset >>> eeg.update_field({"dataset": "ds002718"}, update={"entities": {"subject": "01"}}) >>> # Using kwargs for filter >>> eeg.update_field(dataset="ds002718", update={"entities": new_entities}) >>> # Combine query + kwargs >>> eeg.update_field({"dataset": "ds002718"}, subject="01", update={"entities": new_entities}) ``` # eegdash.bids_metadata module BIDS metadata processing and query building utilities. This module provides functions for building database queries from user parameters and enriching metadata records with participant information from BIDS datasets. ### eegdash.bids_metadata.attach_participants_extras(raw: Any, description: Any, extras: dict[str, Any]) → None Attach extra participant data to a raw object and its description. * **Parameters:** * **raw** (*mne.io.Raw*) – The MNE Raw object to be updated. * **description** (*dict* *or* *pandas.Series*) – The description object to be updated. * **extras** (*dict*) – Extra participant information to attach.
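A minimal sketch of the merge performed by `attach_participants_extras`, using a plain dictionary in place of the `mne.io.Raw` / `pandas.Series` objects it mutates (simplified; the real function's in-place update of those objects may differ in detail):

```python
def attach_extras(description: dict, extras: dict) -> None:
    """Simplified: copy extra participant fields into a description mapping."""
    for key, value in extras.items():
        description[key] = value

# Hypothetical description for one recording, enriched with participants.tsv fields.
description = {"subject": "01", "task": "rest"}
attach_extras(description, {"age": 23, "sex": "F"})
# description == {"subject": "01", "task": "rest", "age": 23, "sex": "F"}
```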
### eegdash.bids_metadata.build_query_from_kwargs(\*\*kwargs) → dict[str, Any] Build and validate a MongoDB query from keyword arguments. Converts user-friendly keyword arguments into a valid MongoDB query dictionary. Scalar values become exact matches; list-like values become `$in` queries. Entity fields (subject, task, session, run) are queried at the top level since the inject script flattens these from nested entities. * **Parameters:** **\*\*kwargs** – Query filters. Allowed keys are in `eegdash.const.ALLOWED_QUERY_FIELDS`. * **Returns:** A MongoDB query dictionary. * **Return type:** dict * **Raises:** **ValueError** – If an unsupported field is provided, or if a value is None/empty. ### eegdash.bids_metadata.enrich_from_participants(bids_root: str | Path, bidspath: Any, raw: Any, description: Any) → dict[str, Any] Read participants.tsv and attach extra info for the subject. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **bidspath** (*mne_bids.BIDSPath*) – BIDSPath object for the current data file. * **raw** (*mne.io.Raw*) – The MNE Raw object to be updated. * **description** (*dict* *or* *pandas.Series*) – The description object to be updated. * **Returns:** The extras that were attached. * **Return type:** dict ### eegdash.bids_metadata.get_entities_from_record(record: dict[str, Any], entities: tuple[str, ...] = ('subject', 'session', 'run', 'task')) → dict[str, Any] Get multiple entity values from a record. * **Parameters:** * **record** (*dict*) – A record dictionary. * **entities** (*tuple* *of* *str*) – Entity names to extract. * **Returns:** Dictionary of entity values (only non-None values included). * **Return type:** dict ### eegdash.bids_metadata.get_entity_from_record(record: dict[str, Any], entity: str) → Any Get an entity value from a record, supporting both v1 (flat) and v2 (nested) formats. * **Parameters:** * **record** (*dict*) – A record dictionary. 
* **entity** (*str*) – Entity name (e.g., “subject”, “task”, “session”, “run”). * **Returns:** The entity value, or None if not found. * **Return type:** Any ### Examples ```pycon >>> # v2 record (nested) >>> rec = {"entities": {"subject": "01", "task": "rest"}} >>> get_entity_from_record(rec, "subject") '01' >>> # v1 record (flat) >>> rec = {"subject": "01", "task": "rest"} >>> get_entity_from_record(rec, "subject") '01' ``` ### eegdash.bids_metadata.merge_participants_fields(description: dict[str, Any], participants_row: dict[str, Any] | None, description_fields: list[str] | None = None) → dict[str, Any] Merge fields from a participants.tsv row into a description dict. * **Parameters:** * **description** (*dict*) – The description dictionary to enrich. * **participants_row** (*dict* *or* *None*) – A row from participants.tsv. If None, returns description unchanged. * **description_fields** (*list* *of* *str* *,* *optional*) – Specific fields to include (matched using normalized keys). * **Returns:** The enriched description dictionary. * **Return type:** dict ### eegdash.bids_metadata.merge_query(query: dict[str, Any] | None = None, require_query: bool = True, \*\*kwargs) → dict[str, Any] Merge a raw query dict with keyword arguments into a final query. * **Parameters:** * **query** (*dict* *or* *None*) – Raw MongoDB query dictionary. Pass `{}` to match all documents. * **require_query** (*bool* *,* *default True*) – If True, raise ValueError when no query or kwargs provided. * **\*\*kwargs** – User-friendly field filters (converted via `build_query_from_kwargs`). * **Returns:** The merged MongoDB query. * **Return type:** dict * **Raises:** **ValueError** – If `require_query=True` and neither query nor kwargs provided, or if conflicting constraints are detected. ### eegdash.bids_metadata.normalize_key(key: str) → str Normalize a string key for robust matching. Converts to lowercase, replaces non-alphanumeric chars with underscores. 
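The scalar-versus-sequence conversion of `build_query_from_kwargs` and the key normalization of `normalize_key` can be sketched as simplified re-implementations (illustrative only; the real `build_query_from_kwargs` also validates field names against `eegdash.const.ALLOWED_QUERY_FIELDS` and rejects None/empty values):

```python
import re

def normalize_key(key: str) -> str:
    """Lowercase and replace non-alphanumeric runs with underscores (simplified)."""
    return re.sub(r"[^a-z0-9]+", "_", key.lower())

def build_query(**kwargs) -> dict:
    """Scalars become exact matches; list-like values become $in queries (simplified)."""
    query = {}
    for field, value in kwargs.items():
        if isinstance(value, (list, tuple, set)):
            query[field] = {"$in": list(value)}
        else:
            query[field] = value
    return query

print(normalize_key("Participant ID"))
# participant_id
print(build_query(dataset="ds002718", subject=["012", "013"]))
# {'dataset': 'ds002718', 'subject': {'$in': ['012', '013']}}
```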
### eegdash.bids_metadata.participants_extras_from_tsv(bids_root: str | Path, subject: str, \*, id_columns: tuple[str, ...] = ('participant_id', 'participant', 'subject'), na_like: tuple[str, ...] = ('', 'n/a', 'na', 'nan', 'unknown', 'none')) → dict[str, Any] Extract additional participant information from participants.tsv. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **subject** (*str*) – Subject identifier. * **id_columns** (*tuple* *of* *str*) – Column names treated as identifiers (excluded from output). * **na_like** (*tuple* *of* *str*) – Values considered as “Not Available” (excluded). * **Returns:** Extra participant information. * **Return type:** dict ### eegdash.bids_metadata.participants_row_for_subject(bids_root: str | Path, subject: str, id_columns: tuple[str, ...] = ('participant_id', 'participant', 'subject')) → Series | None Load participants.tsv and return the row for a specific subject. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **subject** (*str*) – Subject identifier (e.g., “01” or “sub-01”). * **id_columns** (*tuple* *of* *str*) – Column names to search for the subject identifier. * **Returns:** Subject’s data if found, otherwise None. * **Return type:** pandas.Series or None # eegdash.const module Configuration constants and mappings for EEGDash. This module contains global configuration settings, allowed query fields, and mapping constants used throughout the EEGDash package. It defines the interface between EEGDash releases and OpenNeuro dataset identifiers, as well as validation rules for database queries. ### eegdash.const.ALLOWED_QUERY_FIELDS *= {'data_name', 'dataset', 'modality', 'nchans', 'ntimes', 'run', 'sampling_frequency', 'session', 'subject', 'task'}* A set of field names that are permitted in database queries constructed via `find()` with keyword arguments.
* **Type:** set ### eegdash.const.RELEASE_TO_OPENNEURO_DATASET_MAP *= {'R1': 'ds005505', 'R10': 'ds005515', 'R11': 'ds005516', 'R2': 'ds005506', 'R3': 'ds005507', 'R4': 'ds005508', 'R5': 'ds005509', 'R6': 'ds005510', 'R7': 'ds005511', 'R8': 'ds005512', 'R9': 'ds005514'}* A mapping from Healthy Brain Network (HBN) release identifiers (e.g., “R11”) to their corresponding OpenNeuro dataset identifiers (e.g., “ds005516”). * **Type:** dict ### eegdash.const.SUBJECT_MINI_RELEASE_MAP *= {'R1': ['NDARAC904DMU', 'NDARAM704GKZ', 'NDARAP359UM6', 'NDARBD879MBX', 'NDARBH024NH2', 'NDARBK082PDD', 'NDARCA153NKE', 'NDARCE721YB5', 'NDARCJ594BWQ', 'NDARCN669XPR', 'NDARCW094JCG', 'NDARCZ947WU5', 'NDARDH670PXH', 'NDARDL511UND', 'NDARDU986RBM', 'NDAREM731BYM', 'NDAREN519BLJ', 'NDARFK610GY5', 'NDARFT581ZW5', 'NDARFW972KFQ'], 'R10': ['NDARAR935TGZ', 'NDARAV474ADJ', 'NDARCB869VM8', 'NDARCJ667UPL', 'NDARCM677TC1', 'NDARET671FTC', 'NDARKM061NHZ', 'NDARLD501HDK', 'NDARLL176DJR', 'NDARMT791WDH', 'NDARMW299ZAB', 'NDARNC405WJA', 'NDARNP962TJK', 'NDARPB967KU7', 'NDARRU560AGK', 'NDARTB173LY2', 'NDARUW377KAE', 'NDARVH565FX9', 'NDARVP799KGY', 'NDARVY962GB5'], 'R11': ['NDARAB678VYW', 'NDARAG788YV9', 'NDARAM946HJE', 'NDARAY977BZT', 'NDARAZ532KK0', 'NDARCE912ZXW', 'NDARCM214WFE', 'NDARDL033XRG', 'NDARDT889RT9', 'NDARDZ794ZVP', 'NDAREV869CPW', 'NDARFN221WW5', 'NDARFV289RKB', 'NDARFY623ZTE', 'NDARGA890MKA', 'NDARHN206XY3', 'NDARHP518FUR', 'NDARJL292RYV', 'NDARKM199DXW', 'NDARKW236TN7'], 'R2': ['NDARAB793GL3', 'NDARAM675UR8', 'NDARBM839WR5', 'NDARBU730PN8', 'NDARCT974NAJ', 'NDARCW933FD5', 'NDARCZ770BRG', 'NDARDW741HCF', 'NDARDZ058NZN', 'NDAREC377AU2', 'NDAREM500WWH', 'NDAREV527ZRF', 'NDAREV601CE7', 'NDARFF070XHV', 'NDARFR108JNB', 'NDARFT305CG1', 'NDARGA056TMW', 'NDARGH775KF5', 'NDARGJ878ZP4', 'NDARHA387FPM'], 'R3': ['NDARAA948VFH', 'NDARAD774HAZ', 'NDARAE828CML', 'NDARAG340ERT', 'NDARBA839HLG', 'NDARBE641DGZ', 'NDARBG574KF4', 'NDARBM642JFT', 'NDARCL016NHB', 'NDARCV944JA6', 'NDARCY178KJP', 'NDARDY150ZP9', 
'NDAREC542MH3', 'NDAREK549XUQ', 'NDAREM887YY8', 'NDARFA815FXE', 'NDARFF644ZGD', 'NDARFV557XAA', 'NDARFV780ABD', 'NDARGB102NWJ'], 'R4': ['NDARAC350BZ0', 'NDARAD615WLJ', 'NDARAG584XLU', 'NDARAH503YG1', 'NDARAX272ZJL', 'NDARAY461TZZ', 'NDARBC734UVY', 'NDARBL444FBA', 'NDARBT640EBN', 'NDARBU098PJT', 'NDARBU928LV0', 'NDARBV059CGE', 'NDARCG037CX4', 'NDARCG947ZC0', 'NDARCH001CN2', 'NDARCU001ZN7', 'NDARCW497XW2', 'NDARCX053GU5', 'NDARDF568GL5', 'NDARDJ092YKH'], 'R5': ['NDARAH793FBF', 'NDARAJ689BVN', 'NDARAP785CTE', 'NDARAU708TL8', 'NDARBE091BGD', 'NDARBE103DHM', 'NDARBF851NH6', 'NDARBH228RDW', 'NDARBJ674TVU', 'NDARBM433VER', 'NDARCA740UC8', 'NDARCU633GCZ', 'NDARCU736GZ1', 'NDARCU744XWL', 'NDARDC843HHM', 'NDARDH086ZKK', 'NDARDL305BT8', 'NDARDU853XZ6', 'NDARDV245WJG', 'NDAREC480KFA'], 'R6': ['NDARAD224CRB', 'NDARAE301XTM', 'NDARAT680GJA', 'NDARCA578CEB', 'NDARDZ147ETZ', 'NDARFL793LDE', 'NDARFX710UZA', 'NDARGE994BMX', 'NDARGP191YHN', 'NDARGV436PFT', 'NDARHF545HFW', 'NDARHP039DBU', 'NDARHT774ZK1', 'NDARJA830BYV', 'NDARKB614KGY', 'NDARKM250ET5', 'NDARKZ085UKQ', 'NDARLB581AXF', 'NDARNJ899HW7', 'NDARRZ606EDP'], 'R7': ['NDARAY475AKD', 'NDARBW026UGE', 'NDARCK162REX', 'NDARCK481KRH', 'NDARCV378MMX', 'NDARCX462NVA', 'NDARDJ970ELG', 'NDARDU617ZW1', 'NDAREM609ZXW', 'NDAREW074ZM2', 'NDARFE555KXB', 'NDARFT176NJP', 'NDARGK442YHH', 'NDARGM439FZD', 'NDARGT634DUJ', 'NDARHE283KZN', 'NDARHG260BM9', 'NDARHL684WYU', 'NDARHN224TPA', 'NDARHP841RMR'], 'R8': ['NDARAB514MAJ', 'NDARAD571FLB', 'NDARAF003VCL', 'NDARAG191AE8', 'NDARAJ977PRJ', 'NDARAP912JK3', 'NDARAV454VF0', 'NDARAY298THW', 'NDARBJ375VP4', 'NDARBT436PMT', 'NDARBV630BK6', 'NDARCB627KDN', 'NDARCC059WTH', 'NDARCM953HKD', 'NDARCN681CXW', 'NDARCT889DMB', 'NDARDJ204EPU', 'NDARDJ544BU5', 'NDARDP292DVC', 'NDARDW178AC6'], 'R9': ['NDARAC589YMB', 'NDARAC853CR6', 'NDARAH239PGG', 'NDARAL897CYV', 'NDARAN160GUF', 'NDARAP049KXJ', 'NDARAP457WB5', 'NDARAW216PM7', 'NDARBA004KBT', 'NDARBD328NUQ', 'NDARBF042LDM', 'NDARBH019KPD', 'NDARBH728DFK', 'NDARBM370JCB', 
'NDARBU183TDJ', 'NDARBW971DCW', 'NDARBZ444ZHK', 'NDARCC620ZFT', 'NDARCD182XT1', 'NDARCK113CJM']}* A mapping from HBN release identifiers to a list of subject IDs. This is used to select a small, representative subset of subjects for creating “mini” datasets for testing and demonstration purposes. * **Type:** dict ### eegdash.const.config *= {'accepted_query_fields': ['data_name', 'dataset'], 'attributes': {'bidspath': 'str', 'data_name': 'str', 'dataset': 'str', 'modality': 'str', 'nchans': 'int', 'ntimes': 'int', 'run': 'str', 'sampling_frequency': 'float', 'session': 'str', 'subject': 'str', 'task': 'str'}, 'bids_dependencies_files': ['dataset_description.json', 'participants.tsv', 'events.tsv', 'events.json', 'eeg.json', 'electrodes.tsv', 'channels.tsv', 'coordsystem.json'], 'description_fields': ['subject', 'session', 'run', 'task', 'age', 'gender', 'sex'], 'required_fields': ['data_name']}* A global configuration dictionary for the EEGDash package. # ABSeqMEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ABSeqMEG dataset = ABSeqMEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ABSeqMEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ABSeqMEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors.
**BibTeX** ```bibtex @dataset{abseqmeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ABSEQMEG` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ABSEQMEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/abseqmeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=abseqmeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [abseqmeg](https://openneuro.org/datasets/abseqmeg) - NeMAR: [abseqmeg](https://nemar.org/dataexplorer/detail?dataset_id=abseqmeg) ## API Reference Use the `ABSeqMEG` class to access this dataset programmatically. ### eegdash.dataset.ABSeqMEG alias of [`DS004483`](eegdash.dataset.DS004483.md#eegdash.dataset.DS004483) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/abseqmeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=abseqmeg) # ANDI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ANDI dataset = ANDI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ANDI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ANDI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{andi, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ANDI` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ANDI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/andi) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=andi) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [andi](https://openneuro.org/datasets/andi) - NeMAR: [andi](https://nemar.org/dataexplorer/detail?dataset_id=andi) ## API Reference Use the `ANDI` class to access this dataset programmatically. 
### eegdash.dataset.ANDI alias of [`DS004661`](eegdash.dataset.DS004661.md#eegdash.dataset.DS004661) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/andi) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=andi) # APPLESEED: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import APPLESEED dataset = APPLESEED(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = APPLESEED(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = APPLESEED( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{appleseed, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `APPLESEED` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `APPLESEED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/appleseed) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=appleseed) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [appleseed](https://openneuro.org/datasets/appleseed) - NeMAR: [appleseed](https://nemar.org/dataexplorer/detail?dataset_id=appleseed) ## API Reference Use the `APPLESEED` class to access this dataset programmatically. ### eegdash.dataset.APPLESEED alias of [`DS003710`](eegdash.dataset.DS003710.md#eegdash.dataset.DS003710) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/appleseed) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=appleseed) # AlexMI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import AlexMI dataset = AlexMI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = AlexMI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = AlexMI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alexmi, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ALEXMI` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALEXMI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alexmi) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alexmi) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alexmi](https://openneuro.org/datasets/alexmi) - NeMAR: [alexmi](https://nemar.org/dataexplorer/detail?dataset_id=alexmi) ## API Reference Use the `AlexMI` class to access this dataset programmatically. 
### eegdash.dataset.AlexMI alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alexmi) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alexmi) # AlexMotorImagery: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import AlexMotorImagery dataset = AlexMotorImagery(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = AlexMotorImagery(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = AlexMotorImagery( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alexmotorimagery, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ALEXMOTORIMAGERY` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALEXMOTORIMAGERY` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alexmotorimagery) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alexmotorimagery) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alexmotorimagery](https://openneuro.org/datasets/alexmotorimagery) - NeMAR: [alexmotorimagery](https://nemar.org/dataexplorer/detail?dataset_id=alexmotorimagery) ## API Reference Use the `AlexMotorImagery` class to access this dataset programmatically. ### eegdash.dataset.AlexMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alexmotorimagery) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alexmotorimagery) # AlexandreMotorImagery: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import AlexandreMotorImagery dataset = AlexandreMotorImagery(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = AlexandreMotorImagery(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = AlexandreMotorImagery( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alexandremotorimagery, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ALEXANDREMOTORIMAGERY` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALEXANDREMOTORIMAGERY` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alexandremotorimagery) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alexandremotorimagery) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alexandremotorimagery](https://openneuro.org/datasets/alexandremotorimagery) - NeMAR: 
[alexandremotorimagery](https://nemar.org/dataexplorer/detail?dataset_id=alexandremotorimagery) ## API Reference Use the `AlexandreMotorImagery` class to access this dataset programmatically. ### eegdash.dataset.AlexandreMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alexandremotorimagery) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alexandremotorimagery) # Alljoined: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alljoined dataset = Alljoined(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alljoined(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alljoined( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alljoined, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ALLJOINED` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALLJOINED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alljoined) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alljoined) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alljoined](https://openneuro.org/datasets/alljoined) - NeMAR: [alljoined](https://nemar.org/dataexplorer/detail?dataset_id=alljoined) ## API Reference Use the `Alljoined` class to access this dataset programmatically. ### eegdash.dataset.Alljoined alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alljoined) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alljoined) # Alljoined1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alljoined1 dataset = Alljoined1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alljoined1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alljoined1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alljoined1, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ALLJOINED1` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALLJOINED1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alljoined1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alljoined1](https://openneuro.org/datasets/alljoined1) - NeMAR: [alljoined1](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1) ## API Reference Use the `Alljoined1` class to access this dataset programmatically. 
### eegdash.dataset.Alljoined1 alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alljoined1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1) # Alljoined16M: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alljoined16M dataset = Alljoined16M(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alljoined16M(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alljoined16M( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alljoined16m, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ALLJOINED16M` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALLJOINED16M` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alljoined16m) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alljoined16m) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alljoined16m](https://openneuro.org/datasets/alljoined16m) - NeMAR: [alljoined16m](https://nemar.org/dataexplorer/detail?dataset_id=alljoined16m) ## API Reference Use the `Alljoined16M` class to access this dataset programmatically. ### eegdash.dataset.Alljoined16M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alljoined16m) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alljoined16m) # Alljoined1p6M: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alljoined1p6M dataset = Alljoined1p6M(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alljoined1p6M(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alljoined1p6M( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alljoined1p6m, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ALLJOINED1P6M` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALLJOINED1P6M` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alljoined1p6m) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1p6m) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alljoined1p6m](https://openneuro.org/datasets/alljoined1p6m) - NeMAR: [alljoined1p6m](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1p6m) ## API Reference Use the `Alljoined1p6M` class to access this dataset 
programmatically. ### eegdash.dataset.Alljoined1p6M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alljoined1p6m) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alljoined1p6m) # Alljoined_16M: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alljoined_16M dataset = Alljoined_16M(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alljoined_16M(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alljoined_16M( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alljoined_16m, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ALLJOINED_16M` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALLJOINED_16M` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alljoined_16m) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alljoined_16m) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alljoined_16m](https://openneuro.org/datasets/alljoined_16m) - NeMAR: [alljoined_16m](https://nemar.org/dataexplorer/detail?dataset_id=alljoined_16m) ## API Reference Use the `Alljoined_16M` class to access this dataset programmatically. ### eegdash.dataset.Alljoined_16M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alljoined_16m) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alljoined_16m) # AlphaWaves: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import AlphaWaves dataset = AlphaWaves(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = AlphaWaves(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = AlphaWaves( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alphawaves, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ALPHAWAVES` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALPHAWAVES` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alphawaves) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alphawaves](https://openneuro.org/datasets/alphawaves) - NeMAR: [alphawaves](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) ## API Reference Use the `AlphaWaves` class to access this dataset programmatically. 
### eegdash.dataset.AlphaWaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alphawaves) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) # Alphawaves: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Alphawaves dataset = Alphawaves(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Alphawaves(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Alphawaves( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{alphawaves, } ``` ## About This Dataset No README content is available for this dataset.
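The per-recording loop shown in the quickstart is the natural place to organize recordings before training, for example grouping by subject for subject-wise splits. A minimal sketch with hypothetical stand-in records (real entries come from iterating a dataset object and expose attributes such as `.subject` and a lazily loaded `.raw`, whose access triggers a download):

```python
from collections import defaultdict

# Hypothetical stand-ins for dataset recordings, used here so the
# example runs without downloading any data
recordings = [
    {"subject": "01", "sfreq": 512.0},
    {"subject": "01", "sfreq": 512.0},
    {"subject": "02", "sfreq": 1024.0},
]

# Group recordings per subject, e.g. for subject-wise train/test splits
by_subject = defaultdict(list)
for rec in recordings:
    by_subject[rec["subject"]].append(rec)

counts = {subject: len(recs) for subject, recs in by_subject.items()}
print(counts)  # → {'01': 2, '02': 1}
```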
## Dataset Information | Dataset ID | `ALPHAWAVES` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ALPHAWAVES` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/alphawaves) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [alphawaves](https://openneuro.org/datasets/alphawaves) - NeMAR: [alphawaves](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) ## API Reference Use the `Alphawaves` class to access this dataset programmatically. ### eegdash.dataset.Alphawaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/alphawaves) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=alphawaves) # ArEEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ArEEG dataset = ArEEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ArEEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ArEEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{areeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `AREEG` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `AREEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/areeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=areeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [areeg](https://openneuro.org/datasets/areeg) - NeMAR: [areeg](https://nemar.org/dataexplorer/detail?dataset_id=areeg) ## API Reference Use the `ArEEG` class to access this dataset programmatically. 
### eegdash.dataset.ArEEG alias of [`DS005262`](eegdash.dataset.DS005262.md#eegdash.dataset.DS005262) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/areeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=areeg) # Ataseven2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ataseven2024 dataset = Ataseven2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ataseven2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ataseven2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ataseven2024, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ATASEVEN2024` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ATASEVEN2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ataseven2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ataseven2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ataseven2024](https://openneuro.org/datasets/ataseven2024) - NeMAR: [ataseven2024](https://nemar.org/dataexplorer/detail?dataset_id=ataseven2024) ## API Reference Use the `Ataseven2024` class to access this dataset programmatically. ### eegdash.dataset.Ataseven2024 alias of [`DS007431`](eegdash.dataset.DS007431.md#eegdash.dataset.DS007431) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ataseven2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ataseven2024) # BCI2000_Intracranial: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BCI2000_Intracranial dataset = BCI2000_Intracranial(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BCI2000_Intracranial(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BCI2000_Intracranial( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bci2000_intracranial, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BCI2000_INTRACRANIAL` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BCI2000_INTRACRANIAL` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bci2000_intracranial) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intracranial) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bci2000_intracranial](https://openneuro.org/datasets/bci2000_intracranial) - NeMAR: 
[bci2000_intracranial](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intracranial) ## API Reference Use the `BCI2000_Intracranial` class to access this dataset programmatically. ### eegdash.dataset.BCI2000_Intracranial alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bci2000_intracranial) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intracranial) # BCI2000_intraop: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BCI2000_intraop dataset = BCI2000_intraop(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BCI2000_intraop(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BCI2000_intraop( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bci2000_intraop, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BCI2000_INTRAOP` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BCI2000_INTRAOP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bci2000_intraop) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intraop) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bci2000_intraop](https://openneuro.org/datasets/bci2000_intraop) - NeMAR: [bci2000_intraop](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intraop) ## API Reference Use the `BCI2000_intraop` class to access this dataset programmatically. ### eegdash.dataset.BCI2000_intraop alias of [`DS004944`](eegdash.dataset.DS004944.md#eegdash.dataset.DS004944) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bci2000_intraop) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bci2000_intraop) # BCIAUT: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BCIAUT dataset = BCIAUT(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BCIAUT(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BCIAUT( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bciaut, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BCIAUT` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BCIAUT` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bciaut) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bciaut) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bciaut](https://openneuro.org/datasets/bciaut) - NeMAR: [bciaut](https://nemar.org/dataexplorer/detail?dataset_id=bciaut) ## API Reference Use the `BCIAUT` class to access this dataset programmatically. 
### eegdash.dataset.BCIAUT alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bciaut) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bciaut) # BCIAUTP300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BCIAUTP300 dataset = BCIAUTP300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BCIAUTP300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BCIAUTP300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bciautp300, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BCIAUTP300` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BCIAUTP300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bciautp300) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bciautp300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bciautp300](https://openneuro.org/datasets/bciautp300) - NeMAR: [bciautp300](https://nemar.org/dataexplorer/detail?dataset_id=bciautp300) ## API Reference Use the `BCIAUTP300` class to access this dataset programmatically. ### eegdash.dataset.BCIAUTP300 alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bciautp300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bciautp300) # BCIAUT_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCIAUT_P300

dataset = BCIAUT_P300(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCIAUT_P300(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCIAUT_P300(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bciaut_p300,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCIAUT_P300` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCIAUT_P300` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bciaut_p300) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bciaut_p300) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bciaut_p300](https://openneuro.org/datasets/bciaut_p300)
- NeMAR: [bciaut_p300](https://nemar.org/dataexplorer/detail?dataset_id=bciaut_p300)

## API Reference

Use the `BCIAUT_P300` class to access this dataset programmatically.

### eegdash.dataset.BCIAUT_P300

alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bciaut_p300)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bciaut_p300)

# BCICIII_IVa: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCICIII_IVa

dataset = BCICIII_IVa(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCICIII_IVa(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCICIII_IVa(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bciciii_iva,
}
```

## About This Dataset

No README content is available for this dataset.
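The "Advanced query" snippets above pass MongoDB-style operator clauses such as `{"$in": [...]}`. As a mental model of how such a clause selects records by their metadata, here is a stdlib-only sketch — the records and the `matches` helper are illustrative, not the EEGDash implementation:

```python
# Illustrative sketch of Mongo-style {"$in": ...} matching against record
# metadata. The records below are made-up examples, not real EEGDash output.

def matches(record, query):
    """Return True if `record` satisfies every clause in `query`."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator clause, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"subject": "01", "task": "p300"},
    {"subject": "02", "task": "p300"},
    {"subject": "03", "task": "p300"},
]
query = {"subject": {"$in": ["01", "02"]}}
kept = [r for r in records if matches(r, query)]
print([r["subject"] for r in kept])  # ['01', '02']
```

A plain value (as in the "Filter by subject" snippet) behaves like an equality clause, while an operator dict widens the match to a set of values.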
## Dataset Information

| Dataset ID | `BCICIII_IVA` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCICIII_IVA` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bciciii_iva) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bciciii_iva) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bciciii_iva](https://openneuro.org/datasets/bciciii_iva)
- NeMAR: [bciciii_iva](https://nemar.org/dataexplorer/detail?dataset_id=bciciii_iva)

## API Reference

Use the `BCICIII_IVa` class to access this dataset programmatically.

### eegdash.dataset.BCICIII_IVa

alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bciciii_iva)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bciciii_iva)

# BCICIV1: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCICIV1

dataset = BCICIV1(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCICIV1(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCICIV1(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bciciv1,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCICIV1` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCICIV1` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bciciv1) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bciciv1) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bciciv1](https://openneuro.org/datasets/bciciv1)
- NeMAR: [bciciv1](https://nemar.org/dataexplorer/detail?dataset_id=bciciv1)

## API Reference

Use the `BCICIV1` class to access this dataset programmatically.

### eegdash.dataset.BCICIV1

alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bciciv1)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bciciv1)

# BCICompIII_IVa: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCICompIII_IVa

dataset = BCICompIII_IVa(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCICompIII_IVa(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCICompIII_IVa(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcicompiii_iva,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `BCICOMPIII_IVA` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCICOMPIII_IVA` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcicompiii_iva) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiii_iva) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcicompiii_iva](https://openneuro.org/datasets/bcicompiii_iva)
- NeMAR: [bcicompiii_iva](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiii_iva)

## API Reference

Use the `BCICompIII_IVa` class to access this dataset programmatically.

### eegdash.dataset.BCICompIII_IVa

alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcicompiii_iva)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiii_iva)

# BCICompIV1: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCICompIV1

dataset = BCICompIV1(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCICompIV1(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCICompIV1(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcicompiv1,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCICOMPIV1` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCICOMPIV1` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcicompiv1) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiv1) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcicompiv1](https://openneuro.org/datasets/bcicompiv1)
- NeMAR: [bcicompiv1](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiv1)

## API Reference

Use the `BCICompIV1` class to access this dataset programmatically.

### eegdash.dataset.BCICompIV1

alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcicompiv1)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcicompiv1)

# BCIT: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCIT

dataset = BCIT(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCIT(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCIT(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcit,
}
```

## About This Dataset

No README content is available for this dataset.
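The `cache_dir` argument used throughout these Quickstarts points at a local directory where recordings are stored after the first download and reused on later runs. A hedged sketch of that download-once pattern — the `fetch_cached` helper and the BIDS-style path are illustrative, not the actual EEGDash internals:

```python
# Illustrative sketch of cache_dir behavior: fetch a file only on a cache
# miss, then reuse the local copy. Names and paths are hypothetical.
import tempfile
from pathlib import Path

def fetch_cached(cache_dir, relpath, download):
    """Return the local path for `relpath`, downloading only if missing."""
    target = Path(cache_dir) / relpath
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(download())  # network hit only on a cache miss
    return target

calls = []
def fake_download():
    calls.append(1)
    return b"raw-eeg-bytes"

with tempfile.TemporaryDirectory() as cache:
    p1 = fetch_cached(cache, "sub-01/eeg/recording.set", fake_download)
    p2 = fetch_cached(cache, "sub-01/eeg/recording.set", fake_download)
    same_path = (p1 == p2)
print(len(calls))  # 1: the second access reused the cached file
```

Because the cache is keyed by path, pointing several dataset objects at the same `cache_dir` lets them share downloaded files.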
## Dataset Information

| Dataset ID | `BCIT` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCIT` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcit) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcit) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcit](https://openneuro.org/datasets/bcit)
- NeMAR: [bcit](https://nemar.org/dataexplorer/detail?dataset_id=bcit)

## API Reference

Use the `BCIT` class to access this dataset programmatically.

### eegdash.dataset.BCIT

alias of [`DS004119`](eegdash.dataset.DS004119.md#eegdash.dataset.DS004119)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcit)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcit)

# BCITAdvancedGuardDuty: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCITAdvancedGuardDuty

dataset = BCITAdvancedGuardDuty(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCITAdvancedGuardDuty(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCITAdvancedGuardDuty(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcitadvancedguardduty,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCITADVANCEDGUARDDUTY` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCITADVANCEDGUARDDUTY` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcitadvancedguardduty) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcitadvancedguardduty) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcitadvancedguardduty](https://openneuro.org/datasets/bcitadvancedguardduty)
- NeMAR: [bcitadvancedguardduty](https://nemar.org/dataexplorer/detail?dataset_id=bcitadvancedguardduty)

## API Reference

Use the `BCITAdvancedGuardDuty` class to access this dataset programmatically.

### eegdash.dataset.BCITAdvancedGuardDuty

alias of [`DS004106`](eegdash.dataset.DS004106.md#eegdash.dataset.DS004106)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcitadvancedguardduty)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcitadvancedguardduty)

# BCITBaselineDriving: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCITBaselineDriving

dataset = BCITBaselineDriving(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCITBaselineDriving(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCITBaselineDriving(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcitbaselinedriving,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `BCITBASELINEDRIVING` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCITBASELINEDRIVING` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcitbaselinedriving) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcitbaselinedriving) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcitbaselinedriving](https://openneuro.org/datasets/bcitbaselinedriving)
- NeMAR: [bcitbaselinedriving](https://nemar.org/dataexplorer/detail?dataset_id=bcitbaselinedriving)

## API Reference

Use the `BCITBaselineDriving` class to access this dataset programmatically.

### eegdash.dataset.BCITBaselineDriving

alias of [`DS004120`](eegdash.dataset.DS004120.md#eegdash.dataset.DS004120)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcitbaselinedriving)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcitbaselinedriving)

# BCITMindWandering: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCITMindWandering

dataset = BCITMindWandering(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCITMindWandering(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCITMindWandering(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcitmindwandering,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCITMINDWANDERING` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCITMINDWANDERING` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcitmindwandering) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcitmindwandering) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcitmindwandering](https://openneuro.org/datasets/bcitmindwandering)
- NeMAR: [bcitmindwandering](https://nemar.org/dataexplorer/detail?dataset_id=bcitmindwandering)

## API Reference

Use the `BCITMindWandering` class to access this dataset programmatically.

### eegdash.dataset.BCITMindWandering

alias of [`DS004121`](eegdash.dataset.DS004121.md#eegdash.dataset.DS004121)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcitmindwandering)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcitmindwandering)

# BCIT_Auditory_Cueing: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCIT_Auditory_Cueing

dataset = BCIT_Auditory_Cueing(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCIT_Auditory_Cueing(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCIT_Auditory_Cueing(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcit_auditory_cueing,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `BCIT_AUDITORY_CUEING` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCIT_AUDITORY_CUEING` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcit_auditory_cueing) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcit_auditory_cueing) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcit_auditory_cueing](https://openneuro.org/datasets/bcit_auditory_cueing)
- NeMAR: [bcit_auditory_cueing](https://nemar.org/dataexplorer/detail?dataset_id=bcit_auditory_cueing)

## API Reference

Use the `BCIT_Auditory_Cueing` class to access this dataset programmatically.

### eegdash.dataset.BCIT_Auditory_Cueing

alias of [`DS004105`](eegdash.dataset.DS004105.md#eegdash.dataset.DS004105)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcit_auditory_cueing)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcit_auditory_cueing)

# BCIT_Traffic_Complexity: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BCIT_Traffic_Complexity

dataset = BCIT_Traffic_Complexity(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BCIT_Traffic_Complexity(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BCIT_Traffic_Complexity(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bcit_traffic_complexity,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BCIT_TRAFFIC_COMPLEXITY` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BCIT_TRAFFIC_COMPLEXITY` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/bcit_traffic_complexity) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bcit_traffic_complexity) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [bcit_traffic_complexity](https://openneuro.org/datasets/bcit_traffic_complexity)
- NeMAR: [bcit_traffic_complexity](https://nemar.org/dataexplorer/detail?dataset_id=bcit_traffic_complexity)

## API Reference

Use the `BCIT_Traffic_Complexity` class to access this dataset programmatically.

### eegdash.dataset.BCIT_Traffic_Complexity

alias of [`DS004123`](eegdash.dataset.DS004123.md#eegdash.dataset.DS004123)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/bcit_traffic_complexity)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bcit_traffic_complexity)

# BETA: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BETA

dataset = BETA(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BETA(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BETA(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{beta,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `BETA` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BETA` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/beta) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=beta) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [beta](https://openneuro.org/datasets/beta)
- NeMAR: [beta](https://nemar.org/dataexplorer/detail?dataset_id=beta)

## API Reference

Use the `BETA` class to access this dataset programmatically.

### eegdash.dataset.BETA

alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/beta)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=beta)

# BETA_SSVEP: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BETA_SSVEP

dataset = BETA_SSVEP(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BETA_SSVEP(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BETA_SSVEP(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{beta_ssvep,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `BETA_SSVEP` |
|----------------|----------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `BETA_SSVEP` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/beta_ssvep) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=beta_ssvep) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [beta_ssvep](https://openneuro.org/datasets/beta_ssvep)
- NeMAR: [beta_ssvep](https://nemar.org/dataexplorer/detail?dataset_id=beta_ssvep)

## API Reference

Use the `BETA_SSVEP` class to access this dataset programmatically.

### eegdash.dataset.BETA_SSVEP

alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/beta_ssvep)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=beta_ssvep)

# BI2012: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import BI2012

dataset = BI2012(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = BI2012(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = BI2012(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{bi2012,
}
```

## About This Dataset

No README content is available for this dataset.
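The "Iterate recordings" pattern yields one continuous recording at a time; for training PyTorch models, each recording is typically cut into fixed-length windows. A hedged, stdlib-only sketch of that step — the sampling rate and window sizes are assumed values, and real pipelines would normally rely on braindecode's windowing utilities instead:

```python
# Illustrative sketch: index ranges for fixed-length, overlapping windows
# over one continuous recording. All sizes here are assumptions.

def sliding_windows(n_samples, win_size, stride):
    """Yield (start, stop) sample-index pairs of complete windows."""
    start = 0
    while start + win_size <= n_samples:
        yield start, start + win_size
        start += stride

sfreq = 128                 # assumed sampling rate in Hz
win = 2 * sfreq             # 2-second windows
stride = sfreq              # 1-second hop, i.e. 50% overlap
windows = list(sliding_windows(10 * sfreq, win, stride))
print(len(windows), windows[0])  # 9 (0, 256)
```

Each `(start, stop)` pair would slice the recording's signal array into one training example; incomplete trailing windows are dropped.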
## Dataset Information | Dataset ID | `BI2012` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2012` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2012) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2012) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2012](https://openneuro.org/datasets/bi2012) - NeMAR: [bi2012](https://nemar.org/dataexplorer/detail?dataset_id=bi2012) ## API Reference Use the `BI2012` class to access this dataset programmatically. ### eegdash.dataset.BI2012 alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2012) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2012) # BI2013a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BI2013a dataset = BI2013a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BI2013a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BI2013a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bi2013a, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BI2013A` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2013A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2013a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2013a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2013a](https://openneuro.org/datasets/bi2013a) - NeMAR: [bi2013a](https://nemar.org/dataexplorer/detail?dataset_id=bi2013a) ## API Reference Use the `BI2013a` class to access this dataset programmatically. 
### eegdash.dataset.BI2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2013a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2013a) # BI2014a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BI2014a dataset = BI2014a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BI2014a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BI2014a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bi2014a, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BI2014A` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2014A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2014a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2014a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2014a](https://openneuro.org/datasets/bi2014a) - NeMAR: [bi2014a](https://nemar.org/dataexplorer/detail?dataset_id=bi2014a) ## API Reference Use the `BI2014a` class to access this dataset programmatically. ### eegdash.dataset.BI2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2014a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2014a) # BI2014b: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BI2014b dataset = BI2014b(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BI2014b(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BI2014b( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bi2014b, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BI2014B` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2014B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2014b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2014b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2014b](https://openneuro.org/datasets/bi2014b) - NeMAR: [bi2014b](https://nemar.org/dataexplorer/detail?dataset_id=bi2014b) ## API Reference Use the `BI2014b` class to access this dataset programmatically. 
### eegdash.dataset.BI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2014b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2014b) # BI2015a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BI2015a dataset = BI2015a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BI2015a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BI2015a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bi2015a, } ``` ## About This Dataset No README content is available for this dataset.
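The **Iterate recordings** loop above yields one object per recording; a common next step is grouping recordings by subject before any per-subject analysis. A minimal, library-free sketch over hypothetical `(subject, sfreq)` tuples standing in for the real `rec` objects:

```python
from collections import defaultdict

# Hypothetical metadata standing in for the `rec` objects yielded by the
# dataset iterator: (subject label, sampling frequency in Hz).
recordings = [("01", 512.0), ("01", 512.0), ("02", 512.0), ("03", 256.0)]

# Collect each subject's sampling frequencies.
by_subject = defaultdict(list)
for subject, sfreq in recordings:
    by_subject[subject].append(sfreq)

# One summary line per subject, in sorted order.
for subject, sfreqs in sorted(by_subject.items()):
    print(f"sub-{subject}: {len(sfreqs)} recording(s) at {sorted(set(sfreqs))} Hz")
```

With real data, `subject` and `sfreq` would come from `rec.subject` and `rec.raw.info['sfreq']` as in the loop above.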
## Dataset Information | Dataset ID | `BI2015A` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2015A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2015a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2015a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2015a](https://openneuro.org/datasets/bi2015a) - NeMAR: [bi2015a](https://nemar.org/dataexplorer/detail?dataset_id=bi2015a) ## API Reference Use the `BI2015a` class to access this dataset programmatically. ### eegdash.dataset.BI2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2015a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2015a) # BI2015b: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BI2015b dataset = BI2015b(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BI2015b(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BI2015b( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bi2015b, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BI2015B` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BI2015B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bi2015b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bi2015b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bi2015b](https://openneuro.org/datasets/bi2015b) - NeMAR: [bi2015b](https://nemar.org/dataexplorer/detail?dataset_id=bi2015b) ## API Reference Use the `BI2015b` class to access this dataset programmatically. 
### eegdash.dataset.BI2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bi2015b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bi2015b) # BMI_HDEEG_D1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BMI_HDEEG_D1 dataset = BMI_HDEEG_D1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BMI_HDEEG_D1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BMI_HDEEG_D1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bmi_hdeeg_d1, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BMI_HDEEG_D1` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BMI_HDEEG_D1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bmi_hdeeg_d1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bmi_hdeeg_d1](https://openneuro.org/datasets/bmi_hdeeg_d1) - NeMAR: [bmi_hdeeg_d1](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d1) ## API Reference Use the `BMI_HDEEG_D1` class to access this dataset programmatically. ### eegdash.dataset.BMI_HDEEG_D1 alias of [`DS004444`](eegdash.dataset.DS004444.md#eegdash.dataset.DS004444) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bmi_hdeeg_d1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d1) # BMI_HDEEG_D2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BMI_HDEEG_D2 dataset = BMI_HDEEG_D2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BMI_HDEEG_D2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BMI_HDEEG_D2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bmi_hdeeg_d2, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BMI_HDEEG_D2` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BMI_HDEEG_D2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bmi_hdeeg_d2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bmi_hdeeg_d2](https://openneuro.org/datasets/bmi_hdeeg_d2) - NeMAR: [bmi_hdeeg_d2](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d2) ## API Reference Use the `BMI_HDEEG_D2` class to access this dataset programmatically. 
### eegdash.dataset.BMI_HDEEG_D2 alias of [`DS004446`](eegdash.dataset.DS004446.md#eegdash.dataset.DS004446) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bmi_hdeeg_d2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d2) # BMI_HDEEG_D3: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BMI_HDEEG_D3 dataset = BMI_HDEEG_D3(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BMI_HDEEG_D3(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BMI_HDEEG_D3( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bmi_hdeeg_d3, } ``` ## About This Dataset No README content is available for this dataset.
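Datasets accessed this way are typically fed into model training (see the braindecode/PyTorch quickstart in the project docs); when doing so, splitting *by subject* avoids leaking one subject's recordings into both the train and test sets. A minimal, library-free sketch with hypothetical subject labels:

```python
import random

# Hypothetical subject labels; real IDs come from the dataset's metadata.
subjects = [f"{i:02d}" for i in range(1, 11)]  # "01" .. "10"

rng = random.Random(0)  # fixed seed so the split is reproducible
shuffled = subjects[:]
rng.shuffle(shuffled)

n_test = max(1, len(shuffled) // 5)  # hold out roughly 20% of subjects
test_subjects = sorted(shuffled[:n_test])
train_subjects = sorted(shuffled[n_test:])

assert not set(train_subjects) & set(test_subjects)  # no subject overlap
print("train:", train_subjects)
print("test:", test_subjects)
```

Each subject list can then be passed to the dataset constructor (e.g. via a `{"subject": {"$in": train_subjects}}` query, as in the Advanced query example) to build the two splits.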
## Dataset Information | Dataset ID | `BMI_HDEEG_D3` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BMI_HDEEG_D3` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bmi_hdeeg_d3) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d3) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bmi_hdeeg_d3](https://openneuro.org/datasets/bmi_hdeeg_d3) - NeMAR: [bmi_hdeeg_d3](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d3) ## API Reference Use the `BMI_HDEEG_D3` class to access this dataset programmatically. ### eegdash.dataset.BMI_HDEEG_D3 alias of [`DS004447`](eegdash.dataset.DS004447.md#eegdash.dataset.DS004447) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bmi_hdeeg_d3) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d3) # BMI_HDEEG_D4: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BMI_HDEEG_D4 dataset = BMI_HDEEG_D4(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BMI_HDEEG_D4(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BMI_HDEEG_D4( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bmi_hdeeg_d4, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BMI_HDEEG_D4` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BMI_HDEEG_D4` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bmi_hdeeg_d4) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d4) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bmi_hdeeg_d4](https://openneuro.org/datasets/bmi_hdeeg_d4) - NeMAR: [bmi_hdeeg_d4](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d4) ## API Reference Use the `BMI_HDEEG_D4` class to access this dataset programmatically. 
### eegdash.dataset.BMI_HDEEG_D4 alias of [`DS004448`](eegdash.dataset.DS004448.md#eegdash.dataset.DS004448) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bmi_hdeeg_d4) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bmi_hdeeg_d4) # BNCI2003_IVa: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2003_IVa dataset = BNCI2003_IVa(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2003_IVa(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2003_IVa( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2003_iva, } ``` ## About This Dataset No README content is available for this dataset.
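Data cached under `cache_dir` follows the BIDS layout: `sub-<label>/[ses-<label>/]eeg/` directories and filenames built from underscore-joined entity-value pairs ending in the `_eeg` suffix. A sketch of that naming convention (illustrative only; the actual paths are produced by the library from the upstream BIDS dataset, and the `.set` extension here is just an example):

```python
from pathlib import Path

def bids_eeg_path(root, subject, task, session=None, extension=".set"):
    """Assemble a BIDS-style EEG file path (illustrative, not exhaustive)."""
    entities = [f"sub-{subject}"]      # entity-value pairs, in BIDS order
    if session is not None:
        entities.append(f"ses-{session}")
    dirs = list(entities)              # sub-*/[ses-*/] directory levels
    entities.append(f"task-{task}")
    filename = "_".join(entities) + "_eeg" + extension
    return Path(root, *dirs, "eeg", filename)

# e.g. data/sub-01/eeg/sub-01_task-rest_eeg.set (on POSIX)
print(bids_eeg_path("./data", subject="01", task="rest"))
```

Real BIDS names can carry further entities (`run-`, `acq-`, …); this sketch covers only the subject/session/task pattern visible in the examples above.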
## Dataset Information | Dataset ID | `BNCI2003_IVA` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2003_IVA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2003_iva) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2003_iva) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2003_iva](https://openneuro.org/datasets/bnci2003_iva) - NeMAR: [bnci2003_iva](https://nemar.org/dataexplorer/detail?dataset_id=bnci2003_iva) ## API Reference Use the `BNCI2003_IVa` class to access this dataset programmatically. ### eegdash.dataset.BNCI2003_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2003_iva) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2003_iva) # BNCI2014001: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2014001 dataset = BNCI2014001(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2014001(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2014001( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2014001, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2014001` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2014001` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2014001) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014001) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2014001](https://openneuro.org/datasets/bnci2014001) - NeMAR: [bnci2014001](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014001) ## API Reference Use the `BNCI2014001` class to access this dataset programmatically. 
### eegdash.dataset.BNCI2014001 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2014001) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014001) # BNCI2014002: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2014002 dataset = BNCI2014002(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2014002(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2014002( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2014002, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2014002` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2014002` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2014002) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014002) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2014002](https://openneuro.org/datasets/bnci2014002) - NeMAR: [bnci2014002](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014002) ## API Reference Use the `BNCI2014002` class to access this dataset programmatically. ### eegdash.dataset.BNCI2014002 alias of [`NM000171`](eegdash.dataset.NM000171.md#eegdash.dataset.NM000171) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2014002) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014002) # BNCI2014004: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2014004 dataset = BNCI2014004(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2014004(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2014004( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2014004, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2014004` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2014004` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2014004) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014004) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2014004](https://openneuro.org/datasets/bnci2014004) - NeMAR: [bnci2014004](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014004) ## API Reference Use the `BNCI2014004` class to access this dataset programmatically. 
### eegdash.dataset.BNCI2014004 alias of [`NM000135`](eegdash.dataset.NM000135.md#eegdash.dataset.NM000135) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2014004) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014004) # BNCI2014008: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2014008 dataset = BNCI2014008(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2014008(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2014008( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2014008, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2014008` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2014008` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2014008) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014008) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2014008](https://openneuro.org/datasets/bnci2014008) - NeMAR: [bnci2014008](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014008) ## API Reference Use the `BNCI2014008` class to access this dataset programmatically. ### eegdash.dataset.BNCI2014008 alias of [`NM000169`](eegdash.dataset.NM000169.md#eegdash.dataset.NM000169) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2014008) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014008) # BNCI2014_009_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2014_009_P300 dataset = BNCI2014_009_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2014_009_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2014_009_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2014_009_p300, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2014_009_P300` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2014_009_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2014_009_p300) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014_009_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2014_009_p300](https://openneuro.org/datasets/bnci2014_009_p300) - NeMAR: [bnci2014_009_p300](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014_009_p300) ## API Reference Use
the `BNCI2014_009_P300` class to access this dataset programmatically. ### eegdash.dataset.BNCI2014_009_P300 alias of [`NM000188`](eegdash.dataset.NM000188.md#eegdash.dataset.NM000188) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2014_009_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2014_009_p300) # BNCI2015: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015 dataset = BNCI2015(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015, } ``` ## About This Dataset No README content is available for this dataset.
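The "Iterate recordings" pattern on these pages (`rec.subject`, `rec.raw.info['sfreq']`) lends itself to quick summaries, e.g. tallying sampling rates across a dataset. A sketch using hypothetical stand-in objects — real recordings come from the dataset and may trigger downloads on first access; `FakeRaw` and `FakeRec` below only mirror the two attributes used here:

```python
from collections import Counter


class FakeRaw:
    """Stand-in for an MNE Raw: exposes only the .info mapping used here."""
    def __init__(self, sfreq):
        self.info = {"sfreq": sfreq}


class FakeRec:
    """Stand-in for a dataset recording with .subject and .raw attributes."""
    def __init__(self, subject, sfreq):
        self.subject = subject
        self.raw = FakeRaw(sfreq)


recordings = [FakeRec("01", 250.0), FakeRec("02", 250.0), FakeRec("03", 512.0)]

# Tally sampling rates, exactly as one would over `for rec in dataset:`.
rates = Counter(rec.raw.info["sfreq"] for rec in recordings)
print(dict(rates))  # -> {250.0: 2, 512.0: 1}
```

Swapping `recordings` for a real dataset object keeps the loop body unchanged.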
## Dataset Information | Dataset ID | `BNCI2015` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015](https://openneuro.org/datasets/bnci2015) - NeMAR: [bnci2015](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015) ## API Reference Use the `BNCI2015` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015) # BNCI2015001: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015001 dataset = BNCI2015001(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015001(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015001( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015001, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2015001` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015001` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015001) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015001) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015001](https://openneuro.org/datasets/bnci2015001) - NeMAR: [bnci2015001](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015001) ## API Reference Use the `BNCI2015001` class to access this dataset programmatically.
### eegdash.dataset.BNCI2015001 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015001) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015001) # BNCI2015_003_AMUSE: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_003_AMUSE dataset = BNCI2015_003_AMUSE(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_003_AMUSE(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_003_AMUSE( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_003_amuse, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2015_003_AMUSE` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_003_AMUSE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_003_amuse) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_amuse) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_003_amuse](https://openneuro.org/datasets/bnci2015_003_amuse) - NeMAR: [bnci2015_003_amuse](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_amuse) ## API Reference Use the `BNCI2015_003_AMUSE` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_003_AMUSE alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_003_amuse) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_amuse) # BNCI2015_003_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_003_P300 dataset = BNCI2015_003_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_003_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_003_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_003_p300, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2015_003_P300` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_003_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_003_p300) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_003_p300](https://openneuro.org/datasets/bnci2015_003_p300) - NeMAR: [bnci2015_003_p300](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_p300) ## API Reference Use
the `BNCI2015_003_P300` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_003_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_003_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_003_p300) # BNCI2015_006_MusicBCI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_006_MusicBCI dataset = BNCI2015_006_MusicBCI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_006_MusicBCI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_006_MusicBCI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_006_musicbci, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2015_006_MUSICBCI` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_006_MUSICBCI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_006_musicbci) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_006_musicbci) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_006_musicbci](https://openneuro.org/datasets/bnci2015_006_musicbci) - NeMAR: [bnci2015_006_musicbci](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_006_musicbci) ## API Reference Use the `BNCI2015_006_MusicBCI` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_006_MusicBCI alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_006_musicbci) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_006_musicbci) # BNCI2015_008_CenterSpeller: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_008_CenterSpeller dataset = BNCI2015_008_CenterSpeller(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_008_CenterSpeller(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_008_CenterSpeller( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_008_centerspeller, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2015_008_CENTERSPELLER` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_008_CENTERSPELLER` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_008_centerspeller) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_centerspeller) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_008_centerspeller](https://openneuro.org/datasets/bnci2015_008_centerspeller) - 
NeMAR: [bnci2015_008_centerspeller](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_centerspeller) ## API Reference Use the `BNCI2015_008_CenterSpeller` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_008_CenterSpeller alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_008_centerspeller) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_centerspeller) # BNCI2015_008_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_008_P300 dataset = BNCI2015_008_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_008_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_008_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_008_p300, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2015_008_P300` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_008_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_008_p300) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_008_p300](https://openneuro.org/datasets/bnci2015_008_p300) - NeMAR: [bnci2015_008_p300](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_p300) ## API Reference Use the `BNCI2015_008_P300` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_008_P300 alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_008_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_008_p300) # BNCI2015_BNCI_006_Music: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_BNCI_006_Music dataset = BNCI2015_BNCI_006_Music(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_BNCI_006_Music(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_BNCI_006_Music( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_bnci_006_music, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2015_BNCI_006_MUSIC` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_BNCI_006_MUSIC` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_bnci_006_music) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_bnci_006_music) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_bnci_006_music](https://openneuro.org/datasets/bnci2015_bnci_006_music) - NeMAR: 
[bnci2015_bnci_006_music](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_bnci_006_music) ## API Reference Use the `BNCI2015_BNCI_006_Music` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_BNCI_006_Music alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_bnci_006_music) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_bnci_006_music) # BNCI2015_ERP: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_ERP dataset = BNCI2015_ERP(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_ERP(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_ERP( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_erp, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2015_ERP` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_ERP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_erp) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_erp) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_erp](https://openneuro.org/datasets/bnci2015_erp) - NeMAR: [bnci2015_erp](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_erp) ## API Reference Use the `BNCI2015_ERP` class to access this dataset programmatically. ### eegdash.dataset.BNCI2015_ERP alias of [`NM000234`](eegdash.dataset.NM000234.md#eegdash.dataset.NM000234) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_erp) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_erp) # BNCI2015_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2015_P300 dataset = BNCI2015_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2015_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2015_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2015_p300, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2015_P300` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2015_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2015_p300) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2015_p300](https://openneuro.org/datasets/bnci2015_p300) - NeMAR: [bnci2015_p300](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_p300) ## API Reference Use the `BNCI2015_P300` class to access this dataset 
programmatically. ### eegdash.dataset.BNCI2015_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2015_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2015_p300) # BNCI2016: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2016 dataset = BNCI2016(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2016(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2016( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2016, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2016` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2016` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2016) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2016](https://openneuro.org/datasets/bnci2016) - NeMAR: [bnci2016](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016) ## API Reference Use the `BNCI2016` class to access this dataset programmatically. ### eegdash.dataset.BNCI2016 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2016) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016) # BNCI2016002: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2016002 dataset = BNCI2016002(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2016002(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2016002( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2016002, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2016002` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2016002` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2016002) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016002) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2016002](https://openneuro.org/datasets/bnci2016002) - NeMAR: [bnci2016002](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016002) ## API Reference Use the `BNCI2016002` class to access this dataset programmatically. 
### eegdash.dataset.BNCI2016002 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2016002) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2016002) # BNCI2020: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2020 dataset = BNCI2020(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2020(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2020( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2020, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2020` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2020` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2020) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2020](https://openneuro.org/datasets/bnci2020) - NeMAR: [bnci2020](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020) ## API Reference Use the `BNCI2020` class to access this dataset programmatically. ### eegdash.dataset.BNCI2020 alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2020) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020) # BNCI2020_002_AttentionShift: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2020_002_AttentionShift dataset = BNCI2020_002_AttentionShift(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2020_002_AttentionShift(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2020_002_AttentionShift( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2020_002_attentionshift, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2020_002_ATTENTIONSHIFT` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2020_002_ATTENTIONSHIFT` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2020_002_attentionshift) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_attentionshift) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: 
[bnci2020_002_attentionshift](https://openneuro.org/datasets/bnci2020_002_attentionshift) - NeMAR: [bnci2020_002_attentionshift](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_attentionshift) ## API Reference Use the `BNCI2020_002_AttentionShift` class to access this dataset programmatically. ### eegdash.dataset.BNCI2020_002_AttentionShift alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2020_002_attentionshift) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_attentionshift) # BNCI2020_002_CovertSpatialAttention: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2020_002_CovertSpatialAttention dataset = BNCI2020_002_CovertSpatialAttention(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2020_002_CovertSpatialAttention(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2020_002_CovertSpatialAttention( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2020_002_covertspatialattention, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BNCI2020_002_COVERTSPATIALATTENTION` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2020_002_COVERTSPATIALATTENTION` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2020_002_covertspatialattention) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_covertspatialattention) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2020_002_covertspatialattention](https://openneuro.org/datasets/bnci2020_002_covertspatialattention) - NeMAR: [bnci2020_002_covertspatialattention](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_covertspatialattention) ## API Reference Use the `BNCI2020_002_CovertSpatialAttention` class to access this dataset programmatically. ### eegdash.dataset.BNCI2020_002_CovertSpatialAttention alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2020_002_covertspatialattention) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2020_002_covertspatialattention) # BNCI2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI2025 dataset = BNCI2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BNCI2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci2025](https://openneuro.org/datasets/bnci2025) - NeMAR: [bnci2025](https://nemar.org/dataexplorer/detail?dataset_id=bnci2025) ## API Reference Use the `BNCI2025` class to access this dataset programmatically. 
### eegdash.dataset.BNCI2025 alias of [`NM000162`](eegdash.dataset.NM000162.md#eegdash.dataset.NM000162) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci2025) # BNCI_2015_006_Music: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BNCI_2015_006_Music dataset = BNCI_2015_006_Music(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BNCI_2015_006_Music(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BNCI_2015_006_Music( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bnci_2015_006_music, } ``` ## About This Dataset No README content is available for this dataset.
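Several importable names on these pages are documented as an "alias of" a canonical dataset class (for example, `BNCI_2015_006_Music` is an alias of `NM000192`, and multiple `BNCI2020_002_*` names all alias `NM000219`). A minimal sketch of that pattern, assuming the alias is a plain name binding to the same class object (the classes below are illustrative stand-ins, not the real eegdash classes):

```python
# Stand-in for a canonical dataset class (illustrative only).
class NM000192:
    """Placeholder for the canonical dataset class the alias resolves to."""
    dataset_id = "NM000192"

# A friendly importable name bound to the same class object — both names
# construct identical datasets, and `is` comparison confirms the aliasing.
BNCI_2015_006_Music = NM000192

print(BNCI_2015_006_Music is NM000192)        # the alias is the same class
print(BNCI_2015_006_Music.dataset_id)         # canonical ID is preserved
```

This is why two differently named pages can point at the same `NM…` API reference: the friendly name adds discoverability without duplicating the class.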
## Dataset Information | Dataset ID | `BNCI_2015_006_MUSIC` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BNCI_2015_006_MUSIC` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bnci_2015_006_music) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bnci_2015_006_music) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bnci_2015_006_music](https://openneuro.org/datasets/bnci_2015_006_music) - NeMAR: [bnci_2015_006_music](https://nemar.org/dataexplorer/detail?dataset_id=bnci_2015_006_music) ## API Reference Use the `BNCI_2015_006_Music` class to access this dataset programmatically. ### eegdash.dataset.BNCI_2015_006_Music alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bnci_2015_006_music) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bnci_2015_006_music) # BOAS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BOAS dataset = BOAS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BOAS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BOAS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{boas, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BOAS` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BOAS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/boas) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=boas) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [boas](https://openneuro.org/datasets/boas) - NeMAR: [boas](https://nemar.org/dataexplorer/detail?dataset_id=boas) ## API Reference Use the `BOAS` class to access this dataset programmatically. 
### eegdash.dataset.BOAS alias of [`DS005555`](eegdash.dataset.DS005555.md#eegdash.dataset.DS005555) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/boas) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=boas) # Barras2021: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Barras2021 dataset = Barras2021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Barras2021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Barras2021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{barras2021, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BARRAS2021` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BARRAS2021` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/barras2021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=barras2021) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [barras2021](https://openneuro.org/datasets/barras2021) - NeMAR: [barras2021](https://nemar.org/dataexplorer/detail?dataset_id=barras2021) ## API Reference Use the `Barras2021` class to access this dataset programmatically. ### eegdash.dataset.Barras2021 alias of [`DS007169`](eegdash.dataset.DS007169.md#eegdash.dataset.DS007169) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/barras2021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=barras2021) # Barras2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Barras2025 dataset = Barras2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Barras2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Barras2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{barras2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BARRAS2025` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BARRAS2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/barras2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=barras2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [barras2025](https://openneuro.org/datasets/barras2025) - NeMAR: [barras2025](https://nemar.org/dataexplorer/detail?dataset_id=barras2025) ## API Reference Use the `Barras2025` class to access this dataset programmatically. 
### eegdash.dataset.Barras2025 alias of [`DS007262`](eegdash.dataset.DS007262.md#eegdash.dataset.DS007262) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/barras2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=barras2025) # BetaSSVEP: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BetaSSVEP dataset = BetaSSVEP(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BetaSSVEP(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BetaSSVEP( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{betassvep, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BETASSVEP` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BETASSVEP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/betassvep) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=betassvep) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [betassvep](https://openneuro.org/datasets/betassvep) - NeMAR: [betassvep](https://nemar.org/dataexplorer/detail?dataset_id=betassvep) ## API Reference Use the `BetaSSVEP` class to access this dataset programmatically. ### eegdash.dataset.BetaSSVEP alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/betassvep) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=betassvep) # BigP3BCI_E: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_E dataset = BigP3BCI_E(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_E(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_E( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_e, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_E` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_E` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_e) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_e) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_e](https://openneuro.org/datasets/bigp3bci_e) - NeMAR: [bigp3bci_e](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_e) ## API Reference Use the `BigP3BCI_E` class to access this dataset programmatically. 
### eegdash.dataset.BigP3BCI_E alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_e) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_e) # BigP3BCI_F: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_F dataset = BigP3BCI_F(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_F(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_F( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_f, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BIGP3BCI_F` | |----------------|---------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_F` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_f) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_f) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_f](https://openneuro.org/datasets/bigp3bci_f) - NeMAR: [bigp3bci_f](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_f) ## API Reference Use the `BigP3BCI_F` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_F alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_f) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_f) # BigP3BCI_G: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_G dataset = BigP3BCI_G(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_G(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_G( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_g, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_G` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_G` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_g) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_g) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_g](https://openneuro.org/datasets/bigp3bci_g) - NeMAR: [bigp3bci_g](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_g) ## API Reference Use the `BigP3BCI_G` class to access this dataset programmatically. 
### eegdash.dataset.BigP3BCI_G alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_g) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_g) # BigP3BCI_H: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_H dataset = BigP3BCI_H(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_H(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_H( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_h, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BIGP3BCI_H` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_H` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_h) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_h) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_h](https://openneuro.org/datasets/bigp3bci_h) - NeMAR: [bigp3bci_h](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_h) ## API Reference Use the `BigP3BCI_H` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_H alias of [`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_h) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_h) # BigP3BCI_I: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_I dataset = BigP3BCI_I(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_I(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_I( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_i, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_I` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_I` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_i) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_i) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_i](https://openneuro.org/datasets/bigp3bci_i) - NeMAR: [bigp3bci_i](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_i) ## API Reference Use the `BigP3BCI_I` class to access this dataset programmatically. 
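The API Reference entries on these pages note that each friendly dataset class is an "alias of" a generated registry class. A minimal sketch of what that aliasing means at the Python level — the class names here are hypothetical stand-ins, not eegdash's real generated classes:

```python
# Hypothetical stand-ins for eegdash's generated dataset classes;
# this only illustrates Python class aliasing, not the real library.

class NM000200:
    """Stand-in for an auto-generated registry dataset class."""
    def __init__(self, cache_dir: str):
        self.cache_dir = cache_dir

# An "alias" is simply a second name bound to the same class object.
BigP3BCI_I = NM000200

assert BigP3BCI_I is NM000200  # one class, two names
ds = BigP3BCI_I(cache_dir="./data")
print(type(ds).__name__)  # -> 'NM000200'
```

Because both names refer to the same class object, constructing through either name yields identical instances; the friendly name exists purely for readability.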
### eegdash.dataset.BigP3BCI_I alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_i) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_i) # BigP3BCI_K: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_K dataset = BigP3BCI_K(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_K(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_K( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_k, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_K` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_K` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_k) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_k) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_k](https://openneuro.org/datasets/bigp3bci_k) - NeMAR: [bigp3bci_k](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_k) ## API Reference Use the `BigP3BCI_K` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_K alias of [`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_k) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_k) # BigP3BCI_M: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_M dataset = BigP3BCI_M(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_M(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_M( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_m, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_M` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_M` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_m) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_m) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_m](https://openneuro.org/datasets/bigp3bci_m) - NeMAR: [bigp3bci_m](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_m) ## API Reference Use the `BigP3BCI_M` class to access this dataset programmatically. 
### eegdash.dataset.BigP3BCI_M alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_m) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_m) # BigP3BCI_S1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_S1 dataset = BigP3BCI_S1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_S1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_S1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_s1, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_S1` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_S1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_s1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_s1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_s1](https://openneuro.org/datasets/bigp3bci_s1) - NeMAR: [bigp3bci_s1](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_s1) ## API Reference Use the `BigP3BCI_S1` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_S1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_s1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_s1) # BigP3BCI_StudyE: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyE dataset = BigP3BCI_StudyE(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyE(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyE( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studye, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_STUDYE` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studye) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studye) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studye](https://openneuro.org/datasets/bigp3bci_studye) - NeMAR: [bigp3bci_studye](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studye) ## API Reference Use the `BigP3BCI_StudyE` class to 
access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyE alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studye) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studye) # BigP3BCI_StudyF: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyF dataset = BigP3BCI_StudyF(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyF(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyF( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyf, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_STUDYF` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyf](https://openneuro.org/datasets/bigp3bci_studyf) - NeMAR: [bigp3bci_studyf](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyf) ## API Reference Use the `BigP3BCI_StudyF` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyF alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyf) # BigP3BCI_StudyG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyG dataset = BigP3BCI_StudyG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_STUDYG` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyg](https://openneuro.org/datasets/bigp3bci_studyg) - NeMAR: [bigp3bci_studyg](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyg) ## API Reference Use the `BigP3BCI_StudyG` class to 
access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyG alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyg) # BigP3BCI_StudyH: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyH dataset = BigP3BCI_StudyH(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyH(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyH( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyh, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_STUDYH` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYH` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyh) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyh) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyh](https://openneuro.org/datasets/bigp3bci_studyh) - NeMAR: [bigp3bci_studyh](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyh) ## API Reference Use the `BigP3BCI_StudyH` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyH alias of [`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyh) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyh) # BigP3BCI_StudyI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyI dataset = BigP3BCI_StudyI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyi, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_STUDYI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyi) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyi) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyi](https://openneuro.org/datasets/bigp3bci_studyi) - NeMAR: [bigp3bci_studyi](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyi) ## API Reference Use the `BigP3BCI_StudyI` class to 
access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyI alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyi) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyi) # BigP3BCI_StudyK: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyK dataset = BigP3BCI_StudyK(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyK(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyK( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyk, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_STUDYK` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYK` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyk) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyk) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyk](https://openneuro.org/datasets/bigp3bci_studyk) - NeMAR: [bigp3bci_studyk](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyk) ## API Reference Use the `BigP3BCI_StudyK` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyK alias of [`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyk) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyk) # BigP3BCI_StudyM: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyM dataset = BigP3BCI_StudyM(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyM(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyM( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studym, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_STUDYM` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYM` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studym) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studym) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studym](https://openneuro.org/datasets/bigp3bci_studym) - NeMAR: [bigp3bci_studym](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studym) ## API Reference Use the `BigP3BCI_StudyM` class to 
access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyM alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studym) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studym) # BigP3BCI_StudyN: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyN dataset = BigP3BCI_StudyN(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyN(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyN( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studyn, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `BIGP3BCI_STUDYN` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYN` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studyn) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyn) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studyn](https://openneuro.org/datasets/bigp3bci_studyn) - NeMAR: [bigp3bci_studyn](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyn) ## API Reference Use the `BigP3BCI_StudyN` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyN alias of [`NM000187`](eegdash.dataset.NM000187.md#eegdash.dataset.NM000187) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studyn) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studyn) # BigP3BCI_StudyS1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BigP3BCI_StudyS1 dataset = BigP3BCI_StudyS1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BigP3BCI_StudyS1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BigP3BCI_StudyS1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bigp3bci_studys1, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BIGP3BCI_STUDYS1` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BIGP3BCI_STUDYS1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bigp3bci_studys1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studys1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bigp3bci_studys1](https://openneuro.org/datasets/bigp3bci_studys1) - NeMAR: [bigp3bci_studys1](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studys1) ## API Reference Use the 
`BigP3BCI_StudyS1` class to access this dataset programmatically. ### eegdash.dataset.BigP3BCI_StudyS1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bigp3bci_studys1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bigp3bci_studys1) # Bogacz2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Bogacz2024 dataset = Bogacz2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Bogacz2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Bogacz2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{bogacz2024, } ``` ## About This Dataset No README content is available for this dataset.
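The "Advanced query" examples on these pages use MongoDB-style operators such as `$in`. As a rough illustration of the matching semantics only (plain Python; eegdash evaluates queries against its metadata database, not with this code), a filter like `{"subject": {"$in": ["01", "02"]}}` keeps records whose `subject` field appears in the given list:

```python
# Minimal sketch of MongoDB-style query matching, for intuition only.
# Supports the two forms shown in the docs: plain equality and {"$in": [...]}.
def matches(record: dict, query: dict) -> bool:
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):  # operator form, e.g. {"$in": [...]}
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:         # plain equality form, e.g. subject="01"
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "07"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in kept])  # ['01', '02']
```

The same helper also models the simpler `subject="01"` keyword filter, which is equivalent to the equality query `{"subject": "01"}`.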
## Dataset Information | Dataset ID | `BOGACZ2024` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BOGACZ2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/bogacz2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=bogacz2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [bogacz2024](https://openneuro.org/datasets/bogacz2024) - NeMAR: [bogacz2024](https://nemar.org/dataexplorer/detail?dataset_id=bogacz2024) ## API Reference Use the `Bogacz2024` class to access this dataset programmatically. ### eegdash.dataset.Bogacz2024 alias of [`DS002908`](eegdash.dataset.DS002908.md#eegdash.dataset.DS002908) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/bogacz2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=bogacz2024) # BrainInvaders: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders dataset = BrainInvaders(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BRAININVADERS` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders](https://openneuro.org/datasets/braininvaders) - NeMAR: [braininvaders](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders) ## API Reference Use the `BrainInvaders` class to access this dataset 
programmatically. ### eegdash.dataset.BrainInvaders alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders) # BrainInvaders2013a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders2013a dataset = BrainInvaders2013a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders2013a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders2013a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders2013a, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BRAININVADERS2013A` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS2013A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders2013a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2013a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders2013a](https://openneuro.org/datasets/braininvaders2013a) - NeMAR: [braininvaders2013a](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2013a) ## API Reference Use the `BrainInvaders2013a` class to access this dataset programmatically. ### eegdash.dataset.BrainInvaders2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders2013a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2013a) # BrainInvaders2014a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders2014a dataset = BrainInvaders2014a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders2014a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders2014a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders2014a, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BRAININVADERS2014A` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS2014A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders2014a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders2014a](https://openneuro.org/datasets/braininvaders2014a) - NeMAR: [braininvaders2014a](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014a) ## API 
Reference Use the `BrainInvaders2014a` class to access this dataset programmatically. ### eegdash.dataset.BrainInvaders2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders2014a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014a) # BrainInvaders2014b: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders2014b dataset = BrainInvaders2014b(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders2014b(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders2014b( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders2014b, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BRAININVADERS2014B` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS2014B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders2014b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders2014b](https://openneuro.org/datasets/braininvaders2014b) - NeMAR: [braininvaders2014b](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014b) ## API Reference Use the `BrainInvaders2014b` class to access this dataset programmatically. ### eegdash.dataset.BrainInvaders2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders2014b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2014b) # BrainInvaders2015a: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders2015a dataset = BrainInvaders2015a(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders2015a(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders2015a( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders2015a, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BRAININVADERS2015A` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS2015A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders2015a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders2015a](https://openneuro.org/datasets/braininvaders2015a) - NeMAR: [braininvaders2015a](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015a) ## API 
Reference Use the `BrainInvaders2015a` class to access this dataset programmatically. ### eegdash.dataset.BrainInvaders2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders2015a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015a) # BrainInvaders2015b: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvaders2015b dataset = BrainInvaders2015b(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvaders2015b(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvaders2015b( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvaders2015b, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `BRAININVADERS2015B` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERS2015B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvaders2015b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvaders2015b](https://openneuro.org/datasets/braininvaders2015b) - NeMAR: [braininvaders2015b](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015b) ## API Reference Use the `BrainInvaders2015b` class to access this dataset programmatically. ### eegdash.dataset.BrainInvaders2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvaders2015b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvaders2015b) # BrainInvadersBI2014b: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainInvadersBI2014b dataset = BrainInvadersBI2014b(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainInvadersBI2014b(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainInvadersBI2014b( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braininvadersbi2014b, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BRAININVADERSBI2014B` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAININVADERSBI2014B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braininvadersbi2014b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braininvadersbi2014b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braininvadersbi2014b](https://openneuro.org/datasets/braininvadersbi2014b) - NeMAR: 
[braininvadersbi2014b](https://nemar.org/dataexplorer/detail?dataset_id=braininvadersbi2014b) ## API Reference Use the `BrainInvadersBI2014b` class to access this dataset programmatically. ### eegdash.dataset.BrainInvadersBI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braininvadersbi2014b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braininvadersbi2014b) # BrainTreeBank: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import BrainTreeBank dataset = BrainTreeBank(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = BrainTreeBank(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = BrainTreeBank( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{braintreebank, } ``` ## About This Dataset No README content is available for this dataset.
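The quickstart pattern `dataset.datasets[0].raw` reflects a common design in these loaders: the dataset object holds one entry per recording and defers reading the signal until `.raw` is first accessed. A simplified, hypothetical version of that lazy-access pattern in plain Python (illustrative only — not the eegdash implementation, whose recordings wrap MNE `Raw` objects and download from S3):

```python
# Sketch of a lazy-loading concat dataset (hypothetical names and behavior).
class Recording:
    def __init__(self, subject: str):
        self.subject = subject
        self._raw = None                  # nothing loaded yet

    @property
    def raw(self):
        if self._raw is None:             # download/read only on first access
            self._raw = f"raw-data-for-sub-{self.subject}"
        return self._raw

class ConcatDataset:
    def __init__(self, recordings):
        self.datasets = recordings        # mirrors dataset.datasets[...]

    def __iter__(self):                   # mirrors "for rec in dataset: ..."
        return iter(self.datasets)

ds = ConcatDataset([Recording("01"), Recording("02")])
print(ds.datasets[0].raw)                 # triggers the load on demand
```

This is why constructing a dataset with a large `cache_dir` query is cheap: the expensive I/O happens per recording, only when a given `.raw` is touched.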
## Dataset Information | Dataset ID | `BRAINTREEBANK` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BRAINTREEBANK` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/braintreebank) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=braintreebank) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [braintreebank](https://openneuro.org/datasets/braintreebank) - NeMAR: [braintreebank](https://nemar.org/dataexplorer/detail?dataset_id=braintreebank) ## API Reference Use the `BrainTreeBank` class to access this dataset programmatically. ### eegdash.dataset.BrainTreeBank alias of [`NM000253`](eegdash.dataset.NM000253.md#eegdash.dataset.NM000253) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/braintreebank) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=braintreebank) # Broitman2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Broitman2019 dataset = Broitman2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Broitman2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Broitman2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{broitman2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `BROITMAN2019` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `BROITMAN2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/broitman2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=broitman2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [broitman2019](https://openneuro.org/datasets/broitman2019) - NeMAR: [broitman2019](https://nemar.org/dataexplorer/detail?dataset_id=broitman2019) ## API Reference Use the `Broitman2019` class to access this dataset programmatically. 
### eegdash.dataset.Broitman2019 alias of [`DS005857`](eegdash.dataset.DS005857.md#eegdash.dataset.DS005857) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/broitman2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=broitman2019) # CARLA: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CARLA dataset = CARLA(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CARLA(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CARLA( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{carla, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CARLA` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CARLA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/carla) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=carla) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [carla](https://openneuro.org/datasets/carla) - NeMAR: [carla](https://nemar.org/dataexplorer/detail?dataset_id=carla) ## API Reference Use the `CARLA` class to access this dataset programmatically. ### eegdash.dataset.CARLA alias of [`DS004977`](eegdash.dataset.DS004977.md#eegdash.dataset.DS004977) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/carla) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=carla) # CHBMIT: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CHBMIT dataset = CHBMIT(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CHBMIT(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CHBMIT( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chbmit, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CHBMIT` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHBMIT` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chbmit) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chbmit) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chbmit](https://openneuro.org/datasets/chbmit) - NeMAR: [chbmit](https://nemar.org/dataexplorer/detail?dataset_id=chbmit) ## API Reference Use the `CHBMIT` class to access this dataset programmatically. 
### eegdash.dataset.CHBMIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chbmit) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chbmit) # CHB_MIT: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CHB_MIT dataset = CHB_MIT(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CHB_MIT(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CHB_MIT( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chb_mit, } ``` ## About This Dataset No README content is available for this dataset.
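Note that `CHBMIT` and `CHB_MIT` are both documented as aliases of the same canonical class, `NM000110`, so the two import names are interchangeable. In Python, such module-level aliases are simply two names bound to one class object, as this standalone sketch shows (a stand-in class is used here instead of importing eegdash):

```python
# How module-level dataset aliases work in general: two names, one class.
class NM000110:
    """Stand-in for the canonical dataset class (hypothetical body)."""

CHBMIT = NM000110           # alias spelled without the underscore
CHB_MIT = NM000110          # alias spelled with the underscore

assert CHBMIT is CHB_MIT    # same object, so behavior is identical
print(CHBMIT.__name__)      # NM000110
```

Because the names point at one object, instances constructed through either alias have the same type, and the canonical name (`NM000110`) is what appears in `repr` and tracebacks.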
## Dataset Information | Dataset ID | `CHB_MIT` | |----------------|--------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHB_MIT` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chb_mit) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chb_mit) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chb_mit](https://openneuro.org/datasets/chb_mit) - NeMAR: [chb_mit](https://nemar.org/dataexplorer/detail?dataset_id=chb_mit) ## API Reference Use the `CHB_MIT` class to access this dataset programmatically. ### eegdash.dataset.CHB_MIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chb_mit) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chb_mit) # CHISCO20: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CHISCO20 dataset = CHISCO20(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CHISCO20(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CHISCO20( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chisco20, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CHISCO20` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHISCO20` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chisco20) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chisco20](https://openneuro.org/datasets/chisco20) - NeMAR: [chisco20](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) ## API Reference Use the `CHISCO20` class to access this dataset programmatically. 
### eegdash.dataset.CHISCO20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chisco20) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) # CPSEED: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CPSEED dataset = CPSEED(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CPSEED(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CPSEED( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{cpseed, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CPSEED` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CPSEED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/cpseed) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=cpseed) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [cpseed](https://openneuro.org/datasets/cpseed) - NeMAR: [cpseed](https://nemar.org/dataexplorer/detail?dataset_id=cpseed) ## API Reference Use the `CPSEED` class to access this dataset programmatically. ### eegdash.dataset.CPSEED alias of [`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/cpseed) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=cpseed) # CPSEED_3M: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CPSEED_3M dataset = CPSEED_3M(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CPSEED_3M(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CPSEED_3M( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{cpseed_3m, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CPSEED_3M` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CPSEED_3M` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/cpseed_3m) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=cpseed_3m) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [cpseed_3m](https://openneuro.org/datasets/cpseed_3m) - NeMAR: [cpseed_3m](https://nemar.org/dataexplorer/detail?dataset_id=cpseed_3m) ## API Reference Use the `CPSEED_3M` class to access this dataset programmatically. 
### eegdash.dataset.CPSEED_3M alias of [`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/cpseed_3m) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=cpseed_3m) # CastillosCVEP40: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CastillosCVEP40 dataset = CastillosCVEP40(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CastillosCVEP40(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CastillosCVEP40( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{castilloscvep40, } ``` ## About This Dataset No README content is available for this dataset.
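The iterate-recordings pattern above yields one recording at a time; for subject-wise splits or per-subject statistics it is common to group them first. A minimal stdlib sketch; the `Rec` tuple is a hypothetical stand-in for the recording objects a dataset yields:

```python
from collections import defaultdict
from typing import NamedTuple

class Rec(NamedTuple):
    """Hypothetical stand-in for a recording with subject and sampling rate."""
    subject: str
    sfreq: float

# Illustrative recordings, as might come from iterating a dataset
recs = [Rec("01", 256.0), Rec("01", 256.0), Rec("02", 512.0)]

by_subject = defaultdict(list)
for rec in recs:
    by_subject[rec.subject].append(rec)

print({s: len(v) for s, v in by_subject.items()})  # {'01': 2, '02': 1}
```

Grouping before splitting is what keeps all recordings of one subject on the same side of a train/test split.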
## Dataset Information | Dataset ID | `CASTILLOSCVEP40` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CASTILLOSCVEP40` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/castilloscvep40) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=castilloscvep40) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [castilloscvep40](https://openneuro.org/datasets/castilloscvep40) - NeMAR: [castilloscvep40](https://nemar.org/dataexplorer/detail?dataset_id=castilloscvep40) ## API Reference Use the `CastillosCVEP40` class to access this dataset programmatically. ### eegdash.dataset.CastillosCVEP40 alias of [`NM000342`](eegdash.dataset.NM000342.md#eegdash.dataset.NM000342) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/castilloscvep40) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=castilloscvep40) # CatFR: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import CatFR dataset = CatFR(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = CatFR(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = CatFR( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{catfr, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CATFR` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CATFR` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/catfr) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=catfr) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [catfr](https://openneuro.org/datasets/catfr) - NeMAR: [catfr](https://nemar.org/dataexplorer/detail?dataset_id=catfr) ## API Reference Use the `CatFR` class to access this dataset programmatically. 
### eegdash.dataset.CatFR alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/catfr) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=catfr) # Chandravadia2022: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chandravadia2022 dataset = Chandravadia2022(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chandravadia2022(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chandravadia2022( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chandravadia2022, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CHANDRAVADIA2022` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHANDRAVADIA2022` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chandravadia2022) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chandravadia2022) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chandravadia2022](https://openneuro.org/datasets/chandravadia2022) - NeMAR: [chandravadia2022](https://nemar.org/dataexplorer/detail?dataset_id=chandravadia2022) ## API Reference Use the `Chandravadia2022` class to access this dataset programmatically. ### eegdash.dataset.Chandravadia2022 alias of [`DS005028`](eegdash.dataset.DS005028.md#eegdash.dataset.DS005028) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chandravadia2022) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chandravadia2022) # Chang2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chang2025 dataset = Chang2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chang2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chang2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chang2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CHANG2025` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHANG2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chang2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chang2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chang2025](https://openneuro.org/datasets/chang2025) - NeMAR: [chang2025](https://nemar.org/dataexplorer/detail?dataset_id=chang2025) ## API Reference Use the `Chang2025` class to access this dataset programmatically. 
### eegdash.dataset.Chang2025 alias of [`NM000271`](eegdash.dataset.NM000271.md#eegdash.dataset.NM000271) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chang2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chang2025) # Chavarriaga2010: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chavarriaga2010 dataset = Chavarriaga2010(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chavarriaga2010(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chavarriaga2010( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chavarriaga2010, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CHAVARRIAGA2010` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHAVARRIAGA2010` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chavarriaga2010) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chavarriaga2010) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chavarriaga2010](https://openneuro.org/datasets/chavarriaga2010) - NeMAR: [chavarriaga2010](https://nemar.org/dataexplorer/detail?dataset_id=chavarriaga2010) ## API Reference Use the `Chavarriaga2010` class to access this dataset programmatically. ### eegdash.dataset.Chavarriaga2010 alias of [`NM000168`](eegdash.dataset.NM000168.md#eegdash.dataset.NM000168) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chavarriaga2010) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chavarriaga2010) # Chisco: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chisco dataset = Chisco(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chisco(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chisco( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chisco, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CHISCO` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHISCO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chisco) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chisco) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chisco](https://openneuro.org/datasets/chisco) - NeMAR: [chisco](https://nemar.org/dataexplorer/detail?dataset_id=chisco) ## API Reference Use the `Chisco` class to access this dataset programmatically. 
### eegdash.dataset.Chisco alias of [`DS005170`](eegdash.dataset.DS005170.md#eegdash.dataset.DS005170) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chisco) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chisco) # Chisco20: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chisco20 dataset = Chisco20(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chisco20(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chisco20( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chisco20, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CHISCO20` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHISCO20` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chisco20) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chisco20](https://openneuro.org/datasets/chisco20) - NeMAR: [chisco20](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) ## API Reference Use the `Chisco20` class to access this dataset programmatically. ### eegdash.dataset.Chisco20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chisco20) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chisco20) # Chisco2_0: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Chisco2_0 dataset = Chisco2_0(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Chisco2_0(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Chisco2_0( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{chisco2_0, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CHISCO2_0` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CHISCO2_0` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/chisco2_0) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=chisco2_0) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [chisco2_0](https://openneuro.org/datasets/chisco2_0) - NeMAR: [chisco2_0](https://nemar.org/dataexplorer/detail?dataset_id=chisco2_0) ## API Reference Use the `Chisco2_0` class to access this dataset programmatically. 
### eegdash.dataset.Chisco2_0 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/chisco2_0) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=chisco2_0) # Cote2015: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Cote2015 dataset = Cote2015(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Cote2015(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Cote2015( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{cote2015, } ``` ## About This Dataset No README content is available for this dataset.
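These datasets are distributed in BIDS format, where metadata such as subject and task are encoded as `key-value` pairs in each filename. As a minimal sketch of that convention (robust parsing belongs to dedicated tooling such as `mne-bids`; the filename below is illustrative):

```python
def parse_bids_entities(filename: str) -> dict:
    """Parse BIDS-style entities from a filename.

    'sub-01_task-rest_eeg.edf' -> {'sub': '01', 'task': 'rest'}
    The trailing suffix ('eeg') carries no '-' and is not an entity.
    """
    stem = filename.rsplit(".", 1)[0]
    entities = {}
    for part in stem.split("_"):
        if "-" in part:
            key, _, value = part.partition("-")
            entities[key] = value
    return entities

print(parse_bids_entities("sub-01_task-rest_eeg.edf"))
# {'sub': '01', 'task': 'rest'}
```

This is why `subject="01"` in the quickstart maps cleanly onto files on disk: the filter corresponds directly to the `sub-01` entity in the BIDS filenames.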
## Dataset Information | Dataset ID | `COTE2015` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COTE2015` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/cote2015) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=cote2015) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [cote2015](https://openneuro.org/datasets/cote2015) - NeMAR: [cote2015](https://nemar.org/dataexplorer/detail?dataset_id=cote2015) ## API Reference Use the `Cote2015` class to access this dataset programmatically. ### eegdash.dataset.Cote2015 alias of [`DS003082`](eegdash.dataset.DS003082.md#eegdash.dataset.DS003082) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/cote2015) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=cote2015) # Couperus2017: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2017 dataset = Couperus2017(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2017(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2017( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2017, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `COUPERUS2017` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2017` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2017) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2017) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2017](https://openneuro.org/datasets/couperus2017) - NeMAR: [couperus2017](https://nemar.org/dataexplorer/detail?dataset_id=couperus2017) ## API Reference Use the `Couperus2017` class to access this dataset programmatically. 
### eegdash.dataset.Couperus2017 alias of [`DS007096`](eegdash.dataset.DS007096.md#eegdash.dataset.DS007096) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2017) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2017) # Couperus2021_LRP: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2021_LRP dataset = Couperus2021_LRP(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2021_LRP(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2021_LRP( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2021_lrp, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `COUPERUS2021_LRP` | |----------------|-----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2021_LRP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2021_lrp) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_lrp) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2021_lrp](https://openneuro.org/datasets/couperus2021_lrp) - NeMAR: [couperus2021_lrp](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_lrp) ## API Reference Use the `Couperus2021_LRP` class to access this dataset programmatically. ### eegdash.dataset.Couperus2021_LRP alias of [`DS007139`](eegdash.dataset.DS007139.md#eegdash.dataset.DS007139) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2021_lrp) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_lrp) # Couperus2021_MMN: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2021_MMN dataset = Couperus2021_MMN(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2021_MMN(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2021_MMN( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2021_mmn, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `COUPERUS2021_MMN` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2021_MMN` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2021_mmn) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_mmn) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2021_mmn](https://openneuro.org/datasets/couperus2021_mmn) - NeMAR: [couperus2021_mmn](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_mmn) ## API Reference Use the 
`Couperus2021_MMN` class to access this dataset programmatically. ### eegdash.dataset.Couperus2021_MMN alias of [`DS007069`](eegdash.dataset.DS007069.md#eegdash.dataset.DS007069) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2021_mmn) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_mmn) # Couperus2021_N2pc: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2021_N2pc dataset = Couperus2021_N2pc(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2021_N2pc(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2021_N2pc( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2021_n2pc, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `COUPERUS2021_N2PC` | |----------------|-----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2021_N2PC` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2021_n2pc) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n2pc) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2021_n2pc](https://openneuro.org/datasets/couperus2021_n2pc) - NeMAR: [couperus2021_n2pc](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n2pc) ## API Reference Use the `Couperus2021_N2pc` class to access this dataset programmatically. ### eegdash.dataset.Couperus2021_N2pc alias of [`DS007137`](eegdash.dataset.DS007137.md#eegdash.dataset.DS007137) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2021_n2pc) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n2pc) # Couperus2021_N400: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2021_N400 dataset = Couperus2021_N400(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2021_N400(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2021_N400( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2021_n400, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `COUPERUS2021_N400` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2021_N400` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2021_n400) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n400) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2021_n400](https://openneuro.org/datasets/couperus2021_n400) - NeMAR: [couperus2021_n400](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n400) ## API Reference Use 
the `Couperus2021_N400` class to access this dataset programmatically. ### eegdash.dataset.Couperus2021_N400 alias of [`DS007052`](eegdash.dataset.DS007052.md#eegdash.dataset.DS007052) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2021_n400) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_n400) # Couperus2021_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Couperus2021_P300 dataset = Couperus2021_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Couperus2021_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Couperus2021_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{couperus2021_p300, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `COUPERUS2021_P300` | |----------------|-----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `COUPERUS2021_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/couperus2021_p300) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [couperus2021_p300](https://openneuro.org/datasets/couperus2021_p300) - NeMAR: [couperus2021_p300](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_p300) ## API Reference Use the `Couperus2021_P300` class to access this dataset programmatically. ### eegdash.dataset.Couperus2021_P300 alias of [`DS007056`](eegdash.dataset.DS007056.md#eegdash.dataset.DS007056) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/couperus2021_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=couperus2021_p300) # DENS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DENS dataset = DENS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DENS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DENS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{dens, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DENS` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `DENS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/dens) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=dens) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [dens](https://openneuro.org/datasets/dens) - NeMAR: [dens](https://nemar.org/dataexplorer/detail?dataset_id=dens) ## API Reference Use the `DENS` class to access this dataset programmatically. 
### eegdash.dataset.DENS alias of [`DS003751`](eegdash.dataset.DS003751.md#eegdash.dataset.DS003751) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/dens) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=dens) # DS000117: meg dataset, 17 subjects *Multisubject, multimodal face processing* Access recordings and metadata through EEGDash. **Citation:** Wakeman, DG, Henson, RN (2018). *Multisubject, multimodal face processing*. [10.18112/openneuro.ds000117.v1.1.0](https://doi.org/10.18112/openneuro.ds000117.v1.1.0) Modality: meg Subjects: 17 Recordings: 104 License: CC0 Source: openneuro Citations: 77.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS000117 dataset = DS000117(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS000117(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS000117( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds000117, title = {Multisubject, multimodal face processing}, author = {Wakeman, DG and Henson, RN}, doi = {10.18112/openneuro.ds000117.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds000117.v1.1.0}, } ``` ## About This Dataset This dataset was obtained from the OpenNeuro project ([https://www.openneuro.org](https://www.openneuro.org)). 
Accession #: ds000117 The same dataset is also available here: [ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/](ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/), but in a non-BIDS format (which may be easier to download by subject rather than by modality). Note that it is a subset of the data available on OpenfMRI ([http://www.openfmri.org](http://www.openfmri.org); Accession #: ds000117). Description: Multi-subject, multi-modal (sMRI+fMRI+MEG+EEG) neuroimaging dataset on face processing. Please cite the following reference if you use these data: > Wakeman, D.G. & Henson, R.N. (2015). A multi-subject, multi-modal human neuroimaging dataset. Sci. Data 2:150001 doi: 10.1038/sdata.2015.1 The data have been used in several publications including, for example: Henson, R.N., Abdulrahman, H., Flandin, G. & Litvak, V. (2019). Multimodal integration of M/EEG and f/MRI data in SPM12. Frontiers in Neuroscience, Methods, 13, 300. Henson, R.N., Wakeman, D.G., Litvak, V. & Friston, K.J. (2011). A Parametric Empirical Bayesian framework for the EEG/MEG inverse problem: generative models for multisubject and multimodal integration. Frontiers in Human Neuroscience, 5, 76, 1-16. Chapter 42 of the SPM12 manual ([http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf](http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf)) (see [ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications](ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications) for full list), as well as the BioMag2010 data competition and the Kaggle competition: [https://www.kaggle.com/c/decoding-the-human-brain](https://www.kaggle.com/c/decoding-the-human-brain) **func/** Unlike in v1-v3 of this dataset, the first two (dummy) volumes have now been removed (as stated in `*.json`), so event onset times correctly refer to t=0 at the start of the third volume. Note that, owing to scanner error, Subject 10 only has 170 volumes in the last run (Run 9). **meg/** Three anatomical fiducials were digitized for aligning the MEG with the MRI: the nasion (lowest depression between the eyes) and the left and right ears (lowest depression between the tragus and the helix, above the tragus).
This procedure is illustrated here: [http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Subject_Coordinate_System_.28SCS_.2F_CTF.29](http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Subject_Coordinate_System_.28SCS_.2F_CTF.29) and in task-facerecognition_fidinfo.pdf. The following triggers are included in the .fif files and are also used in the “trigger” column of the meg and bold events files:

| Trigger | Label | Simplified Label |
|---------|-------|------------------|
| 5 | Initial Famous Face | IniFF |
| 6 | Immediate Repeat Famous Face | ImmFF |
| 7 | Delayed Repeat Famous Face | DelFF |
| 13 | Initial Unfamiliar Face | IniUF |
| 14 | Immediate Repeat Unfamiliar Face | ImmUF |
| 15 | Delayed Repeat Unfamiliar Face | DelUF |
| 17 | Initial Scrambled Face | IniSF |
| 18 | Immediate Repeat Scrambled Face | ImmSF |
| 19 | Delayed Repeat Scrambled Face | DelSF |

**stimuli/meg/** The .bmp files correspond to those described in the text. There are 6 additional images in this directory, which were used in the practice experiment to familiarize participants with the task (hence some more BIDS validator warnings). **stimuli/mri/** The .bmp files correspond to those described in the text. **Defacing** Defacing of MPRAGE T1 images was performed by the submitter. A subset of subjects have given consent for non-defaced versions to be shared - in which case, please contact [rik.henson@mrc-cbu.cam.ac.uk](mailto:rik.henson@mrc-cbu.cam.ac.uk). **Quality Control** Mriqc was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: [https://mriqc.readthedocs.io/en/latest/](https://mriqc.readthedocs.io/en/latest/) **Known Issues** N/A **Relationship of Subject Numbering relative to other versions of Dataset** There are multiple versions of the dataset available on the web (see notes above), and these entailed a renumbering of the subjects for various reasons. Here are all the versions and how to match subjects between them (plus some rationale and history for different versions): 1.
Original Paper (N=19): Wakeman & Henson (2015): doi:10.1038/sdata.2015.1 > Number refers to the order in which subjects were tested (and some, e.g. 4, 7, 13 etc. were excluded for not completing both MRI and MEG sessions) 2. openfMRI, renumbered from paper: [http://openfmri.org/s3-browser/?prefix=ds000117/ds000117_R0.1.1/uncompressed/](http://openfmri.org/s3-browser/?prefix=ds000117/ds000117_R0.1.1/uncompressed/) : Numbers 1-19 just made contiguous 3. FTP subset of N=16: ftp: [ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/](ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/) : This set was used for SPM Courses. Designed to illustrate multimodal integration, so wanted good MRI+MEG+EEG data for all subjects. Removed original subject_01 and subject_06 because of bad EEG data; subject_19 because of poor EEG and fMRI data (and renumbered subject_14 for some reason). 4. Current OpenNeuro subset N=16 (BIDS): [https://openneuro.org/datasets/ds000117](https://openneuro.org/datasets/ds000117) : OpenNeuro was a rebranding of openfMRI, and enforced BIDS format. Since this version was designed to illustrate multi-modal BIDS, it kept the same numbering as the FTP version.

| W&H2015 | openfMRI | FTP | openNeuro |
|---------|----------|-----|-----------|
| subject_01 | sub001 | — | — |
| subject_02 | sub002 | Sub01 | sub-01 |
| subject_03 | sub003 | Sub02 | sub-02 |
| subject_05 | sub004 | Sub03 | sub-03 |
| subject_06 | sub005 | — | — |
| subject_08 | sub006 | Sub05 | sub-05 |
| subject_09 | sub007 | Sub06 | sub-06 |
| subject_10 | sub008 | Sub07 | sub-07 |
| subject_11 | sub009 | Sub08 | sub-08 |
| subject_12 | sub010 | Sub09 | sub-09 |
| subject_14 | sub011 | Sub04 | sub-04 |
| subject_15 | sub012 | Sub10 | sub-10 |
| subject_16 | sub013 | Sub11 | sub-11 |
| subject_17 | sub014 | Sub12 | sub-12 |
| subject_18 | sub015 | Sub13 | sub-13 |
| subject_19 | sub016 | — | — |
| subject_23 | sub017 | Sub14 | sub-14 |
| subject_24 | sub018 | Sub15 | sub-15 |
| subject_25 | sub019 | Sub16 | sub-16 |

## Dataset Information | Dataset ID | `DS000117` |
|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multisubject, multimodal face processing | | Author (year) | `Wakeman2018` | | Canonical | `Wakeman2015`, `WakemanHenson` | | Importable as | `DS000117`, `Wakeman2018`, `Wakeman2015`, `WakemanHenson` | | Year | 2018 | | Authors | Wakeman, DG, Henson, RN | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds000117.v1.1.0](https://doi.org/10.18112/openneuro.ds000117.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds000117) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) | [Source URL](https://openneuro.org/datasets/ds000117) | ### Copy-paste BibTeX ```bibtex @dataset{ds000117, title = {Multisubject, multimodal face processing}, author = {Wakeman, DG and Henson, RN}, doi = {10.18112/openneuro.ds000117.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds000117.v1.1.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 104 - Tasks: 2 - Channels: 394 - Sampling rate (Hz): 1100.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 87.6 GB - File count: 104 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds000117.v1.1.0 - Source: openneuro - OpenNeuro: [ds000117](https://openneuro.org/datasets/ds000117) - NeMAR: [ds000117](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) ## API Reference Use the `DS000117` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS000117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisubject, multimodal face processing * **Study:** `ds000117` (OpenNeuro) * **Author (year):** `Wakeman2018` * **Canonical:** `Wakeman2015`, `WakemanHenson` Also importable as: `DS000117`, `Wakeman2018`, `Wakeman2015`, `WakemanHenson`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 17; recordings: 104; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000117](https://openneuro.org/datasets/ds000117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000117](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) DOI: [https://doi.org/10.18112/openneuro.ds000117.v1.1.0](https://doi.org/10.18112/openneuro.ds000117.v1.1.0) NEMAR citation count: 77 ### Examples ```pycon >>> from eegdash.dataset import DS000117 >>> dataset = DS000117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds000117) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) * [eegdash.dataset.DS002312](eegdash.dataset.DS002312.md) # DS000246: meg dataset, 2 subjects *MEG-BIDS Brainstorm data sample* Access recordings and metadata through EEGDash. **Citation:** Elizabeth Bock, Peter Donhauser, Francois Tadel, Guiomar Niso, Sylvain Baillet (2018). *MEG-BIDS Brainstorm data sample*. 
[10.18112/openneuro.ds000246.v1.0.1](https://doi.org/10.18112/openneuro.ds000246.v1.0.1) Modality: meg Subjects: 2 Recordings: 3 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS000246 dataset = DS000246(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS000246(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS000246( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds000246, title = {MEG-BIDS Brainstorm data sample}, author = {Elizabeth Bock and Peter Donhauser and Francois Tadel and Guiomar Niso and Sylvain Baillet}, doi = {10.18112/openneuro.ds000246.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds000246.v1.0.1}, } ``` ## About This Dataset **Brainstorm - Auditory Dataset** **License** This dataset (MEG and MRI data) was collected by the MEG Unit Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. The original purpose was to serve as a tutorial data example for the Brainstorm software project ([http://neuroimage.usc.edu/brainstorm](http://neuroimage.usc.edu/brainstorm)). It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction. 
We would appreciate though that you reference this dataset in your publications: please acknowledge its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite the Brainstorm project seminal publication (also in open access): [http://www.hindawi.com/journals/cin/2011/879716/](http://www.hindawi.com/journals/cin/2011/879716/) **Presentation of the experiment** **Experiment**

- One subject, two acquisition runs of 6 minutes each
- Subject stimulated binaurally with intra-aural earphones (air tubes+transducers)
- Each run contains:
  - 200 regular beeps (440Hz)
  - 40 easy deviant beeps (554.4Hz, 4 semitones higher)
- Random inter-stimulus interval: between 0.7 and 1.7 seconds, uniformly distributed
- The subject presses a button when detecting a deviant with the right index finger
- Auditory stimuli generated with the Matlab Psychophysics toolbox
- The specifications of this dataset were discussed initially on [the FieldTrip bug tracker](http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300)

**MEG acquisition**

- Acquisition at **2400Hz**, with a **CTF 275** system, subject in
seating position
- Recorded at the Montreal Neurological Institute in December 2013
- Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
- Recorded channels (340):
  - 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
  - 1 Audio signal sent to the subject: UADC001 (#316)
  - 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
  - 26 MEG reference sensors (#5-#30)
  - 274 MEG axial gradiometers (#31-#304)
  - 2 EEG electrodes: Cz, Pz (#305 and #306)
  - 1 ECG bipolar (#307)
  - 2 EOG bipolar (vertical #308, horizontal #309)
  - 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
  - 20 Unused channels (#3, #4, #310-#315, #329-#340)
- 3 datasets:
  - **S01_AEF_20131218_01.ds**: Run #1, 360s, 200 standard + 40 deviants
  - **S01_AEF_20131218_02.ds**: Run #2, 360s, 200 standard + 40 deviants
  - **S01_Noise_20131218_01.ds**: Empty room recordings, 30s long
  - File name: S01=Subject01, AEF=Auditory evoked field, 20131218=date (Dec 18 2013), 01=run
- Use of the .ds, not the AUX (standard at the MNI), because they are easier to manipulate in FieldTrip

**Stimulation delays**

- **Delay #1**: Production of the sound. Between the stim markers (channel UDIO001) and the moment where the sound card plays the sound (channel UADC001). This is mostly due to the software running on the computer (stimulation software, operating system, sound card drivers, sound card electronics). The delay can be measured from the recorded files by comparing the triggers in the two channels: delay **between 11.5ms and 12.8ms** (std = 0.3ms). This delay is **not constant**, so we will need to correct for it.
- **Delay #2**: Transmission of the sound. Between when the sound card plays the sound and when the subject receives the sound in the ears.
This is the time it takes for the transducer to convert the analog audio signal into a sound, plus the time it takes the sound to travel through the air tubes from the transducer to the subject's ears. This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. Delay **between 4.8ms and 5.0ms** (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered **constant**, so we will not compensate for it.
* **Delay #3**: Recording of the signals. The CTF MEG systems have a constant delay of **4 samples** between the MEG/EEG channels and the analog channels (such as the audio signal UADC001), because of an anti-aliasing filter that is applied to the first but not the second. This translates here to a **constant delay** of **1.7ms**.
* **Delay #4**: Over-compensation of delay #1. When correcting for delay #1, the process we use to detect the beginning of the triggers on the audio signal (UADC001) sets the trigger in the middle of the ramp between silence and the beep. We "over-compensate" delay #1 by 1.7ms. This can be considered a **constant delay** of about **-1.7ms**.
* **Uncorrected delays**: We will correct for delay #1 and keep the other delays (#2, #3 and #4). After we compensate for delay #1, our MEG signals will have a **constant delay** of about 4.9 + 1.7 - 1.7 = **4.9 ms**. We decide not to compensate for these delays because they do not introduce any jitter in the responses and they will not change anything in the interpretation of the data.
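Correcting for delay #1 in practice means measuring, for each beep, the lag between its stim trigger (UDIO001) and the onset detected on the audio channel (UADC001), then shifting each event by its own lag. A minimal sketch of that per-trial measurement, using hypothetical sample indices rather than real data from this dataset:

```python
from statistics import mean, stdev

FS = 2400  # CTF acquisition rate (Hz)

def trial_delays_ms(stim_samples, audio_samples, fs=FS):
    """Per-trial delay (ms) between stim triggers and detected audio onsets."""
    return [(a - s) / fs * 1000.0 for s, a in zip(stim_samples, audio_samples)]

# Hypothetical sample indices for three beeps
stim = [10000, 14500, 19800]    # trigger onsets on UDIO001
audio = [10028, 14531, 19829]   # sound onsets detected on UADC001

delays = trial_delays_ms(stim, audio)
print([round(d, 2) for d in delays])                    # per-trial delays, ms
print(round(mean(delays), 2), round(stdev(delays), 2))  # mean and jitter
```

A non-negligible std across trials is exactly what makes delay #1 jittered, so each event must be realigned by its own measured delay rather than by a single average.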
**Head shape and fiducial points**

* 3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_*.pos)
* More information: [Digitize EEG electrodes and head shape](http://neuroimage.usc.edu/brainstorm/Tutorials/TutDigitize)
* The output file is copied to each .ds folder and contains the following entries:
  * The position of the center of the CTF coils
  * The position of the anatomical references we use in Brainstorm: nasion and connections tragus/helix, as illustrated [here](http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Pre-auricular_points_.28LPA.2C_RPA.29)
* Around 150 head points distributed on the hard parts of the head (no soft tissues)

**Subject anatomy**

* Subject with a 1.5T MRI
* Marker on the left cheek
* Processed with FreeSurfer 5.3

## Dataset Information

| Dataset ID | `DS000246` |
|----------------|-----------------------------------------------------------------|
| Title | MEG-BIDS Brainstorm data sample |
| Author (year) | `Bock2018` |
| Canonical | — |
| Importable as | `DS000246`, `Bock2018` |
| Year | 2018 |
| Authors | Elizabeth Bock, Peter Donhauser, Francois Tadel, Guiomar Niso, Sylvain Baillet |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds000246.v1.0.1](https://doi.org/10.18112/openneuro.ds000246.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds000246) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) \| [Source URL](https://openneuro.org/datasets/ds000246) |

### Copy-paste BibTeX

```bibtex
@dataset{ds000246,
  title = {MEG-BIDS Brainstorm data sample},
  author = {Elizabeth Bock and Peter Donhauser and Francois Tadel and Guiomar Niso and Sylvain Baillet},
  doi = {10.18112/openneuro.ds000246.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds000246.v1.0.1},
}
```

## Technical Details - Subjects: 2 - Recordings: 3 -
Tasks: 2 - Channels: 340 (2), 301 - Sampling rate (Hz): 2400.0 - Duration (hours): 0.2083333333333333 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 2.3 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds000246.v1.0.1 - Source: openneuro - OpenNeuro: [ds000246](https://openneuro.org/datasets/ds000246) - NeMAR: [ds000246](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) ## API Reference Use the `DS000246` class to access this dataset programmatically. ### *class* eegdash.dataset.DS000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS Brainstorm data sample * **Study:** `ds000246` (OpenNeuro) * **Author (year):** `Bock2018` * **Canonical:** — Also importable as: `DS000246`, `Bock2018`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
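As the notes above state, the `query` argument is ANDed with the class's fixed dataset filter and must not contain the key `dataset`. A pure-Python sketch of that merge rule (illustrative only, not the library's actual implementation; the helper name and the field subset here are made up):

```python
ALLOWED_QUERY_FIELDS = {"subject", "session", "task"}  # illustrative subset

def merge_query(dataset_id, user_query=None):
    """AND a user's MongoDB-style filter with a fixed dataset filter.

    Hypothetical helper mirroring the documented behavior: 'dataset'
    is reserved, and only allowed fields may be filtered on.
    """
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    unknown = set(user_query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds000246", {"subject": {"$in": ["01", "02"]}})
print(merged)
```

The resulting dict is what a MongoDB-style backend would evaluate: the dataset constraint and the user's filters must all hold (implicit AND across top-level keys).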
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000246](https://openneuro.org/datasets/ds000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000246](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) DOI: [https://doi.org/10.18112/openneuro.ds000246.v1.0.1](https://doi.org/10.18112/openneuro.ds000246.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS000246 >>> dataset = DS000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds000246) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) * [eegdash.dataset.DS002312](eegdash.dataset.DS002312.md) # DS000247: meg dataset, 6 subjects *MEG-BIDS OMEGA RestingState_sample* Access recordings and metadata through EEGDash. **Citation:** Guiomar Niso, Jeremy Moreau, Elizabeth Bock, Francois Tadel, Sylvain Baillet (2018). *MEG-BIDS OMEGA RestingState_sample*. 
[10.18112/openneuro.ds000247.v1.0.2](https://doi.org/10.18112/openneuro.ds000247.v1.0.2) Modality: meg Subjects: 6 Recordings: 10 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS000247 dataset = DS000247(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS000247(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS000247( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds000247, title = {MEG-BIDS OMEGA RestingState_sample}, author = {Guiomar Niso and Jeremy Moreau and Elizabeth Bock and Francois Tadel and Sylvain Baillet}, doi = {10.18112/openneuro.ds000247.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds000247.v1.0.2}, } ``` ## About This Dataset **OMEGA - Resting State Sample Dataset** **License** - This dataset was obtained from **The Open MEG Archive** (OMEGA, [https://omega.bic.mni.mcgill.ca](https://omega.bic.mni.mcgill.ca)). - You are free to use all data in OMEGA for research purposes; please acknowledge its authors and cite the following reference in your publications if you have used data from OMEGA: - Niso G., Rogers C., Moreau J.T., Chen L.Y., Madjar C., Das S., Bock E., Tadel F., Evans A.C., Jolicoeur P., Baillet S. (2016). OMEGA: The Open MEG Archive. NeuroImage 124, 1182-1187. doi: [https://doi.org/10.1016/j.neuroimage.2015.04.028](https://doi.org/10.1016/j.neuroimage.2015.04.028). 
OMEGA is available at: [https://omega.bic.mni.mcgill.ca](https://omega.bic.mni.mcgill.ca) ### View full README **OMEGA - Resting State Sample Dataset** **License** - This dataset was obtained from **The Open MEG Archive** (OMEGA, [https://omega.bic.mni.mcgill.ca](https://omega.bic.mni.mcgill.ca)). - You are free to use all data in OMEGA for research purposes; please acknowledge its authors and cite the following reference in your publications if you have used data from OMEGA: - Niso G., Rogers C., Moreau J.T., Chen L.Y., Madjar C., Das S., Bock E., Tadel F., Evans A.C., Jolicoeur P., Baillet S. (2016). OMEGA: The Open MEG Archive. NeuroImage 124, 1182-1187. doi: [https://doi.org/10.1016/j.neuroimage.2015.04.028](https://doi.org/10.1016/j.neuroimage.2015.04.028). OMEGA is available at: [https://omega.bic.mni.mcgill.ca](https://omega.bic.mni.mcgill.ca) **Description** **Experiment** - 5 subjects x 5-minute resting sessions, eyes open **MEG acquisition** - Recorded at the Montreal Neurological Institute in 2012-2016 - Acquisition with a CTF 275 MEG system at a 2400Hz sampling rate - Anti-aliasing low-pass filter at 600Hz; files may be saved with or without the CTF 3rd order gradient compensation - Recorded channels (at least 297) include: > * 26 MEG reference sensors (#2-#27) > * 270 MEG axial gradiometers (#28-#297) > * 1 ECG bipolar (EEG057/#298) - Not available in the empty room recordings > * 1 vertical EOG bipolar (EEG058/#299) - Not available in the empty room recordings > * 1 horizontal EOG bipolar (EEG059/#300) - Not available in the empty room recordings **Head shape and fiducial points** - 3D digitization using a Polhemus Fastrak device driven by Brainstorm. The .pos files contain: > * The center of the CTF coils > * The anatomical references we use in Brainstorm: nasion and ears as illustrated here > * Around 100 head points distributed on the hard parts of the head (no soft tissues).
**Subject anatomy** - Structural T1 image (defaced for anonymization purposes) - Processed with FreeSurfer 5.3 - The anatomical fiducials (NAS, LPA, RPA) have already been marked and saved in the files fiducials.m **BIDS** - The data in this dataset has been organized according to the MEG-BIDS specification (Brain Imaging Data Structure, [http://bids.neuroimaging.io](http://bids.neuroimaging.io)) (Niso et al. 2018) - Niso G., Gorgolewski K.J., Bock E., Brooks T.L., Flandin G., Gramfort A., Henson R.N., Jas M., Litvak V., Moreau J., Oostenveld R., Schoffelen J.M., Tadel F., Wexler J., Baillet S. (2018). MEG-BIDS: an extension to the Brain Imaging Data Structure for magnetoencephalography. Scientific Data; 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) *Release history:* - 2016-12-01: initial release - 2018-07-18: release OpenNeuro ds000247 (00001 and 00002) ## Dataset Information | Dataset ID | `DS000247` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MEG-BIDS OMEGA RestingState_sample | | Author (year) | `Niso2018` | | Canonical | `OMEGA` | | Importable as | `DS000247`, `Niso2018`, `OMEGA` | | Year | 2018 | | Authors | Guiomar Niso, Jeremy Moreau, Elizabeth Bock, Francois Tadel, Sylvain Baillet | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds000247.v1.0.2](https://doi.org/10.18112/openneuro.ds000247.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds000247) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) | [Source URL](https://openneuro.org/datasets/ds000247) | ### Copy-paste BibTeX ```bibtex @dataset{ds000247, title = {MEG-BIDS OMEGA RestingState_sample}, author = {Guiomar Niso and Jeremy Moreau and Elizabeth Bock and Francois Tadel and Sylvain Baillet}, doi = {10.18112/openneuro.ds000247.v1.0.2}, url 
= {https://doi.org/10.18112/openneuro.ds000247.v1.0.2}, } ``` ## Technical Details - Subjects: 6 - Recordings: 10 - Tasks: 2 - Channels: 297 (5), 330 (3), 300 (2) - Sampling rate (Hz): 2400.0 - Duration (hours): 1.0158333333333334 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 10.3 GB - File count: 10 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds000247.v1.0.2 - Source: openneuro - OpenNeuro: [ds000247](https://openneuro.org/datasets/ds000247) - NeMAR: [ds000247](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) ## API Reference Use the `DS000247` class to access this dataset programmatically. ### *class* eegdash.dataset.DS000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS OMEGA RestingState_sample * **Study:** `ds000247` (OpenNeuro) * **Author (year):** `Niso2018` * **Canonical:** `OMEGA` Also importable as: `DS000247`, `Niso2018`, `OMEGA`. Modality: `meg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 6; recordings: 10; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000247](https://openneuro.org/datasets/ds000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000247](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) DOI: [https://doi.org/10.18112/openneuro.ds000247.v1.0.2](https://doi.org/10.18112/openneuro.ds000247.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000247 >>> dataset = DS000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds000247) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) * [eegdash.dataset.DS002312](eegdash.dataset.DS002312.md) # DS000248: meg dataset, 2 subjects *MNE-Sample-Data* Access recordings and metadata through EEGDash. **Citation:** Alexandre Gramfort, Matti S Hämäläinen (2018). *MNE-Sample-Data*. 
[10.18112/openneuro.ds000248.v1.2.4](https://doi.org/10.18112/openneuro.ds000248.v1.2.4) Modality: meg Subjects: 2 Recordings: 3 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS000248 dataset = DS000248(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS000248(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS000248( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds000248, title = {MNE-Sample-Data}, author = {Alexandre Gramfort and Matti S Hämäläinen}, doi = {10.18112/openneuro.ds000248.v1.2.4}, url = {https://doi.org/10.18112/openneuro.ds000248.v1.2.4}, } ``` ## About This Dataset **MNE-Sample-Data** The MNE software is accompanied by a sample data set. These data were acquired with the Neuromag Vectorview system at MGH/HMS/MIT Athinoula A. Martinos Center Biomedical Imaging. EEG data from a 60-channel electrode cap was acquired simultaneously with the MEG. The original MRI data set was acquired with a Siemens 1.5 T Sonata scanner using an MPRAGE sequence. In the MEG/EEG experiment, checkerboard patterns were presented into the left and right visual field, interspersed by tones to the left or right ear. The interval between the stimuli was 750 ms. Occasionally a smiley face was presented at the center of the visual field. The subject was asked to press a key with the right index finger as soon as possible after the appearance of the face. 
**Freesurfer derivatives** - Calls from the command line: - `recon-all -i sub-01/anat/sub-01_T1w.nii.gz -s sub-01 -all` - `mne make_scalp_surfaces -s sub-01 --overwrite --force` - `mne flash_bem -s sub-01 --overwrite` - `mne watershed_bem -s sub-01 --overwrite` **References** A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, L. Parkkonen, M. Hämäläinen, MNE software for processing MEG and EEG data, NeuroImage, Volume 86, 1 February 2014, Pages 446-460, ISSN 1053-8119 A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, M. Hämäläinen, MEG and EEG data analysis with MNE-Python, Frontiers in Neuroscience, Volume 7, 2013, ISSN 1662-453X Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS000248` | |----------------|-----------------------------------------------------------------| | Title | MNE-Sample-Data | | Author (year) | `Gramfort2018` | | Canonical | `MNE_Sample_Data` | | Importable as | `DS000248`, `Gramfort2018`, `MNE_Sample_Data` | | Year | 2018 | | Authors | Alexandre Gramfort, Matti S Hämäläinen | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds000248.v1.2.4](https://doi.org/10.18112/openneuro.ds000248.v1.2.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds000248) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) | [Source URL](https://openneuro.org/datasets/ds000248) | ### Copy-paste BibTeX ```bibtex @dataset{ds000248, title = {MNE-Sample-Data}, author = {Alexandre Gramfort and Matti S Hämäläinen}, doi = {10.18112/openneuro.ds000248.v1.2.4}, url = {https://doi.org/10.18112/openneuro.ds000248.v1.2.4}, } ``` ## Technical Details - Subjects: 2 - Recordings: 3 - Tasks: 2 - Channels: 376, 315 - Sampling rate (Hz): 600.614990234375 - Duration (hours): 0.1076979446929193 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 177.6 MB - File count: 3 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds000248.v1.2.4 - Source: openneuro - OpenNeuro: [ds000248](https://openneuro.org/datasets/ds000248) - NeMAR: [ds000248](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) ## API Reference Use the `DS000248` class to access this dataset programmatically.
### *class* eegdash.dataset.DS000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-Sample-Data * **Study:** `ds000248` (OpenNeuro) * **Author (year):** `Gramfort2018` * **Canonical:** `MNE_Sample_Data` Also importable as: `DS000248`, `Gramfort2018`, `MNE_Sample_Data`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000248](https://openneuro.org/datasets/ds000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000248](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) DOI: [https://doi.org/10.18112/openneuro.ds000248.v1.2.4](https://doi.org/10.18112/openneuro.ds000248.v1.2.4) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000248 >>> dataset = DS000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds000248) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) * [eegdash.dataset.DS002312](eegdash.dataset.DS002312.md) # DS001785: eeg dataset, 18 subjects *Evidence accumulation relates to perceptual consciousness and monitoring* Access recordings and metadata through EEGDash. **Citation:** Michael Pereira, Pierre Mégevand, Mi Xue Tan, Wenwen Chang, Shuo Wang, Ali Rezai, Margitta Seeck, Marco Corniola, Shahan Momjian, Fosco Bernasconi, Olaf Blanke, Nathan Faivre (2019). *Evidence accumulation relates to perceptual consciousness and monitoring*. 
[10.18112/openneuro.ds001785.v1.1.1](https://doi.org/10.18112/openneuro.ds001785.v1.1.1) Modality: eeg Subjects: 18 Recordings: 54 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS001785 dataset = DS001785(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS001785(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS001785( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds001785, title = {Evidence accumulation relates to perceptual consciousness and monitoring}, author = {Michael Pereira and Pierre Mégevand and Mi Xue Tan and Wenwen Chang and Shuo Wang and Ali Rezai and Margitta Seeck and Marco Corniola and Shahan Momjian and Fosco Bernasconi and Olaf Blanke and Nathan Faivre}, doi = {10.18112/openneuro.ds001785.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001785.v1.1.1}, } ``` ## About This Dataset This dataset contains the EEG data used in the paper: Evidence accumulation relates to perceptual consciousness and monitoring, 2021, Nature Communications. Participants: Twenty healthy participants (7 women; age: 25.2, SD = 4.1) participated in this study for a compensation of 40 CHF. Subjects gave written informed consent prior to participating, and all experimental procedures were approved by the Commission Cantonale d’Ethique de la Recherche de la République et Canton de Genève (2015-00092 15-273). One patient with intractable epilepsy also took part.
The patient provided informed written consent for the present study, which was approved by the Commission Cantonale d’Ethique de la Recherche de la République et Canton de Genève (2016-01856). Stimuli were applied on the lateral palmar side of the right wrist using a MMC3 Haptuator vibrotactile device from TactileLabs Inc. (Montreal, Canada), driven by a 230 Hz sinusoid audio signal lasting 100 ms. Experiments started with a simple estimation of the individual detection threshold. The tactile stimulus was applied with decreasing intensity, in steps corresponding to 2% of the initial intensity, until the participant reported not feeling it anymore three times in a row. We then repeated the same procedure but with increasing intensity, until the participant reported feeling the vibration three times in a row. The perceptual threshold was estimated as the average of the two thresholds found with this procedure. This approximation was then used as a seed value for an adaptive staircase during the main experiment. Participants sat in front of a computer screen. A white fixation cross appeared in the middle of the screen for 2 s. From the moment the fixation cross turned green, participants were told that a tactile stimulus could be applied at any moment during the next 2 s. During this period, stimulus onset was uniformly distributed in 80% of trials; the remaining 20% of trials served as catch trials. In all trials, 1 second after the green cross disappeared, participants were prompted to answer with the keyboard whether they felt the stimulus or not. Following a 500 ms stimulus onset asynchrony, participants were asked to report the confidence in their first-order response by moving a slider on a visual analog scale with marks at 0 (certainty that the first-order response was erroneous), 0.5 (unsure about the first-order response) and 1.0 (certainty that the first-order response was correct).
Detection and confidence reports were provided with the left (non-stimulated) hand, using different keys. The total experiment included 500 trials divided into 10 blocks and lasted about 2 hours. Recordings: Electroencephalographic data were acquired from 62 active electrodes (10-20 montage) using a WaveGuard EEG cap and amplifier (ANTNeuro, Hengelo, The Netherlands) and digitized at a sampling rate of 1024 Hz. Horizontal and vertical electrooculography (EOG) was derived using bipolar referenced electrodes placed around participants’ eyes. The audio signal driving the vibrotactile actuator was recorded as an extra channel to precisely realign the data to stimulus onset. For the patient, electrocorticographic data were obtained through a 24-electrode ECoG grid (Ad-Tech Medical) covering the left hemisphere from the premotor cortex to the superior parietal lobule. The electrodes had a 4 mm diameter with 2.3 mm exposed, corresponding to an area of 4.15 mm². The data were amplified and sampled at 2048 Hz (Brain Quick LTM, Micromed, Treviso, Italy).
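One simple reading of the two-pass threshold procedure described above (step the intensity down by 2% of its initial value until three consecutive misses, step it back up until three consecutive hits, then average the two endpoints) can be sketched as follows. The `feels` callable is a stand-in for the participant's yes/no report, not part of any real protocol code:

```python
def estimate_threshold(initial, feels, step_frac=0.02):
    """Two-pass detection-threshold estimate, as described above.

    `feels(intensity)` stands in for the participant's report.
    """
    step = step_frac * initial

    # Descending pass: lower the intensity until three "not felt" in a row
    intensity, misses = initial, 0
    while misses < 3:
        misses = misses + 1 if not feels(intensity) else 0
        if misses < 3:
            intensity -= step
    down = intensity

    # Ascending pass: raise the intensity until three "felt" in a row
    hits = 0
    while hits < 3:
        hits = hits + 1 if feels(intensity) else 0
        if hits < 3:
            intensity += step
    up = intensity

    return (down + up) / 2.0

# Deterministic stand-in participant with a true threshold of 50
# (arbitrary intensity units, starting intensity 100)
threshold = estimate_threshold(100, lambda i: i >= 50)
print(threshold)  # prints 49.0, close to the stand-in's true threshold
```

The real experiment then refines this seed value with an adaptive staircase; the sketch only covers the initial two-pass estimate.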
## Dataset Information | Dataset ID | `DS001785` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Evidence accumulation relates to perceptual consciousness and monitoring | | Author (year) | `Pereira2019_Evidence` | | Canonical | — | | Importable as | `DS001785`, `Pereira2019_Evidence` | | Year | 2019 | | Authors | Michael Pereira, Pierre Mégevand, Mi Xue Tan, Wenwen Chang, Shuo Wang, Ali Rezai, Margitta Seeck, Marco Corniola, Shahan Momjian, Fosco Bernasconi, Olaf Blanke, Nathan Faivre | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds001785.v1.1.1](https://doi.org/10.18112/openneuro.ds001785.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds001785) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) | [Source URL](https://openneuro.org/datasets/ds001785) | ### Copy-paste BibTeX ```bibtex @dataset{ds001785, title = {Evidence accumulation relates to perceptual consciousness and monitoring}, author = {Michael Pereira and Pierre Mégevand and Mi Xue Tan and Wenwen Chang and Shuo Wang and Ali Rezai and Margitta Seeck and Marco Corniola and Shahan Momjian and Fosco Bernasconi and Olaf Blanke and Nathan Faivre}, doi = {10.18112/openneuro.ds001785.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001785.v1.1.1}, } ``` ## Technical Details - Subjects: 18 - Recordings: 54 - Tasks: 3 - Channels: 71 - Sampling rate (Hz): 1024.0 (53), 1000.0 - Duration (hours): 25.714475 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 24.9 GB - File count: 54 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds001785.v1.1.1 - Source: openneuro - OpenNeuro: [ds001785](https://openneuro.org/datasets/ds001785) - NeMAR: [ds001785](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) ## API Reference Use the `DS001785` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS001785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Evidence accumulation relates to perceptual consciousness and monitoring * **Study:** `ds001785` (OpenNeuro) * **Author (year):** `Pereira2019_Evidence` * **Canonical:** — Also importable as: `DS001785`, `Pereira2019_Evidence`. Modality: `eeg`. Subjects: 18; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
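The notes above state that the user-supplied `query` is ANDed with the fixed dataset filter. A toy matcher sketching those semantics for equality and the `$in` operator only (this emulates the behavior for illustration; it is not EEGDash's actual implementation):

```python
def matches(record, query):
    """Minimal matcher for flat MongoDB-style filters.

    Supports equality and `$in` only -- a sketch of the semantics,
    not EEGDash's internal query engine.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# The class always ANDs the fixed dataset filter with the user query.
merged = {"dataset": "ds001785", "subject": {"$in": ["01", "02"]}}
records = [
    {"dataset": "ds001785", "subject": "01"},
    {"dataset": "ds001785", "subject": "05"},
    {"dataset": "ds002718", "subject": "01"},
]
selected = [r for r in records if matches(r, merged)]
print(len(selected))  # prints 1: only the first record satisfies both conditions
```

This is also why `query` must not contain the key `dataset`: the dataset condition is injected by the class itself before matching.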
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001785](https://openneuro.org/datasets/ds001785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001785](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) DOI: [https://doi.org/10.18112/openneuro.ds001785.v1.1.1](https://doi.org/10.18112/openneuro.ds001785.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001785 >>> dataset = DS001785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds001785) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) * [eegdash.dataset.DS002034](eegdash.dataset.DS002034.md) # DS001787: eeg dataset, 24 subjects *EEG meditation study* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme, Tracy Brandmeyer (2019). *EEG meditation study*. 
[10.18112/openneuro.ds001787.v1.1.1](https://doi.org/10.18112/openneuro.ds001787.v1.1.1) Modality: eeg Subjects: 24 Recordings: 40 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS001787 dataset = DS001787(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS001787(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS001787( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds001787, title = {EEG meditation study}, author = {Arnaud Delorme and Tracy Brandmeyer}, doi = {10.18112/openneuro.ds001787.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001787.v1.1.1}, } ``` ## About This Dataset This meditation experiment contains 24 subjects. Subjects were meditating and were interrupted about every 2 minutes to indicate their level of concentration and mind wandering. The scientific article (see Reference) contains all methodological details. Note that although the original files were recorded at 2048 Hz, they were downsampled to 256 Hz using the BDF decimator provided by BIOSEMI ([https://www.biosemi.com/download.htm](https://www.biosemi.com/download.htm)). 
- Arnaud Delorme (October 17, 2018; updated June 2024) ## Dataset Information | Dataset ID | `DS001787` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG meditation study | | Author (year) | `Delorme2019` | | Canonical | — | | Importable as | `DS001787`, `Delorme2019` | | Year | 2019 | | Authors | Arnaud Delorme, Tracy Brandmeyer | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds001787.v1.1.1](https://doi.org/10.18112/openneuro.ds001787.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds001787) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) | [Source URL](https://openneuro.org/datasets/ds001787) | ### Copy-paste BibTeX ```bibtex @dataset{ds001787, title = {EEG meditation study}, author = {Arnaud Delorme and Tracy Brandmeyer}, doi = {10.18112/openneuro.ds001787.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001787.v1.1.1}, } ``` ## Technical Details - Subjects: 24 - Recordings: 40 - Tasks: 1 - Channels: 79 - Sampling rate (Hz): 256.0 - Duration (hours): 27.607222222222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.7 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds001787.v1.1.1 - Source: openneuro - OpenNeuro: [ds001787](https://openneuro.org/datasets/ds001787) - NeMAR: [ds001787](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) ## API Reference Use the `DS001787` class to access this dataset programmatically. ### *class* eegdash.dataset.DS001787(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG meditation study * **Study:** `ds001787` (OpenNeuro) * **Author (year):** `Delorme2019` * **Canonical:** — Also importable as: `DS001787`, `Delorme2019`. 
Modality: `eeg`. Subjects: 24; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001787](https://openneuro.org/datasets/ds001787) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001787](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) DOI: [https://doi.org/10.18112/openneuro.ds001787.v1.1.1](https://doi.org/10.18112/openneuro.ds001787.v1.1.1) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001787 >>> dataset = DS001787(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds001787) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) * [eegdash.dataset.DS002034](eegdash.dataset.DS002034.md) # DS001810: eeg dataset, 47 subjects *EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS)* Access recordings and metadata through EEGDash. **Citation:** Leon C. Reteig, Lionel A. Newman, K. Richard Ridderinkhof, Heleen A. Slagter (2019). *EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS)*. [10.18112/openneuro.ds001810.v1.1.0](https://doi.org/10.18112/openneuro.ds001810.v1.1.0) Modality: eeg Subjects: 47 Recordings: 263 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS001810 dataset = DS001810(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS001810(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS001810( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds001810, title = {EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS)}, author = {Leon C. Reteig and Lionel A. Newman and K. Richard Ridderinkhof and Heleen A. Slagter}, doi = {10.18112/openneuro.ds001810.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds001810.v1.1.0}, } ``` ## About This Dataset **Overview** AB_tDCS-EEG: EEG data from participants who performed an attentional blink (AB) task before, during and after transcranial Direct Current Stimulation (tDCS). They visited the lab twice (complete data for 40 participants; some did only one session); on each visit they received either anodal or cathodal tDCS. **Sessions** Both lab visits were about one week apart. During each visit, three EEG files were recorded, each about 20 minutes long: before applying tDCS (“pre”), during tDCS (“tDCS”), and after tDCS (“post”). The order of the visits (anodal vs. cathodal) differs across participants. 
Each block for each visit is considered a session here (you could also argue this dataset should have 2 “sessions” [visits] with 3 “runs” [blocks], but given there was a manipulation between the runs, it seemed better to count everything as a session). So each sub\*-folder with complete data has 6 sessions: 1. anodalpost (data from anodal visit, after tDCS) 2. anodalpre (data from anodal visit, before tDCS) 3. anodaltDCS (data from anodal visit, during tDCS) 4. cathodalpost (data from cathodal visit, after tDCS) 5. cathodalpre (data from cathodal visit, before tDCS) 6. cathodaltDCS (data from cathodal visit, during tDCS) See the \_sessions.tsv and \_sessions.json files in each sub\*-folder for details. **Concurrent tDCS-EEG** Recording EEG during tDCS introduces some problems, most importantly blocked channels and artifacts. **Blocked channels** The rubber tDCS electrodes were affixed to the scalp using conductive paste. Because they sat under the EEG headcap, the electrodes blocked a few of the slots, meaning these channels could not be plugged in. The channels affected vary from subject to subject (and even slightly from visit to visit). They are marked with “status” = “bad” in each session’s \_channels.tsv file. The tDCS montage was the following: 1. 7x5 mm electrode centered over F3 (long side parallel to midline; connector at posterior end) 2. 7x5 mm electrode centered on right forehead (approximately Fp2) (short side parallel to midline, connector at lateral end) Anodal or cathodal tDCS is defined according to the electrode over F3, i.e. “anodal tDCS” means this was the anode (with the cathode over Fp2); “cathodal tDCS” means this was the cathode (with the anode over Fp2). **tDCS artifacts** tDCS was applied at an intensity of 1 mA for 20m (with a 1-minute ramp-up period before, and a 1-minute ramp-down period after). This often caused channels close to the tDCS electrodes to drift outside of the amplifier range, or become very noisy otherwise. 
This occurred mostly in the block during tDCS, as expected, but channels sometimes take a while to “recover”, so they can still be affected during the post-block. The signal on channels further away from the electrodes normally looks fine (one exception is at the exact moment the ramp-down ends, which causes a huge artifact across all channels). Though most of the artifacts should be gone after DC correction, this does not mean that the channels are completely artifact-free. Compared to the EEG signal, the tDCS artifact is massive, and can cause further problems by nonlinearly interacting with other physiological signals, such as heart rate and respiration. However, assuming these artifacts do not vary systematically across conditions and time, comparing event-related responses across conditions should be less of a problem. See [1,2] for more information. **Triggers** The event codes (see \_events.json and each session’s \_events.tsv file) are all in the 61000 range. This differs from the original values sent out (see below for a list of those). The transformation happens when reading the \*.bdf sourcedata files, e.g. with both EEGLAB and fieldtrip. Because the data were converted to the BIDS-compliant BrainVision format using fieldtrip, the trigger codes are now all >61000. The problem stems from the event codes being sent as 8-bit values but read as 16-bit values. Since bits 13-16 on the trigger channel are always open, the event codes are shifted by 1111000000000000 in binary, which is 61440. 
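This shift can be verified and undone with plain bit arithmetic; a small sketch (since the original codes were 8-bit, masking off everything above the low byte recovers them):

```python
# Bits 13-16 being open adds 0b1111000000000000 = 61440 to every code.
OFFSET = 0b1111000000000000
assert OFFSET == 61440

def original_code(read_value):
    """Recover the original 8-bit trigger code from a value read as 16-bit."""
    return read_value & 0xFF  # keep only the low byte

# e.g. a lag-3 stream-onset trigger (code 23) is read back as 61440 + 23
print(original_code(61463))  # prints 23
```

Equivalently, one could subtract 61440 from each event value; the mask is just more robust if other open bits were ever set.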
The original trigger codes were: “10”: “onset of pre-stream fixation cross (duration: 1500 ms)”, “23”: “onset of letter stream; lag 3 trial (duration: 1375 ms)”, “28”: “onset of letter stream; lag 8 trial (duration: 1375 ms)”, “31”: “onset of T1 (first target, in red, duration: 91.66 ms)”, “32”: “onset of T2 (second target, in green, duration: 91.66 ms)”, “40”: “onset of post-stream fixation cross (duration: 1000 ms)”, “50”: “onset of T1 question (‘Which letter was red?’)”, “60”: “onset of T2 question (‘Which letter was green?’); T1 question answered incorrectly”, “61”: “onset of T2 question (‘Which letter was green?’); T1 question answered correctly”, “70”: “onset of post-response fixation cross (duration: 250 ms); T2 question answered incorrectly”, “71”: “onset of post-response fixation cross (duration: 250 ms); T2 question answered correctly”, “254”: “trigger to start EEG recording (occurs at start of block after a break)” When reading in the data, 61440 will likely be added to each of them. **Exceptions** We were unable to create BrainVision-format data for sub-07/cathodal-post; fieldtrip crashed on the .bdf file in sourcedata: > Error in read_biosemi_bdf>readLowLevel (line 274) > : buf = read_24bit(filename, offset, numwords); The .bdf file seems to be read fine by EEGLAB, so it is still included in case this can be solved. ## Dataset Information | Dataset ID | `DS001810` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS) | | Author (year) | `Reteig2019` | | Canonical | — | | Importable as | `DS001810`, `Reteig2019` | | Year | 2019 | | Authors | Leon C. Reteig, Lionel A. Newman, K. Richard Ridderinkhof, Heleen A. 
Slagter | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds001810.v1.1.0](https://doi.org/10.18112/openneuro.ds001810.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds001810) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) | [Source URL](https://openneuro.org/datasets/ds001810) | ### Copy-paste BibTeX ```bibtex @dataset{ds001810, title = {EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS)}, author = {Leon C. Reteig and Lionel A. Newman and K. Richard Ridderinkhof and Heleen A. Slagter}, doi = {10.18112/openneuro.ds001810.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds001810.v1.1.0}, } ``` ## Technical Details - Subjects: 47 - Recordings: 263 - Tasks: 1 - Channels: 73 - Sampling rate (Hz): 512.0 - Duration (hours): 91.205 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 45.7 GB - File count: 263 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds001810.v1.1.0 - Source: openneuro - OpenNeuro: [ds001810](https://openneuro.org/datasets/ds001810) - NeMAR: [ds001810](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) ## API Reference Use the `DS001810` class to access this dataset programmatically. ### *class* eegdash.dataset.DS001810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS) * **Study:** `ds001810` (OpenNeuro) * **Author (year):** `Reteig2019` * **Canonical:** — Also importable as: `DS001810`, `Reteig2019`. Modality: `eeg`. Subjects: 47; recordings: 263; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001810](https://openneuro.org/datasets/ds001810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001810](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) DOI: [https://doi.org/10.18112/openneuro.ds001810.v1.1.0](https://doi.org/10.18112/openneuro.ds001810.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001810 >>> dataset = DS001810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds001810) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) * [eegdash.dataset.DS002034](eegdash.dataset.DS002034.md) # DS001849: eeg dataset, 20 subjects *RS_TMSEEG_Data* Access recordings and metadata through EEGDash. **Citation:** Michael Freedberg, Jack A. Reeves, Sara J. Hussain, Kareem A. Zaghloul, Eric M. Wassermann (2019). *RS_TMSEEG_Data*. [10.18112/openneuro.ds001849.v1.0.2](https://doi.org/10.18112/openneuro.ds001849.v1.0.2) Modality: eeg Subjects: 20 Recordings: 120 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS001849 dataset = DS001849(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS001849(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS001849( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds001849, title = {RS_TMSEEG_Data}, author = {Michael Freedberg and Jack A. Reeves and Sara J. Hussain and Kareem A. Zaghloul and Eric M. 
Wassermann}, doi = {10.18112/openneuro.ds001849.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds001849.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS001849` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | RS_TMSEEG_Data | | Author (year) | `Freedberg2019` | | Canonical | — | | Importable as | `DS001849`, `Freedberg2019` | | Year | 2019 | | Authors | Michael Freedberg, Jack A. Reeves, Sara J. Hussain, Kareem A. Zaghloul, Eric M. Wassermann | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds001849.v1.0.2](https://doi.org/10.18112/openneuro.ds001849.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds001849) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) | [Source URL](https://openneuro.org/datasets/ds001849) | ### Copy-paste BibTeX ```bibtex @dataset{ds001849, title = {RS_TMSEEG_Data}, author = {Michael Freedberg and Jack A. Reeves and Sara J. Hussain and Kareem A. Zaghloul and Eric M. Wassermann}, doi = {10.18112/openneuro.ds001849.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds001849.v1.0.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 120 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 5000.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 44.5 GB - File count: 120 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds001849.v1.0.2 - Source: openneuro - OpenNeuro: [ds001849](https://openneuro.org/datasets/ds001849) - NeMAR: [ds001849](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) ## API Reference Use the `DS001849` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS001849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RS_TMSEEG_Data * **Study:** `ds001849` (OpenNeuro) * **Author (year):** `Freedberg2019` * **Canonical:** — Also importable as: `DS001849`, `Freedberg2019`. Modality: `eeg`. Subjects: 20; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001849](https://openneuro.org/datasets/ds001849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001849](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) DOI: [https://doi.org/10.18112/openneuro.ds001849.v1.0.2](https://doi.org/10.18112/openneuro.ds001849.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS001849 >>> dataset = DS001849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds001849) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) * [eegdash.dataset.DS002034](eegdash.dataset.DS002034.md) # DS001971: eeg dataset, 20 subjects *Audiocue walking study* Access recordings and metadata through EEGDash. **Citation:** Johanna Wagner, Ramon Martinez-Cancino, Scott Makeig, Arnaud Delorme, Christa Neuper, Teodoro Solis-Escalante, Gernot Mueller-Putz (2019). *Audiocue walking study*. 
[10.18112/openneuro.ds001971.v1.1.1](https://doi.org/10.18112/openneuro.ds001971.v1.1.1) Modality: eeg Subjects: 20 Recordings: 273 License: Creative commons Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS001971 dataset = DS001971(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS001971(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS001971( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds001971, title = {Audiocue walking study}, author = {Johanna Wagner and Ramon Martinez-Cancino and Scott Makeig and Arnaud Delorme and Christa Neuper and Teodoro Solis-Escalante and Gernot Mueller-Putz}, doi = {10.18112/openneuro.ds001971.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001971.v1.1.1}, } ``` ## About This Dataset This mobile brain body imaging (MoBI) gait adaptation experiment contains 18 subjects. Participants were walking on a treadmill at a constant speed and were required to step in time to an auditory tone sequence and adapt their step length and rate to occasional shifts in tempo of the pacing stimulus (i.e., following shifts to a faster or slower tempo). 
The scientific article (see Reference) contains all methodological details. - Johanna Wagner (June 6, 2019) ## Dataset Information | Dataset ID | `DS001971` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Audiocue walking study | | Author (year) | `Wagner2019` | | Canonical | — | | Importable as | `DS001971`, `Wagner2019` | | Year | 2019 | | Authors | Johanna Wagner, Ramon Martinez-Cancino, Scott Makeig, Arnaud Delorme, Christa Neuper, Teodoro Solis-Escalante, Gernot Mueller-Putz | | License | Creative commons | | Citation / DOI | [10.18112/openneuro.ds001971.v1.1.1](https://doi.org/10.18112/openneuro.ds001971.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds001971) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) | [Source URL](https://openneuro.org/datasets/ds001971) | ### Copy-paste BibTeX ```bibtex @dataset{ds001971, title = {Audiocue walking study}, author = {Johanna Wagner and Ramon Martinez-Cancino and Scott Makeig and Arnaud Delorme and Christa Neuper and Teodoro Solis-Escalante and Gernot Mueller-Putz}, doi = {10.18112/openneuro.ds001971.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds001971.v1.1.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 273 - Tasks: 1 - Channels: 115 (206), 112 (67) - Sampling rate (Hz): 512.0 - Duration (hours): 39.84517252604167 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 32.0 GB - File count: 273 - Format: BIDS - License: Creative commons - DOI: 10.18112/openneuro.ds001971.v1.1.1 - Source: openneuro - OpenNeuro: [ds001971](https://openneuro.org/datasets/ds001971) - NeMAR: [ds001971](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) ## API Reference Use the `DS001971` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS001971(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Audiocue walking study * **Study:** `ds001971` (OpenNeuro) * **Author (year):** `Wagner2019` * **Canonical:** — Also importable as: `DS001971`, `Wagner2019`. Modality: `eeg`. Subjects: 20; recordings: 273; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
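The recordings in this dataset report two different channel counts (115 and 112; see Technical Details), so pooling data across recordings usually starts by restricting each one to the channels they all share. A plain-Python sketch of that intersection step, with toy channel names (with MNE one could then pick the resulting names on each raw recording):

```python
def common_channels(channel_lists):
    """Intersect channel-name lists across recordings, preserving the
    order of the first list -- useful before stacking recordings that
    do not all share the same montage."""
    common = set(channel_lists[0])
    for chans in channel_lists[1:]:
        common &= set(chans)
    return [ch for ch in channel_lists[0] if ch in common]

# Toy example: one recording carries an extra channel the other lacks.
a = ["Fz", "Cz", "Pz", "EMG1"]
b = ["Fz", "Cz", "Pz"]
print(common_channels([a, b]))  # ['Fz', 'Cz', 'Pz']
```

Preserving the first list's order matters because downstream arrays are typically indexed by channel position, not name.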
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001971](https://openneuro.org/datasets/ds001971) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001971](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) DOI: [https://doi.org/10.18112/openneuro.ds001971.v1.1.1](https://doi.org/10.18112/openneuro.ds001971.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001971 >>> dataset = DS001971(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds001971) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS002034](eegdash.dataset.DS002034.md) # DS002001: meg dataset, 11 subjects *Rivalry_Tagging* Access recordings and metadata through EEGDash. **Citation:** Janine Mendola, Elizabeth Bock (2019). *Rivalry_Tagging*. 
[10.18112/openneuro.ds002001.v1.0.0](https://doi.org/10.18112/openneuro.ds002001.v1.0.0) Modality: meg Subjects: 11 Recordings: 69 License: PD Source: openneuro Citations: 3.0 Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002001 dataset = DS002001(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002001(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002001( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002001, title = {Rivalry_Tagging}, author = {Janine Mendola and Elizabeth Bock}, doi = {10.18112/openneuro.ds002001.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002001.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
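The query examples above use MongoDB-style filters such as `{"subject": {"$in": ["01", "02"]}}`. As a simplified sketch of the matching semantics of `$in` versus plain equality (illustrative only — EEGDash's actual matching is performed by its metadata backend, not by this code):

```python
# Simplified illustration of MongoDB-style matching as used in the
# query examples above; not EEGDash's actual matcher.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):   # operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:          # plain equality form, e.g. subject="01"
            return False
    return True

records = [{"subject": "01"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(kept)  # [{'subject': '01'}]
```

Passing `subject="01"` directly, as in the first example, is shorthand for the equality form of the same filter.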
## Dataset Information | Dataset ID | `DS002001` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Rivalry_Tagging | | Author (year) | `Mendola2019` | | Canonical | `Mendola2020` | | Importable as | `DS002001`, `Mendola2019`, `Mendola2020` | | Year | 2019 | | Authors | Janine Mendola, Elizabeth Bock | | License | PD | | Citation / DOI | [10.18112/openneuro.ds002001.v1.0.0](https://doi.org/10.18112/openneuro.ds002001.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002001) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) | [Source URL](https://openneuro.org/datasets/ds002001) | ### Copy-paste BibTeX ```bibtex @dataset{ds002001, title = {Rivalry_Tagging}, author = {Janine Mendola and Elizabeth Bock}, doi = {10.18112/openneuro.ds002001.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002001.v1.0.0}, } ``` ## Technical Details - Subjects: 11 - Recordings: 69 - Tasks: 2 - Channels: Varies - Sampling rate (Hz): 2400 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 81.7 GB - File count: 69 - Format: BIDS - License: PD - DOI: 10.18112/openneuro.ds002001.v1.0.0 - Source: openneuro - OpenNeuro: [ds002001](https://openneuro.org/datasets/ds002001) - NeMAR: [ds002001](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) ## API Reference Use the `DS002001` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002001(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rivalry_Tagging * **Study:** `ds002001` (OpenNeuro) * **Author (year):** `Mendola2019` * **Canonical:** `Mendola2020` Also importable as: `DS002001`, `Mendola2019`, `Mendola2020`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 69; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002001](https://openneuro.org/datasets/ds002001) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002001](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) DOI: [https://doi.org/10.18112/openneuro.ds002001.v1.0.0](https://doi.org/10.18112/openneuro.ds002001.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS002001 >>> dataset = DS002001(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002001) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002312](eegdash.dataset.DS002312.md) # DS002034: eeg dataset, 14 subjects *Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task* Access recordings and metadata through EEGDash. **Citation:** Christoph Schneider, Michael Pereira, Luca Tonin, Jose del R. Millan (2019). *Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task*. [10.18112/openneuro.ds002034.v1.0.3](https://doi.org/10.18112/openneuro.ds002034.v1.0.3) Modality: eeg Subjects: 14 Recordings: 167 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002034 dataset = DS002034(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002034(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002034( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds002034, title = {Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task}, author = {Christoph Schneider and Michael Pereira and Luca Tonin and Jose del R. Millan}, doi = {10.18112/openneuro.ds002034.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds002034.v1.0.3}, } ``` ## About This Dataset This dataset contains the EEG recordings used in the paper: “Real-time EEG Feedback on Alpha Power Lateralization Leads to Behavioral Improvements in a Covert Attention Task” (Schneider, C., Pereira, M., Tonin, L. et al. Brain Topogr (2019). [https://doi.org/10.1007/s10548-019-00725-9](https://doi.org/10.1007/s10548-019-00725-9)) Participants: Fourteen healthy subjects (seven female, seven male), age 23±1.52 years, with normal or corrected to normal vision took part in the study. All gave informed written consent and received course credits for their participation. The study was covered by the ethical protocol No PB_2017-00295 of the ethics commissions of the cantons of Vaud and Geneva, Switzerland and complied with the standards of the Declaration of Helsinki. Experimental paradigm: Each trial started with the presentation of a gray central fixation point at 0.5° visual angle and subjects were instructed to neither move nor blink until the trial was over. After 1 to 2 s (random duration), a cue—corresponding to the task to perform—was presented for 100 ms: half a circle (line width 0.1°, radius 2°) to the left or to the right indicated the side to attend to, a full circle around the fixation point indicated a central fixation trial (no covert attention shift). This was followed by the sustained attention phase—1 to 5 s—where subjects were instructed to covertly attend to the target placeholder indicated by the cue. 
Target placeholders were circles with an inscribed cross (line width 0.2°, radius 2°, centered at 12° eccentricity from the center point and at a downward angle of 30° from the horizontal midline; Fig. 1b). The target placeholder on the non-cued side is also called a distractor. To be consistent with the real-time feedback runs where color represented the decoded α-LI (see below), the color of both target placeholders varied randomly between isoluminant red and green (L\*a\*b color space (CIELAB), L and b constant, a varied between −80 and 80). A trial ended when the inscribed cross disappeared in the to-attend target (valid cue) or on the opposite side (invalid cue). Subjects were instructed to react to the trial end as fast as possible with a button press using the right index finger. The inter-trial interval was 2–3 s long. In online runs, the sustained attention period lasted between 2 and 20 s and inter-trial intervals ranged from 4 to 5 s. Recordings: The EEG was recorded with an active 64-channel HIamp EEG amplifier (g.tec, Schiedlberg, Austria) at 512 Hz and referenced to the linked ears. The electrodes were positioned according to the international 10-10 system with the ground electrode on FCz. For more details please refer to the paper. The study involved recordings (sessions) on three different days. One recording session lasted approximately 90 min, including the technical setup. Time on task was less than 40 min per session, with breaks after each run (every 9–10 min). On the first recording day subjects practiced for one run to familiarize themselves with the task. Then they performed four offline runs (no feedback, 80 trials each) to calibrate their individual decoder for the real-time feedback (Fig. 1a, “Offline Paradigm”). On days two and three the α-power lateralization index (α-LI) feedback was administered in a single-blinded crossover design.
Subjects were randomly assigned to either receive real or sham α-LI feedback on day two and then switched to the other feedback group on day three (“Real-time feedback paradigm” and “Sham feedback”). Therefore, both days had the same run structure: they started and ended with one offline run (80 trials each, including catch trials), while the real-time feedback was given during two middle runs (40 long trials each). ## Dataset Information | Dataset ID | `DS002034` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task | | Author (year) | `Schneider2019` | | Canonical | — | | Importable as | `DS002034`, `Schneider2019` | | Year | 2019 | | Authors | Christoph Schneider, Michael Pereira, Luca Tonin, Jose del R. Millan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002034.v1.0.3](https://doi.org/10.18112/openneuro.ds002034.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002034) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) | [Source URL](https://openneuro.org/datasets/ds002034) | ### Copy-paste BibTeX ```bibtex @dataset{ds002034, title = {Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task}, author = {Christoph Schneider and Michael Pereira and Luca Tonin and Jose del R.
Millan}, doi = {10.18112/openneuro.ds002034.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds002034.v1.0.3}, } ``` ## Technical Details - Subjects: 14 - Recordings: 167 - Tasks: 4 - Channels: 81 - Sampling rate (Hz): 512.0 - Duration (hours): 35.85 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.1 GB - File count: 167 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002034.v1.0.3 - Source: openneuro - OpenNeuro: [ds002034](https://openneuro.org/datasets/ds002034) - NeMAR: [ds002034](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) ## API Reference Use the `DS002034` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task * **Study:** `ds002034` (OpenNeuro) * **Author (year):** `Schneider2019` * **Canonical:** — Also importable as: `DS002034`, `Schneider2019`. Modality: `eeg`. Subjects: 14; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002034](https://openneuro.org/datasets/ds002034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002034](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) DOI: [https://doi.org/10.18112/openneuro.ds002034.v1.0.3](https://doi.org/10.18112/openneuro.ds002034.v1.0.3) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS002034 >>> dataset = DS002034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002034) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002094: eeg dataset, 20 subjects *Single-pulse open-loop TMS-EEG dataset* Access recordings and metadata through EEGDash. **Citation:** Unknown (2019). *Single-pulse open-loop TMS-EEG dataset*. 
Modality: eeg Subjects: 20 Recordings: 43 License: CC0 Source: openneuro Citations: 30.0 Metadata: Good (70%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002094 dataset = DS002094(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002094(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002094( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002094, title = {Single-pulse open-loop TMS-EEG dataset}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS002094` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Single-pulse open-loop TMS-EEG dataset | | Author (year) | `DS2094_Single_pulse` | | Canonical | — | | Importable as | `DS002094`, `DS2094_Single_pulse` | | Year | 2019 | | Authors | Unknown | | License | CC0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002094) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) | [Source URL](https://openneuro.org/datasets/ds002094) | ## Technical Details - Subjects: 20 - Recordings: 43 - Tasks: 3 - Channels: 30 - Sampling rate (Hz): 5000.0 - Duration (hours): 19.60 - Pathology: Not specified - Modality: Other - Type: Clinical/Intervention - Size on disk: 39.4 GB - File count: 43 - Format: BIDS - License: CC0 - DOI: — - Source: openneuro -
OpenNeuro: [ds002094](https://openneuro.org/datasets/ds002094) - NeMAR: [ds002094](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) ## API Reference Use the `DS002094` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002094(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Single-pulse open-loop TMS-EEG dataset * **Study:** `ds002094` (OpenNeuro) * **Author (year):** `DS2094_Single_pulse` * **Canonical:** — Also importable as: `DS002094`, `DS2094_Single_pulse`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 20; recordings: 43; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002094](https://openneuro.org/datasets/ds002094) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002094](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) NEMAR citation count: 30 ### Examples ```pycon >>> from eegdash.dataset import DS002094 >>> dataset = DS002094(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002094) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002158: eeg dataset, 20 subjects *Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging* Access recordings and metadata through EEGDash. **Citation:** Michael Pereira, Nathan Faivre, Inaki Iturrate, Marco Wirthlin, Luana Serafini, Stephanie Martin, Arnaud Desvachez, Olaf Blanke, Dimitri Van de Ville, Jose del R. Millan (2019). *Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging*. 
[10.18112/openneuro.ds002158.v1.0.2](https://doi.org/10.18112/openneuro.ds002158.v1.0.2) Modality: eeg Subjects: 20 Recordings: 117 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002158 dataset = DS002158(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002158(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002158( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002158, title = {Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging}, author = {Michael Pereira and Nathan Faivre and Inaki Iturrate and Marco Wirthlin and Luana Serafini and Stephanie Martin and Arnaud Desvachez and Olaf Blanke and Dimitri Van de Ville and Jose del R. Millan}, doi = {10.18112/openneuro.ds002158.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds002158.v1.0.2}, } ``` ## About This Dataset This dataset contains the data used in Pereira, M., Faivre, N., Iturrate, I., Wirthlin, M., Serafini, L., Martin, S., Desvachez, A., Blanke, O., Van De Ville, D., Millan, JdR. (2020). Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging. Proceedings of the National Academy of Sciences, 117 (15) pp.
8382-8390 [https://doi.org/10.1073/pnas.1918335117](https://doi.org/10.1073/pnas.1918335117) Preprint: [https://www.biorxiv.org/content/10.1101/496877v1](https://www.biorxiv.org/content/10.1101/496877v1) ABSTRACT The human capacity to compute the likelihood that a decision is correct—known as metacognition—has proven difficult to study in isolation as it usually co-occurs with decision making. Here, we isolated postdecisional from decisional contributions to metacognition by analyzing neural correlates of confidence with multimodal imaging. Healthy volunteers reported their confidence in the accuracy of decisions they made or decisions they observed. We found better metacognitive performance for committed vs. observed decisions, indicating that committing to a decision may improve confidence. Relying on concurrent electroencephalography and hemodynamic recordings, we found a common correlate of confidence following committed and observed decisions in the inferior frontal gyrus and a dissociation in the anterior prefrontal cortex and anterior insula. We discuss these results in light of decisional and postdecisional accounts of confidence and propose a computational model of confidence in which metacognitive performance naturally improves when evidence accumulation is constrained upon committing a decision. Preregistration: [https://osf.io/a5qmv/](https://osf.io/a5qmv/) The dataset contains raw fMRI scans, raw EEG in BrainVision format as well as anatomical scans (T1) and field mapping. We also included preprocessed EEG and fMRI data in derivatives/eegprep and derivatives/fmriprep. EEG PREPROCESSING MR-gradient artifacts were removed using sliding window average template subtraction. The TP10 electrode on the right mastoid was used to detect heartbeats for ballistocardiogram artifact (BCG) removal using a semi-automatic procedure in BrainVision Analyzer 2.
Data were then filtered using a 4th-order zero-phase (two-pass) Butterworth bandpass filter between 1 and 10 Hz, epoched [-0.2, 0.6 s] around the response onset (i.e. the button press in the active condition or the appearance of the virtual hand in the observation condition), re-referenced to a common average, and input to independent component analysis (ICA) to remove residual BCG and ocular artifacts. In order to ensure numerical stability when estimating the independent components, we retained 99% of the variance from the electrode space, leading to an average of 19 (SD = 6) components estimated for each participant and condition. Independent components (ICs) were then fitted with a dipolar source localization method (66). ICs whose dipole lay outside the brain, or which resembled muscular or ocular artifacts, were eliminated. A total of 8 (SD = 3) components were finally kept. All preprocessing steps were performed using EEGLAB and in-house scripts under Matlab (The MathWorks, Inc., Natick, Massachusetts, United States). FMRI PREPROCESSING We modeled the BOLD signal using a general linear model (GLM) with two separate regressors (stick functions at stimulus onset) for the active and observation condition as well as their spatial and temporal derivatives. We then parametrically modulated the regressors with three behavioral variables: the confidence ratings, the response times, and the numerosity difference between the two arrays of dots (i.e., perceptual evidence). Empirical cross-correlation between regressors confirmed limited collinearity for the active (resp. observation) condition (max(abs(R)) = 0.26 ± 0.02 resp., max(abs(R)) = 0.25 ± 0.02). Bad trials as defined in the behavioral analysis section were modeled by two separate regressors (one for active and one for observation) and their spatial and temporal derivatives. We added six realignment parameters as regressors of no interest.
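Two of the EEG preprocessing steps described above, cutting [-0.2, 0.6] s epochs around each response onset and re-referencing to the common average, can be sketched numerically. The code below is an illustrative pure-Python sketch, not the authors' EEGLAB/Matlab pipeline; the 5000 Hz rate is taken from this page's technical details (the raw acquisition rate — the analysis may have used a downsampled rate):

```python
# Illustrative sketch of epoching and common-average referencing as
# described in the EEG preprocessing above. Not the authors' code.
fs = 5000.0              # raw sampling rate (Hz) from the technical details
tmin, tmax = -0.2, 0.6   # epoch window (s) around each response onset

def epoch_bounds(onset: int) -> tuple[int, int]:
    """[start, stop) sample indices of one epoch around an onset sample."""
    return onset + round(tmin * fs), onset + round(tmax * fs)

def common_average_reference(epoch):
    """Subtract the across-channel mean from every time sample."""
    n_ch = len(epoch)
    means = [sum(ch[t] for ch in epoch) / n_ch for t in range(len(epoch[0]))]
    return [[x - m for x, m in zip(ch, means)] for ch in epoch]

start, stop = epoch_bounds(25_000)
print(stop - start)  # 4000 samples per 0.8 s epoch at 5000 Hz
print(common_average_reference([[1.0, 2.0], [3.0, 6.0]]))  # [[-1.0, -2.0], [1.0, 2.0]]
```

In practice these steps are handled by EEG toolboxes (EEGLAB here; MNE-Python offers the same operations); the sketch only makes the arithmetic explicit.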
All second-level (group-level) results are reported at a significance-level of p < 0.05 using cluster-extent family-wise error (FWE) correction with a voxel-height threshold of p < 0.001. We used the anatomical automatic labelling (AAL) atlas for brain parcellation (Tzourio-Mazoyer et al., 2002). ## Dataset Information | Dataset ID | `DS002158` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging | | Author (year) | `Pereira2019_Disentangling` | | Canonical | — | | Importable as | `DS002158`, `Pereira2019_Disentangling` | | Year | 2019 | | Authors | Michael Pereira, Nathan Faivre, Inaki Iturrate, Marco Wirthlin, Luana Serafini, Stephanie Martin, Arnaud Desvachez, Olaf Blanke, Dimitri Van de Ville, Jose del R. Millan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002158.v1.0.2](https://doi.org/10.18112/openneuro.ds002158.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002158) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) | [Source URL](https://openneuro.org/datasets/ds002158) | ### Copy-paste BibTeX ```bibtex @dataset{ds002158, title = {Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging}, author = {Michael Pereira and Nathan Faivre and Inaki Iturrate and Marco Wirthlin and Luana Serafini and Stephanie Martin and Arnaud Desvachez and Olaf Blanke and Dimitri Van de Ville and Jose del R. 
Millan}, doi = {10.18112/openneuro.ds002158.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds002158.v1.0.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 117 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 5000.0 - Duration (hours): 17.08 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 76.5 GB - File count: 117 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002158.v1.0.2 - Source: openneuro - OpenNeuro: [ds002158](https://openneuro.org/datasets/ds002158) - NeMAR: [ds002158](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) ## API Reference Use the `DS002158` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging * **Study:** `ds002158` (OpenNeuro) * **Author (year):** `Pereira2019_Disentangling` * **Canonical:** — Also importable as: `DS002158`, `Pereira2019_Disentangling`. Modality: `eeg`. Subjects: 20; recordings: 117; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002158](https://openneuro.org/datasets/ds002158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002158](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) DOI: [https://doi.org/10.18112/openneuro.ds002158.v1.0.2](https://doi.org/10.18112/openneuro.ds002158.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002158 >>> dataset = DS002158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002158) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002181: eeg dataset, 226 subjects *CRYPTO and PROVIDE EEG Baseline Data* Access recordings and metadata through EEGDash. **Citation:** Wanze Xie, Sarah Jensen, Mark Wade, Swapna Kumar, Alissa Westerlund, Shahria Kakon, Rashidul Haque, William A Petri, Charles A Nelson (2019). 
*CRYPTO and PROVIDE EEG Baseline Data*. [mockDOI](https://doi.org/mockDOI) Modality: eeg Subjects: 226 Recordings: 226 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002181 dataset = DS002181(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002181(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002181( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002181, title = {CRYPTO and PROVIDE EEG Baseline Data}, author = {Wanze Xie and Sarah Jensen and Mark Wade and Swapna Kumar and Alissa Westerlund and Shahria Kakon and Rashidul Haque and William A Petri and Charles A Nelson}, doi = {mockDOI}, url = {https://doi.org/mockDOI}, } ``` ## About This Dataset These are the EEG baseline data used in the study on the association between stunting and EEG brain functional connectivity in Bangladeshi children ([https://doi.org/10.1101/447722](https://doi.org/10.1101/447722)). Data with an ID < 2000 were collected for a cohort of 36-month-old toddlers, and those with an ID > 2000 were collected for a cohort of 6-month-old infants. The children were watching screen savers for 2 minutes. 
## Dataset Information | Dataset ID | `DS002181` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CRYPTO and PROVIDE EEG Baseline Data | | Author (year) | `Xie2019` | | Canonical | — | | Importable as | `DS002181`, `Xie2019` | | Year | 2019 | | Authors | Wanze Xie, Sarah Jensen, Mark Wade, Swapna Kumar, Alissa Westerlund, Shahria Kakon, Rashidul Haque, William A Petri, Charles A Nelson | | License | CC0 | | Citation / DOI | [mockDOI](https://doi.org/mockDOI) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002181) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) | [Source URL](https://openneuro.org/datasets/ds002181) | ### Copy-paste BibTeX ```bibtex @dataset{ds002181, title = {CRYPTO and PROVIDE EEG Baseline Data}, author = {Wanze Xie and Sarah Jensen and Mark Wade and Swapna Kumar and Alissa Westerlund and Shahria Kakon and Rashidul Haque and William A Petri and Charles A Nelson}, doi = {mockDOI}, url = {https://doi.org/mockDOI}, } ``` ## Technical Details - Subjects: 226 - Recordings: 226 - Tasks: 1 - Channels: 125 - Sampling rate (Hz): 500.0 - Duration (hours): 7.675835 - Pathology: Development - Modality: Visual - Type: Resting-state - Size on disk: 150.9 MB - File count: 226 - Format: BIDS - License: CC0 - DOI: mockDOI - Source: openneuro - OpenNeuro: [ds002181](https://openneuro.org/datasets/ds002181) - NeMAR: [ds002181](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) ## API Reference Use the `DS002181` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CRYPTO and PROVIDE EEG Baseline Data * **Study:** `ds002181` (OpenNeuro) * **Author (year):** `Xie2019` * **Canonical:** — Also importable as: `DS002181`, `Xie2019`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Development`. Subjects: 226; recordings: 226; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
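As a plain-Python illustration of the query contract stated in the notes (not the library's internal code), the fixed dataset filter is ANDed with the user-supplied MongoDB-style query, and a user query containing the key `dataset` is rejected:

```python
# Sketch of the documented query contract: the dataset filter is combined
# (ANDed) with the user-supplied MongoDB-style query, which must not itself
# contain the key "dataset".

def merge_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds002181", {"subject": {"$in": ["01", "02"]}})
print(merged)  # {'dataset': 'ds002181', 'subject': {'$in': ['01', '02']}}
```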
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002181](https://openneuro.org/datasets/ds002181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002181](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) DOI: [https://doi.org/mockDOI](https://doi.org/mockDOI) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002181 >>> dataset = DS002181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002181) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002218: eeg dataset, 18 subjects *Auditory and Visual Rhythm Omission EEG* Access recordings and metadata through EEGDash. **Citation:** Daniel C Comstock, Ramesh Balasubramaniam (2019). *Auditory and Visual Rhythm Omission EEG*. 
[mockDOI](https://doi.org/mockDOI) Modality: eeg Subjects: 18 Recordings: 18 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002218 dataset = DS002218(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002218(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002218( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002218, title = {Auditory and Visual Rhythm Omission EEG}, author = {Daniel C Comstock and Ramesh Balasubramaniam}, doi = {mockDOI}, url = {https://doi.org/mockDOI}, } ``` ## About This Dataset This EEG dataset was recorded as part of a study of the predictive mechanisms of rhythm perception by using an omission paradigm to separate out predictive neural activity from sensory evoked neural activity. The study had 18 participants listen to auditory rhythms and watch visual flashing rhythms separately. The stimulus trains of both kinds of rhythms contained occasional omissions. Code for preprocessing, time/freq computation, frequency band extraction and statistics is provided. Cluster formation was performed using the EEGLAB Study function. 
## Dataset Information | Dataset ID | `DS002218` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory and Visual Rhythm Omission EEG | | Author (year) | `Comstock2019` | | Canonical | — | | Importable as | `DS002218`, `Comstock2019` | | Year | 2019 | | Authors | Daniel C Comstock, Ramesh Balasubramaniam | | License | CC0 | | Citation / DOI | [mockDOI](https://doi.org/mockDOI) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002218) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) | [Source URL](https://openneuro.org/datasets/ds002218) | ### Copy-paste BibTeX ```bibtex @dataset{ds002218, title = {Auditory and Visual Rhythm Omission EEG}, author = {Daniel C Comstock and Ramesh Balasubramaniam}, doi = {mockDOI}, url = {https://doi.org/mockDOI}, } ``` ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 - Duration (hours): 16.52023003472222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.9 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: mockDOI - Source: openneuro - OpenNeuro: [ds002218](https://openneuro.org/datasets/ds002218) - NeMAR: [ds002218](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) ## API Reference Use the `DS002218` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory and Visual Rhythm Omission EEG * **Study:** `ds002218` (OpenNeuro) * **Author (year):** `Comstock2019` * **Canonical:** — Also importable as: `DS002218`, `Comstock2019`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002218](https://openneuro.org/datasets/ds002218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002218](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) DOI: [https://doi.org/mockDOI](https://doi.org/mockDOI) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002218 >>> dataset = DS002218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002218) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002312: meg dataset, 19 subjects *OcularLDT* Access recordings and metadata through EEGDash. **Citation:** Teon L Brooks, Laura Gwilliams, Alexandre Gramfort, Alec Marantz (2019). *OcularLDT*. [10.18112/openneuro.ds002312.v1.0.0](https://doi.org/10.18112/openneuro.ds002312.v1.0.0) Modality: meg Subjects: 19 Recordings: 23 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002312 dataset = DS002312(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002312(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002312( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002312, title = {OcularLDT}, author = {Teon L Brooks and Laura Gwilliams and Alexandre Gramfort and Alec Marantz}, doi = {10.18112/openneuro.ds002312.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002312.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS002312` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | OcularLDT | | Author (year) | `Brooks2019` | | Canonical | `OcularLDT`, `ocular_ldt` | | Importable as | `DS002312`, `Brooks2019`, `OcularLDT`, `ocular_ldt` | | Year | 2019 | | Authors | Teon L Brooks, Laura Gwilliams, Alexandre Gramfort, Alec Marantz | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002312.v1.0.0](https://doi.org/10.18112/openneuro.ds002312.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002312) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) | [Source URL](https://openneuro.org/datasets/ds002312) | ### Copy-paste BibTeX ```bibtex @dataset{ds002312, title = {OcularLDT}, author = {Teon L Brooks and Laura Gwilliams and Alexandre Gramfort and Alec Marantz}, doi = {10.18112/openneuro.ds002312.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002312.v1.0.0}, } ``` ## Technical Details - Subjects: 19 - Recordings: 23 - Tasks: 1 - Channels: 257 - Sampling rate (Hz): 1000.0 - Duration (hours): 7.096010277777777 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 34.1 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002312.v1.0.0 - Source: openneuro - OpenNeuro: [ds002312](https://openneuro.org/datasets/ds002312) - NeMAR: [ds002312](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) ## API Reference Use the `DS002312` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002312(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OcularLDT * **Study:** `ds002312` (OpenNeuro) * **Author (year):** `Brooks2019` * **Canonical:** `OcularLDT`, `ocular_ldt` Also importable as: `DS002312`, `Brooks2019`, `OcularLDT`, `ocular_ldt`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002312](https://openneuro.org/datasets/ds002312) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002312](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) DOI: [https://doi.org/10.18112/openneuro.ds002312.v1.0.0](https://doi.org/10.18112/openneuro.ds002312.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS002312 >>> dataset = DS002312(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002312) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS002336: eeg dataset, 10 subjects *A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1* Access recordings and metadata through EEGDash. **Citation:** Giulia Lioi, Claire Cury, Lorraine Perronnet, Marsel Mano, Elise Bannier, Anatole Lecuyer, Christian Barillot (2019). *A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1*. 
[10.18112/openneuro.ds002336.v2.0.2](https://doi.org/10.18112/openneuro.ds002336.v2.0.2) Modality: eeg Subjects: 10 Recordings: 54 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002336 dataset = DS002336(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002336(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002336( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002336, title = {A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1}, author = {Giulia Lioi and Claire Cury and Lorraine Perronnet and Marsel Mano and Elise Bannier and Anatole Lecuyer and Christian Barillot}, doi = {10.18112/openneuro.ds002336.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds002336.v2.0.2}, } ``` ## About This Dataset **ORIGINAL PAPERS** Lioi, G., Cury, C., Perronnet, L., Mano, M., Bannier, E., Lécuyer, A., & Barillot, C. (2019). Simultaneous MRI-EEG during a motor imagery neurofeedback task: an open access brain imaging dataset for multi-modal data integration. BioRxiv. [https://doi.org/10.1101/862375](https://doi.org/10.1101/862375) Mano, Marsel, Anatole Lécuyer, Elise Bannier, Lorraine Perronnet, Saman Noorzadeh, and Christian Barillot. 2017. “How to Build a Hybrid Neurofeedback Platform Combining EEG and FMRI.” Frontiers in Neuroscience 11 (140).
[https://doi.org/10.3389/fnins.2017.00140](https://doi.org/10.3389/fnins.2017.00140) Perronnet, Lorraine, Anatole Lécuyer, Marsel Mano, Elise Bannier, Maureen Clerc, Christian Barillot, et al. 2017. “Unimodal Versus Bimodal EEG-FMRI Neurofeedback of a Motor Imagery Task.” Frontiers in Human Neuroscience 11 (193). [https://doi.org/10.3389/fnhum.2017.00193](https://doi.org/10.3389/fnhum.2017.00193).

This dataset, named XP1, can be pooled together with the dataset XP2, available here: [https://openneuro.org/datasets/ds002338](https://openneuro.org/datasets/ds002338). Data acquisition methods have been described in Perronnet et al. (2017, Frontiers in Human Neuroscience). Simultaneous 64-channel EEG and fMRI during right-hand motor imagery and neurofeedback (NF) were acquired in this study (as well as in XP2). For this study, 10 subjects performed three types of NF runs (bimodal EEG-fMRI NF, unimodal EEG-NF and fMRI-NF).

**EXPERIMENTAL PARADIGM** Subjects were instructed to perform a kinaesthetic motor imagery of the right hand and to find their own strategy to control and bring the ball to the target. The experimental protocol consisted of 6 EEG-fMRI runs with a 20 s block design alternating rest and task:

- motor localizer run (task-motorloc): 8 blocks × (20 s rest + 20 s task)
- motor imagery run without NF (task-MIpre): 5 blocks × (20 s rest + 20 s task)
- three NF runs with different NF conditions (task-eegNF, task-fmriNF, task-eegfmriNF), occurring in random order: 10 blocks × (20 s rest + 20 s task)
- motor imagery run without NF (task-MIpost): 5 blocks × (20 s rest + 20 s task)

**EEG DATA** EEG data were recorded using a 64-channel MR-compatible solution from Brain Products (Brain Products GmbH, Gilching, Germany).

**RAW EEG DATA** EEG was sampled at 5 kHz with FCz as the reference electrode and AFz as the ground electrode, and a resolution of 0.5 microV. Following the BIDS folder structure, raw EEG data for each task can be found for each subject in XP1/sub-xp1\*/eeg in Brain Vision Recorder format (File Version 1.0). Each raw EEG recording includes three files: the data file (\*.eeg), the header file (\*.vhdr) and the marker file (\*.vmrk). The header file contains information about acquisition parameters and amplifier setup. For each electrode, the impedance at the beginning of the recording is also specified. For all subjects, channel 32 is the ECG channel.
The 63 other channels are EEG channels. The marker file contains the list of markers assigned to the EEG recordings and their properties (marker type, marker ID and position in data points). Three types of markers are relevant for the EEG processing:

- R128 (Response): the fMRI volume marker, used to correct for the gradient artifact
- S 99 (Stimulus): the protocol marker indicating the start of the Rest block
- S 2 (Stimulus): the protocol marker indicating the start of the Task (Motor Execution, Motor Imagery or Neurofeedback)

Warning: in a few EEG recordings the first S 99 marker might be missing, but it can easily be added 20 s before the first S 2.

**PREPROCESSED EEG DATA** Following the BIDS folder structure, processed EEG data for each task and subject can be found in the pre-processed data folder XP1/derivatives/sub-xp1\*/eeg_pp/\*eeg_pp.\*, following the BrainVision Analyzer format. Each processed EEG recording includes three files: the data file (\*.dat), the header file (\*.vhdr) and the marker file (\*.vmrk), containing information similar to those described for raw data. In the header file of preprocessed data, channel locations are also specified. In the marker file, the locations in data points of the identified heart pulses (R markers) are specified as well. EEG data were pre-processed using BrainVision Analyzer II Software, with the following steps:

- Automatic gradient artifact correction using the artifact template subtraction method (sliding average calculation with 21 intervals and all channels enabled for correction)
- Downsampling with factor 25 (to 200 Hz)
- Low-pass FIR filter with a 50 Hz cut-off frequency
- Ballistocardiogram (pulse) artifact correction using a semiautomatic procedure (pulse template searched between 40 s and 240 s in the ECG channel with the following parameters: Coherence Trigger = 0.5, Minimal Amplitude = 0.5, Maximal Amplitude = 1.3); the identified pulses were marked with R
- Segmentation relative to the first block marker (S 99) for the whole length of the training protocol (last S 2 + 20 s)

**EEG NF SCORES** Neurofeedback scores can be found in the .mat structures in XP1/derivatives/sub-xp1\*/NF_eeg/d_sub\*NFeeg_scores.mat. The structures, named NF_eeg, are composed of the following subfields:

- .nf_laterality: NF score computed as for the real-time calculation (equation (1))
- .filteegpow_left: band power of the filtered EEG signal in C1
- .filteegpow_right: band power of the filtered EEG signal in C2
- .nf: vector of NF scores (4 per second) computed as in equation (3), for comparison with XP2
- .smoothed
- .eegdata: 64 × 200 × 400 matrix with the pre-processed EEG signals according to the steps described above
- .method

where the subfield .method contains information about the Laplacian filter used and the frequency band of interest.

**BOLD fMRI DATA** All DICOM files were converted to NIfTI-1 and then to BIDS format (version 2.1.4) using the software dcm2niix (version v1.0.20190720 GVV7.4.0). fMRI acquisitions were performed using echo-planar imaging (EPI) covering the entire brain with the following parameters:

- 3T Siemens Verio, EPI sequence
- TR = 2 s, TE = 23 ms
- Resolution: 2×2×4 mm³
- FOV = 210×210 mm²
- Number of slices: 32, no slice gap

As specified in the onset column of the task event files (XP1 \*events.tsv), the scanner began the EPI pulse sequence two seconds prior to the start of the protocol (first rest block), so the first two TRs should be discarded.
The useful TRs for the runs are therefore:

- task-motorloc: 320 s (2 to 322)
- task-MIpre and task-MIpost: 200 s (2 to 202)
- task-eegNF, task-fmriNF, task-eegfmriNF: 400 s (2 to 402)

In the task event files for the different tasks, each column represents:

- 'onset': onset time (sec) of an event
- 'duration': duration (sec) of the event
- 'trial_type': trial (block) type: rest or task (Rest, Task-ME, Task-MI, Task-NF)
- 'stim_file': image presented in a stimulus block. During Rest, Motor Imagery (Task-MI) or Motor Execution (Task-ME), instructions were presented; during Neurofeedback blocks (Task-NF), the image presented was a ball moving in a square that the subject could control by self-regulating their EEG and/or fMRI brain activity.

Following the BIDS folder structure, the functional data and relative metadata are found for each subject in the directory XP1/sub-xp1\*/func.

**BOLD-NF SCORES** For each subject and NF session, a Matlab structure with BOLD-NF features can be found in XP1/derivatives/sub-xp1\*/NF_bold/. For the computation of the BOLD-NF scores, fMRI data were preprocessed using SPM8 with the following steps: slice-time correction, spatial realignment and coregistration with the anatomical scan, spatial smoothing with a 6 mm Gaussian kernel, and normalization to the Montreal Neurological Institute (MNI) template. For each session, a first-level general linear model analysis was then performed. The resulting activation maps (voxel-wise family-wise error corrected at p < 0.05) were used to define two ROIs (9×9×3 voxels) around the maximum of activation in the left and right motor cortex. The BOLD-NF scores (fMRI laterality index) were calculated as the difference between the percentage signal change in the left and right motor ROIs, as for the online NF calculation.
A smoothed and normalized version of the NF scores over the preceding three volumes was also computed. To allow for comparison and aggregation of the two datasets XP1 and XP2, we also computed NF scores considering the left motor cortex and a background, as for the online NF calculation in XP2. In the NF_bold folder, the Matlab files sub-xp1\*_task-\*_NFbold_scores.mat therefore have the following structure: NF_bold → .nf_laterality (calculated as for online NF calculation) → .smoothnf_laterality → .normnf_laterality → .nf (calculated as for online NF calculation in XP2) → .roimean_left (averaged BOLD signal in the left motor ROI) → .roimean_right (averaged BOLD signal in the right motor ROI) → .bgmean (averaged BOLD signal in the background slice) → .method Where the subfield “.method” contains information about the ROI size (.roisize), the background mask (.bgmask) and the ROI masks (.roimask_left, .roimask_right). More details about signal processing and NF calculation can be found in Perronnet et al. 2017 and Perronnet et al. 2018. 
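Structures like the NF_bold files above can be read with `scipy.io.loadmat`. The snippet below first writes a small synthetic stand-in file (fabricated field values, only a subset of the documented subfields) so that it runs anywhere; with real data you would point `loadmat` at the `sub-xp1*_task-*_NFbold_scores.mat` files instead.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Synthetic stand-in for a NF_bold scores file (values are fabricated).
stub = {
    "NF_bold": {
        "nf_laterality": np.zeros(400),
        "roimean_left": np.ones(400),
        "roimean_right": np.ones(400),
    }
}
path = os.path.join(tempfile.mkdtemp(), "NFbold_scores.mat")
savemat(path, stub)

# struct_as_record=False + squeeze_me=True gives attribute-style access
# to the Matlab struct subfields.
mat = loadmat(path, squeeze_me=True, struct_as_record=False)
nf = mat["NF_bold"]
print(nf.nf_laterality.shape)  # (400,)
```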
**ANATOMICAL MRI DATA** As a structural reference for the fMRI analysis, a high resolution 3D T1 MPRAGE sequence was acquired with the following parameters 3T Siemens Verio 3D T1 MPRAGE TR=1.9 s TE=22.6 ms Resolution 1x1x1 mm3 FOV = 256×256 mm2 N of slices: 176 Defacing of MPRAGE T1 images was performed by the submitter using pydeface ([https://github.com/poldracklab/pydeface](https://github.com/poldracklab/pydeface)) Following the BIDS arborescence, the anatomical data and relative metadata are found for each subject in the following directory XP1/sub-xp1\*/anat ## Dataset Information | Dataset ID | `DS002336` | |----------------|----------------| | Title | A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1 | | Author (year) | `Lioi2019_multi` | | Canonical | — | | Importable as | `DS002336`, `Lioi2019_multi` | | Year | 2019 | | Authors | Giulia Lioi, Claire Cury, Lorraine Perronnet, Marsel Mano, Elise Bannier, Anatole Lecuyer, Christian Barillot | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002336.v2.0.2](https://doi.org/10.18112/openneuro.ds002336.v2.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002336) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) | [Source URL](https://openneuro.org/datasets/ds002336) | ### Copy-paste BibTeX ```bibtex @dataset{ds002336, title = {A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1}, author = {Giulia Lioi and Claire Cury and Lorraine Perronnet and Marsel Mano and Elise Bannier and Anatole Lecuyer and Christian Barillot}, doi = {10.18112/openneuro.ds002336.v2.0.2}, url = 
{https://doi.org/10.18112/openneuro.ds002336.v2.0.2}, } ``` ## Technical Details - Subjects: 10 - Recordings: 54 - Tasks: 6 - Channels: 64 - Sampling rate (Hz): 5000.0 - Duration (hours): 11.043072222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 16.8 GB - File count: 54 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002336.v2.0.2 - Source: openneuro - OpenNeuro: [ds002336](https://openneuro.org/datasets/ds002336) - NeMAR: [ds002336](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) ## API Reference Use the `DS002336` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1 * **Study:** `ds002336` (OpenNeuro) * **Author (year):** `Lioi2019_multi` * **Canonical:** — Also importable as: `DS002336`, `Lioi2019_multi`. Modality: `eeg`. Subjects: 10; recordings: 54; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002336](https://openneuro.org/datasets/ds002336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002336](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) DOI: [https://doi.org/10.18112/openneuro.ds002336.v2.0.2](https://doi.org/10.18112/openneuro.ds002336.v2.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002336 >>> dataset = DS002336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002336) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002338: eeg dataset, 17 subjects *A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2* Access recordings and metadata through EEGDash. **Citation:** Giulia Lioi, Claire Cury, Lorraine Perronnet, Marsel Mano, Elise Bannier, Anatole Lecuyer, Christian Barillot (2019). 
*A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2*. [10.18112/openneuro.ds002338.v2.0.1](https://doi.org/10.18112/openneuro.ds002338.v2.0.1) Modality: eeg Subjects: 17 Recordings: 85 License: CC0 Source: openneuro Citations: 11.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002338 dataset = DS002338(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002338(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002338( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002338, title = {A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2}, author = {Giulia Lioi and Claire Cury and Lorraine Perronnet and Marsel Mano and Elise Bannier and Anatole Lecuyer and Christian Barillot}, doi = {10.18112/openneuro.ds002338.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds002338.v2.0.1}, } ``` ## About This Dataset **ORIGINAL PAPERS** Lioi, G., Cury, C., Perronnet, L., Mano, M., Bannier, E., Lécuyer, A., & Barillot, C. (2019). Simultaneous MRI-EEG during a motor imagery neurofeedback task: an open access brain imaging dataset for multi-modal data integration Authors. Accepted for publication in Scientific Data. 
[https://doi.org/10.1101/862375](https://doi.org/10.1101/862375) Mano, Marsel, Anatole Lécuyer, Elise Bannier, Lorraine Perronnet, Saman Noorzadeh, and Christian Barillot. 2017. “How to Build a Hybrid Neurofeedback Platform Combining EEG and FMRI.” Frontiers in Neuroscience 11 (140). [https://doi.org/10.3389/fnins.2017.00140](https://doi.org/10.3389/fnins.2017.00140) Lorraine Perronnet, Anatole Lecuyer, Marsel Mano, Mathis Fleury, Giulia Lioi, Claire Cury, Maureen Clerc, Fabien Lotte, and Christian Barillot. 2018. “Learning 2-in-1: Towards Integrated EEG-FMRI-Neurofeedback.” BioRxiv, no. 397729. [https://doi.org/10.1101/397729](https://doi.org/10.1101/397729). **OVERVIEW** This dataset XP2 can be pooled with the dataset XP1, available here: [https://openneuro.org/datasets/ds002336](https://openneuro.org/datasets/ds002336). ### View full README 
**OVERVIEW** This dataset XP2 can be pooled with the dataset XP1, available here: [https://openneuro.org/datasets/ds002336](https://openneuro.org/datasets/ds002336). Data acquisition methods have been described in Perronnet et al. (2017, Frontiers in Human Neuroscience). Simultaneous 64-channel EEG and fMRI during right-hand motor imagery and neurofeedback (NF) were acquired in this study (as well as in XP1). This study involved 16 subjects randomly assigned to two groups: the first group performed bimodal EEG-fMRI NF with a bi-dimensional feedback metaphor, while the second group executed the same task with a mono-dimensional feedback. **EXPERIMENTAL PARADIGM** The experimental protocol consisted of 5 EEG-fMRI runs with a 20 s block design alternating rest and task. 1 block = 20 s rest + 20 s task. Task description: \_task-MIpre : motor imagery run without NF. 8 blocks. \_task-1dNF or \_task-2dNF : bimodal neurofeedback, with either a mono-dimensional neurofeedback display (mean of EEG NF and fMRI NF scores) or a bi-dimensional display (one modality per dimension). The list of subjects with 1D or 2D is given below. Each subject had 3 runs. 8 blocks per run. \_task-MIpost : motor imagery run without NF. 8 blocks. Subjects with mono-dimensional feedback display: xp201 : 1D xp202 : 1D xp203 : 1D xp206 : 1D xp211 : 1D xp218 : 1D xp219 : 1D xp220 : 1D xp222 : 1D Subjects with bi-dimensional feedback display: xp204 : 2D xp205 : 2D xp207 : 2D xp210 : 2D xp213 : 2D xp216 : 2D xp217 : 2D xp221 : 2D **EEG DATA** EEG data was recorded using a 64-channel MR-compatible solution from Brain Products (Brain Products GmbH, Gilching, Germany). RAW EEG DATA EEG was sampled at 5 kHz with FCz as the reference electrode and AFz as the ground electrode, and a resolution of 0.5 microV. Following the BIDS arborescence, raw EEG data for each task can be found for each subject in XP2/sub-xp2\*/eeg in Brain Vision Recorder format (File Version 1.0). 
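For convenience, the 1D/2D group assignment listed above can be captured as a plain lookup table (subject labels as in the README), e.g. for filtering analyses by feedback display:

```python
# Feedback-display assignment from the XP2 README (9 subjects 1D, 8 subjects 2D).
FEEDBACK_DISPLAY = {
    "xp201": "1D", "xp202": "1D", "xp203": "1D", "xp206": "1D",
    "xp211": "1D", "xp218": "1D", "xp219": "1D", "xp220": "1D",
    "xp222": "1D",
    "xp204": "2D", "xp205": "2D", "xp207": "2D", "xp210": "2D",
    "xp213": "2D", "xp216": "2D", "xp217": "2D", "xp221": "2D",
}

one_d = sorted(s for s, g in FEEDBACK_DISPLAY.items() if g == "1D")
two_d = sorted(s for s, g in FEEDBACK_DISPLAY.items() if g == "2D")
print(len(one_d), len(two_d))  # 9 8
```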
Each raw EEG recording includes three files: the data file (\*.eeg), the header file (\*.vhdr) and the marker file (\*.vmrk). The header file contains information about acquisition parameters and amplifier setup. For each electrode, the impedance at the beginning of the recording is also specified. For all subjects, channel 32 is the ECG channel. The 63 other channels are EEG channels. The marker file contains the list of markers assigned to the EEG recordings and their properties (marker type, marker ID and position in data points). Three types of markers are relevant for the EEG processing: R128 (Response): the fMRI volume marker, used to correct for the gradient artifact S 99 (Stimulus): the protocol marker indicating the start of the Rest block S 2 (Stimulus): the protocol marker indicating the start of the Task (Motor Execution, Motor Imagery or Neurofeedback) Warning: in a few EEG recordings, the first S 99 marker might be missing, but it can easily be “added” 20 s before the first S 2. PREPROCESSED EEG DATA Following the BIDS arborescence, processed EEG data for each task can be found for each subject in XP2/derivatives/sub-xp2\\\*/eeg_pp/\\\*eeg_pp.\*, in BrainVision Analyzer format. Each processed EEG recording includes three files: the data file (\*.dat), the header file (\*.vhdr) and the marker file (\*.vmrk), containing information similar to those described for the raw data. In the header file of the preprocessed data, channel locations are also specified. In the marker file, the locations in data points of the identified heart pulses (R markers) are specified as well. EEG data were pre-processed using BrainVision Analyzer II software, with the following steps: Automatic gradient artifact correction using the artifact template subtraction method (sliding average calculation with 21 intervals and all channels enabled for correction). Downsampling with factor 25 (200 Hz). Low-pass FIR filter: cut-off frequency 50 Hz. 
Ballistocardiogram (pulse) artifact correction using a semiautomatic procedure (pulse template searched between 40 s and 240 s in the ECG channel with the following parameters: Coherence Trigger = 0.5, Minimal Amplitude = 0.5, Maximal Amplitude = 1.3). A Pulse Artifact marker R was associated with each identified pulse. Segmentation relative to the first block marker (S 99) for the whole length of the training protocol (last S 2 + 20 s). EEG-NF SCORES Neurofeedback scores can be found in the .mat structures in XP2/derivatives/sub-xp2\*/NF_eeg/d_sub\*NFeeg_scores.mat Structures named NF_eeg are composed of the following subfields: ID : subject ID, for example sub-xp201 lapC3_ERD : a 1x1280 vector of neurofeedback scores; 4 scores per second, for the whole session. eeg : a 64x80200 matrix, with the pre-processed EEG signals obtained with the steps described above, filtered between 8 and 30 Hz. lapC3_bandpower_8Hz_30Hz : a 1x1280 vector; bandpower of the filtered signal with a Laplacian centred on C3, used to estimate the lapC3_ERD. lapC3_filter : a 1x64 vector; Laplacian filter centred above the C3 channel. **BOLD fMRI DATA** All DICOM files were converted to Nifti-1 and then to BIDS format (version 2.1.4) using the software dcm2niix (version v1.0.20190720 GVV7.4.0) fMRI acquisitions were performed using echo-planar imaging (EPI) and covered the superior half of the brain with the following parameters 3T Siemens Verio EPI sequence TR=1 s TE=23 ms Resolution 2x2x4 mm N of slices: 16 No slice gap As specified in the relative task event files in XP2\*events.tsv files onset, the scanner began the EPI pulse sequence two seconds prior to the start of the protocol (first rest block), so the first two TRs should be discarded. 
The useful TRs for the runs are therefore task-MIpre and task-MIpost: 320 s (2 to 302) task-1dNF and task-2dNF: 320 s (2 to 302) In task events files for the different tasks, each column represents: - ‘onset’: onset time (sec) of an event - ‘duration’: duration (sec) of the event - ‘trial_type’: trial (block) type: rest or task (Rest, Task-MI, Task-NF) - ‘stim_file’: image presented in a stimulus block. During Rest or Motor Imagery (Task-MI), instructions were presented to the subject. On the other hand, during Neurofeedback blocks (Task-NF) the image presented was a ball moving in a square for the bi-dimensional NF (task-2dNF) or a ball moving along a gauge for the uni-dimensional NF (task-1dNF), which the subject could control by self-regulating their EEG and fMRI brain activity. Following the BIDS arborescence, the functional data and relative metadata are found for each subject in the following directory XP2/sub-xp2\*/func BOLD-NF SCORES For each subject and NF session, a Matlab structure with BOLD-NF features can be found in XP2/derivatives/sub-xp2\*/NF_bold/ In view of BOLD-NF scores computation, fMRI data were preprocessed using AutoMRI, a software based on SPM8, with the following steps: slice-time correction, spatial realignment and coregistration with the anatomical scan, spatial smoothing with an 8 mm Gaussian kernel and normalization to the Montreal Neurological Institute template. For each session, a first-level general linear model analysis was then performed. The resulting activation maps (voxel-wise family-wise error corrected at p < 0.05) were used to define two ROIs (9x9x3 voxels) around the maximum of activation in the ipsilesional primary motor area (M1) and supplementary motor area (SMA) respectively. The BOLD-NF scores were calculated as the difference between percentage signal change in the two ROIs (SMA and M1) and a large deep background region (slice 3 out of 16) whose activity is not correlated with the NF task. 
A smoothed version of the NF scores over the preceding three volumes was also computed. The NF_bold structure is organized as follows: NF_bold → .m1 → .nf → .smoothnf → .roimean (averaged BOLD signal in the ROI) → .bgmean (averaged BOLD signal in the background slice) → .method NF_bold → .sma → .nf → .smoothnf → .roimean (averaged BOLD signal in the ROI) → .bgmean (averaged BOLD signal in the background slice) → .method Where the subfield method contains information about the ROI size (.roisize), the background mask (.bgmask) and the ROI mask (.roimask). More details about signal processing and NF calculation can be found in Perronnet et al. 2017 and Perronnet et al. 2018. **ANATOMICAL MRI DATA** As a structural reference for the fMRI analysis, a high resolution 3D T1 MPRAGE sequence was acquired with the following parameters 3T Siemens Verio 3D T1 MPRAGE TR=1.9 s TE=22.6 ms Resolution 1x1x1 mm3 FOV = 256×256 mm2 N of slices: 176 Defacing of MPRAGE T1 images was performed by the submitter using pydeface ([https://github.com/poldracklab/pydeface](https://github.com/poldracklab/pydeface)) Following the BIDS arborescence, the anatomical data and relative metadata are found for each subject in the following directory XP2/sub-xp2\*/anat ## Dataset Information | Dataset ID | `DS002338` | |----------------|----------------| | Title | A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2 | | Author (year) | `Lioi2019_multi_modal` | | Canonical | — | | Importable as | `DS002338`, `Lioi2019_multi_modal` | | Year | 2019 | | Authors | Giulia Lioi, Claire Cury, Lorraine Perronnet, Marsel Mano, Elise Bannier, Anatole Lecuyer, Christian Barillot | | License | CC0 | | Citation / DOI | 
[10.18112/openneuro.ds002338.v2.0.1](https://doi.org/10.18112/openneuro.ds002338.v2.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002338) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) | [Source URL](https://openneuro.org/datasets/ds002338) | ### Copy-paste BibTeX ```bibtex @dataset{ds002338, title = {A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2}, author = {Giulia Lioi and Claire Cury and Lorraine Perronnet and Marsel Mano and Elise Bannier and Anatole Lecuyer and Christian Barillot}, doi = {10.18112/openneuro.ds002338.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds002338.v2.0.1}, } ``` ## Technical Details - Subjects: 17 - Recordings: 85 - Tasks: 4 - Channels: 64 - Sampling rate (Hz): 5000.0 - Duration (hours): 15.745905555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 24.2 GB - File count: 85 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002338.v2.0.1 - Source: openneuro - OpenNeuro: [ds002338](https://openneuro.org/datasets/ds002338) - NeMAR: [ds002338](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) ## API Reference Use the `DS002338` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2 * **Study:** `ds002338` (OpenNeuro) * **Author (year):** `Lioi2019_multi_modal` * **Canonical:** — Also importable as: `DS002338`, `Lioi2019_multi_modal`. Modality: `eeg`. Subjects: 17; recordings: 85; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002338](https://openneuro.org/datasets/ds002338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002338](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) DOI: [https://doi.org/10.18112/openneuro.ds002338.v2.0.1](https://doi.org/10.18112/openneuro.ds002338.v2.0.1) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002338 >>> dataset = DS002338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002338) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002550: meg dataset, 22 subjects *Differential brain mechanisms of selection and maintenance of information during working memory (MEG data)* Access recordings and metadata through EEGDash. **Citation:** Romain Quentin, Jean-Remi King, Etienne Sallard, Nathan Fishman, Ryan Thompson, Ethan Buch, Leonardo Cohen (2020). *Differential brain mechanisms of selection and maintenance of information during working memory (MEG data)*. [10.18112/openneuro.ds002550.v1.0.1](https://doi.org/10.18112/openneuro.ds002550.v1.0.1) Modality: meg Subjects: 22 Recordings: 377 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002550 dataset = DS002550(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002550(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002550( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds002550, title = {Differential brain mechanisms of selection and maintenance of information during working memory (MEG data)}, author = {Romain Quentin and Jean-Remi King and Etienne Sallard and Nathan Fishman and Ryan Thompson and Ethan Buch and Leonardo Cohen}, doi = {10.18112/openneuro.ds002550.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002550.v1.0.1}, } ``` ## About This Dataset OpenNeuro curator note: This dataset was previously accessible at ds001750. The dataset was reuploaded due to privacy considerations. **Data folder corresponding to this manuscript** **Differential brain mechanisms of selection and maintenance of information during working memory** Note: One participant did not sign a data sharing agreement, so data of 22 participants are available here (vs. 23 in the manuscript). Results and conclusions are not different with only 22 participants. ### View full README 
Participant folders are organized as: - ##### ‘ses-mri/anat’: contains the T1 MRI of the participant - ##### ses-01: contains MEG data in BIDS format, behavioral data and HPI positions in surface RAS MRI coordinates for session 1 - ##### ses-02: contains MEG data in BIDS format, behavioral data and HPI positions in surface RAS MRI coordinates for session 2 **Description of non-MEG files:** - ##### behavioral task scripts: Matlab (Psychtoolbox 3) scripts for the Working Memory (WorkMem) and one-back control task (LocaCue) - ##### hpi_mri_surf.txt: contains the X, Y, Z coordinates of the nasion, left and right HPI (head position indicator) in surface MRI coordinates. Names of the electrodes are NEC (nasion), LEC (left) and REC (right). Other coordinates are for co-registration during the session (not useful here). These HPI coordinates have been acquired from the neuronavigation system Brainsight ([https://www.rogue-research.com/tms/brainsight-tms/](https://www.rogue-research.com/tms/brainsight-tms/)) - ##### WorkMem+subNumber+date.csv: **Contains behavioral results:** - NbTrial: trial number - FixNbTrial: trial number with good eye fixation - isFixed: whether the participant fixated the central dot during the trial (1: correct fixation, 0: broke fixation) - GaborLeft: left gabor (25 possible: 5 spatial frequencies \* 5 orientations) - GaborRight: right gabor (25 possible: 5 spatial frequencies \* 5 orientations) - Cue: cue (4 possible, 1: left dotted, 2: left solid, 3: right dotted, 4: right solid) - Change: whether the cued stimulus attribute is different from the corresponding probe attribute (1: different, 0: same) - sfLeft: spatial frequency of the left gabor (5 possible) - orientLeft: line orientation of the left gabor (5 possible) - phaseLeft: phase of the left gabor (5 possible) - sfRight: spatial frequency of the right gabor (5 possible) - orientRight: line orientation of the right gabor (5 possible) - phaseRight: phase of the right gabor (5 possible) - randomSF: probe spatial 
frequency if change=1 - randomOrient: probe line orientation if change=1 - phaseResp: phase of the probe - Response: response of the participant (1: different, 0: similar) - isCorrect: correctness of the response (1: correct, 0: incorrect) - reactionTime: reaction time from probe onset to participant response - TrialTime: total trial duration - runningTime: running time - fixcrossTime: duration of the fixation dot presentation before stimulus onset (should be between 0.350 and 0.450 s) - gaborTime: duration of stimulus presentation (should be 0.1 s) - precueTime: duration between stimulus offset and cue onset (should be between 0.75 and 0.85 s) - cueTime: duration of the cue presentation (should be 0.1 s) - postcueTime: duration between cue offset and probe onset (should be between 1.45 and 1.55 s) - feedbackTime: duration of the feedback (green or red dot, should be 0.1 s) - triggGabor: trigger sent to MEG acquisition at the stimulus onset - triggCue: trigger sent to MEG acquisition at the cue onset - triggProbe: trigger sent to MEG acquisition at the probe onset - ##### locacue+subNumber+date.csv: - NbTrial: trial number - FixNbTrial: trial number with good eye fixation - isFixed: whether the participant fixated the central dot during the trial (1: correct fixation, 0: broke fixation) - same: whether 2 consecutive lines are similar (1: similar, 0: different) - Cue: cue (4 possible, 1: left dotted, 2: left solid, 3: right dotted, 4: right solid) - Side: side of the cue (1: right, 0: left) - Press: whether the participant pressed the button (1: press, 0: no press) - isCorrect: correctness of the response (1: correct, 0: incorrect) - ReactionTime: reaction time when a button is pressed - TrialTime: total trial duration - runningTime: running time - fixcrossTime: duration of the fixation dot - cueTime: duration of the cue presentation (should be 0.1 s) - postcueTime: duration between the cue offset and the beginning of the next trial (should be 1.2 s) ## Dataset Information | 
Dataset ID | `DS002550` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Differential brain mechanisms of selection and maintenance of information during working memory (MEG data) | | Author (year) | `Quentin2020` | | Canonical | — | | Importable as | `DS002550`, `Quentin2020` | | Year | 2020 | | Authors | Romain Quentin, Jean-Remi King, Etienne Sallard, Nathan Fishman, Ryan Thompson, Ethan Buch, Leonardo Cohen | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002550.v1.0.1](https://doi.org/10.18112/openneuro.ds002550.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002550) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) | [Source URL](https://openneuro.org/datasets/ds002550) | ### Copy-paste BibTeX ```bibtex @dataset{ds002550, title = {Differential brain mechanisms of selection and maintenance of information during working memory (MEG data)}, author = {Romain Quentin and Jean-Remi King and Etienne Sallard and Nathan Fishman and Ryan Thompson and Ethan Buch and Leonardo Cohen}, doi = {10.18112/openneuro.ds002550.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002550.v1.0.1}, } ``` ## Technical Details - Subjects: 22 - Recordings: 377 - Tasks: 2 - Channels: 308 (367), 307 (8), 304 (2) - Sampling rate (Hz): 1200.0 (374), 12000.0 (3) - Duration (hours): 30.509356643518515 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 167.5 GB - File count: 377 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002550.v1.0.1 - Source: openneuro - OpenNeuro: [ds002550](https://openneuro.org/datasets/ds002550) - NeMAR: [ds002550](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) ## API Reference Use the `DS002550` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002550(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Differential brain mechanisms of selection and maintenance of information during working memory (MEG data) * **Study:** `ds002550` (OpenNeuro) * **Author (year):** `Quentin2020` * **Canonical:** — Also importable as: `DS002550`, `Quentin2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 22; recordings: 377; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
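As a plain-Python illustration of the query-merging behavior described in the notes above (a simplified sketch, not EEGDash's actual implementation; `merge_query`, `matches`, and the sample records are hypothetical):

```python
# Sketch of how a user query could be AND-ed with the fixed dataset filter,
# mirroring the rule that the user query must not contain the key "dataset".
# Hypothetical helpers for illustration; not EEGDash internals.

def merge_query(dataset_id, user_query=None):
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

def matches(record, query):
    # Supports plain equality and the MongoDB-style {"$in": [...]} operator.
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

records = [
    {"dataset": "ds002550", "subject": "01"},
    {"dataset": "ds002550", "subject": "05"},
    {"dataset": "ds000117", "subject": "01"},
]
q = merge_query("ds002550", {"subject": {"$in": ["01", "02"]}})
print([r["subject"] for r in records if matches(r, q)])  # ['01']
```

The dataset filter always wins: records from other datasets are excluded before the user's additional conditions are applied.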
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002550](https://openneuro.org/datasets/ds002550) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002550](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) DOI: [https://doi.org/10.18112/openneuro.ds002550.v1.0.1](https://doi.org/10.18112/openneuro.ds002550.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002550 >>> dataset = DS002550(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002550) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS002578: eeg dataset, 2 subjects *Visual Oddball Task (256 channels)* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme, Scott Makeig (2020). *Visual Oddball Task (256 channels)*. 
[10.18112/openneuro.ds002578.v1.1.0](https://doi.org/10.18112/openneuro.ds002578.v1.1.0) Modality: eeg Subjects: 2 Recordings: 2 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002578 dataset = DS002578(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002578(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002578( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002578, title = {Visual Oddball Task (256 channels)}, author = {Arnaud Delorme and Scott Makeig}, doi = {10.18112/openneuro.ds002578.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002578.v1.1.0}, } ``` ## About This Dataset Data for this selective attention task was collected in 2004 at the Swartz Center for Computational Neuroscience at UCSD. These datasets are part of a larger corpus of 32-channel data collected a few years prior. The experiment is identical, although the number of channels is larger (256), the electrode positions were scanned, and an anatomical MRI is provided (allowing for precise source localization). See publication for more details. 
Raw data manipulation before export: - Fuse all BDF BIOSEMI files and reference to electrode 135 (see loadallbdf_2020.m) - Fuse with presentation file information (see loadallbdf_2020.m) - Remove spurious events of type ‘condition’ and ‘201’ (see clean_events.m) - Add HED tags (see addHEDTags.m) - Convert MRI to NIFTI format (MRIcron) and reorient (MRIcrogl) (see convert_nifti.m) ## Dataset Information | Dataset ID | `DS002578` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual Oddball Task (256 channels) | | Author (year) | `Delorme2020_Visual_Oddball_256` | | Canonical | — | | Importable as | `DS002578`, `Delorme2020_Visual_Oddball_256` | | Year | 2020 | | Authors | Arnaud Delorme, Scott Makeig | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002578.v1.1.0](https://doi.org/10.18112/openneuro.ds002578.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002578) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) | [Source URL](https://openneuro.org/datasets/ds002578) | ### Copy-paste BibTeX ```bibtex @dataset{ds002578, title = {Visual Oddball Task (256 channels)}, author = {Arnaud Delorme and Scott Makeig}, doi = {10.18112/openneuro.ds002578.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002578.v1.1.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 2 - Tasks: 1 - Channels: 256 - Sampling rate (Hz): 256.0 - Duration (hours): 1.455 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.3 GB - File count: 2 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002578.v1.1.0 - Source: openneuro - OpenNeuro: [ds002578](https://openneuro.org/datasets/ds002578) - NeMAR: [ds002578](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) ## API Reference Use the `DS002578` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002578(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Oddball Task (256 channels) * **Study:** `ds002578` (OpenNeuro) * **Author (year):** `Delorme2020_Visual_Oddball_256` * **Canonical:** — Also importable as: `DS002578`, `Delorme2020_Visual_Oddball_256`. Modality: `eeg`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002578](https://openneuro.org/datasets/ds002578) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002578](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) DOI: [https://doi.org/10.18112/openneuro.ds002578.v1.1.0](https://doi.org/10.18112/openneuro.ds002578.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002578 >>> dataset = DS002578(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002578) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002680: eeg dataset, 14 subjects *Go-nogo categorization and detection task* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme (2020). *Go-nogo categorization and detection task*. 
[10.18112/openneuro.ds002680.v1.2.0](https://doi.org/10.18112/openneuro.ds002680.v1.2.0) Modality: eeg Subjects: 14 Recordings: 350 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002680 dataset = DS002680(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002680(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002680( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002680, title = {Go-nogo categorization and detection task}, author = {Arnaud Delorme}, doi = {10.18112/openneuro.ds002680.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds002680.v1.2.0}, } ``` ## About This Dataset Participants were seated in a dimly lit room, 110 cm from a computer screen driven by a PC. Two tasks alternated: a categorization task and a recognition task. In both tasks, target and non-target images were presented with equal probability. Participants were tested in two recording phases. The first day was composed of 13 series, the second day of 12 series, with 100 images per series (see details of the series below). To start a series, subjects had to press a touch-sensitive button. A small fixation point (smaller than 0.1 degree of visual angle) was drawn in the middle of a black screen. 
Then, an 8 bit color vertical photograph (256 pixels wide by 384 pixels high, which roughly corresponds to 4.5 degrees of visual angle in width and 6.5 degrees in height) was flashed for 20 ms (2 frames of a 100 Hz SVGA screen) using a programmable graphic board (VSG 2.1, Cambridge Research Systems). This short presentation time prevented subjects from using exploratory eye movements to respond. Participants gave their responses following a go/nogo paradigm. For each target, they had to lift their finger from the button as quickly and accurately as possible (releasing the button restored a focused light beam between an optic fiber led and its receiver; the response latency of this apparatus was under 1 ms). Participants were given 1000 ms to respond, after which any response was considered a nogo response. The stimulus onset asynchrony (SOA) was 2000 ms plus or minus a random delay of 200 ms. For each distractor, participants had to keep pressing the button for at least 1000 ms (nogo response). More specifically, in the animal categorization task, participants had to respond whenever there was an animal in the picture. In the recognition task, the session started with a learning phase. A probe image was flashed 15 times for 20 ms each, intermixed with two presentations of 1000 ms after the fifth and the tenth flashes, allowing ocular exploration of the image, with an inter-stimulus interval of 1000 ms. Participants were instructed to carefully examine and learn the probe image in order to recognize it in the following series. The test phase started immediately after the learning phase. The probe image constituted the sole target of the series. Both tasks were organized in series of 100 images; 50 target images were mixed with 50 non-targets in the animal categorization task; 50 copies of a unique photograph were mixed at random with 50 non-targets in the recognition task. 
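The response rules above (a button release within 1000 ms of stimulus onset counts as a go response; a slower release, or no release, counts as nogo) can be sketched as a small scoring helper. This is hypothetical illustration code, not part of the dataset; reaction times are assumed to be in milliseconds:

```python
RESPONSE_WINDOW_MS = 1000  # releases after this deadline count as nogo

def score_trial(is_target, rt_ms=None):
    """Classify one go/nogo trial.

    is_target: True if the image required a go (button release).
    rt_ms: release latency from stimulus onset in ms, or None if the
           button was held down throughout the trial.
    Returns (response, correct) where response is "go" or "nogo".
    """
    responded = rt_ms is not None and rt_ms <= RESPONSE_WINDOW_MS
    response = "go" if responded else "nogo"
    correct = responded == is_target
    return response, correct

print(score_trial(True, 420))    # ('go', True)    fast correct release
print(score_trial(True, 1300))   # ('nogo', False) too slow, counts as miss
print(score_trial(False, None))  # ('nogo', True)  correctly withheld
```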
## Dataset Information | Dataset ID | `DS002680` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Go-nogo categorization and detection task | | Author (year) | `Delorme2020_Go_nogo_categorization` | | Canonical | — | | Importable as | `DS002680`, `Delorme2020_Go_nogo_categorization` | | Year | 2020 | | Authors | Arnaud Delorme | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002680.v1.2.0](https://doi.org/10.18112/openneuro.ds002680.v1.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002680) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) | [Source URL](https://openneuro.org/datasets/ds002680) | ### Copy-paste BibTeX ```bibtex @dataset{ds002680, title = {Go-nogo categorization and detection task}, author = {Arnaud Delorme}, doi = {10.18112/openneuro.ds002680.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds002680.v1.2.0}, } ``` ## Technical Details - Subjects: 14 - Recordings: 350 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 20.808333333333334 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.2 GB - File count: 350 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002680.v1.2.0 - Source: openneuro - OpenNeuro: [ds002680](https://openneuro.org/datasets/ds002680) - NeMAR: [ds002680](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) ## API Reference Use the `DS002680` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002680(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Go-nogo categorization and detection task * **Study:** `ds002680` (OpenNeuro) * **Author (year):** `Delorme2020_Go_nogo_categorization` * **Canonical:** — Also importable as: `DS002680`, `Delorme2020_Go_nogo_categorization`. Modality: `eeg`. Subjects: 14; recordings: 350; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002680](https://openneuro.org/datasets/ds002680) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002680](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) DOI: [https://doi.org/10.18112/openneuro.ds002680.v1.2.0](https://doi.org/10.18112/openneuro.ds002680.v1.2.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS002680 >>> dataset = DS002680(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002680) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002691: eeg dataset, 20 subjects *Internal attention study* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme, Dean Radin (2020). *Internal attention study*. 
[10.18112/openneuro.ds002691.v1.1.0](https://doi.org/10.18112/openneuro.ds002691.v1.1.0) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002691 dataset = DS002691(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002691(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002691( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002691, title = {Internal attention study}, author = {Arnaud Delorme and Dean Radin}, doi = {10.18112/openneuro.ds002691.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002691.v1.1.0}, } ``` ## About This Dataset This experiment has 20 subjects. Subjects were asked to mentally concentrate on a target (see published article for more information) for periods of about 15 seconds. There are 4 verbal instructions given to the subject by an automated computer program connected to a speakerphone: - The instruction is to wait until the experiment starts - The instruction is to relax - The instruction is to get ready as the trial is about to start - The instruction is to mentally concentrate on the target The entire experiment is performed with eyes closed. Relax periods last about 9 seconds and are followed by a 6-second period in which the participant is asked to “get ready” for the trial, followed by a 15-second period of concentration. This sequence is repeated 20 times for each participant. 
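The nominal trial structure (about 9 s relax, 6 s get ready, 15 s concentrate, repeated 20 times) can be sketched as a simple timeline generator. This is an illustration only; actual instruction timings in the recordings vary slightly around these nominal values:

```python
# Nominal phase durations in seconds, per the dataset description.
PHASES = [("relax", 9.0), ("get_ready", 6.0), ("concentrate", 15.0)]
N_TRIALS = 20

def build_timeline():
    """Return ([(onset_seconds, phase_label), ...], total_seconds)."""
    events, t = [], 0.0
    for _ in range(N_TRIALS):
        for label, duration in PHASES:
            events.append((t, label))
            t += duration
    return events, t

events, total = build_timeline()
print(len(events), total / 60)  # 60 phase onsets, 10.0 nominal minutes
```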
## Dataset Information | Dataset ID | `DS002691` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Internal attention study | | Author (year) | `Delorme2020_Internal_attention` | | Canonical | — | | Importable as | `DS002691`, `Delorme2020_Internal_attention` | | Year | 2020 | | Authors | Arnaud Delorme, Dean Radin | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002691.v1.1.0](https://doi.org/10.18112/openneuro.ds002691.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002691) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) | [Source URL](https://openneuro.org/datasets/ds002691) | ### Copy-paste BibTeX ```bibtex @dataset{ds002691, title = {Internal attention study}, author = {Arnaud Delorme and Dean Radin}, doi = {10.18112/openneuro.ds002691.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002691.v1.1.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 250.0 - Duration (hours): 6.721111111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 776.7 MB - File count: 20 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002691.v1.1.0 - Source: openneuro - OpenNeuro: [ds002691](https://openneuro.org/datasets/ds002691) - NeMAR: [ds002691](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) ## API Reference Use the `DS002691` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Internal attention study * **Study:** `ds002691` (OpenNeuro) * **Author (year):** `Delorme2020_Internal_attention` * **Canonical:** — Also importable as: `DS002691`, `Delorme2020_Internal_attention`. 
Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002691](https://openneuro.org/datasets/ds002691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002691](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) DOI: [https://doi.org/10.18112/openneuro.ds002691.v1.1.0](https://doi.org/10.18112/openneuro.ds002691.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002691 >>> dataset = DS002691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002691) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002712: meg dataset, 25 subjects *Numbers and Letters* Access recordings and metadata through EEGDash. **Citation:** Sara Aurtenetxe, Nicola Molinaro, Doug Davidson, Manuel Carreiras (2020). *Numbers and Letters*. [10.18112/openneuro.ds002712.v1.0.1](https://doi.org/10.18112/openneuro.ds002712.v1.0.1) Modality: meg Subjects: 25 Recordings: 82 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002712 dataset = DS002712(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002712(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002712( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds002712, title = {Numbers and Letters}, author = {Sara Aurtenetxe and Nicola Molinaro and Doug Davidson and Manuel Carreiras}, doi = {10.18112/openneuro.ds002712.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002712.v1.0.1}, } ``` ## About This Dataset OpenNeuro curator note: This dataset was previously accessible at ds001985. The dataset was reuploaded due to privacy considerations. The experiment is composed of two runs. The trigger codes for each run are as follows. Run 1 (single items): 10 = single numbers; 15 = single letters; 20 & 25 = single false fonts. Run 2 (strings): 35 = string numbers; 40 = string letters; 45 & 50 = string false fonts. Raw files may be split into two files (e.g., run-1 + run-11). ## Dataset Information | Dataset ID | `DS002712` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Numbers and Letters | | Author (year) | `Aurtenetxe2020` | | Canonical | — | | Importable as | `DS002712`, `Aurtenetxe2020` | | Year | 2020 | | Authors | Sara Aurtenetxe, Nicola Molinaro, Doug Davidson, Manuel Carreiras | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002712.v1.0.1](https://doi.org/10.18112/openneuro.ds002712.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002712) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) | [Source URL](https://openneuro.org/datasets/ds002712) | ### Copy-paste BibTeX ```bibtex @dataset{ds002712, title = {Numbers and Letters}, author = {Sara Aurtenetxe and Nicola Molinaro and Doug Davidson and Manuel Carreiras}, doi = {10.18112/openneuro.ds002712.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002712.v1.0.1}, } ``` ## Technical Details - Subjects: 25 - Recordings: 82 - Tasks: 1 - Channels: 312 (79), 361 (2), 314 - Sampling rate (Hz): 1000.0 - Duration (hours): 24.06388888888889 
- Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 101.8 GB - File count: 82 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002712.v1.0.1 - Source: openneuro - OpenNeuro: [ds002712](https://openneuro.org/datasets/ds002712) - NeMAR: [ds002712](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) ## API Reference Use the `DS002712` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002712(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers and Letters * **Study:** `ds002712` (OpenNeuro) * **Author (year):** `Aurtenetxe2020` * **Canonical:** — Also importable as: `DS002712`, `Aurtenetxe2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
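The run trigger codes quoted in this dataset's description map naturally onto condition labels. A small lookup such as the following (a hypothetical helper, not part of EEGDash) can be used when turning raw trigger values into event annotations:

```python
# Condition labels for the trigger codes quoted in the dataset description.
TRIGGER_LABELS = {
    10: "single/numbers",
    15: "single/letters",
    20: "single/false_fonts",
    25: "single/false_fonts",
    35: "strings/numbers",
    40: "strings/letters",
    45: "strings/false_fonts",
    50: "strings/false_fonts",
}

def label_events(trigger_codes):
    """Map raw trigger values to condition labels, skipping unknown codes."""
    return [TRIGGER_LABELS[c] for c in trigger_codes if c in TRIGGER_LABELS]

print(label_events([10, 99, 45]))  # ['single/numbers', 'strings/false_fonts']
```

A mapping like this can also serve as the `event_desc` argument when building annotations from an MNE events array.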
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002712](https://openneuro.org/datasets/ds002712) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002712](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) DOI: [https://doi.org/10.18112/openneuro.ds002712.v1.0.1](https://doi.org/10.18112/openneuro.ds002712.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002712 >>> dataset = DS002712(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002712) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS002718: eeg dataset, 18 subjects *Face processing EEG dataset for EEGLAB* Access recordings and metadata through EEGDash. **Citation:** Daniel G. Wakeman, Richard N Henson (2020). *Face processing EEG dataset for EEGLAB*. 
[10.18112/openneuro.ds002718.v1.1.0](https://doi.org/10.18112/openneuro.ds002718.v1.1.0) Modality: eeg Subjects: 18 Recordings: 18 License: CC0 Source: openneuro Citations: 11.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002718 dataset = DS002718(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002718(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002718( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002718, title = {Face processing EEG dataset for EEGLAB}, author = {Daniel G. Wakeman and Richard N Henson}, doi = {10.18112/openneuro.ds002718.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002718.v1.1.0}, } ``` ## About This Dataset *Introduction:* This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117). This dataset was originally acquired and shared by Daniel Wakeman and Richard Henson ([https://pubmed.ncbi.nlm.nih.gov/25977808/](https://pubmed.ncbi.nlm.nih.gov/25977808/)). The MEG and EEG data were simultaneously recorded; the sMRI scans were preserved to support M/EEG source localization. Following event log augmentation, reorganization, and HED (v8.0.0) annotation, the EEG data have been repackaged in EEGLAB format. *Overview of the experiment:* Eighteen participants completed two recording sessions spaced three months apart – one session recorded fMRI and the other simultaneously recorded MEG and EEG data. 
During each session, participants performed the same simple perceptual task, responding to presented photographs of famous, unfamiliar, and scrambled faces by pressing one of two keyboard keys to indicate a subjective yes or no decision as to the relative spatial symmetry of the viewed face. Famous faces were feature-matched to unfamiliar faces; half the faces were female. The two sessions (MEEG, fMRI) had different organizations of event timing and presentation because of technological requirements of the respective imaging modalities. Each individual face was presented twice during the session. For half of the presented faces, the second presentation followed immediately after the first. For the other half, the second presentation was delayed by 5-15 face presentations.

*Preprocessing:* Multi-subject, multi-modal (sMRI+EEG) neuroimaging dataset on face processing. The original data are described at [https://www.nature.com/articles/sdata20151](https://www.nature.com/articles/sdata20151). This is a repackaged version of the EEG data in EEGLAB format. The data have gone through minimal preprocessing (see wh_extracteeg_BIDS.m), including:

- Ignoring fMRI and MEG data (sMRI preserved for EEG source localization)
- Extracting EEG channels out of the MEG/EEG fif data
- Adding fiducials
- Renaming EOG and EKG channels
- Extracting events from the event channel
- Removing spurious events 5, 6, 7, 13, 14, 15, 17, 18 and 19
- Removing spurious event 24 for subject 3, run 4
- Renaming events taking into account the button assigned to each subject
- Correcting event latencies (events have a shift of 34 ms)
- Resampling data to 250 Hz (done because this dataset is used as a tutorial for EEGLAB and needs to be lightweight)
- Merging runs 1 to 6
- Removing event fields urevent and duration
- Filling up empty fields for events boundary and stim_file.
- Saving as EEGLAB .set format

**Original and related datasets** This data is a mapping of the original OpenfMRI dataset ds000117, which is no longer available (although a copy is available in the sourcedata folder of the ds003645 repository). The ds000117 dataset on OpenNeuro contains only 16 subjects. The original OpenfMRI dataset is described at the bottom of the ds000117 README file [https://openneuro.org/datasets/ds000117/versions/1.0.4/file-display/README](https://openneuro.org/datasets/ds000117/versions/1.0.4/file-display/README), along with the correspondence with the 16 subjects in ds000117. Note that the sub-001 data on OpenfMRI were corrupted, so they are not included here. The OpenNeuro dataset ds003645 is similar to this one but also contains MEG data and HED events; it also does not have the different runs merged.

**Import warning** Make sure to import the channel locations from the BIDS electrodes.tsv files. The EEGLAB .set files also contain channel locations, but those for subjects 8 and 14 are wrong (rotated by 90 degrees). When using the EEGLAB EEG-BIDS plugin, the default behavior is to import channel locations from BIDS.

*Data curators:* Ramon Martinez, Dung Truong, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA)

## Dataset Information

| Dataset ID | `DS002718` |
|----------------|----------------|
| Title | Face processing EEG dataset for EEGLAB |
| Author (year) | `Wakeman2020` |
| Canonical | `Wakeman2015`, `WakemanHenson_EEG_MEG` |
| Importable as | `DS002718`, `Wakeman2020`, `Wakeman2015`, `WakemanHenson_EEG_MEG` |
| Year | 2020 |
| Authors | Daniel G. Wakeman, Richard N Henson |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds002718.v1.1.0](https://doi.org/10.18112/openneuro.ds002718.v1.1.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds002718) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002718) · [Source URL](https://openneuro.org/datasets/ds002718) |

### Copy-paste BibTeX

```bibtex
@dataset{ds002718,
  title = {Face processing EEG dataset for EEGLAB},
  author = {Daniel G. Wakeman and Richard N Henson},
  doi = {10.18112/openneuro.ds002718.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds002718.v1.1.0},
}
```

## Technical Details

- Subjects: 18
- Recordings: 18
- Tasks: 1
- Channels: 74
- Sampling rate (Hz): 250.0
- Duration (hours): 14.84
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 4.3 GB
- File count: 18
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds002718.v1.1.0
- Source: openneuro
- OpenNeuro: [ds002718](https://openneuro.org/datasets/ds002718)
- NeMAR: [ds002718](https://nemar.org/dataexplorer/detail?dataset_id=ds002718)

## API Reference

Use the `DS002718` class to access this dataset programmatically.

### *class* eegdash.dataset.DS002718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Face processing EEG dataset for EEGLAB

* **Study:** `ds002718` (OpenNeuro)
* **Author (year):** `Wakeman2020`
* **Canonical:** `WakemanHenson_EEG_MEG`

Also importable as: `DS002718`, `Wakeman2020`, `WakemanHenson_EEG_MEG`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002718](https://openneuro.org/datasets/ds002718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002718](https://nemar.org/dataexplorer/detail?dataset_id=ds002718) DOI: [https://doi.org/10.18112/openneuro.ds002718.v1.1.0](https://doi.org/10.18112/openneuro.ds002718.v1.1.0) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002718 >>> dataset = DS002718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002718) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002718) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002720: eeg dataset, 18 subjects *A dataset recorded during development of a tempo-based brain-computer music interface* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). *A dataset recorded during development of a tempo-based brain-computer music interface*. [10.18112/openneuro.ds002720.v1.0.1](https://doi.org/10.18112/openneuro.ds002720.v1.0.1) Modality: eeg Subjects: 18 Recordings: 165 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002720 dataset = DS002720(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002720(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002720( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds002720, title = {A dataset recorded during development of a tempo-based brain-computer music interface}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002720.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002720.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS002720` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A dataset recorded during development of a tempo-based brain-computer music interface | | Author (year) | `Daly2020_recorded` | | Canonical | — | | Importable as | `DS002720`, `Daly2020_recorded` | | Year | 2020 | | Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002720.v1.0.1](https://doi.org/10.18112/openneuro.ds002720.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002720) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) | [Source URL](https://openneuro.org/datasets/ds002720) | ### Copy-paste BibTeX ```bibtex @dataset{ds002720, title = {A dataset recorded during development of a tempo-based brain-computer music interface}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. 
Nasuto}, doi = {10.18112/openneuro.ds002720.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002720.v1.0.1}, } ``` ## Technical Details - Subjects: 18 - Recordings: 165 - Tasks: — - Channels: 19 - Sampling rate (Hz): 1000.0 - Duration (hours): 18.774722222222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.4 GB - File count: 165 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002720.v1.0.1 - Source: openneuro - OpenNeuro: [ds002720](https://openneuro.org/datasets/ds002720) - NeMAR: [ds002720](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) ## API Reference Use the `DS002720` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of a tempo-based brain-computer music interface * **Study:** `ds002720` (OpenNeuro) * **Author (year):** `Daly2020_recorded` * **Canonical:** — Also importable as: `DS002720`, `Daly2020_recorded`. Modality: `eeg`. Subjects: 18; recordings: 165; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002720](https://openneuro.org/datasets/ds002720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002720](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) DOI: [https://doi.org/10.18112/openneuro.ds002720.v1.0.1](https://doi.org/10.18112/openneuro.ds002720.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002720 >>> dataset = DS002720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002720) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002721: eeg dataset, 31 subjects *An EEG dataset recorded during affective music listening* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). *An EEG dataset recorded during affective music listening*. 
[10.18112/openneuro.ds002721.v1.0.2](https://doi.org/10.18112/openneuro.ds002721.v1.0.2) Modality: eeg Subjects: 31 Recordings: 185 License: CC0 Source: openneuro Citations: 10.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002721 dataset = DS002721(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002721(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002721( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002721, title = {An EEG dataset recorded during affective music listening}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002721.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds002721.v1.0.2}, } ``` ## About This Dataset **0. Sections** 1. Project 2. Dataset 3. Terms of Use 4. Contents 5. Method and Processing **1. PROJECT** Title: Brain-Computer Music Interface for Monitoring and Inducing Affective States (BCMI-MIdAS) Dates: 2012-2017 Funding organisation: Engineering and Physical Sciences Research Council (EPSRC) Grant no.: EP/J003077/1 and EP/J002135/1. **2. DATASET** Title: EEG data investigating neural correlates of music-induced emotion. Description: This dataset accompanies the publication by Daly et al. (2018) and has been analysed in Daly et al.
(2014; 2015a; 2015b) (please see Section 5 for full references). The purpose of the research activity in which the data were collected was to investigate the EEG neural correlates of music-induced emotion. For this purpose, 31 healthy adult participants listened to 40 music clips of 12 s duration each, targeting a range of emotional states. The music clips comprised excerpts from film scores spanning a range of styles and rated on induced emotion. The dataset contains unprocessed EEG data from all 31 participants (age range 18-66, 18 female) while listening to the music clips, together with the reported induced emotional responses. The paradigm involved 6 runs of EEG recordings. The first and last runs were resting-state runs, during which participants were instructed to sit still and rest for 300 s. The other 4 runs each contained 10 music listening trials. Publication Year: 2018 Creator: Nicoletta Nicolaou, Ian Daly. Contributors: Isil Poyraz Bilgin, James Weaver, Asad Malik. Principal Investigator: Slawomir Nasuto (EP/J003077/1). Co-Investigator: Eduardo Miranda (EP/J002135/1). Organisation: University of Reading Rights-holders: University of Reading Source: The musical stimuli were taken from Eerola & Vuoskoski, “A comparison of the discrete and dimensional models of emotion in music”, Psychol. Music, 39:18-49, 2010 (doi: 10.1177/0305735610362821). Stimuli set 1 was used ([https://www.jyu.fi/hytk/fi/laitokset/mutku/en/research/projects2/past-projects/coe/materials/emotion/soundtracks/set1/view](https://www.jyu.fi/hytk/fi/laitokset/mutku/en/research/projects2/past-projects/coe/materials/emotion/soundtracks/set1/view)) System: The data are prepared for use on Windows systems and no guarantee is made that the datasets can be opened correctly on other systems. **3. TERMS OF USE** Copyright University of Reading, 2018.
This dataset is licensed by the rights-holder(s) under a Creative Commons Attribution 4.0 International Licence: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). **4. CONTENTS** BIDS File listing: The dataset comprises data from 31 participants, named using the convention sub_s_number, where s_number is a random participant number from 1 to 31. For example, ‘sub-08’ contains data obtained from participant 8. The data are in BIDS format and contain EEG and associated metadata. The sampling rate is 1 kHz and the EEG corresponding to a music clip is 20 s long (the duration of the clips). Each data folder contains the following data (please note that the number of runs varies between participants): **5. METHOD and PROCESSING** This information is available in the following publications:

[1] Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., Nasuto, S.J., “Neural and physiological data from participants listening to affective music”, Scientific Data, 2018.

[2] Daly, I., Malik, A., Hwang, F., Roesch, E., Weaver, J., Kirke, A., Williams, D., Miranda, E. R., Nasuto, S. J., “Neural correlates of emotional responses to music: an EEG study”, Neuroscience Letters, 573: 52-7, 2014; doi: 10.1016/j.neulet.2014.05.003.

[3] Daly, I., Hallowell, J., Hwang, F., Kirke, A., Malik, A., Roesch, E., Weaver, J., Williams, D., Miranda, E., Nasuto, S.J., “Changes in music tempo entrain movement related brain activity”, Proc. IEEE EMBC 2014, pp. 4595-8; doi: 10.1109/EMBC.2014.6944647.

[4] Daly, I., Williams, D., Hallowell, J., Hwang, F., Kirke, A., Malik, A., Weaver, J., Miranda, E., Nasuto, S.J., “Music-induced emotions can be predicted from a combination of brain activity and acoustic features”, Brain and Cognition, 101:1-11, 2015b; doi: 10.1016/j.bandc.2015.08.003.

Please cite these references if you use this dataset in your study. Thank you for your interest in our work.
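The naming convention and sampling parameters described in the CONTENTS section above can be sketched in a few lines of Python (an illustration only; the values are taken from this README):

```python
# Participant folders follow zero-padded BIDS labels for subjects 1-31,
# e.g. participant 8 -> "sub-08", as in the README's example.
subjects = [f"sub-{n:02d}" for n in range(1, 32)]

# At the stated 1 kHz sampling rate, a 20 s music-clip segment spans
# 1000 * 20 = 20000 samples per channel.
sfreq_hz = 1000
clip_seconds = 20
samples_per_clip = sfreq_hz * clip_seconds

print(subjects[7], samples_per_clip)
```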
## Dataset Information

| Dataset ID | `DS002721` |
|----------------|----------------|
| Title | An EEG dataset recorded during affective music listening |
| Author (year) | `Daly2020_recorded_affective` |
| Canonical | — |
| Importable as | `DS002721`, `Daly2020_recorded_affective` |
| Year | 2020 |
| Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto |
| License | CC0 |
| Citation / DOI | [10.18112/openneuro.ds002721.v1.0.2](https://doi.org/10.18112/openneuro.ds002721.v1.0.2) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds002721) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002721) · [Source URL](https://openneuro.org/datasets/ds002721) |

### Copy-paste BibTeX

```bibtex
@dataset{ds002721,
  title = {An EEG dataset recorded during affective music listening},
  author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto},
  doi = {10.18112/openneuro.ds002721.v1.0.2},
  url = {https://doi.org/10.18112/openneuro.ds002721.v1.0.2},
}
```

## Technical Details

- Subjects: 31
- Recordings: 185
- Tasks: —
- Channels: 19
- Sampling rate (Hz): 1000.0
- Duration (hours): 26.28
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 3.4 GB
- File count: 185
- Format: BIDS
- License: CC0
- DOI: 10.18112/openneuro.ds002721.v1.0.2
- Source: openneuro
- OpenNeuro: [ds002721](https://openneuro.org/datasets/ds002721)
- NeMAR: [ds002721](https://nemar.org/dataexplorer/detail?dataset_id=ds002721)

## API Reference

Use the `DS002721` class to access this dataset programmatically.
### *class* eegdash.dataset.DS002721(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An EEG dataset recorded during affective music listening * **Study:** `ds002721` (OpenNeuro) * **Author (year):** `Daly2020_recorded_affective` * **Canonical:** — Also importable as: `DS002721`, `Daly2020_recorded_affective`. Modality: `eeg`. Subjects: 31; recordings: 185; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
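The note above says a user-supplied `query` is ANDed with the fixed dataset filter. The merge semantics can be sketched with plain dictionaries (an illustration of the documented behavior, not the library's internal code; `subject` is one example of an allowed query field):

```python
# Fixed filter contributed by the DS002721 class.
dataset_filter = {"dataset": "ds002721"}

# User-supplied MongoDB-style query; it must not contain the key "dataset".
user_query = {"subject": {"$in": ["01", "02"]}}
assert "dataset" not in user_query

# Effective query: both constraints apply (logical AND).
merged = {**dataset_filter, **user_query}
print(merged)
```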
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002721](https://openneuro.org/datasets/ds002721) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002721](https://nemar.org/dataexplorer/detail?dataset_id=ds002721) DOI: [https://doi.org/10.18112/openneuro.ds002721.v1.0.2](https://doi.org/10.18112/openneuro.ds002721.v1.0.2) NEMAR citation count: 10 ### Examples ```pycon >>> from eegdash.dataset import DS002721 >>> dataset = DS002721(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002721) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002721) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002722: eeg dataset, 19 subjects *A dataset recorded during development of an affective brain-computer music interface: calibration session* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). *A dataset recorded during development of an affective brain-computer music interface: calibration session*. 
[10.18112/openneuro.ds002722.v1.0.1](https://doi.org/10.18112/openneuro.ds002722.v1.0.1) Modality: eeg Subjects: 19 Recordings: 94 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002722 dataset = DS002722(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002722(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002722( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002722, title = {A dataset recorded during development of an affective brain-computer music interface: calibration session}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002722.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002722.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information

| Dataset ID | `DS002722` |
|----------------|----------------|
| Title | A dataset recorded during development of an affective brain-computer music interface: calibration session |
| Author (year) | `Daly2020_recorded_development` |
| Canonical | — |
| Importable as | `DS002722`, `Daly2020_recorded_development` |
| Year | 2020 |
| Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto |
| License | CC0 |
| Citation / DOI | [10.18112/openneuro.ds002722.v1.0.1](https://doi.org/10.18112/openneuro.ds002722.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds002722) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002722) · [Source URL](https://openneuro.org/datasets/ds002722) |

### Copy-paste BibTeX

```bibtex
@dataset{ds002722,
  title = {A dataset recorded during development of an affective brain-computer music interface: calibration session},
  author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto},
  doi = {10.18112/openneuro.ds002722.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds002722.v1.0.1},
}
```

## Technical Details

- Subjects: 19
- Recordings: 94
- Tasks: —
- Channels: 37
- Sampling rate (Hz): 1000.0
- Duration (hours): 22.89
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 6.1 GB
- File count: 94
- Format: BIDS
- License: CC0
- DOI: 10.18112/openneuro.ds002722.v1.0.1
- Source: openneuro
- OpenNeuro: [ds002722](https://openneuro.org/datasets/ds002722)
- NeMAR: [ds002722](https://nemar.org/dataexplorer/detail?dataset_id=ds002722)

## API Reference

Use the `DS002722` class to access this dataset programmatically.
### *class* eegdash.dataset.DS002722(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: calibration session * **Study:** `ds002722` (OpenNeuro) * **Author (year):** `Daly2020_recorded_development` * **Canonical:** — Also importable as: `DS002722`, `Daly2020_recorded_development`. Modality: `eeg`. Subjects: 19; recordings: 94; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
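A MongoDB-style `$in` filter, as used in the Advanced query example above, keeps a record when the field's value is a member of the given list. A minimal pure-Python sketch of that matching rule (the records below are hypothetical, not real metadata):

```python
def matches_in(record: dict, field: str, allowed: list) -> bool:
    """Return True if record[field] is one of the allowed values ($in)."""
    return record.get(field) in allowed

records = [
    {"subject": "01"},
    {"subject": "02"},
    {"subject": "07"},
]
kept = [r for r in records if matches_in(r, "subject", ["01", "02"])]
print([r["subject"] for r in kept])
```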
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002722](https://openneuro.org/datasets/ds002722) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002722](https://nemar.org/dataexplorer/detail?dataset_id=ds002722) DOI: [https://doi.org/10.18112/openneuro.ds002722.v1.0.1](https://doi.org/10.18112/openneuro.ds002722.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002722 >>> dataset = DS002722(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002722) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002722) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002723: eeg dataset, 8 subjects *A dataset recorded during development of an affective brain-computer music interface: testing session* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). *A dataset recorded during development of an affective brain-computer music interface: testing session*. 
[10.18112/openneuro.ds002723.v1.1.0](https://doi.org/10.18112/openneuro.ds002723.v1.1.0) Modality: eeg Subjects: 8 Recordings: 44 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002723 dataset = DS002723(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002723(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002723( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002723, title = {A dataset recorded during development of an affective brain-computer music interface: testing session}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002723.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002723.v1.1.0}, } ``` ## About This Dataset ### View full README **0. Sections** 1. Project 2. Dataset 3. Terms of Use 4. Contents 5. Method and Processing **1. PROJECT** Title: Brain-Computer Music Interface for Monitoring and Inducing Affective States (BCMI-MIdAS) Dates: 2012-2017 Funding organisation: Engineering and Physical Sciences Research Council (EPSRC) Grant no.: EP/J003077/1 **2. DATASET** EEG data from an affective Music Brain-Computer Interface: online real-time control. Description: This dataset accompanies the publication by Daly et al. (2018) and has been analysed in Daly et al. 
(2016) (please see Section 5 for full references). The purpose of the research activity in which the data were collected was to investigate the performance of a real-time and online brain-computer interface that identified the user’s emotional state and modified music on-the-fly in order to induce a target emotional state. For this purpose, participants listened to 60 s music clips targeting different affective states, as defined by valence and arousal. The music clips were generated using a synthetic music generator. The dataset contains the EEG data from 8 healthy adult participants during real-time control of the system while listening to the music clips, together with the reported affective state (valence and arousal values). This dataset is connected to 2 additional datasets: 1. EEG data from an affective Music Brain-Computer Interface: system calibration. doi: 2. EEG data from an affective Music Brain-Computer Interface: offline training data to induce target emotional states. doi: Please note that the number of participants varies between datasets; however, participant codes are the same across all three datasets. Publication Year: 2018 Creators: Nicoletta Nicolaou, Ian Daly. Contributors: Isil Poyraz Bilgin, James Weaver, Asad Malik, Alexis Kirke, Duncan Williams. Principal Investigator: Slawomir Nasuto (EP/J003077/1). Co-Investigator: Eduardo Miranda (EP/J002135/1). Organisation: University of Reading Rights-holders: University of Reading Source: The synthetic generator used to generate the music clips was presented in Williams et al., “Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System”, ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. DOI: [https://doi.org/10.1145/3059005](https://doi.org/10.1145/3059005). **3. TERMS OF USE** Copyright University of Reading, 2018. 
This dataset is licensed by the rights-holder(s) under a Creative Commons Attribution 4.0 International Licence: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/). **4. CONTENTS** The dataset comprises data from 8 subjects. The sampling rate is 1 kHz and the music listening task corresponding to a music clip is 60 s long (clip duration). During the first 20 s, the music clip places the listener in emotional state A, while for the remaining 40 s the music clip targets the affective trajectory from emotional state B to C. Within a 60 s music listening epoch there are two target affective states. In the first 20 s the music is generated to target one affective state (target A); for the next 20 s the BCMI attempts to (a) work out what affective state the participant is actually in, and (b) generate music to move them from this affective state to the next targeted affective state (target B), which is targeted for the last 20 s of the 60 s music listening epoch. **5. METHOD and PROCESSING** This information is available in the following publications: [1] Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., Nasuto, S.J., “Neural and physiological data from participants listening to affective music”, Scientific Data, 2018. [2] Daly, I., Williams, D., Hwang, F., Kirke, A., Malik, A., Weaver, J., Miranda, E. R., Nasuto, S. J., “Affective Brain-Computer Music Interfacing”, Journal of Neural Engineering, 13:4, July 2016. [http://dx.doi.org/10.1088/1741-2560/13/4/046022](http://dx.doi.org/10.1088/1741-2560/13/4/046022) If you use this dataset in your study please cite these references, as well as the following reference: [3] Williams, D., Kirke, A., Miranda, E.R., Daly, I., Hwang, F., Weaver, J., Nasuto, S.J., “Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System”, ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. 
DOI: [https://doi.org/10.1145/3059005](https://doi.org/10.1145/3059005) Thank you for your interest in our work. ## Dataset Information | Dataset ID | `DS002723` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A dataset recorded during development of an affective brain-computer music interface: testing session | | Author (year) | `Daly2020_session` | | Canonical | — | | Importable as | `DS002723`, `Daly2020_session` | | Year | 2020 | | Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002723.v1.1.0](https://doi.org/10.18112/openneuro.ds002723.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002723) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) | [Source URL](https://openneuro.org/datasets/ds002723) | ### Copy-paste BibTeX ```bibtex @dataset{ds002723, title = {A dataset recorded during development of an affective brain-computer music interface: testing session}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. 
Nasuto}, doi = {10.18112/openneuro.ds002723.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds002723.v1.1.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 44 - Tasks: — - Channels: 37 - Sampling rate (Hz): 1000.0 - Duration (hours): 10.47 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.6 GB - File count: 44 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002723.v1.1.0 - Source: openneuro - OpenNeuro: [ds002723](https://openneuro.org/datasets/ds002723) - NeMAR: [ds002723](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) ## API Reference Use the `DS002723` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002723(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: testing session * **Study:** `ds002723` (OpenNeuro) * **Author (year):** `Daly2020_session` * **Canonical:** — Also importable as: `DS002723`, `Daly2020_session`. Modality: `eeg`. Subjects: 8; recordings: 44; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002723](https://openneuro.org/datasets/ds002723) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002723](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) DOI: [https://doi.org/10.18112/openneuro.ds002723.v1.1.0](https://doi.org/10.18112/openneuro.ds002723.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002723 >>> dataset = DS002723(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002723) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002724: eeg dataset, 10 subjects *A dataset recorded during development of an affective brain-computer music interface: training sessions* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). 
*A dataset recorded during development of an affective brain-computer music interface: training sessions*. [10.18112/openneuro.ds002724.v1.0.1](https://doi.org/10.18112/openneuro.ds002724.v1.0.1) Modality: eeg Subjects: 10 Recordings: 96 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002724 dataset = DS002724(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002724(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002724( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002724, title = {A dataset recorded during development of an affective brain-computer music interface: training sessions}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002724.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002724.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS002724` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A dataset recorded during development of an affective brain-computer music interface: training sessions | | Author (year) | `Daly2020_sessions` | | Canonical | — | | Importable as | `DS002724`, `Daly2020_sessions` | | Year | 2020 | | Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002724.v1.0.1](https://doi.org/10.18112/openneuro.ds002724.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002724) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) | [Source URL](https://openneuro.org/datasets/ds002724) | ### Copy-paste BibTeX ```bibtex @dataset{ds002724, title = {A dataset recorded during development of an affective brain-computer music interface: training sessions}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002724.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002724.v1.0.1}, } ``` ## Technical Details - Subjects: 10 - Recordings: 96 - Tasks: — - Channels: 37 - Sampling rate (Hz): 1000.0 - Duration (hours): 28.61 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.5 GB - File count: 96 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002724.v1.0.1 - Source: openneuro - OpenNeuro: [ds002724](https://openneuro.org/datasets/ds002724) - NeMAR: [ds002724](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) ## API Reference Use the `DS002724` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002724(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: training sessions * **Study:** `ds002724` (OpenNeuro) * **Author (year):** `Daly2020_sessions` * **Canonical:** — Also importable as: `DS002724`, `Daly2020_sessions`. Modality: `eeg`. Subjects: 10; recordings: 96; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002724](https://openneuro.org/datasets/ds002724) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002724](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) DOI: [https://doi.org/10.18112/openneuro.ds002724.v1.0.1](https://doi.org/10.18112/openneuro.ds002724.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002724 >>> dataset = DS002724(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002724) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002725: eeg dataset, 21 subjects *A dataset recording joint EEG-fMRI during affective music listening* Access recordings and metadata through EEGDash. **Citation:** Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto (2020). *A dataset recording joint EEG-fMRI during affective music listening*. 
[10.18112/openneuro.ds002725.v1.0.0](https://doi.org/10.18112/openneuro.ds002725.v1.0.0) Modality: eeg Subjects: 21 Recordings: 105 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002725 dataset = DS002725(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002725(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002725( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002725, title = {A dataset recording joint EEG-fMRI during affective music listening}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. Nasuto}, doi = {10.18112/openneuro.ds002725.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002725.v1.0.0}, } ``` ## About This Dataset Dataset: Joint EEG-fMRI recording during affective music listening. This dataset was recorded from 21 healthy adult participants via a joint EEG-fMRI modality while they listened to a set of music stimuli chosen and generated to produce different affective (emotional) responses. Participants self-reported their felt affective states as they listened to the music. The full experiment description can be found in our paper (Daly et al., 2019). Data recorded in 2016. Published in 2019. [1] Daly, I., Williams, D., Hwang, F., Kirke, A., Miranda, E. R., & Nasuto, S. J. (2019). 
Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music. Scientific Reports, 9(1), 9415. [https://doi.org/10.1038/s41598-019-45105-2](https://doi.org/10.1038/s41598-019-45105-2) ## Dataset Information | Dataset ID | `DS002725` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A dataset recording joint EEG-fMRI during affective music listening | | Author (year) | `Daly2020_joint` | | Canonical | — | | Importable as | `DS002725`, `Daly2020_joint` | | Year | 2020 | | Authors | Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002725.v1.0.0](https://doi.org/10.18112/openneuro.ds002725.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002725) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) | [Source URL](https://openneuro.org/datasets/ds002725) | ### Copy-paste BibTeX ```bibtex @dataset{ds002725, title = {A dataset recording joint EEG-fMRI during affective music listening}, author = {Ian Daly and Nicoletta Nicolaou and Duncan Williams and Faustina Hwang and Alexis Kirke and Eduardo Miranda and Slawomir J. 
Nasuto}, doi = {10.18112/openneuro.ds002725.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002725.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 105 - Tasks: 5 - Channels: 46 - Sampling rate (Hz): 1000.0 - Duration (hours): 22.54 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.3 GB - File count: 105 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002725.v1.0.0 - Source: openneuro - OpenNeuro: [ds002725](https://openneuro.org/datasets/ds002725) - NeMAR: [ds002725](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) ## API Reference Use the `DS002725` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002725(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recording joint EEG-fMRI during affective music listening * **Study:** `ds002725` (OpenNeuro) * **Author (year):** `Daly2020_joint` * **Canonical:** — Also importable as: `DS002725`, `Daly2020_joint`. Modality: `eeg`. Subjects: 21; recordings: 105; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002725](https://openneuro.org/datasets/ds002725) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002725](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) DOI: [https://doi.org/10.18112/openneuro.ds002725.v1.0.0](https://doi.org/10.18112/openneuro.ds002725.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002725 >>> dataset = DS002725(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002725) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002761: meg dataset, 25 subjects *memoryreplay* Access recordings and metadata through EEGDash. **Citation:** G. Elliott Wimmer, Yunzhe Liu, Neža Vehar, Timothy E.J. Behrens, Raymond J. Dolan (2020). *memoryreplay*. 
[10.18112/openneuro.ds002761.v1.1.2](https://doi.org/10.18112/openneuro.ds002761.v1.1.2) Modality: meg Subjects: 25 Recordings: 249 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002761 dataset = DS002761(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002761(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002761( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002761, title = {memoryreplay}, author = {G. Elliott Wimmer and Yunzhe Liu and Neža Vehar and Timothy E.J. Behrens and Raymond J. Dolan}, doi = {10.18112/openneuro.ds002761.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds002761.v1.1.2}, } ``` ## About This Dataset The MEG files contain a channel with triggers necessary for event marking and timing. Separate event files with onsets are provided in the participant directories for completeness only; the MEG triggers should be used for actual onsets in analysis. The delay between the trigger and the visual onset of an on-screen event sent by the projector is approximately 20 ms, as estimated using a photodiode. Memory phase triggers: At the onset of a trial, the first trigger represents the category (1-8) of the on-screen image. Categories 1-6 represent actual stimulus categories. Trigger values of 7 and 8 represent the 4 positive and 4 negative story-ending stimuli, respectively. The onset of the answer, approximately 5.5 sec later, is marked by a trigger value of 11. 
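The memory-phase trigger scheme just described can be decoded mechanically. The sketch below labels trigger codes and compensates the ~20 ms projector delay; `label_trigger`, `corrected_onset_s`, and the example `events` pairs are hypothetical names for illustration (in practice the trigger channel would be read with something like `mne.find_events`).

```python
# Map memory-phase trigger codes to labels and correct onsets for the
# ~20 ms projector delay. All names here are illustrative, not part of
# the dataset's files or the EEGDash API.
SFREQ = 600.0               # MEG sampling rate from the dataset metadata
PROJECTOR_DELAY_S = 0.020   # photodiode-estimated trigger-to-screen delay

def label_trigger(code: int) -> str:
    if 1 <= code <= 6:
        return f"stimulus_category_{code}"   # actual stimulus categories
    if code == 7:
        return "story_ending_positive"       # 4 positive story-ending stimuli
    if code == 8:
        return "story_ending_negative"       # 4 negative story-ending stimuli
    if code == 11:
        return "answer_onset"                # ~5.5 s after trial onset
    return "unknown"

def corrected_onset_s(sample: int, sfreq: float = SFREQ) -> float:
    """Trigger sample index -> estimated on-screen onset time in seconds."""
    return sample / sfreq + PROJECTOR_DELAY_S

# Hypothetical (sample, code) pairs for one trial
events = [(1200, 3), (4500, 11)]
for sample, code in events:
    print(label_trigger(code), round(corrected_onset_s(sample), 3))
```

The same labeling applies to the localizer phase described next, with the added caveat that baseline data must be taken more than 2 s before the trigger because a naming word precedes each picture.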
Localizer phase triggers: As in the memory phase, at the onset of a trial, the first trigger represents the category (1-8) of the on-screen image. Categories 1-6 represent true categories. Trigger values of 7 and 8 represent the 4 positive and 4 negative story-ending stimuli, respectively. For a baseline, note that for the 2 s prior to picture onset, a word naming that picture was presented on the screen; thus, baseline values should be taken from data more than 2 s before the trigger onset. Methods note: a sequenceness analysis step was omitted from the published 2020 Nature Neuroscience paper. The text should have read: “We next asked whether the βi(Δt) was consistent with a specified 6 × 6 transition matrix by taking the Frobenius inner product between these two matrices (the sum of element-wise products of the two matrices). This resulted in a single number ZΔt, which pertained to lag Δt. For each trial, sequenceness results were then z-scored across lags. Finally, differential forward – backward sequenceness was defined as ZfΔt − ZbΔt.” ## Dataset Information | Dataset ID | `DS002761` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | memoryreplay | | Author (year) | `Wimmer2020` | | Canonical | — | | Importable as | `DS002761`, `Wimmer2020` | | Year | 2020 | | Authors | G. Elliott Wimmer, Yunzhe Liu, Neža Vehar, Timothy E.J. Behrens, Raymond J. Dolan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002761.v1.1.2](https://doi.org/10.18112/openneuro.ds002761.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002761) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) | [Source URL](https://openneuro.org/datasets/ds002761) | ### Copy-paste BibTeX ```bibtex @dataset{ds002761, title = {memoryreplay}, author = {G. 
Elliott Wimmer and Yunzhe Liu and Neža Vehar and Timothy E.J. Behrens and Raymond J. Dolan}, doi = {10.18112/openneuro.ds002761.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds002761.v1.1.2}, } ``` ## Technical Details - Subjects: 25 - Recordings: 249 - Tasks: 2 - Channels: 306 - Sampling rate (Hz): 600.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 1.7 MB - File count: 249 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds002761.v1.1.2 - Source: openneuro - OpenNeuro: [ds002761](https://openneuro.org/datasets/ds002761) - NeMAR: [ds002761](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) ## API Reference Use the `DS002761` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) memoryreplay * **Study:** `ds002761` (OpenNeuro) * **Author (year):** `Wimmer2020` * **Canonical:** — Also importable as: `DS002761`, `Wimmer2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 25; recordings: 249; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002761](https://openneuro.org/datasets/ds002761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002761](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) DOI: [https://doi.org/10.18112/openneuro.ds002761.v1.1.2](https://doi.org/10.18112/openneuro.ds002761.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002761 >>> dataset = DS002761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002761) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS002778: eeg dataset, 31 subjects *UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease* Access recordings and metadata through EEGDash. **Citation:** Alexander P. Rockhill, Nicko Jackson, Jobi George, Adam Aron, Nicole C. Swann (2020). 
*UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease*. [10.18112/openneuro.ds002778.v1.0.5](https://doi.org/10.18112/openneuro.ds002778.v1.0.5) Modality: eeg Subjects: 31 Recordings: 46 License: CC0 Source: openneuro Citations: 42.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002778 dataset = DS002778(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002778(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002778( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002778, title = {UC San Diego Resting State EEG Data from Patients with Parkinson's Disease}, author = {Alexander P. Rockhill and Nicko Jackson and Jobi George and Adam Aron and Nicole C. Swann}, doi = {10.18112/openneuro.ds002778.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds002778.v1.0.5}, } ``` ## About This Dataset Welcome to the resting state EEG dataset collected at the University of California San Diego and curated by Alex Rockhill at the University of Oregon. Please email [arockhil@uoregon.edu](mailto:arockhil@uoregon.edu) before submitting a manuscript for publication in a peer-reviewed journal using these data; we wish to ensure that the data are analyzed and interpreted with scientific integrity so as not to mislead the public about findings that may have clinical relevance. The purpose of this is to be responsible stewards of the data without an “available upon reasonable request” clause, which we feel doesn’t fully represent the open-source, reproducible ethos. 
The data are freely available to download, so we cannot stop your publication if we don’t support your methods and interpretation of findings; however, in being good data stewards, we would like to offer suggestions in the pre-publication stage so as to reduce conflict in the published scientific literature. As for credit, there is precedent for receiving a mention in the acknowledgements section for reading and providing feedback on the paper, or, for more involved consulting, inclusion as an author may be warranted. The purpose of asking for this is not to inflate our number of authorships; we take the ethics of handling intellectual property in the form of manuscripts very seriously, and, again, sharing is at the discretion of the author, although we strongly recommend it. Please be ethical and considerate in your use of these data and all open-source data, and be sure to credit authors by citing them. An example of an analysis that we would consider problematic, and would strongly advise be corrected before submission for publication, is using machine learning to classify Parkinson’s patients from healthy controls with this dataset. There are far too few patients for proper statistics: Parkinson’s disease presents heterogeneously across patients, and, with a proper train-test split, there would be fewer than 8 patients in the test set. Statistics on 8 or fewer patients for such a complicated disease would be unreliable because the sample size is too small. Furthermore, if multiple machine learning algorithms were to be tested, a third split would be required to choose the best method, further shrinking the test set. We strongly advise against any such approach because it would mislead patients and people who want to know whether they have Parkinson’s disease. 
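The sample-size concern above can be made concrete with a quick numerical sketch (illustrative only, not part of EEGDash): the Wilson 95% confidence interval for a classifier evaluated on only 8 held-out patients barely constrains the true accuracy at all.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Even a seemingly strong 7/8 test accuracy is consistent with true
# accuracy anywhere from near chance to near perfect:
low, high = wilson_ci(7, 8)
print(f"7/8 correct -> 95% CI [{low:.2f}, {high:.2f}]")
# -> 7/8 correct -> 95% CI [0.53, 0.98]
```

With a test set this small, the interval spans roughly half the accuracy range, which is why any patient-vs-control classification claim from these 31 subjects would be statistically fragile.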
Note that UPDRS rating scales were collected by laboratory personnel who had completed online training, not by a board-certified neurologist. Results should be interpreted accordingly; in particular, analyses based largely on these ratings should be treated with the appropriate degree of uncertainty. In addition to contacting the aforementioned email, please cite the following papers: Nicko Jackson, Scott R. Cole, Bradley Voytek, Nicole C. Swann. Characteristics of Waveform Shape in Parkinson’s Disease Detected with Scalp Electroencephalography. eNeuro 20 May 2019, 6 (3) ENEURO.0151-19.2019; DOI: 10.1523/ENEURO.0151-19.2019. Swann NC, de Hemptinne C, Aron AR, Ostrem JL, Knight RT, Starr PA. Elevated synchrony in Parkinson disease detected with electroencephalography. Ann Neurol. 2015 Nov;78(5):742-50. doi: 10.1002/ana.24507. Epub 2015 Sep 2. PMID: 26290353; PMCID: PMC4623949. George JS, Strunk J, Mak-McCully R, Houser M, Poizner H, Aron AR. Dopaminergic therapy in Parkinson’s disease decreases cortical beta band coherence in the resting state and increases cortical beta band power during executive control. Neuroimage Clin. 2013 Aug 8;3:261-70. doi: 10.1016/j.nicl.2013.07.013. PMID: 24273711; PMCID: PMC3814961. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8). 
Note: see this discussion on the structure of the JSON files, which is sufficient but not optimal and will hopefully be changed in future versions of BIDS: [https://neurostars.org/t/behavior-metadata-without-tsv-event-data-related-to-a-neuroimaging-data/6768/25](https://neurostars.org/t/behavior-metadata-without-tsv-event-data-related-to-a-neuroimaging-data/6768/25). ## Dataset Information | Dataset ID | `DS002778` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease | | Author (year) | `Rockhill2020` | | Canonical | — | | Importable as | `DS002778`, `Rockhill2020` | | Year | 2020 | | Authors | Alexander P. Rockhill, Nicko Jackson, Jobi George, Adam Aron, Nicole C. Swann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds002778.v1.0.5](https://doi.org/10.18112/openneuro.ds002778.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002778) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) | [Source URL](https://openneuro.org/datasets/ds002778) | ### Copy-paste BibTeX ```bibtex @dataset{ds002778, title = {UC San Diego Resting State EEG Data from Patients with Parkinson's Disease}, author = {Alexander P. Rockhill and Nicko Jackson and Jobi George and Adam Aron and Nicole C. 
Swann}, doi = {10.18112/openneuro.ds002778.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds002778.v1.0.5}, } ``` ## Technical Details - Subjects: 31 - Recordings: 46 - Tasks: 1 - Channels: 41 - Sampling rate (Hz): 512.0 - Duration (hours): 2.517752821180556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 545.0 MB - File count: 46 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds002778.v1.0.5 - Source: openneuro - OpenNeuro: [ds002778](https://openneuro.org/datasets/ds002778) - NeMAR: [ds002778](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) ## API Reference Use the `DS002778` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002778(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease * **Study:** `ds002778` (OpenNeuro) * **Author (year):** `Rockhill2020` * **Canonical:** — Also importable as: `DS002778`, `Rockhill2020`. Modality: `eeg`. Subjects: 31; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002778](https://openneuro.org/datasets/ds002778) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002778](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) DOI: [https://doi.org/10.18112/openneuro.ds002778.v1.0.5](https://doi.org/10.18112/openneuro.ds002778.v1.0.5) NEMAR citation count: 42 ### Examples ```pycon >>> from eegdash.dataset import DS002778 >>> dataset = DS002778(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002778) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002791: eeg dataset, 23 subjects *DataSet1* Access recordings and metadata through EEGDash. **Citation:** Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Fabrice Wendling, Mahmoud Hassan (2020). *DataSet1*. 
[10.18112/openneuro.ds002791.v1.0.0](https://doi.org/10.18112/openneuro.ds002791.v1.0.0) Modality: eeg Subjects: 23 Recordings: 92 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002791 dataset = DS002791(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002791(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002791( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002791, title = {DataSet1}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds002791.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002791.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS002791` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | DataSet1 | | Author (year) | `Mheich2020_DataSet1` | | Canonical | `Mheich2020` | | Importable as | `DS002791`, `Mheich2020_DataSet1`, `Mheich2020` | | Year | 2020 | | Authors | Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Fabrice Wendling, Mahmoud Hassan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002791.v1.0.0](https://doi.org/10.18112/openneuro.ds002791.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002791) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) | [Source URL](https://openneuro.org/datasets/ds002791) | ### Copy-paste BibTeX ```bibtex @dataset{ds002791, title = {DataSet1}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds002791.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002791.v1.0.0}, } ``` ## Technical Details - Subjects: 23 - Recordings: 92 - Tasks: — - Channels: 256 (80), 257 (12) - Sampling rate (Hz): 1000.0 - Duration (hours): 13.535969444444444 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 47.1 GB - File count: 92 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002791.v1.0.0 - Source: openneuro - OpenNeuro: [ds002791](https://openneuro.org/datasets/ds002791) - NeMAR: [ds002791](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) ## API Reference Use the `DS002791` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002791(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet1 * **Study:** `ds002791` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet1` * **Canonical:** `Mheich2020` Also importable as: `DS002791`, `Mheich2020_DataSet1`, `Mheich2020`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002791](https://openneuro.org/datasets/ds002791) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002791](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) DOI: [https://doi.org/10.18112/openneuro.ds002791.v1.0.0](https://doi.org/10.18112/openneuro.ds002791.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002791 >>> dataset = DS002791(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002791) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002799: ieeg dataset, 27 subjects *Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI* Access recordings and metadata through EEGDash. **Citation:** Thompson WH\*, Nair R\*, Oya H\*, Esteban O, Shine JM, Petkov CI, Poldrack RA, Howard M, Adolphs R†, \*equally contributing, †corresponding author (—). *Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI*. 
[10.18112/openneuro.ds002799.v1.0.4](https://doi.org/10.18112/openneuro.ds002799.v1.0.4) Modality: ieeg Subjects: 27 Recordings: 16824 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002799 dataset = DS002799(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002799(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002799( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002799, title = {Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI}, author = {Thompson WH* and Nair R* and Oya H* and Esteban O and Shine JM and Petkov CI and Poldrack RA and Howard M and Adolphs R† and *equally contributing, †corresponding author}, doi = {10.18112/openneuro.ds002799.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds002799.v1.0.4}, } ``` ## About This Dataset Link to published paper for this data resource: [https://rdcu.be/b57kz](https://rdcu.be/b57kz) This collection contains data from 26 human patients who underwent electrical stimulation during functional magnetic resonance imaging (es-fMRI). The patients had medically refractory epilepsy requiring surgically implanted intracranial electrodes in cortical and subcortical locations. One or multiple contacts on these electrodes were stimulated while simultaneously recording BOLD-fMRI activity in a block design. Multiple runs exist for patients with different stimulation sites. 
Data are organized in two sessions: Pre-op (pre-electrode implantation) and Post-op (post-electrode implantation). Raw data are provided in BIDS format and consist of T1s, T2s, resting-state scans (pre-op), es-fMRI scans (post-op), any associated field maps, and stimulation electrode coordinates and stimulation parameters. Pre-processed data (fMRIPrep and FreeSurfer) are present in the ‘derivatives’ folder. Notes: 1. Subject IDs 339, 369 and 394 do not have stimulation electrode location data available. 2. Electrodes are in chA-chB format (chA gets the leading positive phase of the stimulation). This information is stored in the “channel” file for each stimulation run. 3. In some cases, two distant sites were stimulated simultaneously, as indicated by the electrodes listed under the appropriate run IDs within the ieeg folders. ## Dataset Information | Dataset ID | `DS002799` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI | | Author (year) | `Thompson2024` | | Canonical | — | | Importable as | `DS002799`, `Thompson2024` | | Year | — | | Authors | Thompson WH\*, Nair R\*, Oya H\*, Esteban O, Shine JM, Petkov CI, Poldrack RA, Howard M, Adolphs R†, \*equally contributing, †corresponding author | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002799.v1.0.4](https://doi.org/10.18112/openneuro.ds002799.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002799) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) | [Source URL](https://openneuro.org/datasets/ds002799/versions/1.0.4) | ### Copy-paste BibTeX ```bibtex @dataset{ds002799, title = {Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI}, author = {Thompson WH* and Nair R* 
and Oya H* and Esteban O and Shine JM and Petkov CI and Poldrack RA and Howard M and Adolphs R† and *equally contributing, †corresponding author}, doi = {10.18112/openneuro.ds002799.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds002799.v1.0.4}, } ``` ## Technical Details - Subjects: 27 - Recordings: 16824 - Tasks: 2 - Channels: 2 (79), 4 - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 18.6 GB - File count: 16824 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002799.v1.0.4 - Source: openneuro - OpenNeuro: [ds002799](https://openneuro.org/datasets/ds002799) - NeMAR: [ds002799](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) ## API Reference Use the `DS002799` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002799(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI * **Study:** `ds002799` (OpenNeuro) * **Author (year):** `Thompson2024` * **Canonical:** — Also importable as: `DS002799`, `Thompson2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 27; recordings: 16824; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002799](https://openneuro.org/datasets/ds002799) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002799](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) DOI: [https://doi.org/10.18112/openneuro.ds002799.v1.0.4](https://doi.org/10.18112/openneuro.ds002799.v1.0.4) ### Examples ```pycon >>> from eegdash.dataset import DS002799 >>> dataset = DS002799(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002799) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) * [eegdash.dataset.DS003688](eegdash.dataset.DS003688.md) # DS002814: eeg dataset, 21 subjects *A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans* Access recordings and metadata through EEGDash. 
**Citation:** Fatemeh Ebrahiminia, Morteza Mahdiani, Seyed-Mahdi Khaligh-Razavi (2020). *A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans*. [10.18112/openneuro.ds002814.v1.3.0](https://doi.org/10.18112/openneuro.ds002814.v1.3.0) Modality: eeg Subjects: 21 Recordings: 168 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002814 dataset = DS002814(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002814(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002814( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds002814, title = {A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans}, author = {Fatemeh Ebrahiminia and Morteza Mahdiani and Seyed-Mahdi Khaligh-Razavi}, doi = {10.18112/openneuro.ds002814.v1.3.0}, url = {https://doi.org/10.18112/openneuro.ds002814.v1.3.0}, } ``` ## About This Dataset TODO: Provide description for the dataset – basic details about the study, possibly pointing to pre-registration (if public or embargoed) ## Dataset Information | Dataset ID | `DS002814` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans | | Author (year) | `Ebrahiminia2020` | | Canonical | — | | Importable as | `DS002814`, `Ebrahiminia2020` | | Year | 2020 | | Authors | Fatemeh Ebrahiminia, Morteza Mahdiani, Seyed-Mahdi Khaligh-Razavi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds002814.v1.3.0](https://doi.org/10.18112/openneuro.ds002814.v1.3.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002814) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) | [Source URL](https://openneuro.org/datasets/ds002814) | ### Copy-paste BibTeX ```bibtex @dataset{ds002814, title = {A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans}, author = {Fatemeh Ebrahiminia and Morteza Mahdiani and Seyed-Mahdi Khaligh-Razavi}, doi = {10.18112/openneuro.ds002814.v1.3.0}, url = {https://doi.org/10.18112/openneuro.ds002814.v1.3.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 168 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 1200.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 27.7 GB - File count: 168 - Format: BIDS 
- License: CC0 - DOI: doi:10.18112/openneuro.ds002814.v1.3.0 - Source: openneuro - OpenNeuro: [ds002814](https://openneuro.org/datasets/ds002814) - NeMAR: [ds002814](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) ## API Reference Use the `DS002814` class to access this dataset programmatically. ### *class* eegdash.dataset.DS002814(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans * **Study:** `ds002814` (OpenNeuro) * **Author (year):** `Ebrahiminia2020` * **Canonical:** — Also importable as: `DS002814`, `Ebrahiminia2020`. Modality: `eeg`. Subjects: 21; recordings: 168; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002814](https://openneuro.org/datasets/ds002814) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002814](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) DOI: [https://doi.org/10.18112/openneuro.ds002814.v1.3.0](https://doi.org/10.18112/openneuro.ds002814.v1.3.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002814 >>> dataset = DS002814(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002814) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002833: eeg dataset, 20 subjects *DataSet2* Access recordings and metadata through EEGDash. **Citation:** Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Fabrice Wendling, Mahmoud Hassan (2020). *DataSet2*. 
[10.18112/openneuro.ds002833.v1.0.0](https://doi.org/10.18112/openneuro.ds002833.v1.0.0) Modality: eeg Subjects: 20 Recordings: 80 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002833 dataset = DS002833(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002833(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002833( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002833, title = {DataSet2}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds002833.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002833.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS002833` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | DataSet2 | | Author (year) | `Mheich2020_DataSet2` | | Canonical | `Mheich2024` | | Importable as | `DS002833`, `Mheich2020_DataSet2`, `Mheich2024` | | Year | 2020 | | Authors | Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Fabrice Wendling, Mahmoud Hassan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002833.v1.0.0](https://doi.org/10.18112/openneuro.ds002833.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002833) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) | [Source URL](https://openneuro.org/datasets/ds002833) | ### Copy-paste BibTeX ```bibtex @dataset{ds002833, title = {DataSet2}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds002833.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002833.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 80 - Tasks: 1 - Channels: 257 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.603826666666668 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 39.8 GB - File count: 80 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002833.v1.0.0 - Source: openneuro - OpenNeuro: [ds002833](https://openneuro.org/datasets/ds002833) - NeMAR: [ds002833](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) ## API Reference Use the `DS002833` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002833(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet2 * **Study:** `ds002833` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet2` * **Canonical:** `Mheich2024` Also importable as: `DS002833`, `Mheich2020_DataSet2`, `Mheich2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
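The Notes above state that a user-supplied `query` is ANDed with the dataset filter and must not itself contain the key `dataset`. The documented merge semantics can be modeled locally in plain Python (a sketch of the described behavior, not EEGDash's actual implementation):

```python
def merge_query(dataset_id, user_query=None):
    """Model of the documented behavior: AND user filters with the
    dataset selection; a 'dataset' key in the user query is rejected.
    Illustrative sketch only, not the library's real code."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # MongoDB-style top-level keys are implicitly ANDed together.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds002833", {"subject": {"$in": ["01", "02"]}})
# merged == {"dataset": "ds002833", "subject": {"$in": ["01", "02"]}}
```

This mirrors why the "Advanced query" example above never passes `dataset` itself: the class constructor already pins the dataset filter.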
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002833](https://openneuro.org/datasets/ds002833) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002833](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) DOI: [https://doi.org/10.18112/openneuro.ds002833.v1.0.0](https://doi.org/10.18112/openneuro.ds002833.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002833 >>> dataset = DS002833(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002833) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002885: meg dataset, 2 subjects *DBS Phantom Recordings* Access recordings and metadata through EEGDash. **Citation:** Ahmet Levent Kandemir, Vladimir Litvak, Esther Florin (2020). *DBS Phantom Recordings*. 
[10.18112/openneuro.ds002885.v1.0.1](https://doi.org/10.18112/openneuro.ds002885.v1.0.1) Modality: meg Subjects: 2 Recordings: 7 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002885 dataset = DS002885(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002885(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002885( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002885, title = {DBS Phantom Recordings}, author = {Ahmet Levent Kandemir and Vladimir Litvak and Esther Florin}, doi = {10.18112/openneuro.ds002885.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002885.v1.0.1}, } ``` ## About This Dataset This dataset is a part of the data used for the study: ‘Kandemir, A.L., Litvak, V., Florin, E., 2020. The comparative performance of DBS artefact rejection methods for MEG recordings, NeuroImage, 2020, [https://doi.org/10.1016/j.neuroimage.2020.117057](https://doi.org/10.1016/j.neuroimage.2020.117057).’ Please use the latest version of the dataset. For detailed information about measurement protocol please refer to [https://doi.org/10.1016/j.neuroimage.2020.117057](https://doi.org/10.1016/j.neuroimage.2020.117057). Additional information about CTF Phantom measurement is provided below. The customized Matlab code for artefact rejection methods is available at: [https://gitlab.com/lkandemir/dbs-artefact-rejection](https://gitlab.com/lkandemir/dbs-artefact-rejection). 
CTF Phantom Measurement Stimulation reference signal is captured with EEG001 Movement trigger is captured with UPPT001 Dipole activity is captured with HADC006 ## Dataset Information | Dataset ID | `DS002885` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | DBS Phantom Recordings | | Author (year) | `Kandemir2020` | | Canonical | — | | Importable as | `DS002885`, `Kandemir2020` | | Year | 2020 | | Authors | Ahmet Levent Kandemir, Vladimir Litvak, Esther Florin | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002885.v1.0.1](https://doi.org/10.18112/openneuro.ds002885.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002885) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) | [Source URL](https://openneuro.org/datasets/ds002885) | ### Copy-paste BibTeX ```bibtex @dataset{ds002885, title = {DBS Phantom Recordings}, author = {Ahmet Levent Kandemir and Vladimir Litvak and Esther Florin}, doi = {10.18112/openneuro.ds002885.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds002885.v1.0.1}, } ``` ## Technical Details - Subjects: 2 - Recordings: 7 - Tasks: 4 - Channels: 306 (4), 314 (3) - Sampling rate (Hz): 19200.0 (4), 3000.0 (3) - Duration (hours): 0.3995833333333333 - Pathology: Other - Modality: Other - Type: Other - Size on disk: 20.1 GB - File count: 7 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002885.v1.0.1 - Source: openneuro - OpenNeuro: [ds002885](https://openneuro.org/datasets/ds002885) - NeMAR: [ds002885](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) ## API Reference Use the `DS002885` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DBS Phantom Recordings * **Study:** `ds002885` (OpenNeuro) * **Author (year):** `Kandemir2020` * **Canonical:** — Also importable as: `DS002885`, `Kandemir2020`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 2; recordings: 7; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
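As a quick sanity check on the summary figures in the Technical Details above (total duration 0.3995833… hours spread over 7 recordings), the average recording length works out to roughly 3.4 minutes:

```python
# Figures taken from the Technical Details section of this page.
total_hours = 0.3995833333333333
n_recordings = 7

avg_minutes = total_hours * 60 / n_recordings  # roughly 3.4 minutes
print(f"average recording length: {avg_minutes:.2f} min")
```

Short phantom recordings like these are consistent with the calibration-style measurement protocol described in the dataset README.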
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002885](https://openneuro.org/datasets/ds002885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002885](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) DOI: [https://doi.org/10.18112/openneuro.ds002885.v1.0.1](https://doi.org/10.18112/openneuro.ds002885.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002885 >>> dataset = DS002885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002885) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS002893: eeg dataset, 49 subjects *Auditory-Visual Shift Study* Access recordings and metadata through EEGDash. **Citation:** Marissa Westerfield (data, curation), Scott Makeig (data, curation), Dung Truong (curation), Kay Robbins (curation), Arno Delorme (curation) (2020). *Auditory-Visual Shift Study*. 
[10.18112/openneuro.ds002893.v2.0.0](https://doi.org/10.18112/openneuro.ds002893.v2.0.0) Modality: eeg Subjects: 49 Recordings: 52 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002893 dataset = DS002893(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002893(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002893( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002893, title = {Auditory-Visual Shift Study}, author = {Marissa Westerfield (data, curation) and Scott Makeig (data, curation) and Dung Truong (curation) and Kay Robbins (curation) and Arno Delorme (curation)}, doi = {10.18112/openneuro.ds002893.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds002893.v2.0.0}, } ``` ## About This Dataset **Audio-Visual Attention Shift Experiment** *Project name:* [Sensory processing in aging] *Years the project ran:* 2007-2008 *Brief overview of experiment task:* The purpose of this Auditory-Visual Attention Shift study was to explore the effects of aging on selective attending and responding to auditory and visual stimulus differences using an interleaved dual-oddball audio-visual task design. EEG and EOG channels were acquired. **Data collection.** Scalp EEG data were collected from 33 scalp electrode channels, each referred to a right mastoid electrode, within an analogue passband of 0.1 to 60 Hz. *Contact person:* Scott Makeig <[smakeig@ucsd.edu](mailto:smakeig@ucsd.edu)>, ORCID: 0000-0002-9048-8438. *Access information:* Contributed to OpenNeuro.org and NEMAR.org in BIDS format following annotation using HED 8.0.0 in April, 2022. *Independent variables:* Stimulus stream (visual, auditory, cue); stimulus stream identity (target, standard); task condition (FA, FV, SH). *Dependent variables:* Participant response (correct/incorrect). Button press response attributes (task time window and post-target latency). *Participant pool:* The dataset includes data collected from 19 younger adult subjects (8 male, 11 female, ages 20-40 years) and 30 older adult subjects (11 male, 19 female, ages 49-73 years). The subjects were cognitively intact and had normal or adjusted-to-normal hearing and vision. *Initial setup:* EEG data were collected from 33 EEG channels using the 10-20 placement and referenced to the right mastoid. The left mastoid and two EOG channels were also included in the collection. The data were acquired at a sampling rate of 250 Hz with an analog pass band of 0.01 to 60 Hz (SA Instrumentation, San Diego). Input impedances were brought under 5 kilo-ohms by careful scalp preparation. 
*Task conditions:* - *Focus Visual (FV):* participants pressed the response button only in response to target visual stimuli. - *Focus Auditory (FA):* participants pressed the same button only in response to target auditory stimuli. - *Shift Focus (SF):* participants shifted between performing the FV and FA tasks as cued by the preceding (Look/Hear) cue stimulus. *Task organization:* The stimuli were presented in blocks of 264 for a duration of 2.64. In each block there were 12 “Hear” and 12 “Look” cues. A total of 20 blocks were presented for each session. Each experiment began with two non-shift blocks (one each of auditory focus FA and visual focus FV, counter-balanced across sessions). These were followed by 12 SF shift blocks. Finally, an auditory focus FA group (3 blocks) and a visual focus FV group (3 blocks) were presented. The order of these groups was counter-balanced across experiments. Brief rest periods occurred between task blocks. The task condition in the next block was given verbally to the participant during the pre-block rest period. *Task details:* Participants respond by finger button press selectively to auditory (brief tones) and visual (colored squares) stimuli constituting distinct, interleaved auditory and visual oddball stimulus streams whose stimuli are presented in randomly interleaved order with stimulus-onset asynchronies (SOAs) varying randomly between 200 and 800 ms. - *Visual stimuli:* were (infrequent, 10%) dark blue target or (frequent, 90%) light blue standard 8.4-cm² squares presented for 100 ms. - *Auditory stimuli:* were (infrequent, 10%) 550-Hz target or (frequent, 90%) 500-Hz tones with 100 msec duration and 63 dB SPL intensity. - *Task cue stimuli:* interspersed in the stimulus sequence at mean 5-sec intervals and consisting of the simultaneous spoken and printed display of one of the words *Look* or *Hear*, each presented for 200 msec. 
*Additional data acquired:* Participants had no history of major neurological, psychiatric, or medical disorders. All had normal or adjusted-to-normal vision and hearing (none wore hearing aids). Verbal and performance IQ were assessed using the WASI-III (Wechsler, 1997). There were no significant differences between the groups in IQ measures or years of education. Participants in the Older group received a battery of neuropsychological tests to assure normal cognitive functioning, including the Mini Mental State Exam (MMSE) (Folstein et al., 1975), the Dementia Rating Scale (DRS) (Mattis, 1988), and the Wechsler Memory Scale. *Experiment location:* Department of Psychiatry laboratory of Jeanne Townsend, University of California San Diego, La Jolla CA (USA). **Note 1**: ERP measure results for the FA and FV conditions only were presented in Ceponiene, R., Westerfield, M., Torki, M. and Townsend, J., 2008. Modality-specificity of sensory aging in vision and audition: evidence from event-related potentials. Brain Research, 1215, pp. 53-68. Some unpublished results by Christian Kothe and Scott Makeig on the SH condition may be available from the authors <[christiankothe@gmail.com](mailto:christiankothe@gmail.com)> <[smakeig@ucsd.edu](mailto:smakeig@ucsd.edu)>. **Note 2**: The code subdirectory has several auxiliary files that were produced during the curation process. The curation was done using a series of Jupyter notebooks that are available, as run, in the code/curation_notebooks subdirectory. During the running of these curation notebooks, information about their status was logged using the HEDLogger. The output of the logging process is in code/curation_logs. Updated versions of the curation notebooks can be found at: [https://github.com/hed-standard/hed-examples/tree/main/hedcode/jupyter_notebooks/dataset_specific_processing/attention_shift](https://github.com/hed-standard/hed-examples/tree/main/hedcode/jupyter_notebooks/dataset_specific_processing/attention_shift). 
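The task description above specifies an interleaved dual-oddball stream: roughly 10% targets per modality and SOAs varying randomly between 200 and 800 ms. A minimal sketch of generating such a stimulus sequence (a hypothetical helper for illustration, not part of the dataset's curation code) could look like:

```python
import random

def make_oddball_stream(n_stimuli=264, target_prob=0.10, seed=0):
    """Sketch of the interleaved dual-oddball design described above:
    auditory and visual stimuli in random order, ~10% targets per
    stream, SOAs drawn uniformly from 200-800 ms (illustrative only)."""
    rng = random.Random(seed)
    stream, t = [], 0.0
    for _ in range(n_stimuli):
        stream.append({
            "onset_ms": round(t),
            "modality": rng.choice(["auditory", "visual"]),
            "type": "target" if rng.random() < target_prob else "standard",
        })
        t += rng.uniform(200, 800)  # stimulus-onset asynchrony
    return stream

events = make_oddball_stream()
```

Note that this sketch omits the interspersed Look/Hear cue stimuli (mean 5-sec intervals) described in the task details.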
## Dataset Information | Dataset ID | `DS002893` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory-Visual Shift Study | | Author (year) | `Westerfield2022` | | Canonical | — | | Importable as | `DS002893`, `Westerfield2022` | | Year | 2020 | | Authors | Marissa Westerfield (data, curation), Scott Makeig (data, curation), Dung Truong (curation), Kay Robbins (curation), Arno Delorme (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds002893.v2.0.0](https://doi.org/10.18112/openneuro.ds002893.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002893) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) | [Source URL](https://openneuro.org/datasets/ds002893) | ### Copy-paste BibTeX ```bibtex @dataset{ds002893, title = {Auditory-Visual Shift Study}, author = {Marissa Westerfield (data, curation) and Scott Makeig (data, curation) and Dung Truong (curation) and Kay Robbins (curation) and Arno Delorme (curation)}, doi = {10.18112/openneuro.ds002893.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds002893.v2.0.0}, } ``` ## Technical Details - Subjects: 49 - Recordings: 52 - Tasks: 1 - Channels: 36 - Sampling rate (Hz): 250.0 (42), 250.0293378038558 (10) - Duration (hours): 37.7789260473112 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.7 GB - File count: 52 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds002893.v2.0.0 - Source: openneuro - OpenNeuro: [ds002893](https://openneuro.org/datasets/ds002893) - NeMAR: [ds002893](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) ## API Reference Use the `DS002893` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002893(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory-Visual Shift Study * **Study:** `ds002893` (OpenNeuro) * **Author (year):** `Westerfield2022` * **Canonical:** — Also importable as: `DS002893`, `Westerfield2022`. Modality: `eeg`. Subjects: 49; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
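One practical caveat from the Technical Details above: the recordings carry two near-identical sampling rates (42 at 250.0 Hz, 10 at 250.0293… Hz, presumably clock drift). When pooling recordings, one possible preprocessing choice (an analyst's decision, not something EEGDash does for you) is to group by the rounded nominal rate:

```python
from collections import Counter

# Per-recording sampling rates as listed in this page's Technical Details:
# 42 recordings at 250.0 Hz and 10 at 250.0293... Hz.
sfreqs = [250.0] * 42 + [250.0293378038558] * 10

exact = Counter(sfreqs)                        # two distinct values
nominal = Counter(round(sf) for sf in sfreqs)  # a single 250 Hz group

print(exact)    # two buckets: 42 and 10 recordings
print(nominal)  # Counter({250: 52})
```

Whether the drifting recordings should additionally be resampled to exactly 250 Hz before epoching is left to the analysis pipeline.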
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002893](https://openneuro.org/datasets/ds002893) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002893](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) DOI: [https://doi.org/10.18112/openneuro.ds002893.v2.0.0](https://doi.org/10.18112/openneuro.ds002893.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002893 >>> dataset = DS002893(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002893) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS002908: meg dataset, 13 subjects *Human MEG recordings during sequential conflict task* Access recordings and metadata through EEGDash. **Citation:** Rafal Bogacz, Vladimir Litvak (2020). *Human MEG recordings during sequential conflict task*. 
[10.18112/openneuro.ds002908.v1.0.0](https://doi.org/10.18112/openneuro.ds002908.v1.0.0) Modality: meg Subjects: 13 Recordings: 53 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS002908 dataset = DS002908(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS002908(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS002908( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds002908, title = {Human MEG recordings during sequential conflict task}, author = {Rafal Bogacz and Vladimir Litvak}, doi = {10.18112/openneuro.ds002908.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002908.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS002908` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Human MEG recordings during sequential conflict task | | Author (year) | `Bogacz2020` | | Canonical | `Bogacz2024` | | Importable as | `DS002908`, `Bogacz2020`, `Bogacz2024` | | Year | 2020 | | Authors | Rafal Bogacz, Vladimir Litvak | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds002908.v1.0.0](https://doi.org/10.18112/openneuro.ds002908.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds002908) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) | [Source URL](https://openneuro.org/datasets/ds002908) | ### Copy-paste BibTeX ```bibtex @dataset{ds002908, title = {Human MEG recordings during sequential conflict task}, author = {Rafal Bogacz and Vladimir Litvak}, doi = {10.18112/openneuro.ds002908.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds002908.v1.0.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 53 - Tasks: 1 - Channels: 299 - Sampling rate (Hz): 2400.0 - Duration (hours): 5.105833333333333 - Pathology: Not specified - Modality: — - Type: Attention - Size on disk: 59.8 GB - File count: 53 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds002908.v1.0.0 - Source: openneuro - OpenNeuro: [ds002908](https://openneuro.org/datasets/ds002908) - NeMAR: [ds002908](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) ## API Reference Use the `DS002908` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS002908(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human MEG recordings during sequential conflict task * **Study:** `ds002908` (OpenNeuro) * **Author (year):** `Bogacz2020` * **Canonical:** `Bogacz2024` Also importable as: `DS002908`, `Bogacz2020`, `Bogacz2024`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 13; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002908](https://openneuro.org/datasets/ds002908) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002908](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) DOI: [https://doi.org/10.18112/openneuro.ds002908.v1.0.0](https://doi.org/10.18112/openneuro.ds002908.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002908 >>> dataset = DS002908(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds002908) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003004: eeg dataset, 34 subjects *Imagined Emotion Study* Access recordings and metadata through EEGDash. **Citation:** Julie Onton, Scott Makeig (2020). *Imagined Emotion Study*. 
[10.18112/openneuro.ds003004.v1.1.1](https://doi.org/10.18112/openneuro.ds003004.v1.1.1) Modality: eeg Subjects: 34 Recordings: 34 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003004 dataset = DS003004(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003004(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003004( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003004, title = {Imagined Emotion Study}, author = {Julie Onton and Scott Makeig}, doi = {10.18112/openneuro.ds003004.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003004.v1.1.1}, } ``` ## About This Dataset *PARADIGM:* The study uses the method of guided imagery, with voice-guided imagination, to lead resting, eyes-closed participants into 15 distinct emotion states during acquisition of high-density EEG data. During the study, participants listen to 15 voice recordings that each suggest imagining a scenario in which they have experienced, or would experience, the named target emotion. Some target emotions have positive valence (e.g., joy, happiness), others negative valence (e.g., sadness, anger). Before and between the 15 emotion imagination periods, participants hear relaxation suggestions (‘Now return to a neutral state by …’). *PROCEDURE:* When the participant first begins to feel the target emotion, they are asked to indicate this by pressing a handheld button.
Participants are asked to continue feeling the emotion as long as possible. To intensify and lengthen the periods of experienced emotion, participants are asked to interoceptively perceive and attend to relevant somatosensory sensations. When the target feeling wanes (typically after 1 to 5 minutes), participants push the button again to leave the emotion imagination period and cue the relaxation instructions. *DATA HANDLING:* The raw data have been preprocessed to fix confusing event codes and to remove excessively noisy channels. In addition, a 1-Hz high-pass filter was applied to ready the data for ICA decomposition. Note: Unfortunately, the unfiltered data are no longer available. *NOTE:* Sub22 was a repeat subject and was therefore removed from the dataset. ## Dataset Information | Dataset ID | `DS003004` | |---|---| | Title | Imagined Emotion Study | | Author (year) | `Onton2020` | | Canonical | — | | Importable as | `DS003004`, `Onton2020` | | Year | 2020 | | Authors | Julie Onton, Scott Makeig | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003004.v1.1.1](https://doi.org/10.18112/openneuro.ds003004.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003004) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) | [Source URL](https://openneuro.org/datasets/ds003004) | ### Copy-paste BibTeX ```bibtex @dataset{ds003004, title = {Imagined Emotion Study}, author = {Julie Onton and Scott Makeig}, doi = {10.18112/openneuro.ds003004.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003004.v1.1.1}, } ``` ## Technical Details - Subjects: 34 - Recordings: 34 - Tasks: 1 - Channels: 219 (3), 224 (3), 212 (2), 221 (2), 214 (2), 134, 209, 218, 215, 235, 201, 211, 189, 208, 229, 220, 232, 222, 231, 226, 207, 206, 213, 180, 196, 223, 227 -
Sampling rate (Hz): 256.0 - Duration (hours): 49.07222222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 36.0 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003004.v1.1.1 - Source: openneuro - OpenNeuro: [ds003004](https://openneuro.org/datasets/ds003004) - NeMAR: [ds003004](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) ## API Reference Use the `DS003004` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003004(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Imagined Emotion Study * **Study:** `ds003004` (OpenNeuro) * **Author (year):** `Onton2020` * **Canonical:** — Also importable as: `DS003004`, `Onton2020`. Modality: `eeg`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003004](https://openneuro.org/datasets/ds003004) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003004](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) DOI: [https://doi.org/10.18112/openneuro.ds003004.v1.1.1](https://doi.org/10.18112/openneuro.ds003004.v1.1.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003004 >>> dataset = DS003004(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003004) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003029: ieeg dataset, 35 subjects *Epilepsy-iEEG-Multicenter-Dataset* Access recordings and metadata through EEGDash. **Citation:** Adam Li, Sara Inati, Kareem Zaghloul, Nathan Crone, William Anderson, Emily Johnson, Iahn Cajigas, Damian Brusko, Jonathan Jagid, Angel Claudio, Andres Kanner, Jennifer Hopp, Stephanie Chen, Jennifer Haagensen, Sridevi Sarma (2020). *Epilepsy-iEEG-Multicenter-Dataset*. 
[10.18112/openneuro.ds003029.v1.0.5](https://doi.org/10.18112/openneuro.ds003029.v1.0.5) Modality: ieeg Subjects: 35 Recordings: 106 License: CC0 Source: openneuro Citations: 19.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003029 dataset = DS003029(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003029(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003029( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003029, title = {Epilepsy-iEEG-Multicenter-Dataset}, author = {Adam Li and Sara Inati and Kareem Zaghloul and Nathan Crone and William Anderson and Emily Johnson and Iahn Cajigas and Damian Brusko and Jonathan Jagid and Angel Claudio and Andres Kanner and Jennifer Hopp and Stephanie Chen and Jennifer Haagensen and Sridevi Sarma}, doi = {10.18112/openneuro.ds003029.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds003029.v1.0.5}, } ``` ## About This Dataset **Fragility Multi-Center Retrospective Study** This dataset was updated and prepared for release as part of a manuscript by Bernabei & Li et al. (in preparation). A subset of the data has been featured in [1]. **Summary** iEEG and EEG data from 5 centers are organized in our study, with a total of 100 subjects. We publish 4 centers’ datasets here due to data-sharing issues. Acquisitions include ECoG and SEEG. Each run specifies a different snapshot of EEG data from that specific subject’s session. For seizure sessions, this means that each run is an EEG snapshot around a different seizure event.
For additional clinical metadata about each subject, refer to the clinical Excel table in the publication. **Data Availability** NIH, JHH, UMMC, and UMF agreed to share their data. Cleveland Clinic did not, so its data require an additional DUA. All data except Cleveland Clinic’s were approved by their centers to be de-identified and shared. All data in this dataset have no PHI or other identifiers associated with patients. In order to access Cleveland Clinic data, please forward all requests to Amber Sours, [SOURSA@ccf.org](mailto:SOURSA@ccf.org): Amber Sours, MPH Research Supervisor | Epilepsy Center Cleveland Clinic | 9500 Euclid Ave. S3-399 | Cleveland, OH 44195 (216) 444-8638 You will need to sign a data use agreement (DUA). **Sourcedata** For each subject, there was a raw EDF file, which was converted into the BrainVision format with `mne_bids`. Each subject with SEEG implantation also has an Excel table, called `electrode_layout.xlsx`, which outlines where the clinicians marked each electrode anatomically. Note that no rigorous atlas was applied, so the main labels of interest are `WM`, `GM`, `VENTRICLE`, `CSF`, and `OUT`, which represent white matter, gray matter, ventricle, cerebrospinal fluid, and outside the brain.
WM, Ventricle, CSF, and OUT channels were removed from further analysis. These were labeled in the corresponding BIDS `channels.tsv` sidecar file as `status=bad`. The dataset uploaded to `openneuro.org` does not contain the `sourcedata`, since an extra anonymization step occurred when fully converting to BIDS. **Derivatives** Derivatives include: fragility analysis, frequency analysis, graph metrics analysis, and figures. These can be computed by following this paper: [Neural Fragility as an EEG Marker for the Seizure Onset Zone](https://www.biorxiv.org/content/10.1101/862797v3) **Events and Descriptions** Each EDF file contains event markers annotated by clinicians, which may inform you of specific clinical events occurring in time, or of when they saw seizure onset and offset (clinical and electrographic). During a seizure event, the event markers may follow this time course: > * eeg onset, or clinical onset - the onset of a seizure that is either marked electrographically, or by clinical behavior. Note that the clinical onset may not always be present, since some seizures manifest without clinical behavioral changes. > * Marker/Mark On - these annotations occur in some cases where a health practitioner injects a chemical marker for use in ICTAL SPECT imaging after a seizure occurs. This is commonly done to see which portions of the brain are metabolically active. > * Marker/Mark Off - this is when the ICTAL SPECT stops imaging. > * eeg offset, or clinical offset - this is the offset of the seizure, as determined either electrographically or by clinical symptoms. Other events included may help you understand the time course of each seizure. Note that ICTAL SPECT occurs in all Cleveland Clinic data.
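The event markers above are stored as free-text annotation descriptions. A hedged sketch of capturing onset/offset markers with regular expressions (the description strings here are hypothetical examples; real descriptions must be checked per center and per dataset):

```python
import re

# Hypothetical annotation descriptions; actual naming varies by center.
descriptions = [
    "eeg onset", "clinical onset", "Marker/Mark On",
    "Marker/Mark Off", "EEG offset", "sz elec offset",
]

# Case-insensitive patterns for onset-like and offset-like markers.
onset_re = re.compile(r"(onset|mark\s*on)", re.IGNORECASE)
offset_re = re.compile(r"(offset|mark\s*off)", re.IGNORECASE)

onsets = [d for d in descriptions if onset_re.search(d)]
offsets = [d for d in descriptions if offset_re.search(d)]
print(onsets)   # ['eeg onset', 'clinical onset', 'Marker/Mark On']
print(offsets)  # ['Marker/Mark Off', 'EEG offset', 'sz elec offset']
```

The same patterns could be applied to the `description` field of MNE annotations after loading a recording; the patterns above are a starting point, not a validated rule set.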
Note that seizure markers are not consistent in their description naming, so one might encode specific regular-expression rules to consistently capture seizure onset/offset markers across all datasets. In the case of UMMC data, all onset and offset markers were provided by the clinicians in an Excel sheet rather than via the EDF file, so we added the annotations manually to each EDF file. **Seizure Electrographic and Clinical Onset Annotations** For various datasets, there are seizures present within the dataset. Generally there is only one seizure per EDF file. When seizures are present, they are marked electrographically (and clinically, if present) via standard approaches in the epilepsy clinical workflow. Clinical onsets are simply the manifestation of the seizures as clinical symptoms. Sometimes the marker may not be present. **Seizure Onset Zone Annotations** What is actually important in the evaluation of datasets are the clinical annotations of the localization hypotheses of the seizure onset zone. These generally include: > * early onset: the earliest-onset electrodes that clinicians saw participating in the seizure > * early/late spread (optional): the electrodes that showed epileptic spread activity after seizure onset. Not all seizures have spread contacts annotated. **Surgical Zone (Resection or Ablation) Annotations** For patients with a post-surgical MRI available, the segmentation process outlined above tells us which electrodes were within the surgically removed brain region. Otherwise, clinicians give us their best estimate of which electrodes were resected/ablated based on their surgical notes. For surgical patients whose postoperative medical records did not explicitly indicate specific resected or ablated contacts, manual visual inspection was performed to determine the approximate contacts that were located in later resected/ablated tissue.
Postoperative T1 MRI scans were compared against post-SEEG implantation CT scans or CURRY coregistrations of preoperative MRI/post-SEEG CT scans. Contacts of interest in and around the area of the reported resection were selected individually, and the corresponding slice was navigated to on the CT scan or CURRY coregistration. Using the landmarks of that slice (e.g., skull shape, skull features, the shape of prominent brain structures such as the ventricles, central sulcus, and superior temporal gyrus), the location of a given contact in relation to these landmarks, and the position of the slice along the axial plane, the corresponding slice in the postoperative MRI scan was navigated to. The resected tissue within the slice was then visually inspected and compared against the distinct landmarks identified in the CT scans; if brain tissue was not present in the corresponding location of the contact, the contact was marked as resected/ablated. This process was repeated for each contact of interest. **References** [1] Adam Li, Chester Huynh, Zachary Fitzgerald, Iahn Cajigas, Damian Brusko, Jonathan Jagid, Angel Claudio, Andres Kanner, Jennifer Hopp, Stephanie Chen, Jennifer Haagensen, Emily Johnson, William Anderson, Nathan Crone, Sara Inati, Kareem Zaghloul, Juan Bulacio, Jorge Gonzalez-Martinez, Sridevi V. Sarma. Neural Fragility as an EEG Marker of the Seizure Onset Zone. bioRxiv 862797; doi: [https://doi.org/10.1101/862797](https://doi.org/10.1101/862797) [2] Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) [3] Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) [4] Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS003029` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Epilepsy-iEEG-Multicenter-Dataset | | Author (year) | `Li2020` | | Canonical | — | | Importable as | `DS003029`, `Li2020` | | Year | 2020 | | Authors | Adam Li, Sara Inati, Kareem Zaghloul, Nathan Crone, William Anderson, Emily Johnson, Iahn Cajigas, Damian Brusko, Jonathan Jagid, Angel Claudio, Andres Kanner, Jennifer Hopp, Stephanie Chen, Jennifer Haagensen, Sridevi Sarma | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003029.v1.0.5](https://doi.org/10.18112/openneuro.ds003029.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003029) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) | [Source URL](https://openneuro.org/datasets/ds003029) | ### Copy-paste BibTeX ```bibtex @dataset{ds003029, title = {Epilepsy-iEEG-Multicenter-Dataset}, author = {Adam Li and Sara Inati and Kareem Zaghloul and Nathan Crone and William Anderson and Emily Johnson and Iahn Cajigas and Damian Brusko and Jonathan Jagid and 
Angel Claudio and Andres Kanner and Jennifer Hopp and Stephanie Chen and Jennifer Haagensen and Sridevi Sarma}, doi = {10.18112/openneuro.ds003029.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds003029.v1.0.5}, } ``` ## Technical Details - Subjects: 35 - Recordings: 106 - Tasks: 1 - Channels: 129 (30), 132 (8), 135 (6), 88 (6), 147 (6), 123 (6), 101 (5), 91 (4), 98 (4), 110 (3), 53 (3), 81 (3), 99 (3), 60 (3), 111 (3), 80 (3), 89 (3), 86 (3), 65 (2), 216, 47 - Sampling rate (Hz): 1000.0 (56), 999.4121105232217 (13), 249.85355222464145 (10), 999.9999999999999 (9), 499.7071044492829 (7), 1000.0000000000001 (7), 2000.0000000000002 (3), 1024.5997950800408 - Duration (hours): 8.208814574268422 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 10.3 GB - File count: 106 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003029.v1.0.5 - Source: openneuro - OpenNeuro: [ds003029](https://openneuro.org/datasets/ds003029) - NeMAR: [ds003029](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) ## API Reference Use the `DS003029` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003029(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Multicenter-Dataset * **Study:** `ds003029` (OpenNeuro) * **Author (year):** `Li2020` * **Canonical:** — Also importable as: `DS003029`, `Li2020`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 35; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003029](https://openneuro.org/datasets/ds003029) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003029](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) DOI: [https://doi.org/10.18112/openneuro.ds003029.v1.0.5](https://doi.org/10.18112/openneuro.ds003029.v1.0.5) NEMAR citation count: 19 ### Examples ```pycon >>> from eegdash.dataset import DS003029 >>> dataset = DS003029(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003029) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) * [eegdash.dataset.DS003688](eegdash.dataset.DS003688.md) # DS003039: eeg dataset, 19 subjects *free walking study* Access recordings and metadata through EEGDash. **Citation:** Nadine Jacobsen, Sarah Blum, Karsten Witt, Stefan Debener, Joanna Scanlon (2020). *free walking study*. [10.18112/openneuro.ds003039.v1.0.2](https://doi.org/10.18112/openneuro.ds003039.v1.0.2) Modality: eeg Subjects: 19 Recordings: 19 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003039 dataset = DS003039(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003039(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003039( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003039, title = {free walking study}, author = {Nadine Jacobsen and Sarah Blum and Karsten Witt and Stefan Debener and Joanna Scanlon}, doi = {10.18112/openneuro.ds003039.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003039.v1.0.2}, } ``` ## About This Dataset This free walking experiment comprises 19 participants. Subjects walked two different routes outdoors. Each route was walked twice, once while pressing buttons in a self-paced manner. The article (see Reference) contains all methodological details. - Nadine Jacobsen (March 2020) ## Dataset Information | Dataset ID | `DS003039` | |---|---| | Title | free walking study | | Author (year) | `Jacobsen2020` | | Canonical | — | | Importable as | `DS003039`, `Jacobsen2020` | | Year | 2020 | | Authors | Nadine Jacobsen, Sarah Blum, Karsten Witt, Stefan Debener, Joanna Scanlon | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003039.v1.0.2](https://doi.org/10.18112/openneuro.ds003039.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003039) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) | [Source URL](https://openneuro.org/datasets/ds003039) | ### Copy-paste BibTeX ```bibtex @dataset{ds003039, title = {free walking study}, author = {Nadine Jacobsen and Sarah Blum and Karsten Witt and Stefan Debener and Joanna Scanlon}, doi = {10.18112/openneuro.ds003039.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003039.v1.0.2}, } ``` ## Technical Details - Subjects: 19 - Recordings: 19 - Tasks: 1 - Channels: 67 - Sampling rate (Hz): 500.0 - Duration (hours): 17.507142777777776 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.8 GB - File count: 19 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003039.v1.0.2 -
Source: openneuro - OpenNeuro: [ds003039](https://openneuro.org/datasets/ds003039) - NeMAR: [ds003039](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) ## API Reference Use the `DS003039` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003039(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) free walking study * **Study:** `ds003039` (OpenNeuro) * **Author (year):** `Jacobsen2020` * **Canonical:** — Also importable as: `DS003039`, `Jacobsen2020`. Modality: `eeg`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003039](https://openneuro.org/datasets/ds003039) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003039](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) DOI: [https://doi.org/10.18112/openneuro.ds003039.v1.0.2](https://doi.org/10.18112/openneuro.ds003039.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003039 >>> dataset = DS003039(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003039) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003061: eeg dataset, 13 subjects *EEG data from an auditory oddball task* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme (2020). *EEG data from an auditory oddball task*. 
[10.18112/openneuro.ds003061.v1.1.0](https://doi.org/10.18112/openneuro.ds003061.v1.1.0) Modality: eeg Subjects: 13 Recordings: 39 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003061 dataset = DS003061(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003061(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003061( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003061, title = {EEG data from an auditory oddball task}, author = {Arnaud Delorme}, doi = {10.18112/openneuro.ds003061.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003061.v1.1.0}, } ``` ## About This Dataset Data collection took place at the Meditation Research Institute (MRI) in Rishikesh, India under the supervision of Arnaud Delorme, PhD. The project was approved by the local MRI Indian ethical committee and the ethical committee of the University of California San Diego (IRB project # 090731). Participants sat either on a blanket on the floor or on a chair for both experimental periods depending on their personal preference. They were asked to keep their eyes closed and all lighting in the room was turned off during data collection. An intercom allowed communication between the experimental and the recording room. Participants performed three identical sessions of 13 minutes each. 
750 stimuli were presented with 70% of them being standard (500 Hz pure tone lasting 60 milliseconds), 15% being oddball (1000 Hz pure tone lasting 60 ms) and 15% being distractors (1000 Hz white noise lasting 60 ms). All sounds took 5 milliseconds to ramp up and 5 milliseconds to ramp down. Sounds were presented at a rate of 1 per second with a random Gaussian jitter of standard deviation 25 ms. Participants were instructed to respond to oddball tones by pressing a key on a keypad that was resting on their lap. ## Dataset Information | Dataset ID | `DS003061` | |---|---| | Title | EEG data from an auditory oddball task | | Author (year) | `Delorme2020_auditory_oddball` | | Canonical | `Delorme` | | Importable as | `DS003061`, `Delorme2020_auditory_oddball`, `Delorme` | | Year | 2020 | | Authors | Arnaud Delorme | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003061.v1.1.0](https://doi.org/10.18112/openneuro.ds003061.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003061) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) | [Source URL](https://openneuro.org/datasets/ds003061) | ### Copy-paste BibTeX ```bibtex @dataset{ds003061, title = {EEG data from an auditory oddball task}, author = {Arnaud Delorme}, doi = {10.18112/openneuro.ds003061.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003061.v1.1.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 39 - Tasks: 1 - Channels: 79 - Sampling rate (Hz): 256.0 - Duration (hours): 8.206666666666667 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 2.3 GB - File count: 39 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003061.v1.1.0 - Source: openneuro - OpenNeuro: [ds003061](https://openneuro.org/datasets/ds003061) - NeMAR:
[ds003061](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) ## API Reference Use the `DS003061` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003061(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data from an auditory oddball task * **Study:** `ds003061` (OpenNeuro) * **Author (year):** `Delorme2020_auditory_oddball` * **Canonical:** `Delorme` Also importable as: `DS003061`, `Delorme2020_auditory_oddball`, `Delorme`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
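The stimulus schedule described in "About This Dataset" above (750 tones per session at a nominal 1/s rate, Gaussian jitter with SD 25 ms, 70% standard / 15% oddball / 15% distractor) can be simulated in a few lines to sanity-check the session timing. This NumPy sketch is purely illustrative and not part of the EEGDash API:

```python
import numpy as np

rng = np.random.default_rng(0)

n_stim = 750        # stimuli per session
rate_s = 1.0        # nominal 1 stimulus per second
jitter_sd = 0.025   # Gaussian jitter, SD 25 ms

# Onset times: nominal 1 s grid plus Gaussian jitter on each onset
onsets = np.arange(n_stim) * rate_s + rng.normal(0.0, jitter_sd, n_stim)

# Condition labels drawn with the stated proportions
labels = rng.choice(
    ["standard", "oddball", "distractor"], size=n_stim, p=[0.70, 0.15, 0.15]
)

# 750 stimuli at ~1/s span roughly 12.5 minutes
print(len(onsets), round(onsets[-1] / 60, 1))
```

The last onset lands near 12.5 minutes, consistent with the stated 13-minute sessions.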
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003061](https://openneuro.org/datasets/ds003061) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003061](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) DOI: [https://doi.org/10.18112/openneuro.ds003061.v1.1.0](https://doi.org/10.18112/openneuro.ds003061.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003061 >>> dataset = DS003061(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003061) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003078: ieeg dataset, 6 subjects *PROBE iEEG* Access recordings and metadata through EEGDash. **Citation:** Philippe DOMENECH, Sylvain RHEIMS, Etienne KOECHLIN (2020). *PROBE iEEG*. 
[10.18112/openneuro.ds003078.v1.0.0](https://doi.org/10.18112/openneuro.ds003078.v1.0.0) Modality: ieeg Subjects: 6 Recordings: 72 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003078 dataset = DS003078(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003078(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003078( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003078, title = {PROBE iEEG}, author = {Philippe DOMENECH and Sylvain RHEIMS and Etienne KOECHLIN}, doi = {10.18112/openneuro.ds003078.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003078.v1.0.0}, } ``` ## About This Dataset — version 1.0.0 — initial release Raw iEEG data + pre and post surgery MRI No reference to the published study (still in press) ## Dataset Information | Dataset ID | `DS003078` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PROBE iEEG | | Author (year) | `DOMENECH2020` | | Canonical | — | | Importable as | `DS003078`, `DOMENECH2020` | | Year | 2020 | | Authors | Philippe DOMENECH, Sylvain RHEIMS, Etienne KOECHLIN | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003078.v1.0.0](https://doi.org/10.18112/openneuro.ds003078.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003078) | 
[NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) | [Source URL](https://openneuro.org/datasets/ds003078) | ### Copy-paste BibTeX ```bibtex @dataset{ds003078, title = {PROBE iEEG}, author = {Philippe DOMENECH and Sylvain RHEIMS and Etienne KOECHLIN}, doi = {10.18112/openneuro.ds003078.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003078.v1.0.0}, } ``` ## Technical Details - Subjects: 6 - Recordings: 72 - Tasks: 1 - Channels: 130 - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Surgery - Modality: — - Type: — - Size on disk: 11.0 GB - File count: 72 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003078.v1.0.0 - Source: openneuro - OpenNeuro: [ds003078](https://openneuro.org/datasets/ds003078) - NeMAR: [ds003078](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) ## API Reference Use the `DS003078` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PROBE iEEG * **Study:** `ds003078` (OpenNeuro) * **Author (year):** `DOMENECH2020` * **Canonical:** — Also importable as: `DS003078`, `DOMENECH2020`. Modality: `ieeg`; Experiment type: `Unknown`; Subject type: `Surgery`. Subjects: 6; recordings: 72; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003078](https://openneuro.org/datasets/ds003078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003078](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) DOI: [https://doi.org/10.18112/openneuro.ds003078.v1.0.0](https://doi.org/10.18112/openneuro.ds003078.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003078 >>> dataset = DS003078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003078) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) * [eegdash.dataset.DS003688](eegdash.dataset.DS003688.md) # DS003082: meg dataset, 2 subjects *Auditory Cortex Mapping Dataset* Access recordings and metadata through EEGDash. **Citation:** Jonathan Cote, Etienne de Villers-Sidani (2020). 
*Auditory Cortex Mapping Dataset*. [10.18112/openneuro.ds003082.v1.0.0](https://doi.org/10.18112/openneuro.ds003082.v1.0.0) Modality: meg Subjects: 2 Recordings: 3 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003082 dataset = DS003082(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003082(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003082( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003082, title = {Auditory Cortex Mapping Dataset}, author = {Jonathan Cote and Etienne de Villers-Sidani}, doi = {10.18112/openneuro.ds003082.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003082.v1.0.0}, } ``` ## About This Dataset **Brainstorm - Auditory Cortex Mapping Dataset** **License** This dataset (MEG and MRI data) was collected by Jonathan Cote of the Neuroplasticity and Sensory Biomarking Lab, Montreal Neurological Institute, McGill University, Canada. Its purpose is to serve as a data example to be used with our MEG-based auditory cortex mapping technique. It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction. We would appreciate though that you reference this dataset in your publications: please acknowledge its authors (Jonathan Cote and Etienne de Villers-Sidani) and cite the mapping technique publication (under review) This dataset will first be a single subject, but might be expanded up to the 10 participants in the future. 
**Presentation of the experiment**

**Experiment**

- One subject, one acquisition run of around 12 minutes
- Subject stimulated binaurally with intra-aural earphones (air tubes + transducers)
- The run contains:
  - 1795 iso-intensity pure tones (IIPT)
  - The frequency of these ranges between 100 Hz and 21527 Hz, spaced by 1/4 octave
- Random inter-stimulus interval: randomized but averaging at a presentation rate of 3 Hz
- The subject passively listened while looking at a fixation cross
- Auditory stimuli generated with the Matlab Psychophysics toolbox

**MEG acquisition**

- Acquisition at **12000 Hz**, with a **CTF 275** system, subject in seated position
- Recorded at the Montreal Neurological Institute in January 2015
- Anti-aliasing low-pass filter at 3000 Hz, files saved with the 3rd-order gradient
- Recorded channels (340):
  - 1 trigger channel indicating the presentation times of the audio stimuli: UADC001 (#306)
  - 26 MEG reference sensors (#4-#29)
  - 272 MEG axial gradiometers (#30-#302)
  - 1 ECG bipolar (#303)
  - 2 EOG bipolar (vertical #304, horizontal #305)
  - 3 unused channels (#1-#3)
- 3 datasets:
  - **sub-0001_ses-0001_task-mapping_run-01_meg.ds**: Run #1, 653 s, 1795 IIPT, sampled at 12000 Hz
  - **sub-emptyroom_ses-0001_emptyroom_run-01_meg.ds**: Empty-room recording, 120 s long, sampled at 12000 Hz
  - **sub-emptyroom_ses-0001_emptyroom_run-02_meg.ds**: Empty-room recording, 120 s long, sampled at 2400 Hz
- Use of the .ds, not the AUX (standard at the MNI), because they are easier to manipulate in FieldTrip

**Stimulation delays**

- **Delay #1**: Transmission of the sound, between when the sound card plays the sound and when the subject receives it in the ears. This is the time it takes for the transducer to convert the analog audio signal into a sound, plus the time it takes the sound to travel through the air tubes from the transducer to the subject's ears. This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. Delay **between 4.8 ms and 5.0 ms** (std = 0.08 ms). At a sampling rate of 2400 Hz, this delay can be considered **constant**, so we will not compensate for it.
- **Delay #2**: Recording of the signals.
The CTF MEG systems have a constant delay of **4 samples** between the MEG/EEG channels and the analog channels (such as the audio signal UADC001), because of an anti-aliasing filter that is applied to the first and not the second. This translates here to a **constant delay** of **1.7 ms**.
- **Uncorrected delays**: We will keep the delays. We decided not to compensate for these delays because they do not introduce any jitter in the responses and they are not going to change anything in the interpretation of the data.

**Head shape and fiducial points**

- 3D digitization using a Polhemus Fastrak device driven by Brainstorm (`S01_20131218_*.pos`)
- More information: [Digitize EEG electrodes and head shape](http://neuroimage.usc.edu/brainstorm/Tutorials/TutDigitize)
- The output file is copied to each .ds folder and contains the following entries:
  - The position of the center of the CTF coils
  - The position of the anatomical references we use in Brainstorm: nasion and connections tragus/helix, as illustrated [here](http://neuroimage.usc.edu/brainstorm/CoordinateSystems#Pre-auricular_points_.28LPA.2C_RPA.29).
- Around 150 head points distributed on the hard parts of the head (no soft tissues)

**Subject anatomy**

- Subject with 1.5T MRI
- Processed with FreeSurfer 5.3

## Dataset Information | Dataset ID | `DS003082` | |----------------|--------------| | Title | Auditory Cortex Mapping Dataset | | Author (year) | `Cote2020` | | Canonical | `Cote2015` | | Importable as | `DS003082`, `Cote2020`, `Cote2015` | | Year | 2020 | | Authors | Jonathan Cote, Etienne de Villers-Sidani | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003082.v1.0.0](https://doi.org/10.18112/openneuro.ds003082.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003082) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) | [Source URL](https://openneuro.org/datasets/ds003082) | ### Copy-paste BibTeX ```bibtex @dataset{ds003082, title = {Auditory Cortex Mapping Dataset}, author = {Jonathan Cote and Etienne de Villers-Sidani}, doi = {10.18112/openneuro.ds003082.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003082.v1.0.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 3 - Tasks: 2 - Channels: 300 (2), 306 - Sampling rate (Hz): 12000.0 (2), 2400.0 - Duration (hours): 0.2955555555555555 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 13.2 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003082.v1.0.0 - Source: openneuro - OpenNeuro: [ds003082](https://openneuro.org/datasets/ds003082) - NeMAR: [ds003082](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) ## API Reference Use the `DS003082` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003082(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Mapping Dataset * **Study:** `ds003082` (OpenNeuro) * **Author (year):** `Cote2020` * **Canonical:** `Cote2015` Also importable as: `DS003082`, `Cote2020`, `Cote2015`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
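Two numeric claims in the acquisition notes above check out with quick arithmetic: the quarter-octave tone grid starting at 100 Hz reaches the stated 21527 Hz after 31 steps (32 distinct frequencies), and a 4-sample analog-channel delay at 2400 Hz is the stated 1.7 ms. A standalone sanity check, unrelated to the EEGDash API:

```python
import math

# Quarter-octave tone grid: each step multiplies frequency by 2**(1/4)
f_lo, f_hi = 100.0, 21527.0
steps = math.log2(f_hi / f_lo) / 0.25
print(round(steps))                  # 31 steps -> 32 distinct frequencies
print(round(f_lo * 2 ** (31 / 4)))  # 21527 Hz, matching the stated maximum

# Constant CTF analog-channel delay: 4 samples at 2400 Hz
delay_ms = 4 / 2400 * 1000
print(round(delay_ms, 1))           # 1.7 ms, as stated
```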
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003082](https://openneuro.org/datasets/ds003082) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003082](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) DOI: [https://doi.org/10.18112/openneuro.ds003082.v1.0.0](https://doi.org/10.18112/openneuro.ds003082.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003082 >>> dataset = DS003082(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003082) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003104: meg dataset, 1 subjects *MNE-somato-data-bids (anonymized)* Access recordings and metadata through EEGDash. **Citation:** Lauri Parkkonen, Stefan Appelhoff, Alexandre Gramfort, Mainak Jas, Richard Höchenberger (2020). *MNE-somato-data-bids (anonymized)*. 
[10.18112/openneuro.ds003104.v1.0.1](https://doi.org/10.18112/openneuro.ds003104.v1.0.1) Modality: meg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003104 dataset = DS003104(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003104(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003104( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003104, title = {MNE-somato-data-bids (anonymized)}, author = {Lauri Parkkonen and Stefan Appelhoff and Alexandre Gramfort and Mainak Jas and Richard Höchenberger}, doi = {10.18112/openneuro.ds003104.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003104.v1.0.1}, } ``` ## About This Dataset **MNE-somato-data-bids** This dataset contains the MNE-somato-data in BIDS format. The conversion can be reproduced through the Python script stored in the `/code` directory of this dataset. See the README in that directory. 
The `/derivatives` directory contains the outputs of running the FreeSurfer pipeline `recon-all` on the MRI data with no additional command-line options (only defaults were used):

```bash
recon-all -i sub-01_T1w.nii.gz -s 01 -all
```

After the `recon-all` call, there were further FreeSurfer calls from the MNE API:

```bash
mne make_scalp_surfaces -s 01 --force
mne watershed_bem -s 01
```

The derivatives also contain the forward model `*-fwd.fif`, which was produced using the source space definition, a `*-trans.fif` file, and the boundary element model (= conductor model) that lives in `freesurfer/subjects/01/bem/*-bem-sol.fif`. The `*-trans.fif` file is not saved, but can be recovered from the anatomical landmarks in the `sub-01/anat/T1w.json` file and MNE-BIDS' function `get_head_mri_transform`. See [https://github.com/mne-tools/mne-bids](https://github.com/mne-tools/mne-bids) for more information.

**Notes on FreeSurfer**

The FreeSurfer pipeline `recon-all` was rerun for the sake of converting the somato data to BIDS format. This needed to be done to change the "somato" subject name to the BIDS subject label "01". Note that this is NOT "sub-01", because in BIDS, "sub-" is just a prefix, whereas "01" is the subject label.
## Dataset Information | Dataset ID | `DS003104` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MNE-somato-data-bids (anonymized) | | Author (year) | `Parkkonen2020` | | Canonical | `MNESomato`, `Somato`, `MNESomatoData` | | Importable as | `DS003104`, `Parkkonen2020`, `MNESomato`, `Somato`, `MNESomatoData` | | Year | 2020 | | Authors | Lauri Parkkonen, Stefan Appelhoff, Alexandre Gramfort, Mainak Jas, Richard Höchenberger | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003104.v1.0.1](https://doi.org/10.18112/openneuro.ds003104.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003104) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) | [Source URL](https://openneuro.org/datasets/ds003104) | ### Copy-paste BibTeX ```bibtex @dataset{ds003104, title = {MNE-somato-data-bids (anonymized)}, author = {Lauri Parkkonen and Stefan Appelhoff and Alexandre Gramfort and Mainak Jas and Richard Höchenberger}, doi = {10.18112/openneuro.ds003104.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003104.v1.0.1}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 316 - Sampling rate (Hz): 300.3074951171875 - Duration (hours): 0.2491881047669284 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 333.7 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003104.v1.0.1 - Source: openneuro - OpenNeuro: [ds003104](https://openneuro.org/datasets/ds003104) - NeMAR: [ds003104](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) ## API Reference Use the `DS003104` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-somato-data-bids (anonymized) * **Study:** `ds003104` (OpenNeuro) * **Author (year):** `Parkkonen2020` * **Canonical:** `MNESomato`, `Somato`, `MNESomatoData` Also importable as: `DS003104`, `Parkkonen2020`, `MNESomato`, `Somato`, `MNESomatoData`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
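The query semantics in the Notes above (user filters are ANDed with a fixed dataset selection, which is why the user query must not contain the key `dataset`) can be illustrated with a tiny pure-Python matcher. This is a conceptual sketch, not EEGDash's actual implementation:

```python
def matches(record: dict, query: dict) -> bool:
    """Evaluate a minimal subset of MongoDB-style filters ($in and equality)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Conceptually, a dataset class pins its dataset id and ANDs it with the
# user query -- hence the user query must not contain the key "dataset".
user_query = {"subject": {"$in": ["01", "02"]}}
merged = {"dataset": "ds003104", **user_query}

records = [
    {"dataset": "ds003104", "subject": "01"},
    {"dataset": "ds003104", "subject": "03"},
    {"dataset": "ds002718", "subject": "01"},
]
print([r["subject"] for r in records if matches(r, merged)])  # ['01']
```

Only the record from the right dataset with a matching subject survives; the subject-"03" and foreign-dataset records are filtered out.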
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003104](https://openneuro.org/datasets/ds003104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003104](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) DOI: [https://doi.org/10.18112/openneuro.ds003104.v1.0.1](https://doi.org/10.18112/openneuro.ds003104.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003104 >>> dataset = DS003104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003104) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003190: eeg dataset, 19 subjects *Assesment of the visual stimuli properties in P300 paradigm* Access recordings and metadata through EEGDash. **Citation:** Omar Mendoza-Montoya, Javier M. Antelis (2020). *Assesment of the visual stimuli properties in P300 paradigm*. 
[10.18112/openneuro.ds003190.v1.0.1](https://doi.org/10.18112/openneuro.ds003190.v1.0.1) Modality: eeg Subjects: 19 Recordings: 384 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003190 dataset = DS003190(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003190(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003190( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003190, title = {Assesment of the visual stimuli properties in P300 paradigm}, author = {Omar Mendoza-Montoya and Javier M. Antelis}, doi = {10.18112/openneuro.ds003190.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003190.v1.0.1}, } ``` ## About This Dataset Dataset description: The database consists of a total of 382 electroencephalographic files from 19 participants. All recordings were collected on channels Fz, Cz, P3, Pz,P4, PO7, PO8 and Oz, according to the 10-20 EEG electrode placement standard, grounded to AFz channel and referenced to right mastoid (M2). - Each participant (S1-S19) performed 3 experimental sessions (Session01-Session03) and in each session there are 7 data files. - The filenames for these data files are ’Training 4’, ’Training 5 - SF’, ’Training 5 - CF’, ’Training 6’, ’Training 7’, ’Training 8’, and ’Training 9’. 
- The number accompanying the filename indicates the number of stimuli, whereas the letters SF and CF for data files with 5 stimuli indicate the type of flash: SF for a Standard Flash of the stimulus and CF for superimposing a yellow smiling Cartoon Face.
- Note that filenames for data files with 4, 6, 7, 8, and 9 stimuli do not have a letter and were recorded with the type of flash that provided the greater classification accuracy when using 5 stimuli.
- Each data file contains the data stream in a 2D matrix where rows correspond to channels and columns correspond to time samples with a sampling frequency of 256 Hz.
- There are 10 rows: 1 to 8 for each EEG electrode (in descending order Fz, Cz, P3, Pz, P4, PO7, PO8 and Oz), 9 for time stamps, and 10 for a marker that encodes information about the execution of the experiment. The marker encodes this information as follows:
  - (i) marker numbers 101, 200, 201, 202 and 203 indicate the beginning and end of the five phases in a block
  - (ii) marker numbers 1, 2, 3, 4, 5, 6, 7, 8 and 9 indicate the symbol that is activated on the screen
  - (iii) each phase of the experiment block is identified with a marker
  - (iv) the phases of one block of the experiment are: Fixation, Target Presentation, Preparation, Stimulation and Rest
  - (v) in particular, the Stimulation phase has a start marker and an end marker

## Dataset Information | Dataset ID | `DS003190` | |----------------|--------------| | Title | Assesment of the visual stimuli properties in P300 paradigm | | Author (year) | `MendozaMontoya2020` | | Canonical | — | | Importable as | `DS003190`, `MendozaMontoya2020` | | Year | 2020 | | Authors | Omar Mendoza-Montoya, Javier M.
Antelis | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003190.v1.0.1](https://doi.org/10.18112/openneuro.ds003190.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003190) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) | [Source URL](https://openneuro.org/datasets/ds003190) | ### Copy-paste BibTeX ```bibtex @dataset{ds003190, title = {Assesment of the visual stimuli properties in P300 paradigm}, author = {Omar Mendoza-Montoya and Javier M. Antelis}, doi = {10.18112/openneuro.ds003190.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003190.v1.0.1}, } ``` ## Technical Details - Subjects: 19 - Recordings: 384 - Tasks: 2 - Channels: 9 (382), 10 (2) - Sampling rate (Hz): 256.0 - Duration (hours): 39.74703125 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.0 GB - File count: 384 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003190.v1.0.1 - Source: openneuro - OpenNeuro: [ds003190](https://openneuro.org/datasets/ds003190) - NeMAR: [ds003190](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) ## API Reference Use the `DS003190` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assesment of the visual stimuli properties in P300 paradigm * **Study:** `ds003190` (OpenNeuro) * **Author (year):** `MendozaMontoya2020` * **Canonical:** — Also importable as: `DS003190`, `MendozaMontoya2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 384; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003190](https://openneuro.org/datasets/ds003190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003190](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) DOI: [https://doi.org/10.18112/openneuro.ds003190.v1.0.1](https://doi.org/10.18112/openneuro.ds003190.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003190 >>> dataset = DS003190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003190) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003194: eeg dataset, 15 subjects *Neuroepo multisession* Access recordings and metadata through EEGDash. **Citation:** Maria Luisa Bringas Vega, Lilia Morales Chacon, Ivonne Pedroso Ibanez (2020). *Neuroepo multisession*. [10.18112/openneuro.ds003194.v1.0.3](https://doi.org/10.18112/openneuro.ds003194.v1.0.3) Modality: eeg Subjects: 15 Recordings: 29 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003194 dataset = DS003194(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003194(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003194( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003194, title = {Neuroepo multisession}, author = {Maria Luisa Bringas Vega and Lilia Morales Chacon and Ivonne Pedroso Ibanez}, doi = {10.18112/openneuro.ds003194.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003194.v1.0.3}, } ``` ## About This Dataset The quest for neuroprotection in Parkinson’s disease (PD) has focused on new compounds to slow disease progression and on stable, non-invasive biomarkers to document their benefits. Neuroepo, a new formulation of EPO with a low sialic acid content, showed good results in animal models and good tolerance in healthy participants and PD patients. In a double-blind randomized placebo-controlled trial ([https://clinicaltrials.gov/](https://clinicaltrials.gov/) number NCT04110678), twenty-five PD patients were randomly assigned to the Neuroepo (n=15) or placebo (n=10) group, and we reported the tolerance of the drug. We recorded resting-state EEG before and six months after the administration of the drug. The qualitative analysis of the EEG abnormalities was performed by two experts using a Likert-type scale, and a multivariate item response theory (MIRT) approach was employed to establish the differences between groups at the two time points. The quantitative EEG (qEEG) analysis was performed at the source level, looking for generators of the neural activity using the VARETA software and co-registering the results using the Montreal Neurological Institute Atlas. The statistical analysis between the sources was conducted using a permutation test and later a contrast method with the surfstat software, comparing groups and the before vs. after condition, with Bonferroni correction for multiple comparisons. Here in this repository, we placed the raw EEG in BIDS format (Pernet, C. R. et al. EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Sci. data 6, 103 (2019)). For the VARETA qEEG program, see (Bosch-Bayard, J. et al.
A Quantitative EEG Toolbox for the MNI Neuroinformatics Ecosystem: Normative SPM of EEG Source Spectra. Front. Neuroinform. 14, (2020)). The EEG dataset from the different stages of processing can be requested from the authors. ## Dataset Information | Dataset ID | `DS003194` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neuroepo multisession | | Author (year) | `Vega2020_Neuroepo` | | Canonical | — | | Importable as | `DS003194`, `Vega2020_Neuroepo` | | Year | 2020 | | Authors | Maria Luisa Bringas Vega, Lilia Morales Chacon, Ivonne Pedroso Ibanez | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003194.v1.0.3](https://doi.org/10.18112/openneuro.ds003194.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003194) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) | [Source URL](https://openneuro.org/datasets/ds003194) | ### Copy-paste BibTeX ```bibtex @dataset{ds003194, title = {Neuroepo multisession}, author = {Maria Luisa Bringas Vega and Lilia Morales Chacon and Ivonne Pedroso Ibanez}, doi = {10.18112/openneuro.ds003194.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003194.v1.0.3}, } ``` ## Technical Details - Subjects: 15 - Recordings: 29 - Tasks: 2 - Channels: 19 (24), 20 (4), 21 - Sampling rate (Hz): 200.0 - Duration (hours): 7.177747222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 189.1 MB - File count: 29 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003194.v1.0.3 - Source: openneuro - OpenNeuro: [ds003194](https://openneuro.org/datasets/ds003194) - NeMAR: [ds003194](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) ## API Reference Use the `DS003194` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroepo multisession * **Study:** `ds003194` (OpenNeuro) * **Author (year):** `Vega2020_Neuroepo` * **Canonical:** — Also importable as: `DS003194`, `Vega2020_Neuroepo`. Modality: `eeg`. Subjects: 15; recordings: 29; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
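The `query` parameter above accepts MongoDB-style filters such as `{"subject": {"$in": ["01", "02"]}}`. As a rough illustration of how such a filter selects metadata records (a minimal sketch of the matching semantics only, not EEGDash's actual implementation):

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: supports direct equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain form, e.g. {"subject": "01"}
            return False
    return True

# Toy metadata records (field names follow the examples on this page).
records = [
    {"dataset": "ds003194", "subject": "01"},
    {"dataset": "ds003194", "subject": "03"},
]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(selected)  # only subject "01" passes the filter
```

Real queries are evaluated against the metadata database; this sketch only mirrors the filter shape used in the Quickstart examples above.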
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003194](https://openneuro.org/datasets/ds003194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003194](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) DOI: [https://doi.org/10.18112/openneuro.ds003194.v1.0.3](https://doi.org/10.18112/openneuro.ds003194.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003194 >>> dataset = DS003194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003194) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003195: eeg dataset, 10 subjects *Placebo Neuroepo multisession* Access recordings and metadata through EEGDash. **Citation:** Maria Luisa Bringas Vega, Lilia Morales Chacon, Ivonne Pedroso Ibanez (2020). *Placebo Neuroepo multisession*. 
[10.18112/openneuro.ds003195.v1.0.3](https://doi.org/10.18112/openneuro.ds003195.v1.0.3) Modality: eeg Subjects: 10 Recordings: 20 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003195 dataset = DS003195(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003195(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003195( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003195, title = {Placebo Neuroepo multisession}, author = {Maria Luisa Bringas Vega and Lilia Morales Chacon and Ivonne Pedroso Ibanez}, doi = {10.18112/openneuro.ds003195.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003195.v1.0.3}, } ``` ## About This Dataset The quest for neuroprotection in Parkinson’s disease (PD) has focused on new compounds to slow disease progression and on stable, non-invasive biomarkers to document their benefits. Neuroepo, a new formulation of EPO with a low sialic acid content, showed good results in animal models and good tolerance in healthy participants and PD patients. In a double-blind randomized placebo-controlled trial ([https://clinicaltrials.gov/](https://clinicaltrials.gov/) number NCT04110678), twenty-five PD patients were randomly assigned to the Neuroepo (n=15) or placebo (n=10) group, and we reported the tolerance of the drug. We recorded resting-state EEG before and six months after the administration of the drug.
The qualitative analysis of the EEG abnormalities was performed by two experts using a Likert-type scale, and a multivariate item response theory (MIRT) approach was employed to establish the differences between groups at the two time points. The quantitative EEG (qEEG) analysis was performed at the source level, looking for generators of the neural activity using the VARETA software and co-registering the results using the Montreal Neurological Institute Atlas. The statistical analysis between the sources was conducted using a permutation test and later a contrast method with the surfstat software, comparing groups and the before vs. after condition, with Bonferroni correction for multiple comparisons. Here in this repository, we placed the raw EEG in BIDS format (Pernet, C. R. et al. EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Sci. data 6, 103 (2019)). For the VARETA qEEG program, see (Bosch-Bayard, J. et al. A Quantitative EEG Toolbox for the MNI Neuroinformatics Ecosystem: Normative SPM of EEG Source Spectra. Front. Neuroinform. 14, (2020)). The EEG dataset from the different stages of processing can be requested from the authors.
## Dataset Information | Dataset ID | `DS003195` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Placebo Neuroepo multisession | | Author (year) | `Vega2020_Placebo` | | Canonical | — | | Importable as | `DS003195`, `Vega2020_Placebo` | | Year | 2020 | | Authors | Maria Luisa Bringas Vega, Lilia Morales Chacon, Ivonne Pedroso Ibanez | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003195.v1.0.3](https://doi.org/10.18112/openneuro.ds003195.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003195) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) | [Source URL](https://openneuro.org/datasets/ds003195) | ### Copy-paste BibTeX ```bibtex @dataset{ds003195, title = {Placebo Neuroepo multisession}, author = {Maria Luisa Bringas Vega and Lilia Morales Chacon and Ivonne Pedroso Ibanez}, doi = {10.18112/openneuro.ds003195.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003195.v1.0.3}, } ``` ## Technical Details - Subjects: 10 - Recordings: 20 - Tasks: 2 - Channels: 19 - Sampling rate (Hz): 200.0 - Duration (hours): 4.653659722222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 121.1 MB - File count: 20 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003195.v1.0.3 - Source: openneuro - OpenNeuro: [ds003195](https://openneuro.org/datasets/ds003195) - NeMAR: [ds003195](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) ## API Reference Use the `DS003195` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Placebo Neuroepo multisession * **Study:** `ds003195` (OpenNeuro) * **Author (year):** `Vega2020_Placebo` * **Canonical:** — Also importable as: `DS003195`, `Vega2020_Placebo`. Modality: `eeg`. Subjects: 10; recordings: 20; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003195](https://openneuro.org/datasets/ds003195) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003195](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) DOI: [https://doi.org/10.18112/openneuro.ds003195.v1.0.3](https://doi.org/10.18112/openneuro.ds003195.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003195 >>> dataset = DS003195(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003195) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003343: eeg dataset, 20 subjects *Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG* Access recordings and metadata through EEGDash. **Citation:** Christoph Schneider, Renaud Marquis, Jane Johr, Marina Da Silva Lopes, Philippe Ryvlin, Andrea Serino, Marzia De Lucia, Karin Diserens (2020). *Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG*. 
[10.18112/openneuro.ds003343.v2.0.1](https://doi.org/10.18112/openneuro.ds003343.v2.0.1) Modality: eeg Subjects: 20 Recordings: 59 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003343 dataset = DS003343(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003343(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003343( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003343, title = {Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG}, author = {Christoph Schneider and Renaud Marquis and Jane Johr and Marina Da Silva Lopes and Philippe Ryvlin and Andrea Serino and Marzia De Lucia and Karin Diserens}, doi = {10.18112/openneuro.ds003343.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds003343.v2.0.1}, } ``` ## About This Dataset This dataset contains the EEG data used for the study: “Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG” (Schneider, C., Marquis, R., Jöhr, J., Da Lopes Silva, M., Ryvlin, P., Serino, A., De Lucia, M., Diserens, K. Unpublished [fill according to following pattern: Journal (Year). [https://doi.org/](https://doi.org/)….]) Participants: Twenty healthy participants (twelve female, eight male), age 24.6 ± 3.2 years, all right-handed. All subjects participated voluntarily and consented in writing to the experiment. The study was covered by the ethical protocol No. 
142/09 from the Commission cantonale d’éthique de la recherche sur l’être humain (CER-VD) and in agreement with the Declaration of Helsinki. Experimental setup: The subjects sat comfortably in a chair facing towards their right side so as not to see the stimulated left arm, which could have hampered the illusion of movement created during the tendon vibration. While their right arm rested comfortably in the lap, the left arm was supported by a movable forearm rest which allowed two degrees of freedom in the horizontal plane. The reason for this was that proprioceptive feedback of the arm touching an immobile object can prevent the motor illusion from forming. Subjects wore an EEG cap with built-in wireless amplifier (g.tec Nautilus, g.tec medical engineering, Graz, Austria) with 16 electrodes covering the sensorimotor cortex in the international 10-10 system at positions (Fz, FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CPz, CP2, CP4). The signals were recorded at 500 Hz with a hardware-implemented bandpass filter between 0.1 and 100 Hz and sent to a computer in the same room. The reference electrode was placed on the right earlobe. Tendon vibration was achieved with electromechanical wireless vibrators set into a soft, elastic brace on the left elbow joint (Vibramoov, Techno Concept, Manosque, France). The left arm was chosen since it was demonstrated that illusions start faster and are more vivid in the non-dominant extremity. One vibrator sat against the distal biceps tendon and the other against the distal triceps tendon on the same arm. Time information about the beginning of each stimulation was sent via a cable link to the computer and stored with the EEG data. Study protocol: EEG was recorded continuously while delivering stimulation sequences consisting of two different vibration types. The first elicited an illusion of elbow extension and was produced by vibrating the distal biceps tendon at 90 Hz and the distal triceps tendon at 50 Hz.
The second produced only a vibration sensation without any movement illusion and consisted of stimulating both tendons at 70 Hz. Thus, the average frequency of stimulation delivered to the agonist-antagonist pair was the same between conditions, but one condition was designed to induce a clear illusion and the other no illusion at all (control). Each stimulation lasted three seconds and consisted of one second of linear frequency ramp-up, one second of a stable frequency interval and one second of linear frequency ramp-down. The linear ramps started and ended 10 Hz below the target frequency for each stimulation type. The amplitude of the vibration was 2-3 mm. These parameters were based on Romaiguère et al. (2003) and the perception of illusory movement across all subjects was ensured in a pre-screening procedure. This setting was kept constant throughout the whole recording session. Each subject underwent three blocks of 72 vibrations (36 illusion, 36 control), arranged randomly and different for each block. The same stimulus sequences were employed for each participant. Inter-stimulus intervals varied between one and three seconds and were randomized within and between blocks in order to minimize stimulus onset anticipation.
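The block structure described above (72 trials per block, 36 per condition, 3 s stimulation, randomized 1-3 s inter-stimulus intervals) can be sketched as a small sequence generator. This is an illustrative reconstruction of the protocol timing, not code from the study:

```python
import random

def make_block(n_illusion=36, n_control=36, stim_dur=3.0,
               isi_range=(1.0, 3.0), seed=0):
    """Return one randomized block as (condition, onset_seconds) pairs.

    Mirrors the protocol above: 36 illusion + 36 control trials in random
    order, each lasting 3 s, separated by randomized 1-3 s intervals.
    """
    rng = random.Random(seed)
    conditions = ["illusion"] * n_illusion + ["control"] * n_control
    rng.shuffle(conditions)
    onset, trials = 0.0, []
    for cond in conditions:
        trials.append((cond, round(onset, 3)))
        onset += stim_dur + rng.uniform(*isi_range)
    return trials

block = make_block()
print(len(block), block[:3])
```

Successive onsets are therefore always at least 4 s apart (3 s stimulation plus a 1 s minimum interval), matching the timing constraints stated in the protocol.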
## Dataset Information | Dataset ID | `DS003343` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG | | Author (year) | `Schneider2020` | | Canonical | — | | Importable as | `DS003343`, `Schneider2020` | | Year | 2020 | | Authors | Christoph Schneider, Renaud Marquis, Jane Johr, Marina Da Silva Lopes, Philippe Ryvlin, Andrea Serino, Marzia De Lucia, Karin Diserens | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003343.v2.0.1](https://doi.org/10.18112/openneuro.ds003343.v2.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003343) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) | [Source URL](https://openneuro.org/datasets/ds003343) | ### Copy-paste BibTeX ```bibtex @dataset{ds003343, title = {Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG}, author = {Christoph Schneider and Renaud Marquis and Jane Johr and Marina Da Silva Lopes and Philippe Ryvlin and Andrea Serino and Marzia De Lucia and Karin Diserens}, doi = {10.18112/openneuro.ds003343.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds003343.v2.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 59 - Tasks: 1 - Channels: 20 - Sampling rate (Hz): 500.0 - Duration (hours): 6.434444444444445 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 663.4 MB - File count: 59 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003343.v2.0.1 - Source: openneuro - OpenNeuro: [ds003343](https://openneuro.org/datasets/ds003343) - NeMAR: [ds003343](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) ## API Reference Use the `DS003343` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG * **Study:** `ds003343` (OpenNeuro) * **Author (year):** `Schneider2020` * **Canonical:** — Also importable as: `DS003343`, `Schneider2020`. Modality: `eeg`. Subjects: 20; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003343](https://openneuro.org/datasets/ds003343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003343](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) DOI: [https://doi.org/10.18112/openneuro.ds003343.v2.0.1](https://doi.org/10.18112/openneuro.ds003343.v2.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003343 >>> dataset = DS003343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003343) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003352: meg dataset, 18 subjects *1 - Light Pink Spiral* Access recordings and metadata through EEGDash. **Citation:** Katherine Hermann, Isabelle Rosenthal, Shridhar R. Singh, Dimitrios Pantazis, Bevil R. Conway (2020). *1 - Light Pink Spiral*. 
[10.18112/openneuro.ds003352.v1.0.0](https://doi.org/10.18112/openneuro.ds003352.v1.0.0) Modality: meg Subjects: 18 Recordings: 138 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003352 dataset = DS003352(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003352(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003352( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003352, title = {1 - Light Pink Spiral}, author = {Katherine Hermann and Isabelle Rosenthal and Shridhar R. Singh and Dimitrios Pantazis and Bevil R. Conway}, doi = {10.18112/openneuro.ds003352.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003352.v1.0.0}, } ``` ## About This Dataset Stimuli include eight different square wave spiral gratings subtending 10 degrees of visual angle as well as the color words “blue” and “green.” The color words appeared as white on a gray background. Each stimulus appeared on the screen for 116 ms. 
The triggers or event IDs of each stimulus are as follows:

- 1 - Light Pink Spiral
- 2 - Dark Pink Spiral
- 3 - Light Blue Spiral
- 4 - Dark Blue Spiral
- 5 - Light Green Spiral
- 6 - Dark Green Spiral
- 7 - Light Orange Spiral
- 8 - Dark Orange Spiral
- 9 - “green”
- 10 - “blue”

## Dataset Information | Dataset ID | `DS003352` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 1 - Light Pink Spiral | | Author (year) | `Hermann2020` | | Canonical | `Hermann2021` | | Importable as | `DS003352`, `Hermann2020`, `Hermann2021` | | Year | 2020 | | Authors | Katherine Hermann, Isabelle Rosenthal, Shridhar R. Singh, Dimitrios Pantazis, Bevil R. Conway | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003352.v1.0.0](https://doi.org/10.18112/openneuro.ds003352.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003352) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) | [Source URL](https://openneuro.org/datasets/ds003352) | ### Copy-paste BibTeX ```bibtex @dataset{ds003352, title = {1 - Light Pink Spiral}, author = {Katherine Hermann and Isabelle Rosenthal and Shridhar R. Singh and Dimitrios Pantazis and Bevil R. Conway}, doi = {10.18112/openneuro.ds003352.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003352.v1.0.0}, } ``` ## Technical Details - Subjects: 18 - Recordings: 138 - Tasks: 1 - Channels: 323 - Sampling rate (Hz): 1000.0 - Duration (hours): 52.308295 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 214.3 GB - File count: 138 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003352.v1.0.0 - Source: openneuro - OpenNeuro: [ds003352](https://openneuro.org/datasets/ds003352) - NeMAR: [ds003352](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) ## API Reference Use the `DS003352` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003352(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 1 - Light Pink Spiral * **Study:** `ds003352` (OpenNeuro) * **Author (year):** `Hermann2020` * **Canonical:** `Hermann2021` Also importable as: `DS003352`, `Hermann2020`, `Hermann2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 18; recordings: 138; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003352](https://openneuro.org/datasets/ds003352) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003352](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) DOI: [https://doi.org/10.18112/openneuro.ds003352.v1.0.0](https://doi.org/10.18112/openneuro.ds003352.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003352 >>> dataset = DS003352(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003352) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003374: ieeg dataset, 9 subjects *Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation* Access recordings and metadata through EEGDash. **Citation:** Tommaso Fedele, Ece Boran, Valeri Chirkov, Peter Hilfiker, Thomas Grunwald, Lennart Stieglitz, Hennric Jokeit, Johannes Sarnthein (2020). *Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation*. 
[10.18112/openneuro.ds003374.v1.1.1](https://doi.org/10.18112/openneuro.ds003374.v1.1.1) Modality: ieeg Subjects: 9 Recordings: 18 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003374 dataset = DS003374(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003374(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003374( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003374, title = {Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation}, author = {Tommaso Fedele and Ece Boran and Valeri Chirkov and Peter Hilfiker and Thomas Grunwald and Lennart Stieglitz and Hennric Jokeit and Johannes Sarnthein}, doi = {10.18112/openneuro.ds003374.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003374.v1.1.1}, } ``` ## About This Dataset **Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation** **Summary** We present an electrophysiological dataset collected from the amygdalae of nine subjects attending a visual dynamic stimulation of emotional aversive content. The subjects were patients affected by epilepsy who underwent preoperative invasive monitoring in the mesial temporal lobe. Subjects were presented with dynamic visual sequences of fearful faces (aversive condition), interleaved with sequences of neutral landscapes (neutral condition). 
We provide the recordings of intracranial EEG (iEEG) and metadata related to the task, subjects, sessions and electrodes in the BIDS standard. We also provide a more extended version of the dataset that includes neuronal spike times and waveforms in the NIX standard under the folder “bidsignore/data_NIX”. This extended dataset is also available in G-Node at [https://gin.g-node.org/USZ_NCH/Human_Amygdala_MUA_sEEG_FearVideo/](https://gin.g-node.org/USZ_NCH/Human_Amygdala_MUA_sEEG_FearVideo/). This dataset allows the investigation of amygdalar response to dynamic aversive stimuli at multiple spatial scales, from the macroscopic EEG to the neuronal firing in the human brain.
**Repository structure**

**Main directory**

Contains metadata in the BIDS standard.

**Directories sub-\***

Contains folders for each subject, named sub-.

**Directory bidsignore**

Contains data in the NIX standard, and metadata files. Subject_Characteristics.pdf describes subjects and NIX_File_Structure.pdf describes the structure of the NIX files.

**Directory code_MATLAB**

Contains MATLAB code for loading the data and generating the publication figures. Main_Load_NIX_Data.m contains code snippets for reading NIX data and task-related information. Main_Plot_Figures.m uses the functions Figure_2.m and Figure_3.m to generate figures.

Required dependencies to run the script Main_Load_NIX_Data.m:

* [Nix-mx v1.4.1](https://github.com/G-Node/nix-mx/)

Required dependencies to run the script Main_Plot_Figures.m:

* [Nix-mx v1.4.1](https://github.com/G-Node/nix-mx/)
* [Gramm](https://github.com/piermorel/gramm/)
* [FieldTrip](http://www.fieldtriptoolbox.org/download/)

**Directory data_NIX**

Contains NIX files for each session of the task. Each file is named with the format: Data_Subject__Session_.h5

**Support**

For questions on the dataset or the task, contact Johannes Sarnthein at [johannes.sarnthein@usz.ch](mailto:johannes.sarnthein@usz.ch).
## Dataset Information | Dataset ID | `DS003374` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation | | Author (year) | `Fedele2020` | | Canonical | — | | Importable as | `DS003374`, `Fedele2020` | | Year | 2020 | | Authors | Tommaso Fedele, Ece Boran, Valeri Chirkov, Peter Hilfiker, Thomas Grunwald, Lennart Stieglitz, Hennric Jokeit, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003374.v1.1.1](https://doi.org/10.18112/openneuro.ds003374.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003374) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) | [Source URL](https://openneuro.org/datasets/ds003374) | ### Copy-paste BibTeX ```bibtex @dataset{ds003374, title = {Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation}, author = {Tommaso Fedele and Ece Boran and Valeri Chirkov and Peter Hilfiker and Thomas Grunwald and Lennart Stieglitz and Hennric Jokeit and Johannes Sarnthein}, doi = {10.18112/openneuro.ds003374.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003374.v1.1.1}, } ``` ## Technical Details - Subjects: 9 - Recordings: 18 - Tasks: 1 - Channels: 4 (10), 2 (8) - Sampling rate (Hz): 2000.0 - Duration (hours): 2.61 - Pathology: Epilepsy - Modality: Visual - Type: Affect - Size on disk: 167.3 MB - File count: 18 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003374.v1.1.1 - Source: openneuro - OpenNeuro: [ds003374](https://openneuro.org/datasets/ds003374) - NeMAR: [ds003374](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) ## API Reference Use the `DS003374` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation * **Study:** `ds003374` (OpenNeuro) * **Author (year):** `Fedele2020` * **Canonical:** — Also importable as: `DS003374`, `Fedele2020`. Modality: `ieeg`; Experiment type: `Affect`; Subject type: `Epilepsy`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
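Since `dataset.description` exposes recording-level metadata as a per-recording table, simple grouping and selection can be done in plain Python. A toy sketch with made-up rows standing in for the fetched records (the field names mirror the quickstart examples above):

```python
from collections import defaultdict

# Toy stand-in for recording-level metadata as exposed via dataset.description
# (illustrative rows; real values come from the fetched metadata records).
records = [
    {"subject": "01", "session": "01", "sfreq": 2000.0},
    {"subject": "01", "session": "02", "sfreq": 2000.0},
    {"subject": "02", "session": "01", "sfreq": 2000.0},
]

# Group the sessions available for each subject.
by_subject = defaultdict(list)
for rec in records:
    by_subject[rec["subject"]].append(rec["session"])
# by_subject -> {"01": ["01", "02"], "02": ["01"]}
```

For server-side filtering, prefer the `subject=` and `query=` arguments shown in the quickstart, which avoid downloading metadata you do not need.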
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003374](https://openneuro.org/datasets/ds003374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003374](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) DOI: [https://doi.org/10.18112/openneuro.ds003374.v1.1.1](https://doi.org/10.18112/openneuro.ds003374.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003374 >>> dataset = DS003374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003374) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) * [eegdash.dataset.DS003688](eegdash.dataset.DS003688.md) # DS003380: eeg dataset, 1 subjects *Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig* Access recordings and metadata through EEGDash. **Citation:** Martin G. Frasch, Bernd Walter, Chrstophe L. Herry, Reinhard Bauer (2020). *Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig*. 
[10.18112/openneuro.ds003380.v1.0.0](https://doi.org/10.18112/openneuro.ds003380.v1.0.0) Modality: eeg Subjects: 1 Recordings: 5 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003380 dataset = DS003380(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003380(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003380( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003380, title = {Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig}, author = {Martin G. Frasch and Bernd Walter and Chrstophe L. Herry and Reinhard Bauer}, doi = {10.18112/openneuro.ds003380.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003380.v1.0.0}, } ``` ## About This Dataset This sedation, ischemia, recovery experiment contains 11 animals (juvenile pigs). Animals were surgically instrumented, and then monitored under sedation states 1-5 (isoflurane, fentanyl, propofol), followed by 1 or 2 episodes of gradual ischemia (states 6 and 8) and recovery (recovery 1 = state 7, between state 6 and 8; recovery 2, after state 8, corresponding to states 9-12). Two crude groups are indicated: 1) sedation - animals had no ischemia and 2) ischemia - animals had sedation, followed by ischemia episodes and followed by recovery. The scientific article (see Reference) contains all methodological details. - Martin Frasch and Reinhard Bauer, October 2, 2020 PS. 
Sub-12 folder is to be ignored. It was added to satisfy the BIDS validation algorithm. ## Dataset Information | Dataset ID | `DS003380` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig | | Author (year) | `Frasch2020` | | Canonical | — | | Importable as | `DS003380`, `Frasch2020` | | Year | 2020 | | Authors | Martin G. Frasch, Bernd Walter, Chrstophe L. Herry, Reinhard Bauer | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003380.v1.0.0](https://doi.org/10.18112/openneuro.ds003380.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003380) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) | [Source URL](https://openneuro.org/datasets/ds003380/versions/1.0.0) | ### Copy-paste BibTeX ```bibtex @dataset{ds003380, title = {Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig}, author = {Martin G. Frasch and Bernd Walter and Chrstophe L. 
Herry and Reinhard Bauer}, doi = {10.18112/openneuro.ds003380.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003380.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 5 - Tasks: — - Channels: 16 - Sampling rate (Hz): Varies - Duration (hours): 0.0894444444444444 - Pathology: Other - Modality: Anesthesia - Type: Clinical/Intervention - Size on disk: 19.7 MB - File count: 5 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003380.v1.0.0 - Source: openneuro - OpenNeuro: [ds003380](https://openneuro.org/datasets/ds003380) - NeMAR: [ds003380](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) ## API Reference Use the `DS003380` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003380(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig * **Study:** `ds003380` (OpenNeuro) * **Author (year):** `Frasch2020` * **Canonical:** — Also importable as: `DS003380`, `Frasch2020`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 1; recordings: 5; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003380](https://openneuro.org/datasets/ds003380) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003380](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) DOI: [https://doi.org/10.18112/openneuro.ds003380.v1.0.0](https://doi.org/10.18112/openneuro.ds003380.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS003380 >>> dataset = DS003380(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003380) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003392: meg dataset, 12 subjects *NeuroSpin hMT+ Localizer DATA (MEG & aMRI)* Access recordings and metadata through EEGDash. **Citation:** Nicolas Zilber, Philippe Ciuciu, Alexandre Gramfort, Leila Azizi, Virginie van Wassenhove (2020). *NeuroSpin hMT+ Localizer DATA (MEG & aMRI)*. 
[10.18112/openneuro.ds003392.v1.0.4](https://doi.org/10.18112/openneuro.ds003392.v1.0.4) Modality: meg Subjects: 12 Recordings: 33 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003392 dataset = DS003392(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003392(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003392( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003392, title = {NeuroSpin hMT+ Localizer DATA (MEG & aMRI)}, author = {Nicolas Zilber and Philippe Ciuciu and Alexandre Gramfort and Leila Azizi and Virginie van Wassenhove}, doi = {10.18112/openneuro.ds003392.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds003392.v1.0.4}, } ``` ## About This Dataset Dataset description: Magnetoencephalography (MEG) dataset recorded during a hMT+ (human visual motion area) localizer task Published in: Zilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. Neuroimage, 93, 32-46. Data curation: Sophie Herbst, Alexandre Gramfort This MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019). The dataset contains 10 of the 12 participants from the vision-only training group. 
Two participants were removed, one due to problems with the trigger channel, and one due to different settings in the acquisition preventing us from processing the dataset without prior adjustment.

**EXPERIMENT**

Participants were presented with a cloud of moving dots, always starting with incoherent movement (up or down are indistinguishable while the motion is incoherent). After 500 ms, the movement became coherent in 50% of the trials (95% coherence, up or down) and remained incoherent in the other 50%, lasting for 1000 ms. Participants were instructed to passively view the stimuli for a total of 120 trials.

Events:

- 1: coherent / down
- 2: coherent / up
- 3: incoherent / down
- 4: incoherent / up

**MEG**

Brain magnetic fields were recorded in a magnetically shielded room (MSR) using a 306-channel MEG system (Neuromag Elekta LTD, Helsinki). MEG recordings were sampled at 2 kHz and band-pass filtered between 0.03 and 600 Hz. Four head position coils (HPI) measured the head position of participants before each block; three fiducial markers (nasion and pre-auricular points) were used for digitization and anatomical MRI (aMRI) immediately following MEG acquisition.
Electrooculograms (EOG, horizontal and vertical eye movements) and electrocardiogram (ECG) were simultaneously recorded. Prior to the session, 5 min of empty-room recording was acquired for the computation of the noise covariance matrix. Bad MEG channels were marked manually.

**MRI**

The T1-weighted aMRI was recorded using a 3-T Siemens Trio MRI scanner. Parameters of the sequence were: voxel size 1.0 × 1.0 × 1.1 mm; acquisition time 466 s; repetition time TR = 2300 ms; echo time TE = 2.98 ms.

**References**

Zilber, N., Ciuciu, P., Gramfort, A., Azizi, L., & Van Wassenhove, V. (2014). Supramodal processing optimizes visual perceptual learning and plasticity. Neuroimage, 93, 32-46.

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110.
[http://doi.org/10.1038/sdata.2018.110](http://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS003392` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NeuroSpin hMT+ Localizer DATA (MEG & aMRI) | | Author (year) | `Zilber2020` | | Canonical | — | | Importable as | `DS003392`, `Zilber2020` | | Year | 2020 | | Authors | Nicolas Zilber, Philippe Ciuciu, Alexandre Gramfort, Leila Azizi, Virginie van Wassenhove | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003392.v1.0.4](https://doi.org/10.18112/openneuro.ds003392.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003392) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) | [Source URL](https://openneuro.org/datasets/ds003392) | ### Copy-paste BibTeX ```bibtex @dataset{ds003392, title = {NeuroSpin hMT+ Localizer DATA (MEG & aMRI)}, author = {Nicolas Zilber and Philippe Ciuciu and Alexandre Gramfort and Leila Azizi and Virginie van Wassenhove}, doi = {10.18112/openneuro.ds003392.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds003392.v1.0.4}, } ``` ## Technical Details - Subjects: 12 - Recordings: 33 - Tasks: 2 - Channels: 320 - Sampling rate (Hz): 2000.0 - Duration (hours): 1.232496944444445 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 10.1 GB - File count: 33 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003392.v1.0.4 - Source: openneuro - OpenNeuro: [ds003392](https://openneuro.org/datasets/ds003392) - NeMAR: [ds003392](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) ## API Reference Use the `DS003392` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroSpin hMT+ Localizer DATA (MEG & aMRI) * **Study:** `ds003392` (OpenNeuro) * **Author (year):** `Zilber2020` * **Canonical:** — Also importable as: `DS003392`, `Zilber2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 33; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
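For epoching, the four motion-coherence event codes from the README above can be expressed as an MNE-style `event_id` dict. A sketch; the slash-separated condition names are illustrative, only the numeric codes come from the dataset README:

```python
# Event codes for ds003392 as listed in the README (names are illustrative).
event_id = {
    "coherent/down": 1,
    "coherent/up": 2,
    "incoherent/down": 3,
    "incoherent/up": 4,
}

# Slash-separated names let MNE select whole condition groups,
# e.g. epochs["coherent"] would pick codes 1 and 2.
coherent_codes = [v for k, v in event_id.items() if k.startswith("coherent/")]
```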
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003392](https://openneuro.org/datasets/ds003392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003392](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) DOI: [https://doi.org/10.18112/openneuro.ds003392.v1.0.4](https://doi.org/10.18112/openneuro.ds003392.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003392 >>> dataset = DS003392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003392) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003420: eeg dataset, 23 subjects *HD-EEGtask(Dataset 1)* Access recordings and metadata through EEGDash. **Citation:** Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Arnaud Biraben, Fabrice Wendling, Mahmoud Hassan (2020). *HD-EEGtask(Dataset 1)*. 
[10.18112/openneuro.ds003420.v1.0.2](https://doi.org/10.18112/openneuro.ds003420.v1.0.2) Modality: eeg Subjects: 23 Recordings: 92 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003420 dataset = DS003420(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003420(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003420( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003420, title = {HD-EEGtask(Dataset 1)}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Arnaud Biraben and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds003420.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003420.v1.0.2}, } ``` ## About This Dataset **Dataset 1** **Presentation** > This dataset was collected between 2012 and 2013 in Rennes (France) during two conditions (visual naming and spelling tasks). > The dataset consists of naming and spelling the names of visually presented objects. The data was collected in the Rennes University Hospital. This experiment was approved by an independent ethics committee and authorized by the French institutional review board (IRB): “Comite de Protection des Personnes dans la Recherche Biomedicale Ouest V” (CCPPRB-Ouest V). > This study was registered under the name “conneXion” and the agreement number: 2012- A01227-36. 
### View full README **Dataset 1** **Presentation** > This dataset was collected between 2012 and 2013 in Rennes (France) during two conditions (visual naming and spelling tasks). > The dataset consists of naming and spelling the names of visually presented objects. The data was collected in the Rennes University Hospital. This experiment was approved by an independent ethics committee and authorized by the French institutional review board (IRB): “Comite de Protection des Personnes dans la Recherche Biomedicale Ouest V” (CCPPRB-Ouest V). > This study was registered under the name “conneXion” and the agreement number: 2012-A01227-36. **Participants** > Twenty-three right-handed healthy volunteers participated in this study: 12 females with an age range between 19 and 40 years (mean age 28 years), and 11 males with an age range between 19 and 33 years (mean age 23 years). (See participants.json and participants.tsv for more details.) **Experiment** > * The experiment begins with the verification of inclusion/exclusion criteria. > * The participants read the information notice and the consent form. > * Then they sign two questionnaires. > * One subject –> two conditions (naming and spelling) –> two runs for each condition. > * Each run contains 74 stimuli. > * The spelling task always follows the naming task, and its instructions were not given until the naming task was completed, to avoid any reminiscence of the words' orthographic structures. > * Each run contains balanced numbers of animals and objects as well as long and short words. > * Pictures are presented on a screen using a computer and the experimental paradigm is presented using E-prime Psychology Software Tools. > * The responses produced by the participants were collected via a Logitech microphone and analyzed to detect onsets of speech using Praat v5.3.13 (University of Amsterdam, 1012VT Amsterdam, The Netherlands). 
**EEG acquisition** > * HD-EEG system (EGI, Electrical Geodesic Inc., 256 electrodes) > * Sampling frequency: 1000Hz > * Impedances were kept below 5k **Contact** > * If you have any questions or comments, please contact: > * Ahmad Mheich: [mheich.ahmad@gmail.com](mailto:mheich.ahmad@gmail.com) ## Dataset Information | Dataset ID | `DS003420` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HD-EEGtask(Dataset 1) | | Author (year) | `Mheich2020_HD` | | Canonical | — | | Importable as | `DS003420`, `Mheich2020_HD` | | Year | 2020 | | Authors | Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Arnaud Biraben, Fabrice Wendling, Mahmoud Hassan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003420.v1.0.2](https://doi.org/10.18112/openneuro.ds003420.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003420) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) | [Source URL](https://openneuro.org/datasets/ds003420) | ### Copy-paste BibTeX ```bibtex @dataset{ds003420, title = {HD-EEGtask(Dataset 1)}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Arnaud Biraben and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds003420.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003420.v1.0.2}, } ``` ## Technical Details - Subjects: 23 - Recordings: 92 - Tasks: — - Channels: 256 (80), 257 (12) - Sampling rate (Hz): 1000.0 - Duration (hours): 13.535969444444444 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 47.1 GB - File count: 92 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003420.v1.0.2 - Source: openneuro - OpenNeuro: [ds003420](https://openneuro.org/datasets/ds003420) - NeMAR: [ds003420](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) ## API Reference Use the 
`DS003420` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 1) * **Study:** `ds003420` (OpenNeuro) * **Author (year):** `Mheich2020_HD` * **Canonical:** — Also importable as: `DS003420`, `Mheich2020_HD`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
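The Notes above say that `query` supports MongoDB-style filters (such as `$in`) which are ANDed with the fixed dataset filter. As a rough, self-contained sketch of those matching semantics — illustrative only, not EEGDash's actual implementation, with made-up records:

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: plain equality plus the $in operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical metadata records, standing in for database documents
records = [
    {"subject": "01", "task": "naming"},
    {"subject": "02", "task": "spelling"},
    {"subject": "03", "task": "naming"},
]

# Same spirit as query={"subject": {"$in": ["01", "02"]}} in the Quickstart
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # ['01', '02']
```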
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003420](https://openneuro.org/datasets/ds003420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003420](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) DOI: [https://doi.org/10.18112/openneuro.ds003420.v1.0.2](https://doi.org/10.18112/openneuro.ds003420.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003420 >>> dataset = DS003420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003420) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003421: eeg dataset, 20 subjects *HD-EEGtask(Dataset 2)* Access recordings and metadata through EEGDash. **Citation:** Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Arnaud Biraben, Fabrice Wendling, Mahmoud Hassan (2020). *HD-EEGtask(Dataset 2)*. 
[10.18112/openneuro.ds003421.v1.0.2](https://doi.org/10.18112/openneuro.ds003421.v1.0.2) Modality: eeg Subjects: 20 Recordings: 80 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003421 dataset = DS003421(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003421(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003421( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003421, title = {HD-EEGtask(Dataset 2)}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Arnaud Biraben and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds003421.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003421.v1.0.2}, } ``` ## About This Dataset **Dataset 2** **Presentation** > This dataset was collected between 2014 and 2017 in Rennes (France) during four conditions (resting state, visual naming, auditory naming and working memory tasks). > All participants provided a written informed consent to participate in this study which was approved > by an independent ethics committee and authorized by the IRB “Comite de Protection des Personnes > dans la Recherche Biomedicale Ouest V” (CCPPRB-Ouest V). ### View full README **Dataset 2** **Presentation** > This dataset was collected between 2014 and 2017 in Rennes (France) during four conditions (resting state, visual naming, auditory naming and working memory tasks). 
> All participants provided a written informed consent to participate in this study which was approved > by an independent ethics committee and authorized by the IRB “Comite de Protection des Personnes > dans la Recherche Biomedicale Ouest V” (CCPPRB-Ouest V). > The study name was “Braingraph” and the study agreement number was 2014-A01461-46. > Its promoter was the Rennes University Hospital. **Participants** > Twenty right-handed healthy volunteers (10 females, 10 males, mean age 23 years) participated > in this experiment. (See participants.json and participants.tsv for more details.) **Experiment** > * The experiment begins with the verification of inclusion/exclusion criteria. > * The participants read the information notice and the consent form. > * Then they sign two questionnaires. > * One subject –> four conditions (resting state, visual naming, auditory naming and working memory). > * Resting state –> subject asked to relax for 10 min with their eyes open. > * Visual naming –> subject asked to name 80 pictures. 40 scrambled pictures were also presented, for which participants were asked to say nothing. > * Auditory naming –> subject asked to name 80 different sounds. > * Memory –> 80 pictures were displayed, of which 40 had already been shown in the naming task. New and already-seen pictures appeared on the screen in random order, and participants had to indicate, by pressing a button (or not), whether they had seen them before. 
**EEG acquisition** > * HD-EEG system (EGI, Electrical Geodesic Inc., 256 electrodes) > * Sampling frequency: 1000Hz > * Impedances were kept below 5k **Contact** > * If you have any questions or comments, please contact: > * Ahmad Mheich: [mheich.ahmad@gmail.com](mailto:mheich.ahmad@gmail.com) ## Dataset Information | Dataset ID | `DS003421` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HD-EEGtask(Dataset 2) | | Author (year) | `Mheich2020_HD_EEGtask` | | Canonical | — | | Importable as | `DS003421`, `Mheich2020_HD_EEGtask` | | Year | 2020 | | Authors | Ahmad Mheich, Olivier Dufor, Sahar Yassine, Aya Kabbara, Arnaud Biraben, Fabrice Wendling, Mahmoud Hassan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003421.v1.0.2](https://doi.org/10.18112/openneuro.ds003421.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003421) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) | [Source URL](https://openneuro.org/datasets/ds003421) | ### Copy-paste BibTeX ```bibtex @dataset{ds003421, title = {HD-EEGtask(Dataset 2)}, author = {Ahmad Mheich and Olivier Dufor and Sahar Yassine and Aya Kabbara and Arnaud Biraben and Fabrice Wendling and Mahmoud Hassan}, doi = {10.18112/openneuro.ds003421.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003421.v1.0.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 80 - Tasks: 1 - Channels: 257 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.603826666666668 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 39.6 GB - File count: 80 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003421.v1.0.2 - Source: openneuro - OpenNeuro: [ds003421](https://openneuro.org/datasets/ds003421) - NeMAR: [ds003421](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) ## API Reference Use the 
`DS003421` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003421(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 2) * **Study:** `ds003421` (OpenNeuro) * **Author (year):** `Mheich2020_HD_EEGtask` * **Canonical:** — Also importable as: `DS003421`, `Mheich2020_HD_EEGtask`. Modality: `eeg`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
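The Notes above expose recording-level metadata through `dataset.description`. In braindecode-style datasets this is typically a pandas DataFrame, so ordinary DataFrame filtering applies. A self-contained sketch with hypothetical columns (the real columns depend on the dataset):

```python
import pandas as pd

# Hypothetical stand-in for dataset.description
description = pd.DataFrame(
    {
        "subject": ["01", "01", "02", "02"],
        "task": ["restingstate", "naming", "restingstate", "naming"],
        "sfreq": [1000.0, 1000.0, 1000.0, 1000.0],
    }
)

# Positional indices of subject "02" recordings; these indices can be used
# to pick the corresponding items out of the concatenated dataset
idx = description.index[description["subject"] == "02"].tolist()
print(idx)  # [2, 3]
```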
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003421](https://openneuro.org/datasets/ds003421) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003421](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) DOI: [https://doi.org/10.18112/openneuro.ds003421.v1.0.2](https://doi.org/10.18112/openneuro.ds003421.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003421 >>> dataset = DS003421(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003421) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003458: eeg dataset, 23 subjects *EEG: Three armed bandit gambling task* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) (2021). *EEG: Three armed bandit gambling task*. 
[10.18112/openneuro.ds003458.v1.1.0](https://doi.org/10.18112/openneuro.ds003458.v1.1.0) Modality: eeg Subjects: 23 Recordings: 23 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003458 dataset = DS003458(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003458(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003458( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003458, title = {EEG: Three armed bandit gambling task}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003458.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003458.v1.1.0}, } ``` ## About This Dataset Healthy control college students. 23 subjects completed the 3-armed bandit task with oscillating probabilities. For example, the ‘blue’ stim would slowly move from 20% reinforcing to 90% then back to 20 over many trials. The other ‘red’ and ‘green’ stims would move similarly, but in different phase. See Fig 1 of the paper. This makes the task great for investigating reward processing & reward prediction error in the service of novel task set generation. Task included in Matlab programming language. Data collected in 2014 in the Cognitive Rhythms and Computation Lab, University of New Mexico. I also collected Corrugator EMG (may be labeled EKG) and Skin Conductance on most people. But quality was dubious so I never did much with it. Check .xls sheet under code folder. 
Some pre-processing scripts are included in code folder as well. - James F Cavanagh 01/04/2021 ## Dataset Information | Dataset ID | `DS003458` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Three armed bandit gambling task | | Author (year) | `Cavanagh2021_Three` | | Canonical | — | | Importable as | `DS003458`, `Cavanagh2021_Three` | | Year | 2021 | | Authors | James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003458.v1.1.0](https://doi.org/10.18112/openneuro.ds003458.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003458) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) | [Source URL](https://openneuro.org/datasets/ds003458) | ### Copy-paste BibTeX ```bibtex @dataset{ds003458, title = {EEG: Three armed bandit gambling task}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003458.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003458.v1.1.0}, } ``` ## Technical Details - Subjects: 23 - Recordings: 23 - Tasks: 1 - Channels: 66 (19), 64 (4) - Sampling rate (Hz): 500.0 - Duration (hours): 10.447388888888888 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 4.7 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003458.v1.1.0 - Source: openneuro - OpenNeuro: [ds003458](https://openneuro.org/datasets/ds003458) - NeMAR: [ds003458](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) ## API Reference Use the `DS003458` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003458(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three armed bandit gambling task * **Study:** `ds003458` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three` * **Canonical:** — Also importable as: `DS003458`, `Cavanagh2021_Three`. Modality: `eeg`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
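The task description above says each stimulus's reinforcement probability drifts slowly between roughly 20% and 90%, with the three arms out of phase. A hypothetical sketch of such a schedule (the actual period and phase offsets are not given in the summary, so these numbers are invented):

```python
import math

def reward_prob(trial: int, phase: float, period: int = 200,
                low: float = 0.2, high: float = 0.9) -> float:
    """Sinusoidal reinforcement probability oscillating between low and high."""
    mid = (low + high) / 2.0
    amp = (high - low) / 2.0
    return mid + amp * math.sin(2 * math.pi * trial / period + phase)

# Three arms, evenly phase-shifted so they peak at different times
phases = {"blue": 0.0, "red": 2 * math.pi / 3, "green": 4 * math.pi / 3}
probs = {arm: reward_prob(trial=50, phase=ph) for arm, ph in phases.items()}
# Every arm stays within the 20%-90% band described in the README
```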
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003458](https://openneuro.org/datasets/ds003458) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003458](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) DOI: [https://doi.org/10.18112/openneuro.ds003458.v1.1.0](https://doi.org/10.18112/openneuro.ds003458.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003458 >>> dataset = DS003458(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003458) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003474: eeg dataset, 122 subjects *EEG: Probabilistic Selection and Depression* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) (2021). *EEG: Probabilistic Selection and Depression*. 
[10.18112/openneuro.ds003474.v1.1.0](https://doi.org/10.18112/openneuro.ds003474.v1.1.0) Modality: eeg Subjects: 122 Recordings: 122 License: CC0 Source: openneuro Citations: 9.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003474 dataset = DS003474(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003474(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003474( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003474, title = {EEG: Probabilistic Selection and Depression}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003474.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003474.v1.1.0}, } ``` ## About This Dataset Probabilistic selection task with 122 college-age participants. Task included in DMDX programming language. Data collected circa 2008-2010 in John J.B. Allen lab at U Arizona. Subjects scored reliably high or low in Beck Depression Inventory. Some have been clinically interviewed. For some subjects (maybe all?), HEOG and VEOG may be mis-labeled as the other. Some files have had some channels interpolated already. There are no raw data to revert to instead… Note subj 544 is not used b/c they had unstable BDI from pre-assessment to test session. 
Code is included to re-create this paper: DOI: 10.1162/cpsy_a_00024 : - James F Cavanagh 01/11/2021 ## Dataset Information | Dataset ID | `DS003474` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Probabilistic Selection and Depression | | Author (year) | `Cavanagh2021_Probabilistic` | | Canonical | — | | Importable as | `DS003474`, `Cavanagh2021_Probabilistic` | | Year | 2021 | | Authors | James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003474.v1.1.0](https://doi.org/10.18112/openneuro.ds003474.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003474) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) | [Source URL](https://openneuro.org/datasets/ds003474) | ### Copy-paste BibTeX ```bibtex @dataset{ds003474, title = {EEG: Probabilistic Selection and Depression}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003474.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003474.v1.1.0}, } ``` ## Technical Details - Subjects: 122 - Recordings: 122 - Tasks: 1 - Channels: 66 (72), 67 (50) - Sampling rate (Hz): 500.0 - Duration (hours): 36.61049388888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 16.6 GB - File count: 122 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003474.v1.1.0 - Source: openneuro - OpenNeuro: [ds003474](https://openneuro.org/datasets/ds003474) - NeMAR: [ds003474](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) ## API Reference Use the `DS003474` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003474(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Selection and Depression * **Study:** `ds003474` (OpenNeuro) * **Author (year):** `Cavanagh2021_Probabilistic` * **Canonical:** — Also importable as: `DS003474`, `Cavanagh2021_Probabilistic`. Modality: `eeg`. Subjects: 122; recordings: 122; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003474](https://openneuro.org/datasets/ds003474) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003474](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) DOI: [https://doi.org/10.18112/openneuro.ds003474.v1.1.0](https://doi.org/10.18112/openneuro.ds003474.v1.1.0) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003474 >>> dataset = DS003474(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003474) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003478: eeg dataset, 122 subjects *EEG: Depression rest* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) (2021). *EEG: Depression rest*. 
[10.18112/openneuro.ds003478.v1.1.0](https://doi.org/10.18112/openneuro.ds003478.v1.1.0) Modality: eeg Subjects: 122 Recordings: 243 License: CC0 Source: openneuro Citations: 22.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003478 dataset = DS003478(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003478(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003478( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003478, title = {EEG: Depression rest}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003478.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003478.v1.1.0}, } ``` ## About This Dataset Resting EEG data with 122 college-age participants. These are the same participants as the OpenNeuro probabilistic selection task; subjects have the same task IDs, so you could match them up if you like. Task included in DMDX programming language, with instructions for eyes open & eyes closed. Triggers included for instructed one-minute spans of open or closed, e.g.: OCCOCO or COOCOC. Data collected circa 2008-2010 in John J.B. Allen lab at U Arizona. Subjects scored reliably high or low in Beck Depression Inventory. Some have been clinically interviewed. See .xls sheet. For some subjects (maybe all?), HEOG and VEOG may be mis-labeled as the other. Some files have had some channels interpolated already. There are no raw data to revert to instead… I have never even looked at the last rest run; no idea how it looks. 
First rest run was high quality though. The first 6 mins happened immediately after EEG hook-up. The second 6 minutes came after task performance (about 1 hour later). 516 has no rest2. 544 was unused in all analyses due to unstable BDI between mass assessment and lab assessment (1-4 months). - James F Cavanagh 01/18/2021 ## Dataset Information | Dataset ID | `DS003478` | |----------------|----------------| | Title | EEG: Depression rest | | Author (year) | `Cavanagh2021_Depression` | | Canonical | — | | Importable as | `DS003478`, `Cavanagh2021_Depression` | | Year | 2021 | | Authors | James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003478.v1.1.0](https://doi.org/10.18112/openneuro.ds003478.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003478) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) | [Source URL](https://openneuro.org/datasets/ds003478) | ### Copy-paste BibTeX ```bibtex @dataset{ds003478, title = {EEG: Depression rest}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003478.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003478.v1.1.0}, } ``` ## Technical Details - Subjects: 122 - Recordings: 243 - Tasks: 1 - Channels: 66 (133), 67 (110) - Sampling rate (Hz): 500.0 - Duration (hours): 23.35965722222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.6 GB - File count: 243 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003478.v1.1.0 - Source: openneuro - OpenNeuro: [ds003478](https://openneuro.org/datasets/ds003478) - NeMAR: [ds003478](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) ## API Reference Use the `DS003478` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003478(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Depression rest * **Study:** `ds003478` (OpenNeuro) * **Author (year):** `Cavanagh2021_Depression` * **Canonical:** — Also importable as: `DS003478`, `Cavanagh2021_Depression`. Modality: `eeg`. Subjects: 122; recordings: 243; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
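The merge of the fixed dataset filter with a user-supplied `query` described in the notes above can be sketched as a plain dictionary operation. The `merge_query` helper below is illustrative only, not the actual `EEGDashDataset` implementation:

```python
def merge_query(dataset_id, user_query=None):
    """Combine the fixed dataset filter with optional user filters.

    Mirrors the documented constraint that the user query must not
    contain the key "dataset".
    """
    user_query = user_query or {}
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # The dataset filter is ANDed with the user filters by sharing
    # one top-level MongoDB-style document.
    return {"dataset": dataset_id, **user_query}


merged = merge_query("ds003478", {"subject": {"$in": ["01", "02"]}})
# {'dataset': 'ds003478', 'subject': {'$in': ['01', '02']}}
```

This is why passing `dataset` inside `query` is rejected: the key is reserved for the class's own filter.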
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003478](https://openneuro.org/datasets/ds003478) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003478](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) DOI: [https://doi.org/10.18112/openneuro.ds003478.v1.1.0](https://doi.org/10.18112/openneuro.ds003478.v1.1.0) NEMAR citation count: 22 ### Examples ```pycon >>> from eegdash.dataset import DS003478 >>> dataset = DS003478(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003478) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003483: meg dataset, 21 subjects *Logical reasoning study* Access recordings and metadata through EEGDash. **Citation:** Cognitive and Computational Neuroscience Laboratory (UPM - UCM)., PI: Fernando Maestu., PI: Carmen Requena, PI: Francisco Salto Alemany (2021). *Logical reasoning study*. 
[10.18112/openneuro.ds003483.v1.0.2](https://doi.org/10.18112/openneuro.ds003483.v1.0.2) Modality: meg Subjects: 21 Recordings: 41 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003483 dataset = DS003483(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003483(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003483( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003483, title = {Logical reasoning study}, author = {Cognitive and Computational Neuroscience Laboratory (UPM - UCM). and PI: Fernando Maestu. and PI: Carmen Requena and PI: Francisco Salto Alemany}, doi = {10.18112/openneuro.ds003483.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003483.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS003483` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Logical reasoning study | | Author (year) | `Cognitive2021` | | Canonical | `Maestu2021` | | Importable as | `DS003483`, `Cognitive2021`, `Maestu2021` | | Year | 2021 | | Authors | Cognitive and Computational Neuroscience Laboratory (UPM - UCM)., PI: Fernando Maestu., PI: Carmen Requena, PI: Francisco Salto Alemany | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003483.v1.0.2](https://doi.org/10.18112/openneuro.ds003483.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003483) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) | [Source URL](https://openneuro.org/datasets/ds003483) | ### Copy-paste BibTeX ```bibtex @dataset{ds003483, title = {Logical reasoning study}, author = {Cognitive and Computational Neuroscience Laboratory (UPM - UCM). and PI: Fernando Maestu. and PI: Carmen Requena and PI: Francisco Salto Alemany}, doi = {10.18112/openneuro.ds003483.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003483.v1.0.2}, } ``` ## Technical Details - Subjects: 21 - Recordings: 41 - Tasks: 2 - Channels: 320 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.018888888888888 - Pathology: Healthy - Modality: — - Type: Decision-making - Size on disk: 24.5 GB - File count: 41 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003483.v1.0.2 - Source: openneuro - OpenNeuro: [ds003483](https://openneuro.org/datasets/ds003483) - NeMAR: [ds003483](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) ## API Reference Use the `DS003483` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Logical reasoning study * **Study:** `ds003483` (OpenNeuro) * **Author (year):** `Cognitive2021` * **Canonical:** `Maestu2021` Also importable as: `DS003483`, `Cognitive2021`, `Maestu2021`. Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 41; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003483](https://openneuro.org/datasets/ds003483) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003483](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) DOI: [https://doi.org/10.18112/openneuro.ds003483.v1.0.2](https://doi.org/10.18112/openneuro.ds003483.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003483 >>> dataset = DS003483(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003483) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003490: eeg dataset, 50 subjects *EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) (2021). *EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s*. 
[10.18112/openneuro.ds003490.v1.1.0](https://doi.org/10.18112/openneuro.ds003490.v1.1.0) Modality: eeg Subjects: 50 Recordings: 75 License: CC0 Source: openneuro Citations: 13.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003490 dataset = DS003490(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003490(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003490( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003490, title = {EEG: 3-Stim Auditory Oddball and Rest in Parkinson's}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003490.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003490.v1.1.0}, } ``` ## About This Dataset Rest and 3-stimulus auditory oddball data with 25 Parkinson patients and 25 matched controls. Some more subjects are included in the .xls sheet that don’t have EEG data in this task. C’est la vie. PD came in twice separated by a week, either ON or OFF medication. CTL only came in once. Task included in the Matlab programming language, with instructions for eyes open & eyes closed. Triggers included for instructed one-minute spans for open or closed (OC or CO) before the task. Data collected circa 2015 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. Subjects also had an accelerometer taped to their most tremor-affected hand. X, Y, Z dimensions recorded throughout. Check the .xls sheet under the code folder for more metadata. Also code to re-create the paper. 
- James F Cavanagh 01/18/2021 ## Dataset Information | Dataset ID | `DS003490` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s | | Author (year) | `Cavanagh2021_3` | | Canonical | — | | Importable as | `DS003490`, `Cavanagh2021_3` | | Year | 2021 | | Authors | James F Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu) | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003490.v1.1.0](https://doi.org/10.18112/openneuro.ds003490.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003490) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) | [Source URL](https://openneuro.org/datasets/ds003490) | ### Copy-paste BibTeX ```bibtex @dataset{ds003490, title = {EEG: 3-Stim Auditory Oddball and Rest in Parkinson's}, author = {James F Cavanagh jcavanagh@unm.edu}, doi = {10.18112/openneuro.ds003490.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003490.v1.1.0}, } ``` ## Technical Details - Subjects: 50 - Recordings: 75 - Tasks: 1 - Channels: 67 - Sampling rate (Hz): 500.0 - Duration (hours): 12.759916666666664 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.8 GB - File count: 75 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003490.v1.1.0 - Source: openneuro - OpenNeuro: [ds003490](https://openneuro.org/datasets/ds003490) - NeMAR: [ds003490](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) ## API Reference Use the `DS003490` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003490(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s * **Study:** `ds003490` (OpenNeuro) * **Author (year):** `Cavanagh2021_3` * **Canonical:** — Also importable as: `DS003490`, `Cavanagh2021_3`. Modality: `eeg`. Subjects: 50; recordings: 75; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
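Because PD participants came in twice (ON and OFF medication) while controls came once, it can be useful to split recordings by session using the recording-level metadata exposed via `dataset.description`. The sketch below uses a toy stand-in table; the actual column names and session labels in this dataset are assumptions for illustration:

```python
import pandas as pd

# Toy stand-in for dataset.description; real column names and
# session labels may differ (assumed here for illustration).
desc = pd.DataFrame({
    "subject": ["pd01", "pd01", "ctl01"],
    "session": ["on", "off", "01"],
})

# Indices of recordings from the ON-medication visit
on_idx = desc.index[desc["session"] == "on"].tolist()
print(on_idx)  # [0]
```

The resulting indices can then be used to select the corresponding recordings from the dataset.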
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003490](https://openneuro.org/datasets/ds003490) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003490](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) DOI: [https://doi.org/10.18112/openneuro.ds003490.v1.1.0](https://doi.org/10.18112/openneuro.ds003490.v1.1.0) NEMAR citation count: 13 ### Examples ```pycon >>> from eegdash.dataset import DS003490 >>> dataset = DS003490(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003490) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003498: ieeg dataset, 20 subjects *interictal iEEG during slow-wave sleep with HFO markings* Access recordings and metadata through EEGDash. **Citation:** Fedele T, Krayenbühl N, Hilfiker P, Adam Li, Sarnthein J. (2021). *interictal iEEG during slow-wave sleep with HFO markings*. 
[10.18112/openneuro.ds003498.v1.0.1](https://doi.org/10.18112/openneuro.ds003498.v1.0.1) Modality: ieeg Subjects: 20 Recordings: 385 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003498 dataset = DS003498(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003498(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003498( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003498, title = {interictal iEEG during slow-wave sleep with HFO markings}, author = {Fedele T and Krayenbühl N and Hilfiker P and Adam Li and Sarnthein J.}, doi = {10.18112/openneuro.ds003498.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003498.v1.0.1}, } ``` ## About This Dataset **Zurich iEEG HFO Dataset** This dataset was obtained from the publication [1]. There are 20 subjects with HFO events. We converted the dataset into BIDS format. The original uploader: adam2392 obtained explicit permission from the authors of the dataset to upload this to openneuro. Adam worked on an open-source Python implementation of HFO detection algorithms, and uses this dataset in validation. Even though the publication involves a `Morphology` HFO detector, we have implemented our interpretation of the RMS, LineLength and Hilbert detectors in the [mne-hfo repository] ([https://github.com/adam2392/mne-hfo](https://github.com/adam2392/mne-hfo)) [2].For more information, visit: [https://github.com/adam2392/mne-hfo](https://github.com/adam2392/mne-hfo). 
**Note from the paper** “We excluded all electrode contacts where electrical stimulation evoked motor or language responses (Table S1). In TLE patients, we included only the 3 most mesial bipolar channels.” **BIDS Conversion** MNE-BIDS was used to convert the dataset into BIDS format. The code inside `code/` was used to generate the data. **HFO Events From Original Paper** The HFO events from the original paper that were validated and detected are stored in the `*events.tsv` file per dataset run. The format is similar to `mne-hfo` and can be easily read in using `mne-bids` and/or `mne-python`. Each row in the events.tsv file corresponds to an HFO detected in the original source dataset. The `trial_type` column stores information pertaining to the type of HFO (e.g. `ripple`, `fr` for fast ripple, or `frandr` for fast ripple and ripple). The channel name (possibly in bipolar reference) is `"-"` character delimited and appended to the type of HFO with a `"_"` separating. For example: `_` is the form. 
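Following the `trial_type` convention described above (the HFO type, then a `_`, then a `-`-delimited channel name), the labels can be split with a small parser. The sample labels below are hypothetical, not drawn from the dataset:

```python
def parse_trial_type(label):
    """Split an events.tsv trial_type label into (hfo_type, channels).

    Expected form: "<hfo_type>_<ch1>-<ch2>", where hfo_type is one of
    "ripple", "fr" (fast ripple), or "frandr" (both).
    """
    # Split on the first "_" only, in case channel names contain one.
    hfo_type, _, channels = label.partition("_")
    return hfo_type, channels.split("-")


print(parse_trial_type("fr_AHL1-AHL2"))  # ('fr', ['AHL1', 'AHL2'])
```

A parser like this can be applied to the `trial_type` column after loading the events table with `mne-bids` or pandas.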
**Reference Dataset** The following website was where the original data was downloaded. [http://crcns.org/data-sets/methods/ieeg-1](http://crcns.org/data-sets/methods/ieeg-1) **References** [1] Fedele T, Burnos S, Boran E, Krayenbühl N, Hilfiker P, Grunwald T, Sarnthein J. Resection of high frequency oscillations predicts seizure outcome in the individual patient. Scientific Reports. 2017;7(1):13836. [https://www.nature.com/articles/s41598-017-13064-1](https://www.nature.com/articles/s41598-017-13064-1) doi:10.1038/s41598-017-13064-1 [2] Dataset meta analysis with mne-hfo. 10.5281/zenodo.4485036 [3] Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896 [4] Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. https://doi.org/10.1038/s41597-019-0105-7 ## Dataset Information | Dataset ID | `DS003498` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | interictal iEEG during slow-wave sleep with HFO markings | | Author (year) | `Fedele2021` | | Canonical | — | | Importable as | `DS003498`, `Fedele2021` | | Year | 2021 | | Authors | Fedele T, Krayenbühl N, Hilfiker P, Adam Li, Sarnthein J. 
| | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003498.v1.0.1](https://doi.org/10.18112/openneuro.ds003498.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003498) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) | [Source URL](https://openneuro.org/datasets/ds003498) | ### Copy-paste BibTeX ```bibtex @dataset{ds003498, title = {interictal iEEG during slow-wave sleep with HFO markings}, author = {Fedele T and Krayenbühl N and Hilfiker P and Adam Li and Sarnthein J.}, doi = {10.18112/openneuro.ds003498.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003498.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 385 - Tasks: — - Channels: 64 (146), 40 (73), 42 (43), 74 (35), 16 (29), 50 (28), 52 (13), 48 (13), 30 (5) - Sampling rate (Hz): 2000.0 - Duration (hours): 32.344113194444446 - Pathology: Epilepsy - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 44.7 GB - File count: 385 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003498.v1.0.1 - Source: openneuro - OpenNeuro: [ds003498](https://openneuro.org/datasets/ds003498) - NeMAR: [ds003498](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) ## API Reference Use the `DS003498` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003498(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) interictal iEEG during slow-wave sleep with HFO markings * **Study:** `ds003498` (OpenNeuro) * **Author (year):** `Fedele2021` * **Canonical:** — Also importable as: `DS003498`, `Fedele2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 20; recordings: 385; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003498](https://openneuro.org/datasets/ds003498) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003498](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) DOI: [https://doi.org/10.18112/openneuro.ds003498.v1.0.1](https://doi.org/10.18112/openneuro.ds003498.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003498 >>> dataset = DS003498(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003498) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003688](eegdash.dataset.DS003688.md) # DS003505: eeg dataset, 19 subjects *VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes* Access recordings and metadata through EEGDash. **Citation:** David Pascucci, Sebastien Tourbier, Joan Rue-Queralt, Margherita Carboni, Patric Hagmann, Gijs Plomp (2021). *VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes*. [10.18112/openneuro.ds003505.v1.1.1](https://doi.org/10.18112/openneuro.ds003505.v1.1.1) Modality: eeg Subjects: 19 Recordings: 37 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003505 dataset = DS003505(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003505(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003505( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003505, title = {VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes}, author = {David Pascucci and Sebastien Tourbier and Joan Rue-Queralt and Margherita Carboni and Patric Hagmann and Gijs Plomp}, doi = {10.18112/openneuro.ds003505.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003505.v1.1.1}, } ``` ## About This Dataset **VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes** **Overview** The multimodal dataset VEPCON follows the BIDS standard and provides raw data of high-density EEG, structural MRI and diffusion weighted images (DWI) recorded in 20 participants. Visual evoked potentials were recorded while participants discriminated briefly presented faces from scrambled faces (`task-faces`), or coherently moving stimuli from incoherent ones (`task-motion`). Note that raw EEG data for `sub-05` (for both `task-faces` and `task-motion`) and for `sub-15` (for `task-motion`) were discarded because of excessive motion. MRI and DWI were recorded in a separate session from the same participants. VEPCON also contains data derivatives that follow as close as possible the BIDS derivatives specifications. It includes in particular: pre-processed EEG of single trials in each condition, behavioral measures, structural MRIs, Freesurfer `7.1.1` outputs of defaced MRIs, individual brain parcellations at 5 spatial resolutions (83 to 1015 regions), and corresponding structural connectomes based on fiber count, fiber density, average fractional anisotropy and mean diffusivity maps. In addition, Freesurfer’s outputs include a `bem/` folder that contains all files generated by MNE to describe the Boundary Element Model (BEM) based on Freesurfer’s surfaces estimated from the original undefaced structural MRIs. 
Finally, VEPCON also provides EEG inverse solutions for source imaging based on individual anatomy, and Python and Matlab code for deriving time-series of activity in each brain region, at each parcellation level. We believe this dataset can contribute to multimodal methods development, studying structure-function relations, as well as unimodal optimization of source imaging and graph analysis, among many other possibilities. All code supporting the dataset can be found in the `code/` folder. ## Dataset Information | Dataset ID | `DS003505` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes | | Author (year) | `Pascucci2021` | | Canonical | `VEPCON` | | Importable as | `DS003505`, `Pascucci2021`, `VEPCON` | | Year | 2021 | | Authors | David Pascucci, Sebastien Tourbier, Joan Rue-Queralt, Margherita Carboni, Patric Hagmann, Gijs Plomp | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003505.v1.1.1](https://doi.org/10.18112/openneuro.ds003505.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003505) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) | [Source URL](https://openneuro.org/datasets/ds003505) | ### Copy-paste BibTeX ```bibtex @dataset{ds003505, title = {VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes}, author = {David Pascucci and Sebastien Tourbier and Joan Rue-Queralt and Margherita Carboni and Patric Hagmann and Gijs Plomp}, doi = {10.18112/openneuro.ds003505.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003505.v1.1.1}, } ``` ## Technical Details - Subjects: 19 - Recordings: 37 - Tasks: 2 - Channels: 128 - Sampling rate (Hz): 2048.0 - Duration (hours): 
Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 29.0 GB - File count: 37 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003505.v1.1.1 - Source: openneuro - OpenNeuro: [ds003505](https://openneuro.org/datasets/ds003505) - NeMAR: [ds003505](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) ## API Reference Use the `DS003505` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes * **Study:** `ds003505` (OpenNeuro) * **Author (year):** `Pascucci2021` * **Canonical:** `VEPCON` Also importable as: `DS003505`, `Pascucci2021`, `VEPCON`. Modality: `eeg`. Subjects: 19; recordings: 37; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003505](https://openneuro.org/datasets/ds003505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003505](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) DOI: [https://doi.org/10.18112/openneuro.ds003505.v1.1.1](https://doi.org/10.18112/openneuro.ds003505.v1.1.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003505 >>> dataset = DS003505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003505) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003506: eeg dataset, 56 subjects *EEG: Reinforcement Learning in Parkinson’s* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Darin Brown (2021). *EEG: Reinforcement Learning in Parkinson’s*. 
[10.18112/openneuro.ds003506.v1.1.0](https://doi.org/10.18112/openneuro.ds003506.v1.1.0) Modality: eeg Subjects: 56 Recordings: 84 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003506 dataset = DS003506(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003506(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003506( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003506, title = {EEG: Reinforcement Learning in Parkinson's}, author = {James F Cavanagh and Darin Brown}, doi = {10.18112/openneuro.ds003506.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003506.v1.1.0}, } ``` ## About This Dataset Reinforcement learning task with 28 Parkinson's patients and 28 matched controls. Task with volitional and instructed choices. Task adapted from here: [https://doi.org/10.1016/j.neuron.2014.06.035](https://doi.org/10.1016/j.neuron.2014.06.035). Behavioral data first published here: 10.1016/j.cortex.2017.02.021. EEG published here: 10.1016/j.brainres.2019.146541. PD patients came in twice, separated by a week, either ON or OFF medication; controls (CTL) came in only once. Task code is included, in the Matlab programming language. Data collected circa 2015 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. Subjects also had an accelerometer taped to their most tremor-affected hand, with X, Y, Z dimensions recorded throughout. Check the .xls sheet under the code folder for more metadata.
Some Matlab analytic scripts are included, but I didn't verify that these are complete. Behavioral files from the task are also included; these contain more trial-specific information than the triggers. - James F Cavanagh 02/05/2021 ## Dataset Information | Dataset ID | `DS003506` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Reinforcement Learning in Parkinson’s | | Author (year) | `Cavanagh2021_Reinforcement` | | Canonical | — | | Importable as | `DS003506`, `Cavanagh2021_Reinforcement` | | Year | 2021 | | Authors | James F Cavanagh, Darin Brown | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003506.v1.1.0](https://doi.org/10.18112/openneuro.ds003506.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003506) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) | [Source URL](https://openneuro.org/datasets/ds003506) | ### Copy-paste BibTeX ```bibtex @dataset{ds003506, title = {EEG: Reinforcement Learning in Parkinson's}, author = {James F Cavanagh and Darin Brown}, doi = {10.18112/openneuro.ds003506.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003506.v1.1.0}, } ``` ## Technical Details - Subjects: 56 - Recordings: 84 - Tasks: 1 - Channels: 67 - Sampling rate (Hz): 500.0 - Duration (hours): 35.38127777777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 16.2 GB - File count: 84 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003506.v1.1.0 - Source: openneuro - OpenNeuro: [ds003506](https://openneuro.org/datasets/ds003506) - NeMAR: [ds003506](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) ## API Reference Use the `DS003506` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Reinforcement Learning in Parkinson’s * **Study:** `ds003506` (OpenNeuro) * **Author (year):** `Cavanagh2021_Reinforcement` * **Canonical:** — Also importable as: `DS003506`, `Cavanagh2021_Reinforcement`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
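The query-merge behavior described in these notes can be sketched in plain Python. This is an illustration only: `merge_query` below is a hypothetical helper, not an EEGDash function, and the library's actual merge logic (including validation against `ALLOWED_QUERY_FIELDS`) lives inside `EEGDashDataset`.

```python
# Illustration of how a user-supplied MongoDB-style query is AND-ed
# with the fixed dataset filter. Hypothetical helper for exposition;
# the real implementation is internal to EEGDashDataset.

def merge_query(dataset_id, user_query):
    """Combine the dataset filter with extra MongoDB-style filters."""
    if user_query and "dataset" in user_query:
        # The docs require that `query` not contain the key `dataset`.
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

merged = merge_query("ds003506", {"subject": {"$in": ["01", "02"]}})
print(merged)
# {'dataset': 'ds003506', 'subject': {'$in': ['01', '02']}}
```

Every extra filter thus narrows the selection within the dataset; it can never widen it to other datasets.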
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003506](https://openneuro.org/datasets/ds003506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003506](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) DOI: [https://doi.org/10.18112/openneuro.ds003506.v1.1.0](https://doi.org/10.18112/openneuro.ds003506.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003506 >>> dataset = DS003506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003506) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003509: eeg dataset, 56 subjects *EEG: Simon Conflict in Parkinson’s* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Arun Singh, Kumar Narayanan (2021). *EEG: Simon Conflict in Parkinson’s*. 
[10.18112/openneuro.ds003509.v1.1.0](https://doi.org/10.18112/openneuro.ds003509.v1.1.0) Modality: eeg Subjects: 56 Recordings: 84 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003509 dataset = DS003509(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003509(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003509( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003509, title = {EEG: Simon Conflict in Parkinson's}, author = {James F Cavanagh and Arun Singh and Kumar Narayanan}, doi = {10.18112/openneuro.ds003509.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003509.v1.1.0}, } ``` ## About This Dataset Simon conflict task with cost-of-conflict reinforcement manipulation. 28 Parkinson's patients and 28 matched controls. Task adapted from here: 10.1038/ncomms6394. Behavioral data first published here: 10.1016/j.cortex.2017.02.021. EEG published here: 10.1016/j.neuropsychologia.2018.05.020. PD patients came in twice, separated by a week, either ON or OFF medication; controls (CTL) came in only once. Task code is included, in the Matlab programming language. Data collected circa 2015 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. Subjects also had an accelerometer taped to their most tremor-affected hand, with X, Y, Z dimensions recorded throughout. Check the .xls sheet under the code folder for more metadata. Triggers are complicated. See CC_Triggers.mat under the code folder. Many analysis scripts are included; no idea how well these hold up.
Many are old. - James F Cavanagh 02/08/2021 ## Dataset Information | Dataset ID | `DS003509` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Simon Conflict in Parkinson’s | | Author (year) | `Cavanagh2021_Simon` | | Canonical | — | | Importable as | `DS003509`, `Cavanagh2021_Simon` | | Year | 2021 | | Authors | James F Cavanagh, Arun Singh, Kumar Narayanan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003509.v1.1.0](https://doi.org/10.18112/openneuro.ds003509.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003509) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) | [Source URL](https://openneuro.org/datasets/ds003509) | ### Copy-paste BibTeX ```bibtex @dataset{ds003509, title = {EEG: Simon Conflict in Parkinson's}, author = {James F Cavanagh and Arun Singh and Kumar Narayanan}, doi = {10.18112/openneuro.ds003509.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003509.v1.1.0}, } ``` ## Technical Details - Subjects: 56 - Recordings: 84 - Tasks: 1 - Channels: 67 - Sampling rate (Hz): 500.0 - Duration (hours): 48.53486111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 22.3 GB - File count: 84 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003509.v1.1.0 - Source: openneuro - OpenNeuro: [ds003509](https://openneuro.org/datasets/ds003509) - NeMAR: [ds003509](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) ## API Reference Use the `DS003509` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict in Parkinson’s * **Study:** `ds003509` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon` * **Canonical:** — Also importable as: `DS003509`, `Cavanagh2021_Simon`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003509](https://openneuro.org/datasets/ds003509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003509](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) DOI: [https://doi.org/10.18112/openneuro.ds003509.v1.1.0](https://doi.org/10.18112/openneuro.ds003509.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003509 >>> dataset = DS003509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003509) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003516: eeg dataset, 25 subjects *EEG: Attended Speaker Paradigm (Own Name in Ignored Stream)* Access recordings and metadata through EEGDash. **Citation:** Bjoern Holtze, Manuela Jaeger, Stefan Debener, Kamil Adiloglu, Bojana Mirkovic (2021). *EEG: Attended Speaker Paradigm (Own Name in Ignored Stream)*. 
[10.18112/openneuro.ds003516.v1.1.1](https://doi.org/10.18112/openneuro.ds003516.v1.1.1) Modality: eeg Subjects: 25 Recordings: 25 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003516 dataset = DS003516(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003516(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003516( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003516, title = {EEG: Attended Speaker Paradigm (Own Name in Ignored Stream)}, author = {Bjoern Holtze and Manuela Jaeger and Stefan Debener and Kamil Adiloglu and Bojana Mirkovic}, doi = {10.18112/openneuro.ds003516.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003516.v1.1.1}, } ``` ## About This Dataset In this experiment, 25 participants performed a two-competing-speaker paradigm. Participants were instructed to attend to either the left or the right audio book. The paradigm consisted of five 10-minute blocks of audio book presentation. In each 10-minute block the participant's own name was presented 10 times, embedded within the to-be-ignored audio book. A 10-minute block could be presented either in the omnidirectional condition (both audio books were presented equally loud) or in the beamforming condition (the to-be-attended audio book was louder than the to-be-ignored audio book).
The first 10-minute block was always presented in the omnidirectional condition, whereas the conditions alternated over the remaining four blocks, with one half of the participants starting with the omnidirectional condition and the other half starting with the beamforming condition. The article ([https://doi.org/10.3389/fnins.2021.643705](https://doi.org/10.3389/fnins.2021.643705)) contains all methodological details. - Björn Holtze (January 2021) ## Dataset Information | Dataset ID | `DS003516` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Attended Speaker Paradigm (Own Name in Ignored Stream) | | Author (year) | `Holtze2021` | | Canonical | — | | Importable as | `DS003516`, `Holtze2021` | | Year | 2021 | | Authors | Bjoern Holtze, Manuela Jaeger, Stefan Debener, Kamil Adiloglu, Bojana Mirkovic | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003516.v1.1.1](https://doi.org/10.18112/openneuro.ds003516.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003516) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) | [Source URL](https://openneuro.org/datasets/ds003516) | ### Copy-paste BibTeX ```bibtex @dataset{ds003516, title = {EEG: Attended Speaker Paradigm (Own Name in Ignored Stream)}, author = {Bjoern Holtze and Manuela Jaeger and Stefan Debener and Kamil Adiloglu and Bojana Mirkovic}, doi = {10.18112/openneuro.ds003516.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003516.v1.1.1}, } ``` ## Technical Details - Subjects: 25 - Recordings: 25 - Tasks: 1 - Channels: 49 - Sampling rate (Hz): 500.0 - Duration (hours): 22.56954166666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.6 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003516.v1.1.1 - Source: openneuro - OpenNeuro:
[ds003516](https://openneuro.org/datasets/ds003516) - NeMAR: [ds003516](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) ## API Reference Use the `DS003516` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Attended Speaker Paradigm (Own Name in Ignored Stream) * **Study:** `ds003516` (OpenNeuro) * **Author (year):** `Holtze2021` * **Canonical:** — Also importable as: `DS003516`, `Holtze2021`. Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003516](https://openneuro.org/datasets/ds003516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003516](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) DOI: [https://doi.org/10.18112/openneuro.ds003516.v1.1.1](https://doi.org/10.18112/openneuro.ds003516.v1.1.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003516 >>> dataset = DS003516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003516) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003517: eeg dataset, 17 subjects *EEG: Continuous gameplay of an 8-bit style video game* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Joel Castellanos (2021). *EEG: Continuous gameplay of an 8-bit style video game*. 
[10.18112/openneuro.ds003517.v1.1.0](https://doi.org/10.18112/openneuro.ds003517.v1.1.0) Modality: eeg Subjects: 17 Recordings: 34 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003517 dataset = DS003517(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003517(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003517( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003517, title = {EEG: Continuous gameplay of an 8-bit style video game}, author = {James F Cavanagh and Joel Castellanos}, doi = {10.18112/openneuro.ds003517.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003517.v1.1.0}, } ``` ## About This Dataset EEG during continuous gameplay of an 8-bit style video game. EEG published here: 10.1016/j.neuroimage.2016.02.075. N=17 participants. In addition to the video game, participants first completed a 2-stim visual oddball and a 2-doors gambling task. Task code is included, in the Java programming language. It's pretty fun… Each task sends triggers to the EEG file, and also outputs continuous data in a .csv log file. For the Escape from Asteroid Axon video game this includes a wealth of info: movement, player position and action, antagonist position, loot boxes, etc. Data collected circa 2015 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. Some analytic scripts are included, but I can't verify that these were what I used in the final analysis. Some (ExAAx_Log.m) are clearly pilot analyses.
Your best bet would be to play the game and record some triggers and examine how those line up with the .csv log, etc. - James F Cavanagh 02/10/2021 ## Dataset Information | Dataset ID | `DS003517` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Continuous gameplay of an 8-bit style video game | | Author (year) | `Cavanagh2021_Continuous` | | Canonical | — | | Importable as | `DS003517`, `Cavanagh2021_Continuous` | | Year | 2021 | | Authors | James F Cavanagh, Joel Castellanos | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003517.v1.1.0](https://doi.org/10.18112/openneuro.ds003517.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003517) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) | [Source URL](https://openneuro.org/datasets/ds003517) | ### Copy-paste BibTeX ```bibtex @dataset{ds003517, title = {EEG: Continuous gameplay of an 8-bit style video game}, author = {James F Cavanagh and Joel Castellanos}, doi = {10.18112/openneuro.ds003517.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003517.v1.1.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 34 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 500.0 - Duration (hours): 13.00626388888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.8 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003517.v1.1.0 - Source: openneuro - OpenNeuro: [ds003517](https://openneuro.org/datasets/ds003517) - NeMAR: [ds003517](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) ## API Reference Use the `DS003517` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Continuous gameplay of an 8-bit style video game * **Study:** `ds003517` (OpenNeuro) * **Author (year):** `Cavanagh2021_Continuous` * **Canonical:** — Also importable as: `DS003517`, `Cavanagh2021_Continuous`. Modality: `eeg`. Subjects: 17; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003517](https://openneuro.org/datasets/ds003517) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003517](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) DOI: [https://doi.org/10.18112/openneuro.ds003517.v1.1.0](https://doi.org/10.18112/openneuro.ds003517.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003517 >>> dataset = DS003517(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003517) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003518: eeg dataset, 110 subjects *EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Michael J Frank (2021). *EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge*. 
[10.18112/openneuro.ds003518.v1.1.0](https://doi.org/10.18112/openneuro.ds003518.v1.1.0) Modality: eeg Subjects: 110 Recordings: 137 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003518 dataset = DS003518(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003518(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003518( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003518, title = {EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank}, doi = {10.18112/openneuro.ds003518.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003518.v1.1.0}, } ``` ## About This Dataset Simon conflict task with cost-of-conflict reinforcement manipulation. Study 1: 80 healthy participants (2 removed) + 5 placebo sessions from a pilot of the drug study. Total n=83. Study 2: 30 healthy participants (3 dropouts) in a double-blind drug study. Total n=27. Drug was Cabergoline 1.25 mg. Study 1 subjects had IDs 101-180 and the 5 placebo were 301/401 - 305/405. Study 2 subjects had IDs 305/405 - 330/430. The dual numbers were for session: 300s were first session, 400s were second session. Here we have simply put them in as session 1 and session 2. So Joe Smith would have been 305 on visit 1, then 405 on visit 2. If he got cabergoline first, we indicated that in the Sess1_Drug column. EEG published here: 10.1038/ncomms6394. Task code is included, in the Matlab programming language.
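The dual-numbered ID convention above can be decoded mechanically. A minimal sketch, assuming the ID ranges quoted above; `decode_drug_study_id` is a hypothetical helper for illustration, not an EEGDash or dataset function:

```python
def decode_drug_study_id(raw_id: int) -> tuple[int, int]:
    """Map a dual-numbered ID to (participant ID, session number).

    Hypothetical helper. Per the dataset notes, IDs in the 300s denote
    the first visit and IDs in the 400s the second visit of the same
    participant (e.g. 305 and 405 are one person).
    """
    if 301 <= raw_id <= 330:
        return raw_id, 1
    if 401 <= raw_id <= 430:
        return raw_id - 100, 2  # 405 -> participant 305, session 2
    raise ValueError(f"not a dual-numbered drug-study ID: {raw_id}")

print(decode_drug_study_id(405))  # (305, 2)
```

Study 1's main cohort (IDs 101-180) had a single session and falls outside this convention, hence the `ValueError` for those IDs.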
Data collected circa 2012-2013 in Laboratory for Neural Computation & Cognition at Brown. Check the .xls sheet under code folder for more meta data. Triggers are complicated. See CC_Triggers.mat under code folder. A few old analysis scripts are included. - James F Cavanagh 02/15/2021 ## Dataset Information | Dataset ID | `DS003518` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge | | Author (year) | `Cavanagh2021_Simon_Conflict` | | Canonical | — | | Importable as | `DS003518`, `Cavanagh2021_Simon_Conflict` | | Year | 2021 | | Authors | James F Cavanagh, Michael J Frank | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003518.v1.1.0](https://doi.org/10.18112/openneuro.ds003518.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003518) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) | [Source URL](https://openneuro.org/datasets/ds003518) | ### Copy-paste BibTeX ```bibtex @dataset{ds003518, title = {EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank}, doi = {10.18112/openneuro.ds003518.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003518.v1.1.0}, } ``` ## Technical Details - Subjects: 110 - Recordings: 137 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 89.88768666666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 39.5 GB - File count: 137 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003518.v1.1.0 - Source: openneuro - OpenNeuro: [ds003518](https://openneuro.org/datasets/ds003518) - NeMAR: [ds003518](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) ## API Reference Use the `DS003518` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003518(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge * **Study:** `ds003518` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon_Conflict` * **Canonical:** — Also importable as: `DS003518`, `Cavanagh2021_Simon_Conflict`. Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
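As a rough illustration of how such a MongoDB-style filter composes with the fixed dataset selector, the sketch below merges a user query with the `dataset` key and evaluates a `$in` clause in plain Python. This is illustrative only, not EEGDash internals; the library resolves queries against its metadata database.

```python
# Illustrative sketch of MongoDB-style query semantics (not EEGDash internals).

def merge_query(dataset_id, user_query):
    """AND a user query with the fixed dataset filter.

    Mirrors the documented rule that `query` must not contain the
    key `dataset`, which is reserved for the dataset selection.
    """
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

def matches(record, query):
    """Check a record against a flat query supporting equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Toy metadata records standing in for the server-side collection.
records = [
    {"dataset": "ds003518", "subject": "01"},
    {"dataset": "ds003518", "subject": "03"},
]
q = merge_query("ds003518", {"subject": {"$in": ["01", "02"]}})
print([r["subject"] for r in records if matches(r, q)])  # ['01']
```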
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003518](https://openneuro.org/datasets/ds003518) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003518](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) DOI: [https://doi.org/10.18112/openneuro.ds003518.v1.1.0](https://doi.org/10.18112/openneuro.ds003518.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003518 >>> dataset = DS003518(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003518) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003519: eeg dataset, 27 subjects *EEG: Visual Working Memory + Cabergoline Challenge* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Michael J Frank, James Broadway (2021). *EEG: Visual Working Memory + Cabergoline Challenge*. 
[10.18112/openneuro.ds003519.v1.1.0](https://doi.org/10.18112/openneuro.ds003519.v1.1.0) Modality: eeg Subjects: 27 Recordings: 54 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003519 dataset = DS003519(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003519(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003519( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003519, title = {EEG: Visual Working Memory + Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank and James Broadway}, doi = {10.18112/openneuro.ds003519.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003519.v1.1.0}, } ``` ## About This Dataset Visual Working Memory. Mostly unpublished! Beh data published here: 10.3758/s13415-018-0584-6. EEG data never published. Same sample as this published study: 10.1038/ncomms6394. 30 healthy participants (3 dropout) in a double-blind drug study. Total n=27. Drug was Cabergoline 1.25 mg. Subjects had IDs 305/405 - 330/430. The dual numbers were for session: 300s were first session, 400s were second session. Here we have simply put them in as session 1 and session 2. So Joe Smith would have been 305 on visit 1, then 405 on visit 2. If he got cab first we indicated that in the Sess1_Drug column. Task included in Matlab programming language. Data collected circa 2012-2013 in Laboratory for Neural Computation & Cognition at Brown. Check the .xls sheet under code folder for more meta data, incl. 
OSpan etc. - James F Cavanagh 02/15/2021 ## Dataset Information | Dataset ID | `DS003519` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Visual Working Memory + Cabergoline Challenge | | Author (year) | `Cavanagh2021_Visual` | | Canonical | — | | Importable as | `DS003519`, `Cavanagh2021_Visual` | | Year | 2021 | | Authors | James F Cavanagh, Michael J Frank, James Broadway | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003519.v1.1.0](https://doi.org/10.18112/openneuro.ds003519.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003519) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) | [Source URL](https://openneuro.org/datasets/ds003519) | ### Copy-paste BibTeX ```bibtex @dataset{ds003519, title = {EEG: Visual Working Memory + Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank and James Broadway}, doi = {10.18112/openneuro.ds003519.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003519.v1.1.0}, } ``` ## Technical Details - Subjects: 27 - Recordings: 54 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 20.50396111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.0 GB - File count: 54 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003519.v1.1.0 - Source: openneuro - OpenNeuro: [ds003519](https://openneuro.org/datasets/ds003519) - NeMAR: [ds003519](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) ## API Reference Use the `DS003519` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory + Cabergoline Challenge * **Study:** `ds003519` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual` * **Canonical:** — Also importable as: `DS003519`, `Cavanagh2021_Visual`. Modality: `eeg`. Subjects: 27; recordings: 54; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
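The dual-numbered IDs described in the dataset notes (e.g. 305 on visit 1, 405 on visit 2 for the same participant) can be normalized with a tiny helper. This is purely illustrative, since the BIDS release already encodes the visits as sessions 1 and 2:

```python
def normalize_id(original_id):
    """Map an original dual-numbered ID to (participant, session).

    Per the dataset notes: 300s were the first session, 400s were
    the second, and the last two digits identify the participant.
    """
    session = 1 if original_id < 400 else 2
    participant = original_id % 100
    return participant, session

print(normalize_id(305))  # (5, 1)
print(normalize_id(405))  # (5, 2)
```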
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003519](https://openneuro.org/datasets/ds003519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003519](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) DOI: [https://doi.org/10.18112/openneuro.ds003519.v1.1.0](https://doi.org/10.18112/openneuro.ds003519.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003519 >>> dataset = DS003519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003519) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003522: eeg dataset, 96 subjects *EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Davin Quinn (2021). *EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI*. 
[10.18112/openneuro.ds003522.v1.1.0](https://doi.org/10.18112/openneuro.ds003522.v1.1.0) Modality: eeg Subjects: 96 Recordings: 200 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003522 dataset = DS003522(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003522(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003522( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003522, title = {EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI}, author = {James F Cavanagh and Davin Quinn}, doi = {10.18112/openneuro.ds003522.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003522.v1.1.0}, } ``` ## About This Dataset 3 stimulus auditory oddball data in control, sub-acute mild TBI, and chronic TBI. Rest data is also included. 3AOB data published here: 10.1016/j.neuropsychologia.2019.107125. FYI, same task as this different dataset: [https://openneuro.org/datasets/ds003490/versions/1.1.0](https://openneuro.org/datasets/ds003490/versions/1.1.0). For CTL and sub-acute mTBI: Session 1 was from 3 to 14 days post-injury and was the only session with MRI. (MRI will be uploaded …later). Session 2 was ~2 months (1.5 to 3) and Session 3 was ~4 months (3 to 5) following Session 1. For Chronic TBI, there was only one session for this study. There was A LOT of subject attrition over timepoints. 
Same samples as reported here: [https://psycnet.apa.org/record/2020-66677-001](https://psycnet.apa.org/record/2020-66677-001) [https://pubmed.ncbi.nlm.nih.gov/31344589/](https://pubmed.ncbi.nlm.nih.gov/31344589/) [https://pubmed.ncbi.nlm.nih.gov/31368085/](https://pubmed.ncbi.nlm.nih.gov/31368085/) Task included in Matlab programming language. Data collected 2016-2018 in the Center for Brain Recovery and Repair at the UNM Health Sciences Center. Check the .xls sheet under code folder for *LOTS* more meta data. Analysis scripts are included to re-create the paper. - James F Cavanagh 02/17/2021 ## Dataset Information | Dataset ID | `DS003522` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI | | Author (year) | `Cavanagh2021_Three_Stim` | | Canonical | — | | Importable as | `DS003522`, `Cavanagh2021_Three_Stim` | | Year | 2021 | | Authors | James F Cavanagh, Davin Quinn | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003522.v1.1.0](https://doi.org/10.18112/openneuro.ds003522.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003522) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) | [Source URL](https://openneuro.org/datasets/ds003522) | ### Copy-paste BibTeX ```bibtex @dataset{ds003522, title = {EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI}, author = {James F Cavanagh and Davin Quinn}, doi = {10.18112/openneuro.ds003522.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003522.v1.1.0}, } ``` ## Technical Details - Subjects: 96 - Recordings: 200 - Tasks: 1 - Channels: 65 (192), 64 (8) - Sampling rate (Hz): 500.0 - Duration (hours): 57.07904611111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 25.4 GB - File count: 200 - Format: BIDS - License: 
CC0 - DOI: 10.18112/openneuro.ds003522.v1.1.0 - Source: openneuro - OpenNeuro: [ds003522](https://openneuro.org/datasets/ds003522) - NeMAR: [ds003522](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) ## API Reference Use the `DS003522` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI * **Study:** `ds003522` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three_Stim` * **Canonical:** — Also importable as: `DS003522`, `Cavanagh2021_Three_Stim`. Modality: `eeg`. Subjects: 96; recordings: 200; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
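With 200 recordings from 96 subjects (multiple sessions per subject), train/test splits for decoding should group by subject so no participant leaks across partitions. A minimal sketch in plain Python, with hypothetical record dicts standing in for EEGDash recordings:

```python
import random

def subject_wise_split(records, test_fraction=0.2, seed=0):
    """Split recordings so no subject appears in both partitions."""
    subjects = sorted({r["subject"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [r for r in records if r["subject"] not in test_subjects]
    test = [r for r in records if r["subject"] in test_subjects]
    return train, test

# Toy records: two sessions each for three subjects.
records = [
    {"subject": s, "session": ses}
    for s in ("01", "02", "03")
    for ses in (1, 2)
]
train, test = subject_wise_split(records)
# No subject overlap between the two partitions.
assert not {r["subject"] for r in train} & {r["subject"] for r in test}
```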
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003522](https://openneuro.org/datasets/ds003522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003522](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) DOI: [https://doi.org/10.18112/openneuro.ds003522.v1.1.0](https://doi.org/10.18112/openneuro.ds003522.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003522 >>> dataset = DS003522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003522) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003523: eeg dataset, 91 subjects *EEG: Visual Working Memory in Acute TBI* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh (2021). *EEG: Visual Working Memory in Acute TBI*. 
[10.18112/openneuro.ds003523.v1.1.0](https://doi.org/10.18112/openneuro.ds003523.v1.1.0) Modality: eeg Subjects: 91 Recordings: 221 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003523 dataset = DS003523(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003523(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003523( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003523, title = {EEG: Visual Working Memory in Acute TBI}, author = {James F Cavanagh}, doi = {10.18112/openneuro.ds003523.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003523.v1.1.0}, } ``` ## About This Dataset Visual working memory in control & sub-acute mild TBI. Mind wandering probes were inserted between trials. **DATA NEVER PUBLISHED!** If you're interested in working together, I have 1/3 of the paper already done, including CONSORT diagrams, tables, behavioral analysis, etc. All EEG data are even fully cleaned and pre-processed. For CTL and sub-acute mTBI: Session 1 was from 3 to 14 days post-injury and was the only session with MRI. (MRI will be uploaded …later). Session 2 was ~2 months (1.5 to 3) and Session 3 was ~4 months (3 to 5) following Session 1. There was A LOT of subject attrition over timepoints.
Same samples as reported here: [https://psycnet.apa.org/record/2020-66677-001](https://psycnet.apa.org/record/2020-66677-001) [https://pubmed.ncbi.nlm.nih.gov/31344589/](https://pubmed.ncbi.nlm.nih.gov/31344589/) [https://pubmed.ncbi.nlm.nih.gov/31368085/](https://pubmed.ncbi.nlm.nih.gov/31368085/) 10.1016/j.neuropsychologia.2019.107125 Same task as this one here: 10.3758/s13415-018-0584-6. Task included in Matlab programming language. Data collected 2016-2018 in the Center for Brain Recovery and Repair at the UNM Health Sciences Center. Check the .xls sheet under code folder for *LOTS* more meta data. Analysis scripts are included. - James F Cavanagh 02/17/2021 ## Dataset Information | Dataset ID | `DS003523` | |----------------|----------------| | Title | EEG: Visual Working Memory in Acute TBI | | Author (year) | `Cavanagh2021_Visual_Working` | | Canonical | — | | Importable as | `DS003523`, `Cavanagh2021_Visual_Working` | | Year | 2021 | | Authors | James F Cavanagh | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003523.v1.1.0](https://doi.org/10.18112/openneuro.ds003523.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003523) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) | [Source URL](https://openneuro.org/datasets/ds003523) | ### Copy-paste BibTeX ```bibtex @dataset{ds003523, title = {EEG: Visual Working Memory in Acute TBI}, author = {James F Cavanagh}, doi = {10.18112/openneuro.ds003523.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003523.v1.1.0}, } ``` ## Technical Details - Subjects: 91 - Recordings: 221 - Tasks: 1 - Channels: 65 (216), 64 (5) - Sampling rate (Hz): 500.0 - Duration (hours): 84.58588777777777 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 37.5 GB - File count: 221 - Format: BIDS - License: CC0
- DOI: 10.18112/openneuro.ds003523.v1.1.0 - Source: openneuro - OpenNeuro: [ds003523](https://openneuro.org/datasets/ds003523) - NeMAR: [ds003523](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) ## API Reference Use the `DS003523` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory in Acute TBI * **Study:** `ds003523` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual_Working` * **Canonical:** — Also importable as: `DS003523`, `Cavanagh2021_Visual_Working`. Modality: `eeg`. Subjects: 91; recordings: 221; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
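Recording-level metadata are exposed via `dataset.description`; in braindecode-style datasets this behaves like a pandas DataFrame, so per-subject recording counts can be tabulated as below. The frame here is a toy stand-in, since the real one requires downloading metadata:

```python
import pandas as pd

# Toy stand-in for dataset.description (real values require a download).
description = pd.DataFrame(
    {"subject": ["01", "01", "02"], "session": ["1", "2", "1"]}
)

# Number of recordings per subject.
counts = description.groupby("subject").size()
print(counts.to_dict())  # {'01': 2, '02': 1}
```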
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003523](https://openneuro.org/datasets/ds003523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003523](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) DOI: [https://doi.org/10.18112/openneuro.ds003523.v1.1.0](https://doi.org/10.18112/openneuro.ds003523.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003523 >>> dataset = DS003523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003523) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003555: eeg dataset, 30 subjects *Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system* Access recordings and metadata through EEGDash. **Citation:** Dorottya Cserpan, Ece Boran, Richard Rosch, San Pietro Lo Biundo, Georgia Ramantani, Johannes Sarnthein (2021). *Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system*. 
[10.18112/openneuro.ds003555.v1.0.1](https://doi.org/10.18112/openneuro.ds003555.v1.0.1) Modality: eeg Subjects: 30 Recordings: 30 License: CC0 Source: openneuro Citations: 8.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003555 dataset = DS003555(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003555(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003555( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003555, title = {Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system}, author = {Dorottya Cserpan and Ece Boran and Richard Rosch and San Pietro Lo Biundo and Georgia Ramantani and Johannes Sarnthein}, doi = {10.18112/openneuro.ds003555.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003555.v1.0.1}, } ``` ## About This Dataset **Dataset of EEG recordings containing HFO markings for 30 pediatric patients with epilepsy** **Summary** High-frequency oscillations in scalp EEG are promising non-invasive biomarkers of epileptogenicity. However, it is unclear how high-frequency oscillations are impacted by age in the pediatric population. We recorded and processed the first 3 hours of sleep EEG data in 30 children and adolescents with focal or generalized epilepsy. We used an automated and clinically validated high-frequency oscillation detector to determine ripple rates (80-250 Hz) in bipolar channels. 
The software for the detection of HFOs is freely available at the GitHub repository ([https://github.com/ZurichNCH/Automatic-High-Frequency-Oscillation-Detector](https://github.com/ZurichNCH/Automatic-High-Frequency-Oscillation-Detector)). Furthermore, HFO markings are also added in this database for the selected N3 intervals. **Repository structure** **Main directory (hfo/)** Contains metadata files in the BIDS standard about the participants and the study. Folders are explained below. **Subfolders** * `hfo/sub-**/` Contains folders for each subject, named with sub- and session information. * `hfo/sub-**/ses-01/eeg` Contains the raw EEG data in .edf format for each subject. The duration is typically 3 hours, recorded at the beginning of sleep. Details about the channels are given in the corresponding .tsv file. * `hfo/derivatives` Besides containing subfolders for the raw data, there are two .json files.
The events_description.json explains the meaning of the columns of the event description tsv files (in the subfolders). The interval_description.json explains the meaning of the columns of the interval description tsv files (in the subfolders). * `hfo/derivatives/sub-**/ses-01/eeg/` Contains processed data for each subject. Based on the sleep annotations, first we identified the sleep stages. Then we cut 5-minute data intervals from the N3 sleep stages. We applied bipolar referencing by considering all nearest-neighbour channels, thus resulting in 52 bipolar channels. Each run corresponds to one 5-minute data interval. The DataIntervals.tsv file provides information about how the various runs are related to the raw data by providing the start and end indices. Besides the .edf and channel descriptor .tsv files there is another .tsv file containing the detected candidate event details. E.g., sub-26_ses-01_task-hfo_run-01_events.tsv contains, for subject 26, the event markings of the first processed data interval as indices, with additional features of each event described in the above-mentioned events_description.json file. **Related materials** The code for HFO detection is available at [https://github.com/ZurichNCH/Automatic-High-Frequency-Oscillation-Detector](https://github.com/ZurichNCH/Automatic-High-Frequency-Oscillation-Detector) **Support** For questions on the dataset or the task, contact Johannes Sarnthein at [johannes.sarnthein@usz.ch](mailto:johannes.sarnthein@usz.ch).
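The derivative pipeline described above (nearest-neighbour bipolar referencing followed by ripple-band analysis at 80-250 Hz) can be sketched with numpy and scipy; the channel pair and data here are synthetic stand-ins, not the dataset's montage:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1024.0  # sampling rate of this dataset (Hz)
rng = np.random.default_rng(0)

# Synthetic 2-channel scalp segment, 4 s long, with a 150 Hz component.
t = np.arange(int(4 * fs)) / fs
data = {
    "F3": np.sin(2 * np.pi * 150 * t) + rng.normal(0, 0.1, t.size),
    "C3": rng.normal(0, 0.1, t.size),
}

# Bipolar referencing: subtract one nearest-neighbour channel from the other.
bipolar = {"F3-C3": data["F3"] - data["C3"]}

# Ripple band (80-250 Hz) zero-phase band-pass.
sos = butter(4, [80, 250], btype="bandpass", fs=fs, output="sos")
ripple = {ch: sosfiltfilt(sos, x) for ch, x in bipolar.items()}
print(ripple["F3-C3"].shape)  # (4096,)
```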
## Dataset Information | Dataset ID | `DS003555` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system | | Author (year) | `Cserpan2021` | | Canonical | — | | Importable as | `DS003555`, `Cserpan2021` | | Year | 2021 | | Authors | Dorottya Cserpan, Ece Boran, Richard Rosch, San Pietro Lo Biundo, Georgia Ramantani, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003555.v1.0.1](https://doi.org/10.18112/openneuro.ds003555.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003555) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) | [Source URL](https://openneuro.org/datasets/ds003555) | ### Copy-paste BibTeX ```bibtex @dataset{ds003555, title = {Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system}, author = {Dorottya Cserpan and Ece Boran and Richard Rosch and San Pietro Lo Biundo and Georgia Ramantani and Johannes Sarnthein}, doi = {10.18112/openneuro.ds003555.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003555.v1.0.1}, } ``` ## Technical Details - Subjects: 30 - Recordings: 30 - Tasks: 1 - Channels: 23 (27), 24 (3) - Sampling rate (Hz): 1024.0 - Duration (hours): 107.92361111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.1 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003555.v1.0.1 - Source: openneuro - OpenNeuro: [ds003555](https://openneuro.org/datasets/ds003555) - NeMAR: [ds003555](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) ## API Reference Use the `DS003555` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system * **Study:** `ds003555` (OpenNeuro) * **Author (year):** `Cserpan2021` * **Canonical:** — Also importable as: `DS003555`, `Cserpan2021`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003555](https://openneuro.org/datasets/ds003555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003555](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) DOI: [https://doi.org/10.18112/openneuro.ds003555.v1.0.1](https://doi.org/10.18112/openneuro.ds003555.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003555 >>> dataset = DS003555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003555) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003568: meg dataset, 51 subjects *Mood induction in MDD and healthy adolescents* Access recordings and metadata through EEGDash. **Citation:** Lucrezia Liuzzi, Katharine Chang, Hanna Keren, Charles Zheng, Dipta Saha, Dylan Nielson, Argyris Stringaris (2021). *Mood induction in MDD and healthy adolescents*. 
[10.18112/openneuro.ds003568.v1.0.2](https://doi.org/10.18112/openneuro.ds003568.v1.0.2) Modality: meg Subjects: 51 Recordings: 118 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003568 dataset = DS003568(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003568(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003568( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003568, title = {Mood induction in MDD and healthy adolescents}, author = {Lucrezia Liuzzi and Katharine Chang and Hanna Keren and Charles Zheng and Dipta Saha and Dylan Nielson and Argyris Stringaris}, doi = {10.18112/openneuro.ds003568.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003568.v1.0.2}, } ``` ## About This Dataset This dataset contains the MEG and structural MRI data from the “Electrophysiological correlates of mood and reward dynamics in human adolescents” pre-registered analysis ([https://www.biorxiv.org/content/10.1101/2021.03.04.433969v1](https://www.biorxiv.org/content/10.1101/2021.03.04.433969v1)). The task-mmi3 data corresponds to the monetary gambling mood induction task described in the paper. The task-mmi3 data has been pre-processed by marking bad channels and bad segments (motion > 5 mm and/or noise artifacts). The task-rest data is an unprocessed 10-minute resting-state scan acquired during the same scanning session. Anatomical MRIs have been defaced, and co-registered fiducial coordinates are available in the anatomical json files.
Data from four confirmatory subjects are not made available because of missing data sharing consent. sub-22658 and sub-24247 do not have an available structural scan. ## Dataset Information | Dataset ID | `DS003568` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mood induction in MDD and healthy adolescents | | Author (year) | `Liuzzi2021` | | Canonical | — | | Importable as | `DS003568`, `Liuzzi2021` | | Year | 2021 | | Authors | Lucrezia Liuzzi, Katharine Chang, Hanna Keren, Charles Zheng, Dipta Saha, Dylan Nielson, Argyris Stringaris | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003568.v1.0.2](https://doi.org/10.18112/openneuro.ds003568.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003568) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) | [Source URL](https://openneuro.org/datasets/ds003568) | ### Copy-paste BibTeX ```bibtex @dataset{ds003568, title = {Mood induction in MDD and healthy adolescents}, author = {Lucrezia Liuzzi and Katharine Chang and Hanna Keren and Charles Zheng and Dipta Saha and Dylan Nielson and Argyris Stringaris}, doi = {10.18112/openneuro.ds003568.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003568.v1.0.2}, } ``` ## Technical Details - Subjects: 51 - Recordings: 118 - Tasks: 2 - Channels: 340 (48), 339 (29), 335 (11), 336 (6), 342 (5), 343 (5), 309 (3), 338 (3), 312 (3), 305 (2), 348, 310, 313 - Sampling rate (Hz): 1200.0 - Duration (hours): 22.541666666666668 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 123.4 GB - File count: 118 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003568.v1.0.2 - Source: openneuro - OpenNeuro: [ds003568](https://openneuro.org/datasets/ds003568) - NeMAR: [ds003568](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) ## API Reference Use 
the `DS003568` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003568(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood induction in MDD and healthy adolescents * **Study:** `ds003568` (OpenNeuro) * **Author (year):** `Liuzzi2021` * **Canonical:** — Also importable as: `DS003568`, `Liuzzi2021`. Modality: `meg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 118; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
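This dataset ships two tasks (task-mmi3 and task-rest, per the description above), so a task-level filter is a natural query. A minimal sketch, treating the `task` field name and the `"rest"` value as assumptions based on the BIDS task naming in the dataset description:

```python
# Hypothetical sketch: restrict the selection to the resting-state task.
# The "task" field name and the "rest" value are assumptions based on the
# task-rest / task-mmi3 naming in the dataset description.
query = {"task": "rest"}

# Combine with a subject restriction in the same dict; top-level keys are ANDed.
query["subject"] = {"$in": ["01", "02"]}

# Usage (requires the eegdash package and network access):
# from eegdash.dataset import DS003568
# dataset = DS003568(cache_dir="./data", query=query)
```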
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003568](https://openneuro.org/datasets/ds003568) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003568](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) DOI: [https://doi.org/10.18112/openneuro.ds003568.v1.0.2](https://doi.org/10.18112/openneuro.ds003568.v1.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003568 >>> dataset = DS003568(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003568) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003570: eeg dataset, 40 subjects *EEG: Improvisation and Musical Structures* Access recordings and metadata through EEGDash. **Citation:** Andrew Goldman, Tyreek Jackson, Paul Sajda (2021). *EEG: Improvisation and Musical Structures*. 
[10.18112/openneuro.ds003570.v1.0.0](https://doi.org/10.18112/openneuro.ds003570.v1.0.0) Modality: eeg Subjects: 40 Recordings: 40 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003570 dataset = DS003570(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003570(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003570( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003570, title = {EEG: Improvisation and Musical Structures}, author = {Andrew Goldman and Tyreek Jackson and Paul Sajda}, doi = {10.18112/openneuro.ds003570.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003570.v1.0.0}, } ``` ## About This Dataset The musicians were instructed to listen to chord progressions, each consisting of three chords. We refer to one instance of such a progression in the recording as a trial. The three chords in a trial sounded in sequence, each for 400 ms in a piano timbre, after which the trial ended with another 400 ms of silence. This resulted in a fixed total trial length of 1600 ms. The only progressions used in the experiment were ii-IV-I, ii-V-I, ii-IV6-I and ii-V6-I. Each experimental block consisted of 180 trials. For each such block, one of the four aforementioned progressions was chosen as the “standard”, resulting in four types of blocks.
These “block types” were used to counterbalance the effect of other features of the individual progressions such as intervallic content that may have been in themselves salient. An experimental block always started with at least eight “standard” trials for the purpose of allowing participants to learn what type of progression would be the standard for the current block. There were two types of deviant trials that each occurred at a probability of 7.5% (in total 15%). Every deviant trial was followed by at least three standard trials. Deviant trials only differed from standard trials in terms of the middle chord: (1) Exemplar deviants, where the middle chord was replaced with a chord of identical notes but different inversion. For example, if the middle chord for a standard trial in that experimental block was V then the middle chord for the exemplar deviant in that block would be V6. For (2) function deviants, the middle chord was replaced by a chord from a different functional class. For example, if the middle chord for a standard was again V, then the middle chord for the corresponding function deviant in that block would be IV. Importantly, the key for each trial’s chord progression was picked at random. This meant that musicians needed to examine the second chord of every trial relative to the first and/or third to identify whether the trial was a standard or deviant. The order of standards and deviants within every one of the four types of experimental blocks was generated once only, and was thus identical across subjects within these block types. For the experiment, every one of the block types occurred twice, thus resulting in a total of eight blocks per subject. The order of the eight blocks was shuffled for every subject. In total, there were 1440 trials per subject of which 222 were functional and 218 were exemplar deviants.
## Dataset Information | Dataset ID | `DS003570` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Improvisation and Musical Structures | | Author (year) | `Goldman2021` | | Canonical | — | | Importable as | `DS003570`, `Goldman2021` | | Year | 2021 | | Authors | Andrew Goldman, Tyreek Jackson, Paul Sajda | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003570.v1.0.0](https://doi.org/10.18112/openneuro.ds003570.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003570) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) | [Source URL](https://openneuro.org/datasets/ds003570) | ### Copy-paste BibTeX ```bibtex @dataset{ds003570, title = {EEG: Improvisation and Musical Structures}, author = {Andrew Goldman and Tyreek Jackson and Paul Sajda}, doi = {10.18112/openneuro.ds003570.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003570.v1.0.0}, } ``` ## Technical Details - Subjects: 40 - Recordings: 40 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 2048.0 - Duration (hours): 26.20805555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 47.6 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003570.v1.0.0 - Source: openneuro - OpenNeuro: [ds003570](https://openneuro.org/datasets/ds003570) - NeMAR: [ds003570](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) ## API Reference Use the `DS003570` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003570(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Improvisation and Musical Structures * **Study:** `ds003570` (OpenNeuro) * **Author (year):** `Goldman2021` * **Canonical:** — Also importable as: `DS003570`, `Goldman2021`. Modality: `eeg`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
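The `data_dir` attribute documented above is derived from `cache_dir` and the dataset ID. A minimal stand-alone sketch of that layout (the real class computes this internally; the folder name mirrors the OpenNeuro study ID):

```python
from pathlib import Path

# Sketch of the documented local cache layout: cache_dir / dataset_id.
cache_dir = Path("./data")
dataset_id = "ds003570"
data_dir = cache_dir / dataset_id

# Recordings downloaded on first access would land under this directory.
print(data_dir)
```

Using `pathlib` keeps the join portable across operating systems, which matters when the same `cache_dir` string is shared between scripts.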
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003570](https://openneuro.org/datasets/ds003570) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003570](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) DOI: [https://doi.org/10.18112/openneuro.ds003570.v1.0.0](https://doi.org/10.18112/openneuro.ds003570.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003570 >>> dataset = DS003570(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003570) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003574: eeg dataset, 18 subjects *Reward biases spontaneous neural reactivation during sleep* Access recordings and metadata through EEGDash. **Citation:** Virginie Sterpenich, Mojca KM van Schie, Maximilien Catsiyannis, Avinash Ramyead, Stephen Perrig, Hee-Deok Yang, Dimitri Van De Ville, Sophie Schwartz (2021). *Reward biases spontaneous neural reactivation during sleep*. 
[10.18112/openneuro.ds003574.v1.0.2](https://doi.org/10.18112/openneuro.ds003574.v1.0.2) Modality: eeg Subjects: 18 Recordings: 18 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003574 dataset = DS003574(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003574(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003574( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003574, title = {Reward biases spontaneous neural reactivation during sleep}, author = {Virginie Sterpenich and Mojca KM van Schie and Maximilien Catsiyannis and Avinash Ramyead and Stephen Perrig and Hee-Deok Yang and Dimitri Van De Ville and Sophie Schwartz}, doi = {10.18112/openneuro.ds003574.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003574.v1.0.2}, } ``` ## About This Dataset The data include 18 participants who played two different games during wakefulness in the 3T MRI: the FACE game and the MAZE game, intermixed with periods of REST and periods of preparation for each game (game session). The tasks were manipulated so that, at the end of the game session, one game was won (the Reward game) and the second was lost (the No Reward game), randomly assigned for each participant. Next, during the sleep session, 64 electrodes were placed on the heads of the participants before they slept in the MRI with EEG for 1-2 hours (sleep session).
Participants can be separated according to the game they won (face or maze) and according to sleep depth (whether they reached N3 sleep in the MRI or only N2 sleep). A decoding classifier was trained on the data from the game session at wake and applied to the MRI data acquired during sleep (sleep session). Finally, a memory test was performed the next day on the two tasks (face and maze). For any questions related to the methods, please see the manuscript or contact Virginie Sterpenich ([Virginie.Sterpenich@unige.ch](mailto:Virginie.Sterpenich@unige.ch)) The files included are: 1) two EPI sessions for the tasks; 2) one EPI session during rest, including wake and sleep (sleep session); 3) one EEG file corresponding to the sleep session (including wake and sleep in the MRI); and 4) one T1 anatomical image ## Dataset Information | Dataset ID | `DS003574` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Reward biases spontaneous neural reactivation during sleep | | Author (year) | `Sterpenich2021` | | Canonical | — | | Importable as | `DS003574`, `Sterpenich2021` | | Year | 2021 | | Authors | Virginie Sterpenich, Mojca KM van Schie, Maximilien Catsiyannis, Avinash Ramyead, Stephen Perrig, Hee-Deok Yang, Dimitri Van De Ville, Sophie Schwartz | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003574.v1.0.2](https://doi.org/10.18112/openneuro.ds003574.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003574) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) | [Source URL](https://openneuro.org/datasets/ds003574) | ### Copy-paste BibTeX ```bibtex @dataset{ds003574, title = {Reward biases spontaneous neural reactivation during sleep}, author = {Virginie Sterpenich and Mojca KM van Schie and Maximilien Catsiyannis and Avinash Ramyead and Stephen Perrig and Hee-Deok Yang and Dimitri Van De
Ville and Sophie Schwartz}, doi = {10.18112/openneuro.ds003574.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003574.v1.0.2}, } ``` ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 69 - Sampling rate (Hz): 500.0 - Duration (hours): 30.093788888888888 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 18.0 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003574.v1.0.2 - Source: openneuro - OpenNeuro: [ds003574](https://openneuro.org/datasets/ds003574) - NeMAR: [ds003574](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) ## API Reference Use the `DS003574` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward biases spontaneous neural reactivation during sleep * **Study:** `ds003574` (OpenNeuro) * **Author (year):** `Sterpenich2021` * **Canonical:** — Also importable as: `DS003574`, `Sterpenich2021`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003574](https://openneuro.org/datasets/ds003574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003574](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) DOI: [https://doi.org/10.18112/openneuro.ds003574.v1.0.2](https://doi.org/10.18112/openneuro.ds003574.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003574 >>> dataset = DS003574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003574) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003602: eeg dataset, 118 subjects *Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms* Access recordings and metadata through EEGDash. **Citation:** Ozlem Korucuoglu, Andrey P. Anokhin (2021). *Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms*. 
[10.18112/openneuro.ds003602.v1.0.0](https://doi.org/10.18112/openneuro.ds003602.v1.0.0) Modality: eeg Subjects: 118 Recordings: 699 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003602 dataset = DS003602(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003602(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003602( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003602, title = {Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms}, author = {Ozlem Korucuoglu and Andrey P. Anokhin}, doi = {10.18112/openneuro.ds003602.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003602.v1.0.0}, } ``` ## About This Dataset Data collection took place at the Washington University School of Medicine, St. Louis, under the supervision of Dr. Andrey Anokhin ([andrey@wustl.edu](mailto:andrey@wustl.edu)). The project was approved by the Washington University Institutional Review Board (IRB project # 201707051). A detailed task description and subject instructions can be found in a separate PDF file under the stimuli folder. The task sequence file (stim program code), together with the visual stimuli used in the task, is also provided in the stimuli folder.
Participants were monozygotic twin pairs; twin pairs have the same FamilyID (provided in participants.tsv). ## Dataset Information | Dataset ID | `DS003602` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms | | Author (year) | `Korucuoglu2021` | | Canonical | — | | Importable as | `DS003602`, `Korucuoglu2021` | | Year | 2021 | | Authors | Ozlem Korucuoglu, Andrey P. Anokhin | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003602.v1.0.0](https://doi.org/10.18112/openneuro.ds003602.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003602) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) | [Source URL](https://openneuro.org/datasets/ds003602) | ### Copy-paste BibTeX ```bibtex @dataset{ds003602, title = {Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms}, author = {Ozlem Korucuoglu and Andrey P. Anokhin}, doi = {10.18112/openneuro.ds003602.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003602.v1.0.0}, } ``` ## Technical Details - Subjects: 118 - Recordings: 699 - Tasks: 6 - Channels: 35 - Sampling rate (Hz): 1000.0 - Duration (hours): 152.53169944444446 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 73.2 GB - File count: 699 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003602.v1.0.0 - Source: openneuro - OpenNeuro: [ds003602](https://openneuro.org/datasets/ds003602) - NeMAR: [ds003602](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) ## API Reference Use the `DS003602` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms * **Study:** `ds003602` (OpenNeuro) * **Author (year):** `Korucuoglu2021` * **Canonical:** — Also importable as: `DS003602`, `Korucuoglu2021`. Modality: `eeg`. Subjects: 118; recordings: 699; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
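With 6 tasks spread over 699 recordings, queries on this dataset often combine several operators. A sketch of a compound filter, again treating the `subject` and `task` field names as assumptions carried over from the quickstart, with placeholder task labels (the real task names are not listed in the summary metadata):

```python
# Hypothetical sketch: a compound MongoDB-style filter. Top-level keys are
# implicitly ANDed, so this selects two tasks for a range of subjects.
query = {
    "task": {"$in": ["taskA", "taskB"]},  # placeholder labels, not real tasks
    "subject": {"$in": [f"{i:02d}" for i in range(1, 11)]},  # "01".."10"
}

# Usage (requires the eegdash package and network access):
# from eegdash.dataset import DS003602
# dataset = DS003602(cache_dir="./data", query=query)
```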
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003602](https://openneuro.org/datasets/ds003602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003602](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) DOI: [https://doi.org/10.18112/openneuro.ds003602.v1.0.0](https://doi.org/10.18112/openneuro.ds003602.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003602 >>> dataset = DS003602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003602) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003620: eeg dataset, 44 subjects *Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions* Access recordings and metadata through EEGDash. **Citation:** Magnus Liebherr, Andrew W. Corcoran, Phillip M. Alday, Scott Coussens, Valeria Bellan, Caitlin A. Howlett, Maarten A. Immink, Mark Kohler, Matthias Schlesewsky, Ina Bornkessel-Schlesewsky (2021). *Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions*. 
[10.18112/openneuro.ds003620.v1.1.1](https://doi.org/10.18112/openneuro.ds003620.v1.1.1) Modality: eeg Subjects: 44 Recordings: 100 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003620 dataset = DS003620(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003620(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003620( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003620, title = {Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions}, author = {Magnus Liebherr and Andrew W. Corcoran and Phillip M. Alday and Scott Coussens and Valeria Bellan and Caitlin A. Howlett and Maarten A. Immink and Mark Kohler and Matthias Schlesewsky and Ina Bornkessel-Schlesewsky}, doi = {10.18112/openneuro.ds003620.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003620.v1.1.1}, } ``` ## About This Dataset **Overview** This dataset contains raw and pre-processed EEG data from a mobile EEG study investigating the effects of cognitive task demands, motor demands, and environmental complexity on attentional processing (see below for experiment details). All preprocessing and analysis code is deposited in the `code` directory. The entire MATLAB pipeline can be reproduced by executing the `run_pipeline.m` script. In order to run these scripts, you will need to ensure you have the required MATLAB toolboxes and R packages on your system. 
You will also need to adapt `def_local.m` to specify local paths to MATLAB and EEGLAB. Descriptive statistics and mixed-effects models can be reproduced in R by running the `stat_analysis.R` script. See below for software details. **Citing this dataset** In addition to citing this dataset, please cite the original manuscript reporting data collection and experimental procedures. ### View full README **Overview** This dataset contains raw and pre-processed EEG data from a mobile EEG study investigating the effects of cognitive task demands, motor demands, and environmental complexity on attentional processing (see below for experiment details). All preprocessing and analysis code is deposited in the `code` directory. The entire MATLAB pipeline can be reproduced by executing the `run_pipeline.m` script. In order to run these scripts, you will need to ensure you have the required MATLAB toolboxes and R packages on your system. You will also need to adapt `def_local.m` to specify local paths to MATLAB and EEGLAB. Descriptive statistics and mixed-effects models can be reproduced in R by running the `stat_analysis.R` script. See below for software details. **Citing this dataset** In addition to citing this dataset, please cite the original manuscript reporting data collection and experimental procedures. For more information, see the `dataset_description.json` file. **License** ODC Open Database License (ODbL). For more information, see the `LICENCE` file. **Format** Dataset is formatted according to the EEG-BIDS extension (Pernet et al., 2019) and the BIDS extension proposal for common electrophysiological derivatives (BEP021) v0.0.1, which can be found here: [https://docs.google.com/document/d/1PmcVs7vg7Th-cGC-UrX8rAhKUHIzOI-uIOh69_mvdlw/edit#heading=h.mqkmyp254xh6](https://docs.google.com/document/d/1PmcVs7vg7Th-cGC-UrX8rAhKUHIzOI-uIOh69_mvdlw/edit#heading=h.mqkmyp254xh6) Note that BEP021 is still a work in progress as of 2021-03-01. 
Generally, you can find data in the .tsv files and descriptions in the accompanying .json files. An important BIDS definition to consider is the “Inheritance Principle” (see 3.5 in the BIDS specification: [http://bids.neuroimaging.io/bids_spec.pdf](http://bids.neuroimaging.io/bids_spec.pdf)), which states: > Any metadata file (.json, .bvec, .tsv, etc.) may be defined at any directory level. The values from the top level are inherited by all lower levels unless they are overridden by a file at the lower level. **Details about the experiment** Forty-four healthy adults aged 18-40 performed an oddball task involving complex tone (piano and horn) stimuli in three settings: (1) sitting in a quiet room in the lab (LAB); (2) walking around a sports field (FIELD); (3) navigating a route through a university campus (CAMPUS). Participants performed each environmental condition twice: once while attending to oddball stimuli (i.e. counting the number of presented deviant tones; COUNT), and once while disregarding or ignoring the tone stimuli (IGNORE). EEG signals were recorded from 32 active electrodes using a Brain Vision LiveAmp 32 amplifier. See manuscript for further details. 
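The Inheritance Principle quoted above can be illustrated with a stdlib-only sketch (the file names and metadata fields below are generic BIDS examples, not taken from this dataset):

```python
import json
import tempfile
from pathlib import Path

# BIDS Inheritance Principle, sketched with the stdlib: a sidecar at the
# dataset root applies everywhere, and a sidecar deeper in the tree
# overrides matching keys. File names and fields are illustrative.
root = Path(tempfile.mkdtemp())
(root / "sub-01" / "eeg").mkdir(parents=True)

(root / "task-oddball_eeg.json").write_text(
    json.dumps({"SamplingFrequency": 500, "EEGReference": "FCz"}))
(root / "sub-01" / "eeg" / "sub-01_task-oddball_eeg.json").write_text(
    json.dumps({"EEGReference": "linked mastoids"}))

merged = {}
for sidecar in [root / "task-oddball_eeg.json",
                root / "sub-01" / "eeg" / "sub-01_task-oddball_eeg.json"]:
    merged.update(json.loads(sidecar.read_text()))  # lower level wins

print(merged)  # {'SamplingFrequency': 500, 'EEGReference': 'linked mastoids'}
```

Deeper sidecars only need to restate the fields they change; everything else is inherited from the top level.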
**MATLAB software details** MATLAB Version: 9.7.0.1319299 (R2019b) Update 5 MATLAB License Number: 678256 Operating System: Microsoft Windows 10 Enterprise Version 10.0 (Build 18363) Java Version: Java 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode \* MATLAB (v9.7) \* Simulink (v10.0) \* Curve Fitting Toolbox (v3.5.10) \* DSP System Toolbox (v9.9) \* Image Processing Toolbox (v11.0) \* MATLAB Compiler (v7.1) \* MATLAB Compiler SDK (v6.7) \* Parallel Computing Toolbox (v7.1) \* Signal Processing Toolbox (v8.3) \* Statistics and Machine Learning Toolbox (v11.6) \* Symbolic Math Toolbox (v8.4) \* Wavelet Toolbox (v5.3) *The following toolboxes/helper functions were also used:* \* EEGLAB (v2019.1) \* ERPLAB (v8.10) \* ICLabel (v1.3) \* clean_rawdata (v2.3) \* bids-matlab-tools (v5.2) \* dipfit (v3.4) \* firfilt (v2.4) \* export_fig (v3.12) \* ColorBrewer (v3.1.0) **R software details** **R version 3.6.2 (2019-12-12)** *Platform:* x86_64-w64-mingw32/x64 (64-bit) *locale:* \_LC_COLLATE=English_Australia.1252_, \_LC_CTYPE=English_Australia.1252_, \_LC_MONETARY=English_Australia.1252_, \_LC_NUMERIC=C_ and \_LC_TIME=English_Australia.1252_ *attached base packages:* \* stats \* graphics \* grDevices \* utils \* datasets \* methods \* base *other attached packages:* \* sjPlot(v.2.8.7) \* emmeans(v.1.5.1) \* car(v.3.0-10) \* carData(v.3.0-4) \* lme4(v.1.1-23) \* Matrix(v.1.2-18) \* data.table(v.1.13.0) \* forcats(v.0.5.0) \* stringr(v.1.4.0) \* dplyr(v.1.0.2) \* purrr(v.0.3.4) \* readr(v.1.4.0) \* tidyr(v.1.1.2) \* tibble(v.3.0.4) \* ggplot2(v.3.3.2) \* tidyverse(v.1.3.0) *loaded via a namespace (and not attached):* \* nlme(v.3.1-149) \* pbkrtest(v.0.4-8.6) \* fs(v.1.5.0) \* lubridate(v.1.7.9) \* insight(v.0.12.0) \* httr(v.1.4.2) \* numDeriv(v.2016.8-1.1) \* tools(v.3.6.2) \* backports(v.1.1.10) \* utf8(v.1.1.4) \* R6(v.2.4.1) \* sjlabelled(v.1.1.7) \* DBI(v.1.1.0) \* colorspace(v.1.4-1) \* withr(v.2.3.0) \* tidyselect(v.1.1.0) \* 
curl(v.4.3) \* compiler(v.3.6.2) \* performance(v.0.5.0) \* cli(v.2.1.0) \* rvest(v.0.3.6) \* xml2(v.1.3.2) \* sandwich(v.3.0-0) \* labeling(v.0.3) \* bayestestR(v.0.7.2) \* scales(v.1.1.1) \* mvtnorm(v.1.1-1) \* digest(v.0.6.25) \* foreign(v.0.8-76) \* minqa(v.1.2.4) \* rio(v.0.5.16) \* pkgconfig(v.2.0.3) \* dbplyr(v.1.4.4) \* rlang(v.0.4.8) \* readxl(v.1.3.1) \* rstudioapi(v.0.11) \* farver(v.2.0.3) \* generics(v.0.0.2) \* zoo(v.1.8-8) \* jsonlite(v.1.7.1) \* zip(v.2.1.1) \* magrittr(v.1.5) \* parameters(v.0.8.6) \* Rcpp(v.1.0.5) \* munsell(v.0.5.0) \* fansi(v.0.4.1) \* abind(v.1.4-5) \* lifecycle(v.0.2.0) \* stringi(v.1.4.6) \* multcomp(v.1.4-14) \* MASS(v.7.3-53) \* plyr(v.1.8.6) \* grid(v.3.6.2) \* blob(v.1.2.1) \* parallel(v.3.6.2) \* sjmisc(v.2.8.6) \* crayon(v.1.3.4) \* lattice(v.0.20-41) \* ggeffects(v.0.16.0) \* haven(v.2.3.1) \* splines(v.3.6.2) \* pander(v.0.6.3) \* sjstats(v.0.18.1) \* hms(v.0.5.3) \* knitr(v.1.30) \* pillar(v.1.4.6) \* boot(v.1.3-25) \* estimability(v.1.3) \* effectsize(v.0.3.3) \* codetools(v.0.2-16) \* reprex(v.0.3.0) \* glue(v.1.4.2) \* modelr(v.0.1.8) \* vctrs(v.0.3.4) \* nloptr(v.1.2.2.2) \* cellranger(v.1.1.0) \* gtable(v.0.3.0) \* assertthat(v.0.2.1) \* xfun(v.0.18) \* openxlsx(v.4.2.2) \* xtable(v.1.8-4) \* broom(v.0.7.1) \* coda(v.0.19-4) \* survival(v.3.2-7) \* lmerTest(v.3.1-3) \* statmod(v.1.4.34) \* TH.data(v.1.0-10) \* ellipsis(v.0.3.1) ## Dataset Information | Dataset ID | `DS003620` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions | | Author (year) | `Liebherr2021` | | Canonical | `Runabout` | | Importable as | `DS003620`, `Liebherr2021`, `Runabout` | | Year | 2021 | | Authors | Magnus Liebherr, Andrew W. Corcoran, Phillip M. 
Alday, Scott Coussens, Valeria Bellan, Caitlin A. Howlett, Maarten A. Immink, Mark Kohler, Matthias Schlesewsky, Ina Bornkessel-Schlesewsky | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003620.v1.1.1](https://doi.org/10.18112/openneuro.ds003620.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003620) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) | [Source URL](https://openneuro.org/datasets/ds003620) | ### Copy-paste BibTeX ```bibtex @dataset{ds003620, title = {Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions}, author = {Magnus Liebherr and Andrew W. Corcoran and Phillip M. Alday and Scott Coussens and Valeria Bellan and Caitlin A. Howlett and Maarten A. Immink and Mark Kohler and Matthias Schlesewsky and Ina Bornkessel-Schlesewsky}, doi = {10.18112/openneuro.ds003620.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds003620.v1.1.1}, } ``` ## Technical Details - Subjects: 44 - Recordings: 100 - Tasks: 1 - Channels: 35 - Sampling rate (Hz): 500.0 - Duration (hours): 59.53691277777778 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 17.0 GB - File count: 100 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003620.v1.1.1 - Source: openneuro - OpenNeuro: [ds003620](https://openneuro.org/datasets/ds003620) - NeMAR: [ds003620](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) ## API Reference Use the `DS003620` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions * **Study:** `ds003620` (OpenNeuro) * **Author (year):** `Liebherr2021` * **Canonical:** `Runabout` Also importable as: `DS003620`, `Liebherr2021`, `Runabout`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003620](https://openneuro.org/datasets/ds003620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003620](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) DOI: [https://doi.org/10.18112/openneuro.ds003620.v1.1.1](https://doi.org/10.18112/openneuro.ds003620.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003620 >>> dataset = DS003620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003620) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003626: eeg dataset, 10 subjects *Inner Speech* Access recordings and metadata through EEGDash. **Citation:** Nicolas Nieto, Victoria Peterson, Hugo Rufiner, Juan Kamienkowski, Ruben Spies (2021). *Inner Speech*. [10.18112/openneuro.ds003626.v2.0.0](https://doi.org/10.18112/openneuro.ds003626.v2.0.0) Modality: eeg Subjects: 10 Recordings: 30 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003626 dataset = DS003626(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003626(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003626( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003626, title = {Inner Speech}, author = {Nicolas Nieto and Victoria Peterson and Hugo Rufiner and Juan Kamienkowski and Ruben Spies}, doi = {10.18112/openneuro.ds003626.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds003626.v2.0.0}, } ``` ## About This Dataset Inner Speech Dataset. 
Author: Nicolas Nieto

Code available at [https://github.com/N-Nieto/Inner_Speech_Dataset](https://github.com/N-Nieto/Inner_Speech_Dataset)

Preprint available at [https://www.biorxiv.org/content/10.1101/2021.04.19.440473v1](https://www.biorxiv.org/content/10.1101/2021.04.19.440473v1)

Abstract: Surface electroencephalography is a standard and noninvasive way to measure electrical brain activity. Recent advances in artificial intelligence have led to significant improvements in the automatic detection of brain patterns, allowing increasingly fast, reliable, and accessible brain-computer interfaces. Different paradigms have been used to enable human-machine interaction, and the last few years have brought a marked increase in interest in interpreting and characterizing the “inner voice” phenomenon. This paradigm, called inner speech, raises the possibility of executing a command just by thinking about it, allowing a “natural” way of controlling external devices. Unfortunately, the lack of publicly available electroencephalography datasets restricts the development of new techniques for inner speech recognition. A ten-subject dataset acquired under this and two other related paradigms, obtained with a 136-channel acquisition system, is presented. The main purpose of this work is to provide the scientific community with an open-access, multiclass electroencephalography database of inner speech commands that can be used for a better understanding of the related brain mechanisms.
Conditions = Inner Speech, Pronounced Speech, Visualized Condition Classes = “Arriba/Up”, “Abajo/Down”, “Derecha/Right”, “Izquierda/Left” Total Trials = 5640 Please contact us at this e-mail address if you have any doubts: [nnieto@sinc.unl.edu.ar](mailto:nnieto@sinc.unl.edu.ar) ## Dataset Information | Dataset ID | `DS003626` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Inner Speech | | Author (year) | `Nieto2021` | | Canonical | — | | Importable as | `DS003626`, `Nieto2021` | | Year | 2021 | | Authors | Nicolas Nieto, Victoria Peterson, Hugo Rufiner, Juan Kamienkowski, Ruben Spies | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003626.v2.0.0](https://doi.org/10.18112/openneuro.ds003626.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003626) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) | [Source URL](https://openneuro.org/datasets/ds003626) | ### Copy-paste BibTeX ```bibtex @dataset{ds003626, title = {Inner Speech}, author = {Nicolas Nieto and Victoria Peterson and Hugo Rufiner and Juan Kamienkowski and Ruben Spies}, doi = {10.18112/openneuro.ds003626.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds003626.v2.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 30 - Tasks: 1 - Channels: 137 - Sampling rate (Hz): Varies - Duration (hours): 12.951944444444443 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 18.3 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003626.v2.0.0 - Source: openneuro - OpenNeuro: [ds003626](https://openneuro.org/datasets/ds003626) - NeMAR: [ds003626](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) ## API Reference Use the `DS003626` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Inner Speech * **Study:** `ds003626` (OpenNeuro) * **Author (year):** `Nieto2021` * **Canonical:** — Also importable as: `DS003626`, `Nieto2021`. Modality: `eeg`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003626](https://openneuro.org/datasets/ds003626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003626](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) DOI: [https://doi.org/10.18112/openneuro.ds003626.v2.0.0](https://doi.org/10.18112/openneuro.ds003626.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003626 >>> dataset = DS003626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003626) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003633: meg dataset, 12 subjects *ForrestGump-MEG* Access recordings and metadata through EEGDash. **Citation:** Xingyu Liu, Yuxuan Dai, Hailun Xie, Zonglei Zhen (2021). *ForrestGump-MEG*. 
[10.18112/openneuro.ds003633.v1.0.3](https://doi.org/10.18112/openneuro.ds003633.v1.0.3) Modality: meg Subjects: 12 Recordings: 96 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS003633

dataset = DS003633(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS003633(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS003633(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds003633,
  title = {ForrestGump-MEG},
  author = {Xingyu Liu and Yuxuan Dai and Hailun Xie and Zonglei Zhen},
  doi = {10.18112/openneuro.ds003633.v1.0.3},
  url = {https://doi.org/10.18112/openneuro.ds003633.v1.0.3},
}
```

## About This Dataset

**ForrestGump-MEG: An audio-visual movie-watching MEG dataset**

For details, please refer to our paper at [https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1](https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1). This dataset contains MEG data recorded from 11 subjects while watching the two-hour-long Chinese-dubbed audio-visual movie ‘Forrest Gump’. The data were acquired with a 275-channel CTF MEG system. Auxiliary data (T1w) as well as derivative data such as preprocessed data and the MEG-MRI co-registration are also included.

**Pre-processing procedure description**

The T1w images, stored as NIfTI files, were minimally preprocessed using the anatomical preprocessing pipeline from fMRIPrep with default settings. MEG data were pre-processed using MNE following a three-step procedure: 1.
Bad channels were detected and removed. 2. A high-pass filter of 1 Hz was applied to remove possible slow drifts from the continuous MEG data. 3. Artifact removal was performed with ICA.

**Stimulus material**

The audio-visual stimulus materials were from the Chinese-dubbed ‘Forrest Gump’ DVD released in 2013 (ISBN: 978-7-7991-3934-0), which cannot be publicly released due to copyright restrictions. The stimulus materials are available upon reasonable request and on condition of a research-only data use agreement (correspondence with Xingyu Liu, [liuxingyu987@foxmail.com](mailto:liuxingyu987@foxmail.com)).

**Dataset content overview**

The data were organized following MEG-BIDS using the MNE-BIDS toolbox.

### View full README

**ForrestGump-MEG: An audio-visual movie-watching MEG dataset**

For details, please refer to our paper at [https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1](https://www.biorxiv.org/content/10.1101/2021.06.04.446837v1). This dataset contains MEG data recorded from 11 subjects while watching the two-hour-long Chinese-dubbed audio-visual movie ‘Forrest Gump’. The data were acquired with a 275-channel CTF MEG system. Auxiliary data (T1w) as well as derivative data such as preprocessed data and the MEG-MRI co-registration are also included.

**Pre-processing procedure description**

The T1w images, stored as NIfTI files, were minimally preprocessed using the anatomical preprocessing pipeline from fMRIPrep with default settings. MEG data were pre-processed using MNE following a three-step procedure: 1. Bad channels were detected and removed. 2. A high-pass filter of 1 Hz was applied to remove possible slow drifts from the continuous MEG data. 3. Artifact removal was performed with ICA.

**Stimulus material**

The audio-visual stimulus materials were from the Chinese-dubbed ‘Forrest Gump’ DVD released in 2013 (ISBN: 978-7-7991-3934-0), which cannot be publicly released due to copyright restrictions.
The stimulus materials are available upon reasonable request and on condition of a research-only data use agreement (correspondence with Xingyu Liu, [liuxingyu987@foxmail.com](mailto:liuxingyu987@foxmail.com)).

**Dataset content overview**

The data were organized following MEG-BIDS using the MNE-BIDS toolbox.

*the pre-processed MEG data*

The preprocessed MEG recordings, including the preprocessed MEG data, the event files, the ICA decomposition and label files, and the MEG-MRI coordinate transformation file, are hosted here.

```text
|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/meg/
|---sub-xx_ses-movie_coordsystem.json
|---sub-xx_ses-movie_task-movie_run-xx_channels.tsv
|---sub-xx_ses-movie_task-movie_run-xx_decomposition.tsv
|---sub-xx_ses-movie_task-movie_run-xx_events.tsv
|---sub-xx_ses-movie_task-movie_run-xx_ica.fif.gz
|---sub-xx_ses-movie_task-movie_run-xx_meg.fif
|---sub-xx_ses-movie_task-movie_run-xx_meg.json
|---...
|---sub-xx_ses-movie_task-movie_trans.fif
```

*the pre-processed MRI data*

The preprocessed MRI volume, reconstructed surfaces, and other associated files, including the transformation files, are hosted here.

```text
|---./derivatives/preproc_meg-mne_mri-fmriprep/sub-xx/ses-movie/anat/
|---sub-xx_ses-movie_desc-preproc_T1w.nii.gz
|---sub-xx_ses-movie_hemi-L_inflated.surf.gii
|---sub-xx_ses-movie_hemi-L_midthickness.surf.gii
|---sub-xx_ses-movie_hemi-L_pial.surf.gii
|---sub-xx_ses-movie_hemi-L_smoothwm.surf.gii
|---sub-xx_ses-movie_hemi-R_inflated.surf.gii
|---sub-xx_ses-movie_hemi-R_midthickness.surf.gii
|---sub-xx_ses-movie_hemi-R_pial.surf.gii
|---sub-xx_ses-movie_hemi-R_smoothwm.surf.gii
|---sub-xx_ses-movie_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz
|---sub-xx_ses-movie_space-MNI152NLin6Asym_desc-preproc_T1w.nii.gz
|---...
```

The FreeSurfer surface data, the high-resolution head surface, and the MRI fiducials are provided here.

```text
|---./derivatives/preproc_meg-mne_mri-fmriprep/sourcedata/
|---freesurfer
|---sub-xx
|---...
``` *the raw data* ```text |---./sub-xx/ses-movie/ |---meg/ | |---sub-xx_ses-movie_coordsystem.json | |---sub-xx_ses-movie_task-movie_run-xx_channels.tsv | |---sub-xx_ses-movie_task-movie_run-xx_events.tsv | |---sub-xx_ses-movie_task-movie_run-xx_meg.ds | |---sub-xx_ses-movie_task-movie_run-xx_meg.json | |---... |---anat/ |---sub-xx_ses-movie_T1w.json |---sub-xx_ses-movie_T1w.nii.gz ``` ## Dataset Information | Dataset ID | `DS003633` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ForrestGump-MEG | | Author (year) | `Liu2021` | | Canonical | `ForrestGump_MEG` | | Importable as | `DS003633`, `Liu2021`, `ForrestGump_MEG` | | Year | 2021 | | Authors | Xingyu Liu, Yuxuan Dai, Hailun Xie, Zonglei Zhen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003633.v1.0.3](https://doi.org/10.18112/openneuro.ds003633.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003633) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) | [Source URL](https://openneuro.org/datasets/ds003633) | ### Copy-paste BibTeX ```bibtex @dataset{ds003633, title = {ForrestGump-MEG}, author = {Xingyu Liu and Yuxuan Dai and Hailun Xie and Zonglei Zhen}, doi = {10.18112/openneuro.ds003633.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003633.v1.0.3}, } ``` ## Technical Details - Subjects: 12 - Recordings: 96 - Tasks: 2 - Channels: 409 (89), 378 (7) - Sampling rate (Hz): 600.0 (89), 1200.0 (7) - Duration (hours): 21.97826087962963 - Pathology: Healthy - Modality: Multisensory - Type: Perception - Size on disk: 73.5 GB - File count: 96 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003633.v1.0.3 - Source: openneuro - OpenNeuro: [ds003633](https://openneuro.org/datasets/ds003633) - NeMAR: [ds003633](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) ## API Reference 
Use the `DS003633` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003633(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ForrestGump-MEG * **Study:** `ds003633` (OpenNeuro) * **Author (year):** `Liu2021` * **Canonical:** `ForrestGump_MEG` Also importable as: `DS003633`, `Liu2021`, `ForrestGump_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 96; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003633](https://openneuro.org/datasets/ds003633) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003633](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) DOI: [https://doi.org/10.18112/openneuro.ds003633.v1.0.3](https://doi.org/10.18112/openneuro.ds003633.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003633 >>> dataset = DS003633(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003633) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003638: eeg dataset, 57 subjects *EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Greg Light, Neal Swerdlow, Jonathan Brigman, Jared Young (2021). *EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms*. 
[10.18112/openneuro.ds003638.v1.0.0](https://doi.org/10.18112/openneuro.ds003638.v1.0.0) Modality: eeg Subjects: 57 Recordings: 57 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003638 dataset = DS003638(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003638(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003638( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003638, title = {EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms}, author = {James F Cavanagh and Greg Light and Neal Swerdlow and Jonathan Brigman and Jared Young}, doi = {10.18112/openneuro.ds003638.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003638.v1.0.0}, } ``` ## About This Dataset Three different tasks. From: “Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms” N=57 humans. Also has mouse data in code folder. Triggers were odd binary recombinations that were re-translated into 0-255 in Matlab. See .m scripts and Trigger Translator.xls. Data collected circa 2014-2016 in San Diego. Data analyzed circa 2015-2021 in New Mexico. 
- James F Cavanagh 04/19/2021 ## Dataset Information | Dataset ID | `DS003638` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms | | Author (year) | `Cavanagh2021_Electrophysiological` | | Canonical | — | | Importable as | `DS003638`, `Cavanagh2021_Electrophysiological` | | Year | 2021 | | Authors | James F Cavanagh, Greg Light, Neal Swerdlow, Jonathan Brigman, Jared Young | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003638.v1.0.0](https://doi.org/10.18112/openneuro.ds003638.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003638) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) | [Source URL](https://openneuro.org/datasets/ds003638) | ### Copy-paste BibTeX ```bibtex @dataset{ds003638, title = {EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms}, author = {James F Cavanagh and Greg Light and Neal Swerdlow and Jonathan Brigman and Jared Young}, doi = {10.18112/openneuro.ds003638.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003638.v1.0.0}, } ``` ## Technical Details - Subjects: 57 - Recordings: 57 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 512.0 - Duration (hours): 40.5975 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.3 GB - File count: 57 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003638.v1.0.0 - Source: openneuro - OpenNeuro: [ds003638](https://openneuro.org/datasets/ds003638) - NeMAR: [ds003638](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) ## API Reference Use the `DS003638` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003638(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms * **Study:** `ds003638` (OpenNeuro) * **Author (year):** `Cavanagh2021_Electrophysiological` * **Canonical:** — Also importable as: `DS003638`, `Cavanagh2021_Electrophysiological`. Modality: `eeg`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
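To make the `$in` filters used throughout these pages concrete, here is a minimal evaluator of such filters over plain metadata dicts. It illustrates the query semantics only and is not EEGDash code; the record fields and values are hypothetical.

```python
def matches(record: dict, query: dict) -> bool:
    """Evaluate a tiny subset of MongoDB-style filters: equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}: value must be a member.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain form, e.g. {"subject": "01"}: exact equality.
            return False
    return True

# Hypothetical recording metadata, shaped like the records these pages describe.
records = [
    {"subject": "01", "task": "memory"},
    {"subject": "02", "task": "memory"},
    {"subject": "03", "task": "memory"},
]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # ['01', '02']
```

The same predicate logic explains why combining several fields in one `query` narrows the result: every field condition must hold for a record to match.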
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003638](https://openneuro.org/datasets/ds003638) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003638](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) DOI: [https://doi.org/10.18112/openneuro.ds003638.v1.0.0](https://doi.org/10.18112/openneuro.ds003638.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003638 >>> dataset = DS003638(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003638) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003645: eeg, meg dataset, 19 subjects *Face processing MEEG dataset with HED annotation* Access recordings and metadata through EEGDash. **Citation:** Daniel G. Wakeman, Richard N Henson, Dung Truong (curation), Kay Robbins (curation), Scott Makeig (curation), Arno Delorme (curation) (2021). *Face processing MEEG dataset with HED annotation*. 
[10.18112/openneuro.ds003645.v2.0.2](https://doi.org/10.18112/openneuro.ds003645.v2.0.2) Modality: eeg, meg Subjects: 19 Recordings: 224 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003645 dataset = DS003645(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003645(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003645( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003645, title = {Face processing MEEG dataset with HED annotation}, author = {Daniel G. Wakeman and Richard N Henson and Dung Truong (curation) and Kay Robbins (curation) and Scott Makeig (curation) and Arno Delorme (curation)}, doi = {10.18112/openneuro.ds003645.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003645.v2.0.2}, } ``` ## About This Dataset *Introduction:* This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117). This dataset was originally acquired and shared by Daniel Wakeman and Richard Henson ([https://pubmed.ncbi.nlm.nih.gov/25977808/](https://pubmed.ncbi.nlm.nih.gov/25977808/)). The MEG and EEG data were simultaneously recorded; the sMRI scans were preserved to support M/EEG source localization. Following event log augmentation, reorganization, and HED (v8.0.0) annotation, the EEG data have been repackaged in EEGLAB format. 
*Overview of the experiment:* Eighteen participants completed two recording sessions spaced three months apart – one session recorded fMRI and the other simultaneously recorded MEG and EEG data. During each session, participants performed the same simple perceptual task, responding to presented photographs of famous, unfamiliar, and scrambled faces by pressing one of two keyboard keys to indicate a subjective yes or no decision as to the relative spatial symmetry of the viewed face. Famous faces were feature-matched to unfamiliar faces; half the faces were female. The two sessions (MEEG, fMRI) had different organizations of event timing and presentation because of technological requirements of the respective imaging modalities. Each individual face was presented twice during the session. For half of the presented faces, the second presentation followed immediately after the first. For the other half, the second presentation was delayed by 5-15 face presentations. *Preprocessing:* The EEG preprocessing, which was performed using the `wh_extracteeg_BIDS.m` located in the code directory, includes the following steps: \* Ignore MRI data except for sMRI. \* Extract EEG channels out of the MEG/EEG fif data \* Add fiducials \* Rename EOG and EKG channels \* Extract events from event channel \* Add button press events! \* Remove spurious event types 5, 6, 7, 13, 14, 15, 17, 18 and 19 \* Remove spurious event types 24 for subject 3 run 4 \* Correct event latencies (events have a shift of 34 ms) \* Add HED (v8.0.0) event annotations – see Robbins et al. 
(2021) \* Remove event fields `urevent` and `duration` \* Save as EEGLAB .set format *Data curators:* Dung Truong, Ramon Martinez, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA), Kay Robbins (UTSA, San Antonio, TX, USA) ## Dataset Information | Dataset ID | `DS003645` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Face processing MEEG dataset with HED annotation | | Author (year) | `Wakeman2021` | | Canonical | — | | Importable as | `DS003645`, `Wakeman2021` | | Year | 2021 | | Authors | Daniel G. Wakeman, Richard N Henson, Dung Truong (curation), Kay Robbins (curation), Scott Makeig (curation), Arno Delorme (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003645.v2.0.2](https://doi.org/10.18112/openneuro.ds003645.v2.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003645) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) | [Source URL](https://openneuro.org/datasets/ds003645) | ### Copy-paste BibTeX ```bibtex @dataset{ds003645, title = {Face processing MEEG dataset with HED annotation}, author = {Daniel G. 
Wakeman and Richard N Henson and Dung Truong (curation) and Kay Robbins (curation) and Scott Makeig (curation) and Arno Delorme (curation)}, doi = {10.18112/openneuro.ds003645.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003645.v2.0.2}, } ``` ## Technical Details - Subjects: 19 - Recordings: 224 - Tasks: 2 - Channels: 404 (120), 394 (96) - Sampling rate (Hz): 1100.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 106.3 GB - File count: 224 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003645.v2.0.2 - Source: openneuro - OpenNeuro: [ds003645](https://openneuro.org/datasets/ds003645) - NeMAR: [ds003645](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) ## API Reference Use the `DS003645` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003645(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Face processing MEEG dataset with HED annotation * **Study:** `ds003645` (OpenNeuro) * **Author (year):** `Wakeman2021` * **Canonical:** — Also importable as: `DS003645`, `Wakeman2021`. Modality: `eeg, meg`. Subjects: 19; recordings: 224; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003645](https://openneuro.org/datasets/ds003645) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003645](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) DOI: [https://doi.org/10.18112/openneuro.ds003645.v2.0.2](https://doi.org/10.18112/openneuro.ds003645.v2.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003645 >>> dataset = DS003645(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003645) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) * [eegdash.dataset.DS007353](eegdash.dataset.DS007353.md) # DS003655: eeg dataset, 156 subjects *VerbalWorkingMemory* Access recordings and metadata through EEGDash. **Citation:** Yuri G. Pavlov (2021). *VerbalWorkingMemory*. 
[10.18112/openneuro.ds003655.v1.0.0](https://doi.org/10.18112/openneuro.ds003655.v1.0.0) Modality: eeg Subjects: 156 Recordings: 156 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003655 dataset = DS003655(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003655(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003655( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003655, title = {VerbalWorkingMemory}, author = {Yuri G. Pavlov}, doi = {10.18112/openneuro.ds003655.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003655.v1.0.0}, } ``` ## About This Dataset EEG in a modified Sternberg working memory paradigm with two task types: mental manipulation (alphabetization) and simple retention (TASK), and 3 levels of load: 5, 6, or 7 letters to memorize (LOAD) ## Dataset Information | Dataset ID | `DS003655` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | VerbalWorkingMemory | | Author (year) | `Pavlov2021_VerbalWorkingMemory` | | Canonical | — | | Importable as | `DS003655`, `Pavlov2021_VerbalWorkingMemory` | | Year | 2021 | | Authors | Yuri G. 
Pavlov | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003655.v1.0.0](https://doi.org/10.18112/openneuro.ds003655.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003655) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) | [Source URL](https://openneuro.org/datasets/ds003655) | ### Copy-paste BibTeX ```bibtex @dataset{ds003655, title = {VerbalWorkingMemory}, author = {Yuri G. Pavlov}, doi = {10.18112/openneuro.ds003655.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003655.v1.0.0}, } ``` ## Technical Details - Subjects: 156 - Recordings: 156 - Tasks: 1 - Channels: 21 - Sampling rate (Hz): 500.0 - Duration (hours): 130.92305555555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 20.3 GB - File count: 156 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003655.v1.0.0 - Source: openneuro - OpenNeuro: [ds003655](https://openneuro.org/datasets/ds003655) - NeMAR: [ds003655](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) ## API Reference Use the `DS003655` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003655(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VerbalWorkingMemory * **Study:** `ds003655` (OpenNeuro) * **Author (year):** `Pavlov2021_VerbalWorkingMemory` * **Canonical:** — Also importable as: `DS003655`, `Pavlov2021_VerbalWorkingMemory`. Modality: `eeg`. Subjects: 156; recordings: 156; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003655](https://openneuro.org/datasets/ds003655) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003655](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) DOI: [https://doi.org/10.18112/openneuro.ds003655.v1.0.0](https://doi.org/10.18112/openneuro.ds003655.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003655 >>> dataset = DS003655(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003655) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003670: eeg dataset, 25 subjects *Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS* Access recordings and metadata through EEGDash. **Citation:** Nigel Gebodh, Zeinab Esmaeilpour, Abhishek Datta, Marom Bikson (2021). *Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS*. [10.18112/openneuro.ds003670.v1.1.0](https://doi.org/10.18112/openneuro.ds003670.v1.1.0) Modality: eeg Subjects: 25 Recordings: 62 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003670 dataset = DS003670(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003670(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003670( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003670, title = {Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS}, author = {Nigel Gebodh and Zeinab Esmaeilpour and Abhishek Datta and Marom Bikson}, doi = {10.18112/openneuro.ds003670.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003670.v1.1.0}, } ``` ## About This Dataset **Synopsis** This is the **GX dataset** formatted to comply with [BIDS](https://bids.neuroimaging.io/) standard format. The tES/EEG/CTT/Vigilance experiment contains 19 unique participants (some repeated experiments). Over a 70 min period EEG/ECG/EOG were recorded concurrently with a [CTT](https://sccn.ucsd.edu/~scott/pdf/COMPTRACK.pdf) where participants maintained a ball at the center of the screen and were periodically stimulated (with low-intensity noninvasive brain stimulation) for 30 secs with combinations of 9 stimulation montages. For the **raw data** please see: [https://zenodo.org/record/4456079](https://zenodo.org/record/4456079) For methodological details please see corresponding article titled: > **Dataset of concurrent EEG, ECG, and behavior with multiple doses of transcranial Electrical Stimulation** **Data Descriptor Abstract** We present a dataset combining human-participant high-density electroencephalography (EEG) with physiological and continuous behavioral metrics during transcranial electrical stimulation (tES). Data include within participant application of nine High-Definition tES (HD-tES) types, targeting three cortical regions (frontal, motor, parietal) with three stimulation waveforms (DC, 5 Hz, 30 Hz); more than 783 total stimulation trials over 62 sessions with EEG, physiological (ECG, EOG), and continuous behavioral vigilance/alertness metrics. Experiment 1 and 2 consisted of participants performing a continuous vigilance/alertness task over three 70-minute and two 70.5-minute sessions, respectively. 
Demographic data were collected, as well as self-reported wellness questionnaires before and after each session. Participants received all 9 stimulation types in Experiment 1, with each session including three stimulation types, with 4 trials per type. Participants received 2 stimulation types in Experiment 2, with 20 trials of a given stimulation type per session. Within-participant reliability was tested by repeating select sessions. This unique dataset supports a range of hypothesis testing including interactions of tDCS/tACS location and frequency, brain-state, physiology, fatigue, and cognitive performance. For more details please see the full data descriptor article. Code used to import and process this dataset can be found here: **GitHub** : [https://github.com/ngebodh/GX_tES_EEG_Physio_Behavior](https://github.com/ngebodh/GX_tES_EEG_Physio_Behavior) For downsampled data please see: **Experiment 1** : [https://doi.org/10.5281/zenodo.3840615](https://doi.org/10.5281/zenodo.3840615) **Experiment 2** : [https://doi.org/10.5281/zenodo.3840617](https://doi.org/10.5281/zenodo.3840617) - Nigel Gebodh (May 26th, 2021) ## Dataset Information | Dataset ID | `DS003670` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS | | Author (year) | `Gebodh2021` | | Canonical | — | | Importable as | `DS003670`, `Gebodh2021` | | Year | 2021 | | Authors | Nigel Gebodh, Zeinab Esmaeilpour, Abhishek Datta, Marom Bikson | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003670.v1.1.0](https://doi.org/10.18112/openneuro.ds003670.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003670) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) | [Source 
URL](https://openneuro.org/datasets/ds003670) | ### Copy-paste BibTeX ```bibtex @dataset{ds003670, title = {Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS}, author = {Nigel Gebodh and Zeinab Esmaeilpour and Abhishek Datta and Marom Bikson}, doi = {10.18112/openneuro.ds003670.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003670.v1.1.0}, } ``` ## Technical Details - Subjects: 25 - Recordings: 62 - Tasks: 1 - Channels: 35 - Sampling rate (Hz): 2000.0 - Duration (hours): 72.77206638888889 - Pathology: Healthy - Modality: Visual - Type: Clinical/Intervention - Size on disk: 72.2 GB - File count: 62 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003670.v1.1.0 - Source: openneuro - OpenNeuro: [ds003670](https://openneuro.org/datasets/ds003670) - NeMAR: [ds003670](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) ## API Reference Use the `DS003670` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS * **Study:** `ds003670` (OpenNeuro) * **Author (year):** `Gebodh2021` * **Canonical:** — Also importable as: `DS003670`, `Gebodh2021`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 25; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003670](https://openneuro.org/datasets/ds003670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003670](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) DOI: [https://doi.org/10.18112/openneuro.ds003670.v1.1.0](https://doi.org/10.18112/openneuro.ds003670.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003670 >>> dataset = DS003670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003670) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003682: meg dataset, 28 subjects *Model-based aversive learning in humans is supported by preferential task state reactivation* Access recordings and metadata through EEGDash. **Citation:** Toby Wise, Yunzhe Liu, Fatima Chowdhury, Raymond J. Dolan (2021). *Model-based aversive learning in humans is supported by preferential task state reactivation*. [10.18112/openneuro.ds003682.v1.0.0](https://doi.org/10.18112/openneuro.ds003682.v1.0.0) Modality: meg Subjects: 28 Recordings: 336 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003682 dataset = DS003682(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003682(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003682( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003682, title = {Model-based aversive learning in humans is supported by preferential task state reactivation}, author = {Toby Wise and Yunzhe Liu and Fatima Chowdhury and Raymond J. Dolan}, doi = {10.18112/openneuro.ds003682.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003682.v1.0.0}, } ``` ## About This Dataset This dataset contains raw and processed MEG data for the paper “Model-based aversive learning in humans is supported by preferential task state reactivation” by Toby Wise, Yunzhe Liu, Fatima Chowdhury & Ray Dolan. Raw data is provided as `.fif` files, although it was acquired on a CTF system. ## Dataset Information | Dataset ID | `DS003682` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Model-based aversive learning in humans is supported by preferential task state reactivation | | Author (year) | `Wise2021` | | Canonical | — | | Importable as | `DS003682`, `Wise2021` | | Year | 2021 | | Authors | Toby Wise, Yunzhe Liu, Fatima Chowdhury, Raymond J. Dolan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003682.v1.0.0](https://doi.org/10.18112/openneuro.ds003682.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003682) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) | [Source URL](https://openneuro.org/datasets/ds003682) | ### Copy-paste BibTeX ```bibtex @dataset{ds003682, title = {Model-based aversive learning in humans is supported by preferential task state reactivation}, author = {Toby Wise and Yunzhe Liu and Fatima Chowdhury and Raymond J. 
Dolan}, doi = {10.18112/openneuro.ds003682.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003682.v1.0.0}, } ``` ## Technical Details - Subjects: 28 - Recordings: 336 - Tasks: 1 - Channels: 414 - Sampling rate (Hz): 1200.0 - Duration (hours): 31.7550625 - Pathology: Healthy - Modality: — - Type: Learning - Size on disk: 211.6 GB - File count: 336 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003682.v1.0.0 - Source: openneuro - OpenNeuro: [ds003682](https://openneuro.org/datasets/ds003682) - NeMAR: [ds003682](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) ## API Reference Use the `DS003682` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003682(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Model-based aversive learning in humans is supported by preferential task state reactivation * **Study:** `ds003682` (OpenNeuro) * **Author (year):** `Wise2021` * **Canonical:** — Also importable as: `DS003682`, `Wise2021`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 28; recordings: 336; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003682](https://openneuro.org/datasets/ds003682) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003682](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) DOI: [https://doi.org/10.18112/openneuro.ds003682.v1.0.0](https://doi.org/10.18112/openneuro.ds003682.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003682 >>> dataset = DS003682(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003682) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003688: ieeg dataset, 51 subjects *Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film* Access recordings and metadata through EEGDash. **Citation:** Julia Berezutskaya, Mariska J. Vansteensel, Erik J. Aarnoutse, Zachary V. 
Freudenburg, Giovanni Piantoni, Mariana P. Branco, Nick F. Ramsey (2021). *Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film*. [10.18112/openneuro.ds003688.v1.0.7](https://doi.org/10.18112/openneuro.ds003688.v1.0.7) Modality: ieeg Subjects: 51 Recordings: 107 License: CC0 Source: openneuro Citations: 9.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003688 dataset = DS003688(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003688(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003688( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003688, title = {Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film}, author = {Julia Berezutskaya and Mariska J. Vansteensel and Erik J. Aarnoutse and Zachary V. Freudenburg and Giovanni Piantoni and Mariana P. Branco and Nick F. 
Ramsey}, doi = {10.18112/openneuro.ds003688.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds003688.v1.0.7}, } ``` ## About This Dataset Open iEEG-fMRI dataset from stimulation with a short audiovisual film Full description of the data in our dataset paper: [https://www.nature.com/articles/s41597-022-01173-0](https://www.nature.com/articles/s41597-022-01173-0) Video description of the dataset: [https://www.youtube.com/watch?v=C14cWM1CvrE&t=13s](https://www.youtube.com/watch?v=C14cWM1CvrE&t=13s) UMC Utrecht Team [https://www.nick-ramsey.eu/](https://www.nick-ramsey.eu/) ## Dataset Information | Dataset ID | `DS003688` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film | | Author (year) | `Berezutskaya2021` | | Canonical | — | | Importable as | `DS003688`, `Berezutskaya2021` | | Year | 2021 | | Authors | Julia Berezutskaya, Mariska J. Vansteensel, Erik J. Aarnoutse, Zachary V. Freudenburg, Giovanni Piantoni, Mariana P. Branco, Nick F. Ramsey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003688.v1.0.7](https://doi.org/10.18112/openneuro.ds003688.v1.0.7) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003688) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) | [Source URL](https://openneuro.org/datasets/ds003688) | ### Copy-paste BibTeX ```bibtex @dataset{ds003688, title = {Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film}, author = {Julia Berezutskaya and Mariska J. Vansteensel and Erik J. Aarnoutse and Zachary V. Freudenburg and Giovanni Piantoni and Mariana P. Branco and Nick F. 
Ramsey}, doi = {10.18112/openneuro.ds003688.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds003688.v1.0.7}, } ``` ## Technical Details - Subjects: 51 - Recordings: 107 - Tasks: 2 - Channels: 74 (6), 109 (5), 62 (5), 125 (4), 85 (4), 107 (4), 88 (4), 72 (4), 67 (3), 95 (3), 115 (3), 71 (3), 111 (3), 81 (3), 118 (3), 76 (3), 84 (3), 102 (2), 87 (2), 126 (2), 80 (2), 75 (2), 86 (2), 122 (2), 116 (2), 121 (2), 112 (2), 100 (2), 54 (2), 177 (2), 64 (2), 128 (2), 89 (2), 113 (2), 110 (2), 60 (2), 97, 69, 65, 94, 91, 92 - Sampling rate (Hz): 512.0 (71), 2048.0 (32), 2000.0 (4) - Duration (hours): 9.20130295247396 - Pathology: Epilepsy - Modality: Multisensory - Type: Perception - Size on disk: 15.2 GB - File count: 107 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003688.v1.0.7 - Source: openneuro - OpenNeuro: [ds003688](https://openneuro.org/datasets/ds003688) - NeMAR: [ds003688](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) ## API Reference Use the `DS003688` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film * **Study:** `ds003688` (OpenNeuro) * **Author (year):** `Berezutskaya2021` * **Canonical:** — Also importable as: `DS003688`, `Berezutskaya2021`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 51; recordings: 107; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003688](https://openneuro.org/datasets/ds003688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003688](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) DOI: [https://doi.org/10.18112/openneuro.ds003688.v1.0.7](https://doi.org/10.18112/openneuro.ds003688.v1.0.7) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003688 >>> dataset = DS003688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003688) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS003690: eeg dataset, 75 subjects *EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks* Access recordings and metadata through EEGDash. **Citation:** Maria J. Ribeiro, Miguel Castelo-Branco (2021). *EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks*. [10.18112/openneuro.ds003690.v1.0.0](https://doi.org/10.18112/openneuro.ds003690.v1.0.0) Modality: eeg Subjects: 75 Recordings: 375 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003690 dataset = DS003690(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003690(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003690( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003690, title = {EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks}, author = {Maria J. 
Ribeiro and Miguel Castelo-Branco}, doi = {10.18112/openneuro.ds003690.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003690.v1.0.0}, } ``` ## About This Dataset Age-related differences in EEG, ECG and pupillography during auditory cued reaction time tasks In this study, we acquired the electroencephalogram (EEG), pupillogram and electrocardiogram (ECG) while a group of young (N = 36) and a group of older (N = 39) adults were engaged in auditory cued reaction time tasks (active tasks) or passively listening to the auditory stimulus used as a temporal cue, presented with the same frequency as in the active tasks (passive task - 4 minutes acquired at the beginning of the session). The active tasks were a cued simple reaction time task and a cued go/no-go task. In the active tasks, 16% of the trials were cue-only trials (the cue was presented but no target followed). The order of the active tasks was counterbalanced across participants, and each task was acquired in two runs of 8 minutes. In each task, we acquired 120 trials. In the simple reaction time task, 100 trials were cue-target trials and 20 trials were cue-only. In the go/no-go task, 80 trials were cue-go trials, 20 were cue-no-go trials, and 20 trials were cue-only trials. Participants fixated on a grey computer screen with a lighter grey fixation cross at the center. The auditory stimuli were single-frequency signals (pure tones) with a duration of 250 ms, with the following frequencies: cue 1500 Hz; go stimulus 1700 Hz; no-go stimulus 1300 Hz; and error feedback signal 1000 Hz. The sounds were played at around 67 dB(A) from a hi-fi speaker system. All stimuli were suprathreshold. The EEG signal was recorded using a 64-channel Neuroscan system with scalp electrodes placed according to the International 10-20 electrode placement standard, with the reference between electrodes CPz and Cz and the ground between FPz and Fz. The acquisition rate was 500 Hz.
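As a quick arithmetic cross-check of the trial scheme just described, the per-task counts and the tone frequencies can be tabulated for later use against the events files. This is only a sketch: the dictionary and function names are illustrative, while the numbers are transcribed from the description above.

```python
# Trial composition per active task, as described in this dataset's README.
TRIALS = {
    "simple_rt": {"cue_target": 100, "cue_only": 20},
    "go_nogo": {"cue_go": 80, "cue_nogo": 20, "cue_only": 20},
}

# Pure-tone frequencies (Hz) for each auditory event in the task.
TONES_HZ = {"cue": 1500, "go": 1700, "no_go": 1300, "error_feedback": 1000}


def total_trials(task: str) -> int:
    """Sum the trial counts for one task."""
    return sum(TRIALS[task].values())


# Each active task comprises 120 trials in total.
for task in TRIALS:
    assert total_trials(task) == 120
```

The asserts mirror the README's own arithmetic (100 + 20 and 80 + 20 + 20), so a transcription error in the table would be caught immediately.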
Vertical and horizontal electrooculograms were recorded to monitor eye movements and blinks. Bipolar electrocardiogram (ECG) electrodes were placed on the chest. During data acquisition, the participant's head was stabilized with a chin and forehead rest. Consequently, the electrodes on the forehead, FP1, FPz, and FP2, displayed signal fluctuation artifacts due to the pressure on the forehead rest. These were excluded from the recordings. Electrode positions were measured using a Fastrak 3D digitizer (Polhemus, VT, USA) and imported into the EEGLAB files. Pupil data were acquired with an iView X Hi-Speed 1250 system from SMI at a sampling rate of 240 Hz. Pupil data were imported into the EEG dataset with the EYE-EEG EEGLAB plugin. Synchronized EEG, ECG and pupil data are included as separate channels in the EEGLAB .set files. ## Dataset Information | Dataset ID | `DS003690` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks | | Author (year) | `Ribeiro2021` | | Canonical | — | | Importable as | `DS003690`, `Ribeiro2021` | | Year | 2021 | | Authors | Maria J. Ribeiro, Miguel Castelo-Branco | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003690.v1.0.0](https://doi.org/10.18112/openneuro.ds003690.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003690) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) | [Source URL](https://openneuro.org/datasets/ds003690) | ### Copy-paste BibTeX ```bibtex @dataset{ds003690, title = {EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks}, author = {Maria J.
Ribeiro and Miguel Castelo-Branco}, doi = {10.18112/openneuro.ds003690.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003690.v1.0.0}, } ``` ## Technical Details - Subjects: 75 - Recordings: 375 - Tasks: 3 - Channels: 66 (365), 64 (10) - Sampling rate (Hz): 500.0 - Duration (hours): 47.65696944444444 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 21.5 GB - File count: 375 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003690.v1.0.0 - Source: openneuro - OpenNeuro: [ds003690](https://openneuro.org/datasets/ds003690) - NeMAR: [ds003690](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) ## API Reference Use the `DS003690` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003690(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks * **Study:** `ds003690` (OpenNeuro) * **Author (year):** `Ribeiro2021` * **Canonical:** — Also importable as: `DS003690`, `Ribeiro2021`. Modality: `eeg`. Subjects: 75; recordings: 375; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003690](https://openneuro.org/datasets/ds003690) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003690](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) DOI: [https://doi.org/10.18112/openneuro.ds003690.v1.0.0](https://doi.org/10.18112/openneuro.ds003690.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003690 >>> dataset = DS003690(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003690) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003694: meg dataset, 28 subjects *MEGMEM* Access recordings and metadata through EEGDash. **Citation:** Benjamin J. Griffiths, María Carmen Martín-Buro, Bernhard Staresina, Simon Hanslmayr (2021). *MEGMEM*. 
[10.18112/openneuro.ds003694.v1.0.0](https://doi.org/10.18112/openneuro.ds003694.v1.0.0) Modality: meg Subjects: 28 Recordings: 132 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003694 dataset = DS003694(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003694(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003694( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003694, title = {MEGMEM}, author = {Benjamin J. Griffiths and María Carmen Martín-Buro and Bernhard Staresina and Simon Hanslmayr}, doi = {10.18112/openneuro.ds003694.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003694.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS003694` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MEGMEM | | Author (year) | `Griffiths2021` | | Canonical | `MEGMEM` | | Importable as | `DS003694`, `Griffiths2021`, `MEGMEM` | | Year | 2021 | | Authors | Benjamin J. 
Griffiths, María Carmen Martín-Buro, Bernhard Staresina, Simon Hanslmayr | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003694.v1.0.0](https://doi.org/10.18112/openneuro.ds003694.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003694) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) | [Source URL](https://openneuro.org/datasets/ds003694) | ### Copy-paste BibTeX ```bibtex @dataset{ds003694, title = {MEGMEM}, author = {Benjamin J. Griffiths and María Carmen Martín-Buro and Bernhard Staresina and Simon Hanslmayr}, doi = {10.18112/openneuro.ds003694.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003694.v1.0.0}, } ``` ## Technical Details - Subjects: 28 - Recordings: 132 - Tasks: 1 - Channels: 327 (109), 319 (19), 336 (4) - Sampling rate (Hz): 1000.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 218.5 GB - File count: 132 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003694.v1.0.0 - Source: openneuro - OpenNeuro: [ds003694](https://openneuro.org/datasets/ds003694) - NeMAR: [ds003694](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) ## API Reference Use the `DS003694` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003694(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEGMEM * **Study:** `ds003694` (OpenNeuro) * **Author (year):** `Griffiths2021` * **Canonical:** `MEGMEM` Also importable as: `DS003694`, `Griffiths2021`, `MEGMEM`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 28; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003694](https://openneuro.org/datasets/ds003694) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003694](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) DOI: [https://doi.org/10.18112/openneuro.ds003694.v1.0.0](https://doi.org/10.18112/openneuro.ds003694.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003694 >>> dataset = DS003694(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003694) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003702: eeg dataset, 47 subjects *Social Memory cuing* Access recordings and metadata through EEGDash. **Citation:** Samantha Gregory, Hongfang Wang, Klaus Kessler (2021). *Social Memory cuing*. [10.18112/openneuro.ds003702.v1.0.1](https://doi.org/10.18112/openneuro.ds003702.v1.0.1) Modality: eeg Subjects: 47 Recordings: 47 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003702 dataset = DS003702(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003702(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003702( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003702, title = {Social Memory cuing}, author = {Samantha Gregory and Hongfang Wang and Klaus Kessler}, doi = {10.18112/openneuro.ds003702.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003702.v1.0.1}, } ``` ## About This Dataset EEG raw and processed data for a memory task presented in virtual reality. The full task is on OSF: [https://osf.io/s9xmu/files](https://osf.io/s9xmu/files) Derivatives: behavioral data from the task and the processed data are in the derivatives folder. Code: event codes are listed in the code file; the trial function, preprocessing, data cleaning and analysis are also in the code file. Task: wearing a head-mounted display to present the task in virtual reality, participants completed a visual working memory task. In the task they had to remember the status of and details about presented objects. A person or a stick cued the items such that it could look left or right, and items could appear on the left or right. Sometimes the cue was valid (i.e. pointed where objects appeared) and sometimes it was invalid (pointed away from where objects appeared). The objects were always a bowl, a cup, a plate and a teapot. Each could have a different status that needed to be remembered. Participants were probed on memory for item location and then item status. Preprint to be added. Event codes within the dataset are as follows. The trial function is included in the code folder.
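The trigger codes listed below follow a regular pattern (avatar cues s3021–s3027, stick cues s3041–s3047, same trial stages in the same order), so for scripting they can be generated into a lookup table rather than typed out. A sketch with illustrative names; the stage descriptions are abbreviated from this README:

```python
# The two cue types share one trial structure, offset by 20 (s302x vs s304x).
EVENT_STAGES = [
    "cue shown", "objects shown", "maintenance interval",
    "probe object shown", "resp 1 made", "Q2 shown", "resp 2 made",
]

EVENT_CODES = {}
for cue, base in (("avatar", 3021), ("stick", 3041)):
    for offset, stage in enumerate(EVENT_STAGES):
        EVENT_CODES[f"s{base + offset}"] = f"{cue}: {stage}"

print(EVENT_CODES["s3023"])  # avatar: maintenance interval
```

Such a mapping can then be used, for example, to relabel event markers after loading a recording.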
For the avatar cue:

- s3021 Character shown - i.e. the moment the avatar appears
- s3022 Objects shown - i.e. the moment the memory targets appear
- s3023 Maintenance interval - i.e. the moment the memory objects leave the screen and the blank maintenance interval occurs
- s3024 Probe object shown - i.e. the moment the participant is presented with the location probe
- s3025 Resp 1 made - i.e. the moment the participant has responded to the location probe
- s3026 Q2 shown - i.e. the moment the participant is asked a question about the status of the objects
- s3027 Resp 2 made - i.e. the moment the participant has responded to the status probe

For the stick cue:

- s3041 Stick shown - i.e. the moment the stick appears
- s3042 Objects shown - i.e. the moment the memory targets appear
- s3043 Maintenance interval - i.e. the moment the memory objects leave the screen and the blank maintenance interval occurs
- s3044 Probe object shown - i.e. the moment the participant is presented with the location probe
- s3045 Resp 1 made - i.e. the moment the participant has responded to the location probe
- s3046 Q2 shown - i.e. the moment the participant is asked a question about the status of the objects
- s3047 Resp 2 made - i.e. the moment the participant has responded to the status probe

Trial info (EEG processed data):

1. Main condition (MainCon): 1 = Stick Congruent; 2 = Stick Incongruent; 3 = Avatar Congruent; 4 = Avatar Incongruent
2. Cue condition, left or right (MConCueLR): 1 = Stick Congruent, cue shifts left; 2 = Stick Congruent, cue shifts right; 3 = Stick Incongruent, cue shifts left; 4 = Stick Incongruent, cue shifts right; 5 = Avatar Congruent, cue shifts left; 6 = Avatar Congruent, cue shifts right; 7 = Avatar Incongruent, cue shifts left; 8 = Avatar Incongruent, cue shifts right
3. Condition from experiment build (con): 1 = Congruent, cue shift L, items left, same location; 2 = Congruent, cue shift L, items left, different location; 3 = Congruent, cue shift R, items right, same location; 4 = Congruent, cue shift R, items right, different location; 5 = Incongruent, cue shift L, items right, same location; 6 = Incongruent, cue shift L, items right, different location; 7 = Incongruent, cue shift R, items left, same location; 8 = Incongruent, cue shift R, items left, different location
4. Validity: 1 = valid (congruent); 2 = invalid (incongruent)
5. Location: 1 = same (i.e. probe at the same location as initially presented); 2 = different (i.e. probe at a different location than initially presented)
6. Cue: 1 = stick cue; 2 = avatar cue
7. Left or right cue (LorR), specific to the cue shift: 1 = left stick; 2 = right stick; 3 = left avatar; 4 = right avatar
8. Start: sample time for the start of the trial
9. Cue: sample time for cue onset
10. Targets: sample time for target onset
11. Maintenance: sample time for the start of the maintenance interval
12. Location probe: sample time for the location probe being shown
13. Location response time: sample time for the response to the location probe being made
14. Status question: sample time for the status question being asked
15. Status response time: sample time for the response to the status question being made
16. Accuracy for the location question
17. Accuracy for the status question

## Dataset Information | Dataset ID | `DS003702` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Social Memory cuing | | Author (year) | `Gregory2021` | | Canonical | — | | Importable as | `DS003702`, `Gregory2021` | | Year | 2021 | | Authors | Samantha Gregory, Hongfang Wang, Klaus Kessler | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003702.v1.0.1](https://doi.org/10.18112/openneuro.ds003702.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003702) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) | [Source URL](https://openneuro.org/datasets/ds003702) | ### Copy-paste BibTeX ```bibtex @dataset{ds003702, title = {Social Memory cuing}, author = {Samantha Gregory and Hongfang Wang and Klaus Kessler}, doi = {10.18112/openneuro.ds003702.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003702.v1.0.1}, } ``` ## Technical Details - Subjects: 47 - Recordings: 47 - Tasks: 1 - Channels: 59 - Sampling rate (Hz): 500.0 - Duration (hours): 40.45904166666666 - Pathology: Not specified - Modality: — - Type: — - Size on
disk: 17.5 GB - File count: 47 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003702.v1.0.1 - Source: openneuro - OpenNeuro: [ds003702](https://openneuro.org/datasets/ds003702) - NeMAR: [ds003702](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) ## API Reference Use the `DS003702` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003702(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Memory cuing * **Study:** `ds003702` (OpenNeuro) * **Author (year):** `Gregory2021` * **Canonical:** — Also importable as: `DS003702`, `Gregory2021`. Modality: `eeg`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003702](https://openneuro.org/datasets/ds003702) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003702](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) DOI: [https://doi.org/10.18112/openneuro.ds003702.v1.0.1](https://doi.org/10.18112/openneuro.ds003702.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003702 >>> dataset = DS003702(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003702) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003703: meg dataset, 34 subjects *Frequency Tagging of Syntactic Structure or Lexical Properties* Access recordings and metadata through EEGDash. **Citation:** Evgenii Kalenkovich, Anna Shestakova, Nina Kazanina (2021). *Frequency Tagging of Syntactic Structure or Lexical Properties*. 
[10.18112/openneuro.ds003703.v1.0.0](https://doi.org/10.18112/openneuro.ds003703.v1.0.0) Modality: meg Subjects: 34 Recordings: 102 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003703 dataset = DS003703(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003703(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003703( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003703, title = {Frequency Tagging of Syntactic Structure or Lexical Properties}, author = {Evgenii Kalenkovich and Anna Shestakova and Nina Kazanina}, doi = {10.18112/openneuro.ds003703.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003703.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. 
[http://doi.org/10.1038/sdata.2018.110](http://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS003703` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Frequency Tagging of Syntactic Structure or Lexical Properties | | Author (year) | `Kalenkovich2021` | | Canonical | `Kalenkovich2019` | | Importable as | `DS003703`, `Kalenkovich2021`, `Kalenkovich2019` | | Year | 2021 | | Authors | Evgenii Kalenkovich, Anna Shestakova, Nina Kazanina | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003703.v1.0.0](https://doi.org/10.18112/openneuro.ds003703.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003703) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) | [Source URL](https://openneuro.org/datasets/ds003703) | ### Copy-paste BibTeX ```bibtex @dataset{ds003703, title = {Frequency Tagging of Syntactic Structure or Lexical Properties}, author = {Evgenii Kalenkovich and Anna Shestakova and Nina Kazanina}, doi = {10.18112/openneuro.ds003703.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003703.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 102 - Tasks: 2 - Channels: 314 - Sampling rate (Hz): 1000.0 - Duration (hours): 21.81608277777778 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 92.3 GB - File count: 102 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003703.v1.0.0 - Source: openneuro - OpenNeuro: [ds003703](https://openneuro.org/datasets/ds003703) - NeMAR: [ds003703](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) ## API Reference Use the `DS003703` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Frequency Tagging of Syntactic Structure or Lexical Properties * **Study:** `ds003703` (OpenNeuro) * **Author (year):** `Kalenkovich2021` * **Canonical:** `Kalenkovich2019` Also importable as: `DS003703`, `Kalenkovich2021`, `Kalenkovich2019`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 102; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003703](https://openneuro.org/datasets/ds003703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003703](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) DOI: [https://doi.org/10.18112/openneuro.ds003703.v1.0.0](https://doi.org/10.18112/openneuro.ds003703.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003703 >>> dataset = DS003703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003703) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003708: ieeg dataset, 1 subjects *Basis profile curve identification to understand electrical stimulation effects in human brain networks* Access recordings and metadata through EEGDash. **Citation:** Dora Hermes, Gabriella Ojeda, Kai J. Miller, Multimodal Neuroimaging Laboratory at Mayo Clinic (2021). *Basis profile curve identification to understand electrical stimulation effects in human brain networks*. 
[10.18112/openneuro.ds003708.v1.0.0](https://doi.org/10.18112/openneuro.ds003708.v1.0.0) Modality: ieeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003708 dataset = DS003708(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003708(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003708( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003708, title = {Basis profile curve identification to understand electrical stimulation effects in human brain networks}, author = {Dora Hermes and Gabriella Ojeda and Kai J. Miller and Multimodal Neuroimaging Laboratory at Mayo Clinic}, doi = {10.18112/openneuro.ds003708.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003708.v1.0.0}, } ``` ## About This Dataset This dataset contains intracranial EEG recordings from one patient during single pulse electrical stimulation. These data were recorded at the Mayo Clinic in Rochester, MN, as part of the NIH Brain Initiative supported project R01 MH122258 “CRCNS: Processing speed in the human connectome across the lifespan”. The overarching goal of this project is to develop a large database of single pulse stimulation data and develop tools to advance our understanding of the human connectome across the lifespan. 
**Citing this dataset** This dataset is part of the paper on ‘Basis profile curve identification to understand electrical stimulation effects in human brain networks’ by Miller, Mueller and Hermes, 2021, [https://www.biorxiv.org/content/10.1101/2021.01.24.428020v1.full](https://www.biorxiv.org/content/10.1101/2021.01.24.428020v1.full). This project was funded by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH122258 to Dora Hermes (Mayo Clinic). The data were collected by Dora Hermes, Nick Gregg, Brian Lundstrom, Cindy Nelson, Gregg Worrell and Kai J. Miller. The BIDS formatting was performed by Dora Hermes and Gabriella Ojeda Valencia. **Format** The dataset is formatted according to BIDS version 1.3.0. **Details about the single pulse stimulation experiment** Patients were resting in the hospital bed while single pulse stimulation was performed at a frequency of ~0.2 Hz. The stimulation had a duration of 200 microseconds, was biphasic, and had an amplitude of 6 mA. On the motor cortex, stimulation amplitude was sometimes reduced to 1 or 2 mA to minimize movement artifacts. **Contact** Please contact Dora Hermes ([hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu)) for questions. ## Dataset Information | Dataset ID | `DS003708` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Basis profile curve identification to understand electrical stimulation effects in human brain networks | | Author (year) | `Hermes2021` | | Canonical | `Miller2021` | | Importable as | `DS003708`, `Hermes2021`, `Miller2021` | | Year | 2021 | | Authors | Dora Hermes, Gabriella Ojeda, Kai J. 
Miller, Multimodal Neuroimaging Laboratory at Mayo Clinic | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003708.v1.0.0](https://doi.org/10.18112/openneuro.ds003708.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003708) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) | [Source URL](https://openneuro.org/datasets/ds003708) | ### Copy-paste BibTeX ```bibtex @dataset{ds003708, title = {Basis profile curve identification to understand electrical stimulation effects in human brain networks}, author = {Dora Hermes and Gabriella Ojeda and Kai J. Miller and Multimodal Neuroimaging Laboratory at Mayo Clinic}, doi = {10.18112/openneuro.ds003708.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003708.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 89 - Sampling rate (Hz): 2048.0 - Duration (hours): 1.1047416178385416 - Pathology: Not specified - Modality: Other - Type: Clinical/Intervention - Size on disk: 620.1 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003708.v1.0.0 - Source: openneuro - OpenNeuro: [ds003708](https://openneuro.org/datasets/ds003708) - NeMAR: [ds003708](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) ## API Reference Use the `DS003708` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003708(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Basis profile curve identification to understand electrical stimulation effects in human brain networks * **Study:** `ds003708` (OpenNeuro) * **Author (year):** `Hermes2021` * **Canonical:** `Miller2021` Also importable as: `DS003708`, `Hermes2021`, `Miller2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003708](https://openneuro.org/datasets/ds003708) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003708](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) DOI: [https://doi.org/10.18112/openneuro.ds003708.v1.0.0](https://doi.org/10.18112/openneuro.ds003708.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003708 >>> dataset = DS003708(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003708) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS003710: eeg dataset, 13 subjects *APPLESEED Example Dataset* Access recordings and metadata through EEGDash. **Citation:** Cabell L. Williams, Meghan H. Puglia (2021). *APPLESEED Example Dataset*. [10.18112/openneuro.ds003710.v1.0.2](https://doi.org/10.18112/openneuro.ds003710.v1.0.2) Modality: eeg Subjects: 13 Recordings: 48 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003710 dataset = DS003710(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003710(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003710( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003710, title = {APPLESEED Example Dataset}, author = {Cabell L. Williams and Meghan H. 
Puglia}, doi = {10.18112/openneuro.ds003710.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003710.v1.0.2}, } ``` ## About This Dataset The APPLESEED Example Dataset This dataset consists of longitudinal EEG recordings from 13 infants at 4, 8, and 12 months of age. Test-retest reliability was assessed at 4 months of age via two appointments (sessions 1 & 2) that occurred within 1 week of each other. Session 3 data was recorded at 8 months of age and session 4 data was recorded at 12 months of age. Two participants did not return for longitudinal testing at sessions 3 & 4. Therefore, the complete dataset consists of 48 recording sessions, with reliability and longitudinal data (sessions 1-4) for 11 infants (6 F), and reliability data only (sessions 1 & 2) for an additional 2 infants. A channel location file and bin file for analysis are included in the “code” directory. This dataset was used to develop and validate the Automated Preprocessing Pipe-Line for the Estimation of Scale-wise Entropy from EEG Data (APPLESEED) and is provided as an example dataset to accompany Puglia, M.H., Slobin, J.S., & Williams, C.L., 2022. The Automated Preprocessing Pipe-Line for the Estimation of Scale-wise Entropy from EEG Data (APPLESEED): Development and validation for use in pediatric populations. Developmental Cognitive Neuroscience, 101163. APPLESEED code is available to download from [https://github.com/mhpuglia/APPLESEED](https://github.com/mhpuglia/APPLESEED). This dataset is part of a larger, ongoing longitudinal study initially described in Puglia, M.H., Krol, K.M., Missana, M., Williams, C.L., Lillard, T.S., Morris, J.P., Connelly, J.J. and Grossmann, T., 2020. Epigenetic tuning of brain signal entropy in emergent human social behavior. BMC Medicine, 18(1), pp.1-24. [https://doi.org/10.1186/s12916-020-01683-x](https://doi.org/10.1186/s12916-020-01683-x). 
## Dataset Information | Dataset ID | `DS003710` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | APPLESEED Example Dataset | | Author (year) | `Williams2021` | | Canonical | `APPLESEED` | | Importable as | `DS003710`, `Williams2021`, `APPLESEED` | | Year | 2021 | | Authors | Cabell L. Williams, Meghan H. Puglia | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003710.v1.0.2](https://doi.org/10.18112/openneuro.ds003710.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003710) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) | [Source URL](https://openneuro.org/datasets/ds003710) | ### Copy-paste BibTeX ```bibtex @dataset{ds003710, title = {APPLESEED Example Dataset}, author = {Cabell L. Williams and Meghan H. Puglia}, doi = {10.18112/openneuro.ds003710.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003710.v1.0.2}, } ``` ## Technical Details - Subjects: 13 - Recordings: 48 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 5000.0 - Duration (hours): 9.1654915 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.2 GB - File count: 48 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003710.v1.0.2 - Source: openneuro - OpenNeuro: [ds003710](https://openneuro.org/datasets/ds003710) - NeMAR: [ds003710](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) ## API Reference Use the `DS003710` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003710(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) APPLESEED Example Dataset * **Study:** `ds003710` (OpenNeuro) * **Author (year):** `Williams2021` * **Canonical:** `APPLESEED` Also importable as: `DS003710`, `Williams2021`, `APPLESEED`. 
Modality: `eeg`. Subjects: 13; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003710](https://openneuro.org/datasets/ds003710) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003710](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) DOI: [https://doi.org/10.18112/openneuro.ds003710.v1.0.2](https://doi.org/10.18112/openneuro.ds003710.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003710 >>> dataset = DS003710(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003710) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003739: eeg dataset, 30 subjects *Perturbed beam-walking task* Access recordings and metadata through EEGDash. **Citation:** Steven Peterson, Daniel Ferris (2021). *Perturbed beam-walking task*. [10.18112/openneuro.ds003739.v1.0.2](https://doi.org/10.18112/openneuro.ds003739.v1.0.2) Modality: eeg Subjects: 30 Recordings: 120 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003739 dataset = DS003739(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003739(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003739( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003739, title = {Perturbed beam-walking task}, author = {Steven Peterson and Daniel Ferris}, doi = {10.18112/openneuro.ds003739.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003739.v1.0.2}, } ``` ## About This Dataset Data was collected at the University of Michigan by Steven Peterson in the lab of Daniel Ferris. 
This study’s protocol was approved by the University of Michigan Institutional Review Board and all participants provided written consent. Each data file includes synchronized 128-channel EEG, lower leg EMG, neck EMG, EOG, and motion capture data. Participants performed four 10-minute, same-day sessions where they either stood or walked at 0.22 m/s on a treadmill-mounted balance beam that was 2.5 cm tall and 12.7 cm wide. During each session, participants were exposed to sensorimotor perturbations (either virtual-reality-induced visual field rotations or side-to-side waist pulls, lasting 0.5 seconds and 1 second in duration, respectively). Each session involved 150 perturbation events, balanced between rotation/pull directions. We have included the indices of all good channels for each participant in EEG.etc.good_chans of each .set file (these indices include non-EEG channels). Criteria for determining good/bad EEG channels can be found in our eNeuro publication. EEG.etc also includes the resulting ICA sphere and weight matrices computed on only the EEG channels, along with the selected good ICs that were retained for our analyses. 
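Since `EEG.etc.good_chans` stores MATLAB-style (1-based) indices, a sketch of applying them in Python looks like the following. The index values and channel names here are placeholders for illustration, not taken from the dataset:

```python
# Hypothetical good-channel indices as stored in EEG.etc.good_chans
# (1-based, MATLAB convention) and placeholder channel names.
good_chans_matlab = [1, 2, 5, 8]
channel_names = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "O1", "O2"]

# Convert to 0-based indices before selecting channels in Python.
good_idx = [i - 1 for i in good_chans_matlab]
picked = [channel_names[i] for i in good_idx]
print(picked)  # ['Fp1', 'Fp2', 'C3', 'O2']
```

The off-by-one conversion is the main thing to get right when reusing these indices outside MATLAB/EEGLAB.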
## Dataset Information | Dataset ID | `DS003739` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Perturbed beam-walking task | | Author (year) | `Peterson2021_Perturbed_beam_walking` | | Canonical | — | | Importable as | `DS003739`, `Peterson2021_Perturbed_beam_walking` | | Year | 2021 | | Authors | Steven Peterson, Daniel Ferris | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003739.v1.0.2](https://doi.org/10.18112/openneuro.ds003739.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003739) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) | [Source URL](https://openneuro.org/datasets/ds003739) | ### Copy-paste BibTeX ```bibtex @dataset{ds003739, title = {Perturbed beam-walking task}, author = {Steven Peterson and Daniel Ferris}, doi = {10.18112/openneuro.ds003739.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003739.v1.0.2}, } ``` ## Technical Details - Subjects: 30 - Recordings: 120 - Tasks: 4 - Channels: 149 - Sampling rate (Hz): 256.0 - Duration (hours): 20.57443576388889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.9 GB - File count: 120 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003739.v1.0.2 - Source: openneuro - OpenNeuro: [ds003739](https://openneuro.org/datasets/ds003739) - NeMAR: [ds003739](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) ## API Reference Use the `DS003739` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003739(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Perturbed beam-walking task * **Study:** `ds003739` (OpenNeuro) * **Author (year):** `Peterson2021_Perturbed_beam_walking` * **Canonical:** — Also importable as: `DS003739`, `Peterson2021_Perturbed_beam_walking`. Modality: `eeg`. Subjects: 30; recordings: 120; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
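The `ALLOWED_QUERY_FIELDS` restriction mentioned in the Notes can be pictured as a simple field check before a query is accepted. The field set below is an assumed illustrative subset, not the library's actual list:

```python
# Illustrative subset only; the real ALLOWED_QUERY_FIELDS lives in eegdash.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}

def check_query_fields(query: dict) -> dict:
    """Reject filters on fields outside the allowed set (assumed behavior)."""
    unsupported = set(query) - ALLOWED_QUERY_FIELDS
    if unsupported:
        raise ValueError(f"unsupported query fields: {sorted(unsupported)}")
    return query

check_query_fields({"subject": {"$in": ["01", "02"]}})  # passes the check
```

A query touching any other field would raise before reaching the metadata database.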
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003739](https://openneuro.org/datasets/ds003739) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003739](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) DOI: [https://doi.org/10.18112/openneuro.ds003739.v1.0.2](https://doi.org/10.18112/openneuro.ds003739.v1.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003739 >>> dataset = DS003739(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003739) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003751: eeg dataset, 38 subjects *Dataset on Emotion with Naturalistic Stimuli (DENS)* Access recordings and metadata through EEGDash. **Citation:** Sudhakar Mishra, Md. Asif, Uma Shanker Tiwary, Narayanan Srinivasan (2021). *Dataset on Emotion with Naturalistic Stimuli (DENS)*. 
[10.18112/openneuro.ds003751.v1.0.2](https://doi.org/10.18112/openneuro.ds003751.v1.0.2) Modality: eeg Subjects: 38 Recordings: 38 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003751 dataset = DS003751(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003751(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003751( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003751, title = {Dataset on Emotion with Naturalistic Stimuli (DENS)}, author = {Sudhakar Mishra and Md. Asif and Uma Shanker Tiwary and Narayanan Srinivasan}, doi = {10.18112/openneuro.ds003751.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003751.v1.0.2}, } ``` ## About This Dataset **Overview** This is the “Emotion” dataset, recorded with a naturalistic paradigm. In brief, it contains EEG, ECG, and EMG data from 40 subjects emotionally stimulated using naturalistic emotion stimuli. The stimuli are multimedia videos providing context to understand the situated conceptualization of emotions. For details, see the `Details about the experiment` section. **Citing this dataset** For citation details, see the `dataset_description.json` file. **License** This eeg_emotion dataset is made available under the Open Database License; see the LICENSE file.
Human-readable information can be found at: [https://opendatacommons.org/licenses/odbl/summary/](https://opendatacommons.org/licenses/odbl/summary/) Any rights in individual contents of the database are licensed under the Database Contents License: [http://opendatacommons.org/licenses/dbcl/1.0/](http://opendatacommons.org/licenses/dbcl/1.0/) **Dataset Description** The dataset_description file describes the metadata for the dataset. Participant details are described in the participants.json and participants.tsv files. Each subject directory contains two directories: beh and eeg. A TSV file inside the beh folder records the feedback given by each subject on the self-assessment scales (valence, arousal, dominance, liking, familiarity, relevance, and emotion category), along with mouse-click timestamps and other details. The eeg folder inside each subject directory contains the raw EEG data in .set & .fdt format, along with task-event information in the \_task-emotion_events.tsv file. The stimuli directory contains the stimuli used during the experiment. In addition, the feedback spreadsheet participant_details.xlsx, filled in by participants, is included. The code directory contains Python code for data collection, Python code for data validation, and a MATLAB file for pre-processing the raw data. ## Dataset Information | Dataset ID | `DS003751` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset on Emotion with Naturalistic Stimuli (DENS) | | Author (year) | `Mishra2021` | | Canonical | `DENS` | | Importable as | `DS003751`, `Mishra2021`, `DENS` | | Year | 2021 | | Authors | Sudhakar Mishra, Md.
Asif, Uma Shanker Tiwary, Narayanan Srinivasan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003751.v1.0.2](https://doi.org/10.18112/openneuro.ds003751.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003751) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) | [Source URL](https://openneuro.org/datasets/ds003751) | ### Copy-paste BibTeX ```bibtex @dataset{ds003751, title = {Dataset on Emotion with Naturalistic Stimuli (DENS)}, author = {Sudhakar Mishra and Md. Asif and Uma Shanker Tiwary and Narayanan Srinivasan}, doi = {10.18112/openneuro.ds003751.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003751.v1.0.2}, } ``` ## Technical Details - Subjects: 38 - Recordings: 38 - Tasks: 1 - Channels: 131 - Sampling rate (Hz): 250.0 - Duration (hours): 19.94988888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 4.7 GB - File count: 38 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003751.v1.0.2 - Source: openneuro - OpenNeuro: [ds003751](https://openneuro.org/datasets/ds003751) - NeMAR: [ds003751](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) ## API Reference Use the `DS003751` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003751(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset on Emotion with Naturalistic Stimuli (DENS) * **Study:** `ds003751` (OpenNeuro) * **Author (year):** `Mishra2021` * **Canonical:** `DENS` Also importable as: `DS003751`, `Mishra2021`, `DENS`. Modality: `eeg`. Subjects: 38; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003751](https://openneuro.org/datasets/ds003751) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003751](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) DOI: [https://doi.org/10.18112/openneuro.ds003751.v1.0.2](https://doi.org/10.18112/openneuro.ds003751.v1.0.2) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003751 >>> dataset = DS003751(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003751) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003753: eeg dataset, 25 subjects *EEG: Probabilistic Learning with Affective Feedback: Exp #2* Access recordings and metadata through EEGDash. **Citation:** Darin R. Brown, Trevor Jackson, James F Cavanagh (2021). *EEG: Probabilistic Learning with Affective Feedback: Exp #2*. [10.18112/openneuro.ds003753.v1.1.0](https://doi.org/10.18112/openneuro.ds003753.v1.1.0) Modality: eeg Subjects: 25 Recordings: 25 License: CC0 Source: openneuro Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003753 dataset = DS003753(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003753(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003753( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003753, title = {EEG: Probabilistic Learning with Affective Feedback: Exp #2}, author = {Darin R. 
Brown and Trevor Jackson and James F Cavanagh}, doi = {10.18112/openneuro.ds003753.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003753.v1.1.0}, } ``` ## About This Dataset RL task in N=25 college-age participants. Data collected circa 2019 in the CRCL at UNM. The paper [Brown, D.R., Jackson, T.J. & Cavanagh, J.F. The Reward Positivity is sensitive to affective liking] should be coming out in Cognitive, Affective, & Behavioral Neuroscience. THIS IS EXPERIMENT #2. Your best bet for understanding this task would be to read that paper first. Note we have since made minor adjustments to the task which really enhance the ability to resolve the RewP. I also have analytic scripts for it. If you are interested in running this task, contact me for the new version. - James F Cavanagh 07/02/2021 ## Dataset Information | Dataset ID | `DS003753` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Probabilistic Learning with Affective Feedback: Exp #2 | | Author (year) | `Brown2021_Probabilistic` | | Canonical | — | | Importable as | `DS003753`, `Brown2021_Probabilistic` | | Year | 2021 | | Authors | Darin R. Brown, Trevor Jackson, James F Cavanagh | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003753.v1.1.0](https://doi.org/10.18112/openneuro.ds003753.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003753) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) | [Source URL](https://openneuro.org/datasets/ds003753) | ### Copy-paste BibTeX ```bibtex @dataset{ds003753, title = {EEG: Probabilistic Learning with Affective Feedback: Exp #2}, author = {Darin R.
Brown and Trevor Jackson and James F Cavanagh}, doi = {10.18112/openneuro.ds003753.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003753.v1.1.0}, } ``` ## Technical Details - Subjects: 25 - Recordings: 25 - Tasks: 1 - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): 10.10425 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 4.6 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003753.v1.1.0 - Source: openneuro - OpenNeuro: [ds003753](https://openneuro.org/datasets/ds003753) - NeMAR: [ds003753](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) ## API Reference Use the `DS003753` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003753(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp #2 * **Study:** `ds003753` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic` * **Canonical:** — Also importable as: `DS003753`, `Brown2021_Probabilistic`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`.
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003753](https://openneuro.org/datasets/ds003753) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003753](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) ### Examples ```pycon >>> from eegdash.dataset import DS003753 >>> dataset = DS003753(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003753) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) # DS003766: eeg dataset, 31 subjects *A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking* Access recordings and metadata through EEGDash. **Citation:** Kun Chen, Ruien Wang, Jiamin Huang, Fei Gao, Zhen Yuan, Yanyan Qi, Haiyan Wu (2021). *A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking*. 
[10.18112/openneuro.ds003766.v2.0.3](https://doi.org/10.18112/openneuro.ds003766.v2.0.3) Modality: eeg Subjects: 31 Recordings: 124 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003766 dataset = DS003766(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003766(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003766( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003766, title = {A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking}, author = {Kun Chen and Ruien Wang and Jiamin Huang and Fei Gao and Zhen Yuan and Yanyan Qi and Haiyan Wu}, doi = {10.18112/openneuro.ds003766.v2.0.3}, url = {https://doi.org/10.18112/openneuro.ds003766.v2.0.3}, } ``` ## About This Dataset **A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking** **Description** This dataset, collected in 2020, combines high-density electroencephalography (HD-EEG, 128 channels) and mouse-tracking, and is intended as a resource for examining the dynamic decision process of semantics and preference choices in the human brain. The dataset includes high-density resting-state and task-related (food preference choices and semantic judgments) EEG acquired from 31 individuals (ages: 18-33). **EEG acquisition** The EEG data were acquired using a 128-channel cap based on the standard 10/20 system with an Electrical Geodesics Inc. (EGI, Eugene, Oregon) system.
During recording, the sampling rate was 1000 Hz, and the E129 (Cz) electrode was used as the reference. Electrode impedances were kept below 50 kΩ for each electrode throughout the experiment. **Main files** **`sub-*`**: EEG (`.set`) and behavior data in BIDS format. **`sourcedata/rawdata`**: Raw `.mff` EGI data and behavior data with subject information desensitization. **`sourcedata/psychopy`**: Stimuli and PsychoPy scripts for presentation. **`derivatives/eeglab-preproc`**: Preprocessed continuous EEG data from EEGLAB (making it easy to set different epoch time windows for further analysis). **Others** Please refer to the [corresponding paper](https://doi.org/10.1038/s41597-022-01538-5) and [GitHub code](https://github.com/andlab-um/MT-EEG-dataset) for more details. **References** Chen, K., Wang, R., Huang, J., Gao, F., Yuan, Z., Qi, Y., & Wu, H. (2022). A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking. Scientific Data, 9(1), 416. [https://doi.org/10.1038/s41597-022-01538-5](https://doi.org/10.1038/s41597-022-01538-5) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS003766` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking | | Author (year) | `Chen2021` | | Canonical | — | | Importable as | `DS003766`, `Chen2021` | | Year | 2021 | | Authors | Kun Chen, Ruien Wang, Jiamin Huang, Fei Gao, Zhen Yuan, Yanyan Qi, Haiyan Wu | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003766.v2.0.3](https://doi.org/10.18112/openneuro.ds003766.v2.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003766) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) | [Source URL](https://openneuro.org/datasets/ds003766) | ### Copy-paste BibTeX ```bibtex @dataset{ds003766, title = {A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking}, author = {Kun Chen and Ruien Wang and Jiamin Huang and Fei Gao and Zhen Yuan and Yanyan Qi and Haiyan Wu}, doi = {10.18112/openneuro.ds003766.v2.0.3}, url = {https://doi.org/10.18112/openneuro.ds003766.v2.0.3}, } ``` ## Technical Details - Subjects: 31 - Recordings: 124 - Tasks: 4 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 40.77816472222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 71.3 GB - File count: 124 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003766.v2.0.3 - Source: openneuro - OpenNeuro: [ds003766](https://openneuro.org/datasets/ds003766) - NeMAR: [ds003766](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) ## API Reference Use the `DS003766` class to access this dataset programmatically.
### *class* eegdash.dataset.DS003766(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking * **Study:** `ds003766` (OpenNeuro) * **Author (year):** `Chen2021` * **Canonical:** — Also importable as: `DS003766`, `Chen2021`. Modality: `eeg`. Subjects: 31; recordings: 124; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
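The `derivatives/eeglab-preproc` files for this dataset are continuous EEG intended to be re-epoched into whatever time window an analysis needs. A minimal numpy sketch of that cutting step, assuming a `(n_channels, n_samples)` array and known event sample indices (`make_epochs` is a hypothetical helper, not part of eegdash; a real pipeline would use `mne.Epochs`):

```python
import numpy as np

def make_epochs(data, event_samples, sfreq, tmin=-0.2, tmax=0.8):
    """Cut (n_epochs, n_channels, n_times) windows around event onsets."""
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq))
    epochs = [
        data[:, s + start : s + stop]
        for s in event_samples
        # Drop events whose window would run off either edge of the data.
        if s + start >= 0 and s + stop <= data.shape[1]
    ]
    return np.stack(epochs)

rng = np.random.default_rng(0)
cont = rng.standard_normal((128, 10_000))  # 128 channels, 10 s at 1000 Hz
ep = make_epochs(cont, event_samples=[1000, 3000, 5000], sfreq=1000.0)
print(ep.shape)
# (3, 128, 1000)
```

Changing `tmin`/`tmax` is all it takes to try a different epoch window, which is the flexibility the derivative folder is advertising.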
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003766](https://openneuro.org/datasets/ds003766) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003766](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) DOI: [https://doi.org/10.18112/openneuro.ds003766.v2.0.3](https://doi.org/10.18112/openneuro.ds003766.v2.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003766 >>> dataset = DS003766(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003766) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003768: eeg dataset, 33 subjects *Simultaneous EEG and fMRI signals during sleep from humans* Access recordings and metadata through EEGDash. **Citation:** Yameng Gu, Feng Han, Lucas E. Sainburg, Margeaux M. Schade, Orfeu M. Buxton, Jeff H. Duyn, Xiao Liu (2021). *Simultaneous EEG and fMRI signals during sleep from humans*. 
[10.18112/openneuro.ds003768.v1.0.0](https://doi.org/10.18112/openneuro.ds003768.v1.0.0) Modality: eeg Subjects: 33 Recordings: 255 License: CC0 Source: openneuro Citations: 21.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003768 dataset = DS003768(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003768(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003768( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003768, title = {Simultaneous EEG and fMRI signals during sleep from humans}, author = {Yameng Gu and Feng Han and Lucas E. Sainburg and Margeaux M. Schade and Orfeu M. Buxton and Jeff H. Duyn and Xiao Liu}, doi = {10.18112/openneuro.ds003768.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003768.v1.0.0}, } ``` ## About This Dataset This dataset includes 33 healthy participants, collected at Penn State with informed consent. Simultaneously collected EEG and BOLD signals for each participant were recorded and organized in each participant's folder. EEG data were collected using a 32-channel MR-compatible EEG system (Brain Products, Munich, Germany). R128 in the EEG signals corresponds to the BOLD fMRI volume trigger. Each scanning visit consisted of an anatomical session, two 10-min resting-state sessions, and several 15-min sleep sessions. The first resting-state session was conducted before a visual-motor adaptation task (Albouy et al, Journal of Sleep Research, 2013) and the second resting-state session was conducted after it.
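Sleep in this dataset is scored in 30-second epochs, one stage label per epoch, stored as TSV files under sourcedata. A small sketch, assuming a plain Python list of stage labels, of expanding those epoch labels to one label per EEG sample (`expand_stages` is a hypothetical helper, not part of eegdash):

```python
def expand_stages(stages, epoch_len_s=30, sfreq=100):
    """Repeat each 30-s sleep-stage label once per sample at `sfreq`."""
    labels = []
    for stage in stages:
        labels.extend([stage] * int(epoch_len_s * sfreq))
    return labels

# Three scored epochs -> 3 * 30 s * 100 Hz = 9000 sample-level labels.
labels = expand_stages(["W", "N1", "N2"], epoch_len_s=30, sfreq=100)
print(len(labels))
# 9000
```

At this dataset's native 5000 Hz one would either pass `sfreq=5000` or, more practically, downsample the EEG before aligning it with the hypnogram.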
The scored sleep stages for these 33 subjects were organized under sourcedata folder. Each TSV file contained the sleep stages for each 30-sec epoch across different sessions for each subject. For more information or any questions about this dataset, please see the manuscript on bioRxiv or contact Yameng Gu ([ymgu95@gmail.com](mailto:ymgu95@gmail.com)) ## Dataset Information | Dataset ID | `DS003768` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Simultaneous EEG and fMRI signals during sleep from humans | | Author (year) | `Gu2021` | | Canonical | — | | Importable as | `DS003768`, `Gu2021` | | Year | 2021 | | Authors | Yameng Gu, Feng Han, Lucas E. Sainburg, Margeaux M. Schade, Orfeu M. Buxton, Jeff H. Duyn, Xiao Liu | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003768.v1.0.0](https://doi.org/10.18112/openneuro.ds003768.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003768) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) | [Source URL](https://openneuro.org/datasets/ds003768) | ### Copy-paste BibTeX ```bibtex @dataset{ds003768, title = {Simultaneous EEG and fMRI signals during sleep from humans}, author = {Yameng Gu and Feng Han and Lucas E. Sainburg and Margeaux M. Schade and Orfeu M. Buxton and Jeff H. 
Duyn and Xiao Liu}, doi = {10.18112/openneuro.ds003768.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003768.v1.0.0}, } ``` ## Technical Details - Subjects: 33 - Recordings: 255 - Tasks: 2 - Channels: 32 - Sampling rate (Hz): 5000.0 - Duration (hours): 60.597566666666665 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 86.6 GB - File count: 255 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003768.v1.0.0 - Source: openneuro - OpenNeuro: [ds003768](https://openneuro.org/datasets/ds003768) - NeMAR: [ds003768](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) ## API Reference Use the `DS003768` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simultaneous EEG and fMRI signals during sleep from humans * **Study:** `ds003768` (OpenNeuro) * **Author (year):** `Gu2021` * **Canonical:** — Also importable as: `DS003768`, `Gu2021`. Modality: `eeg`. Subjects: 33; recordings: 255; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003768](https://openneuro.org/datasets/ds003768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003768](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) DOI: [https://doi.org/10.18112/openneuro.ds003768.v1.0.0](https://doi.org/10.18112/openneuro.ds003768.v1.0.0) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS003768 >>> dataset = DS003768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003768) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003774: eeg dataset, 20 subjects *Music Listening- Genre EEG dataset (MUSIN-G)* Access recordings and metadata through EEGDash. **Citation:** Krishna Prasad Miyapuram, Pankaj Pandey, Nashra Ahmad, Bharatesh R Shiraguppi, Esha Sharma, Prashant Lawhatre, Dhananjay Sonawane, Derek Lomas (2021). *Music Listening- Genre EEG dataset (MUSIN-G)*. 
[10.18112/openneuro.ds003774.v1.0.0](https://doi.org/10.18112/openneuro.ds003774.v1.0.0) Modality: eeg Subjects: 20 Recordings: 240 License: CC0 Source: openneuro Citations: 8.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003774 dataset = DS003774(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003774(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003774( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003774, title = {Music Listening- Genre EEG dataset (MUSIN-G)}, author = {Krishna Prasad Miyapuram and Pankaj Pandey and Nashra Ahmad and Bharatesh R Shiraguppi and Esha Sharma and Prashant Lawhatre and Dhananjay Sonawane and Derek Lomas}, doi = {10.18112/openneuro.ds003774.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003774.v1.0.0}, } ``` ## About This Dataset The dataset contains electroencephalography (EEG) responses from 20 Indian participants to 12 songs of different genres (from Indian Classical to Goth Rock). Each session indicates a song by its number. For the experiment, a single beep signaled participants to close their eyes, and the song was then presented over speakers. After listening to each song, a double beep asked them to open their eyes and rate their familiarity with and enjoyment of the song. The responses were taken on a scale of 1 to 5, where 1 meant most familiar or most enjoyable, and 5 meant least familiar or least enjoyable.
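Note that the rating scale above is inverted (1 = most familiar/enjoyable, 5 = least). A tiny hypothetical helper for recoding ratings so that higher values mean more familiarity or enjoyment, which is the convention most analyses expect:

```python
def recode_rating(r, max_scale=5):
    """Map 1 (most) .. 5 (least) onto 5 (most) .. 1 (least)."""
    if not 1 <= r <= max_scale:
        raise ValueError("rating out of range")
    return max_scale + 1 - r

print([recode_rating(r) for r in [1, 3, 5]])
# [5, 3, 1]
```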
## Dataset Information | Dataset ID | `DS003774` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Music Listening- Genre EEG dataset (MUSIN-G) | | Author (year) | `Miyapuram2021` | | Canonical | `MUSING` | | Importable as | `DS003774`, `Miyapuram2021`, `MUSING` | | Year | 2021 | | Authors | Krishna Prasad Miyapuram, Pankaj Pandey, Nashra Ahmad, Bharatesh R Shiraguppi, Esha Sharma, Prashant Lawhatre, Dhananjay Sonawane, Derek Lomas | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003774.v1.0.0](https://doi.org/10.18112/openneuro.ds003774.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003774) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) | [Source URL](https://openneuro.org/datasets/ds003774) | ### Copy-paste BibTeX ```bibtex @dataset{ds003774, title = {Music Listening- Genre EEG dataset (MUSIN-G)}, author = {Krishna Prasad Miyapuram and Pankaj Pandey and Nashra Ahmad and Bharatesh R Shiraguppi and Esha Sharma and Prashant Lawhatre and Dhananjay Sonawane and Derek Lomas}, doi = {10.18112/openneuro.ds003774.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003774.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 240 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 (132), 250.0 (108) - Duration (hours): 8.64 - Pathology: Healthy - Modality: Auditory - Type: Affect - Size on disk: 10.1 GB - File count: 240 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003774.v1.0.0 - Source: openneuro - OpenNeuro: [ds003774](https://openneuro.org/datasets/ds003774) - NeMAR: [ds003774](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) ## API Reference Use the `DS003774` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Music Listening- Genre EEG dataset (MUSIN-G) * **Study:** `ds003774` (OpenNeuro) * **Author (year):** `Miyapuram2021` * **Canonical:** `MUSING` Also importable as: `DS003774`, `Miyapuram2021`, `MUSING`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 20; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003774](https://openneuro.org/datasets/ds003774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003774](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) DOI: [https://doi.org/10.18112/openneuro.ds003774.v1.0.0](https://doi.org/10.18112/openneuro.ds003774.v1.0.0) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003774 >>> dataset = DS003774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003774) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003775: eeg dataset, 111 subjects *SRM Resting-state EEG* Access recordings and metadata through EEGDash. **Citation:** Christoffer Hatlestad-Hall, Trine Waage Rygvold, Stein Andersson (2021). *SRM Resting-state EEG*. 
[10.18112/openneuro.ds003775.v1.2.1](https://doi.org/10.18112/openneuro.ds003775.v1.2.1) Modality: eeg Subjects: 111 Recordings: 153 License: CC0 Source: openneuro Citations: 8.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003775 dataset = DS003775(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003775(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003775( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003775, title = {SRM Resting-state EEG}, author = {Christoffer Hatlestad-Hall and Trine Waage Rygvold and Stein Andersson}, doi = {10.18112/openneuro.ds003775.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds003775.v1.2.1}, } ``` ## About This Dataset **SRM Resting-state EEG** **Introduction** This EEG dataset contains resting-state EEG extracted from the experimental paradigm used in the *Stimulus-Selective Response Modulation* (SRM) project at the Dept. of Psychology, University of Oslo, Norway. The data is recorded with a BioSemi ActiveTwo system, using 64 electrodes following the positional scheme of the extended 10-20 system (10-10). 
Each datafile comprises four minutes of uninterrupted EEG acquired while the subjects were resting with their eyes closed. The dataset includes EEG from 111 healthy control subjects (the “t1” session), of which a number underwent an additional EEG recording at a later date (the “t2” session). Thus, some subjects have one associated EEG file, whereas others have two. **Disclaimer** The dataset is provided “as is”. Hereunder, the authors take no responsibility with regard to data quality. The user is solely responsible for ascertaining that the data used for publications or in other contexts fulfil the required quality criteria. **The data** **Raw data files** The raw EEG data signals are re-referenced to the average reference. Other than that, no operations have been performed on the data. The files contain no events; the whole continuous segment is resting-state data. The data signals are unfiltered (recorded in Europe, the line noise frequency is 50 Hz). The time points for the subject’s EEG recording(s) are listed in the `*_scans.tsv` file (particularly interesting for the subjects with two recordings). Please note that the quality of the raw data has **not** been carefully assessed. While most data files are of high quality, a few might be of poorer quality. The data files are provided “as is”, and it is the user’s responsibility to ascertain the quality of the individual data file. **/derivatives/cleaned_data** For convenience, a cleaned dataset is provided. The files in this derived dataset have been preprocessed with a basic, fully automated pipeline (see /code/s2_preprocess.m for details). The derived files are stored as EEGLAB .set files in a directory structure identical to that of the raw files. Please note that the `*_channels.tsv` files associated with the derived files have been updated with status information about each channel (“good” or “bad”). 
The “bad” channels are – for the sake of consistency – interpolated, and thus still present in the data. It might be advisable to remove these channels in some analyses, as they (by definition) do not add any information to the EEG data. The cleaned data signals are referenced to the average reference (including the interpolated channels). Please mind the automatic nature of the employed pipeline. It might not perform optimally on all data files (*e.g.* over- or underestimating the proportion of bad channels). For publications, we recommend implementing a more sensitive cleaning pipeline. **Demographic and cognitive test data** The *participants.tsv* file in the root folder contains the variables age, sex, and a range of cognitive test scores. See the sidecar participants.json for more information on the behavioural measures. Please note that these measures were collected in connection with the “t1” session recording. **How to cite** All use of this dataset in a publication context requires the following paper to be cited: Hatlestad-Hall, C., Rygvold, T. W., & Andersson, S. (2022). BIDS-structured resting-state electroencephalography (EEG) data extracted from an experimental paradigm. Data in Brief, 45, 108647. [https://doi.org/10.1016/j.dib.2022.108647](https://doi.org/10.1016/j.dib.2022.108647) **Contact** Questions regarding the EEG data may be addressed to Christoffer Hatlestad-Hall ([chr.hh@pm.me](mailto:chr.hh@pm.me)). Questions regarding the project in general may be addressed to Stein Andersson ([stein.andersson@psykologi.uio.no](mailto:stein.andersson@psykologi.uio.no)) or Trine W. Rygvold ([t.w.rygvold@psykologi.uio.no](mailto:t.w.rygvold@psykologi.uio.no)). 
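Since only a subset of subjects returned for the “t2” session, longitudinal analyses first need the subjects present at both time points. A sketch under the assumption that (subject, session) pairs have already been read from the `*_scans.tsv` files or from `dataset.description` (the example pairs below are illustrative, not real dataset contents):

```python
from collections import defaultdict

# Illustrative (subject, session) pairs; in practice these would come from the
# per-subject *_scans.tsv files or from dataset.description after loading DS003775.
scans = [
    ("sub-001", "t1"), ("sub-001", "t2"),
    ("sub-002", "t1"),
    ("sub-003", "t1"), ("sub-003", "t2"),
]

# Group the sessions recorded for each subject
sessions_by_subject = defaultdict(set)
for subject, session in scans:
    sessions_by_subject[subject].add(session)

# Subjects recorded at both time points ("t1" and "t2" per the dataset README)
longitudinal = sorted(s for s, ses in sessions_by_subject.items() if ses == {"t1", "t2"})
print(longitudinal)  # ['sub-001', 'sub-003']
```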
## Dataset Information | Dataset ID | `DS003775` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SRM Resting-state EEG | | Author (year) | `HatlestadHall2021` | | Canonical | — | | Importable as | `DS003775`, `HatlestadHall2021` | | Year | 2021 | | Authors | Christoffer Hatlestad-Hall, Trine Waage Rygvold, Stein Andersson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003775.v1.2.1](https://doi.org/10.18112/openneuro.ds003775.v1.2.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003775) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) | [Source URL](https://openneuro.org/datasets/ds003775) | ### Copy-paste BibTeX ```bibtex @dataset{ds003775, title = {SRM Resting-state EEG}, author = {Christoffer Hatlestad-Hall and Trine Waage Rygvold and Stein Andersson}, doi = {10.18112/openneuro.ds003775.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds003775.v1.2.1}, } ``` ## Technical Details - Subjects: 111 - Recordings: 153 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1024.0 - Duration (hours): 10.2 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 4.5 GB - File count: 153 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003775.v1.2.1 - Source: openneuro - OpenNeuro: [ds003775](https://openneuro.org/datasets/ds003775) - NeMAR: [ds003775](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) ## API Reference Use the `DS003775` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003775(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SRM Resting-state EEG * **Study:** `ds003775` (OpenNeuro) * **Author (year):** `HatlestadHall2021` * **Canonical:** — Also importable as: `DS003775`, `HatlestadHall2021`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 111; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003775](https://openneuro.org/datasets/ds003775) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003775](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) DOI: [https://doi.org/10.18112/openneuro.ds003775.v1.2.1](https://doi.org/10.18112/openneuro.ds003775.v1.2.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003775 >>> dataset = DS003775(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003775) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003800: eeg dataset, 13 subjects *Auditory Gamma Entrainment* Access recordings and metadata through EEGDash. **Citation:** Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan, Zahra Vahabi (2021). *Auditory Gamma Entrainment*. 
[10.18112/openneuro.ds003800.v1.0.0](https://doi.org/10.18112/openneuro.ds003800.v1.0.0) Modality: eeg Subjects: 13 Recordings: 24 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003800 dataset = DS003800(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003800(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003800( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003800, title = {Auditory Gamma Entrainment}, author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan and Zahra Vahabi}, doi = {10.18112/openneuro.ds003800.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003800.v1.0.0}, } ``` ## About This Dataset > Introduction This experiment was designed to entrain brain oscillations through synthetic auditory stimulation, conducted on a group of elderly participants suffering from dementia. Recently, gamma entrainment has been proposed and shown effective in improving several symptoms of Alzheimer’s Disease (AD). The aim of this study is to investigate the effect of entrainment on brain oscillations using EEG recorded during auditory stimulation. This study was approved by the Review Board of Tehran University of Medical Sciences (Approval ID: IR.TUMS.MEDICINE.REC.1398.524) and all participants provided informed consent before participating and were free to withdraw at any time. 
> Rest data Before the main task, one minute of eyes-open data was recorded to measure raw resting-state potentials. The rest data for participants 6 and 13 are missing. > Auditory stimulation Two speakers were placed in front of the participant 50cm apart from each other and directly pointed at the participant’s ears at a distance of 50cm. The sound intensity was around -40dB within a fixed range for all participants. Before starting the task, the participant was asked if the volume was loud enough and the sound volume was set at a comfortable level for each participant. The auditory stimulus was a 5kHz carrier tone amplitude modulated with a 40Hz rectangular wave (40Hz On and Off cycles). Since a 40Hz audio signal cannot be easily heard, the 5kHz carrier frequency was used to render the 40Hz pulse train audible. In order to minimize the effect of the carrier sound, the duty cycle of the modulating 40Hz waveform was set to 4% (1ms of the 25ms cycle was On). The auditory stimulus was generated in MATLAB and played as a .wav file. This file consisted of six trials of 40sec stimulus interleaved with five trials of 20sec rest (silence). The entire session resulted in 340sec (6\*40+5\*20) of recorded EEG signal. > EEG recording and preprocessing All EEG data were recorded using 19 monopolar channels in the standard 10/20 system referenced to the earlobes, sampled at 250Hz, and the impedance of the electrodes was kept under 20kOhm. Data from all the participants were preprocessed identically following Makoto’s preprocessing pipeline: Highpass filtering above 1Hz; removal of the line noise; rejecting potential bad channels; interpolating rejected channels; re-referencing data to the average; Artifact Subspace Reconstruction (ASR); re-referencing data to the average again; estimating the brain source activity using independent component analysis (ICA); dipole fitting; rejecting bad dipoles (sources) for further cleaning the data. 
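The 40Hz amplitude-modulated stimulus can be reconstructed from the parameters stated above. A minimal NumPy sketch, assuming a 44.1kHz audio rate (the actual .wav sampling rate is not stated in the dataset):

```python
import numpy as np

fs = 44_100                 # assumed audio rate; the dataset does not state it
trial_s = 40.0              # one stimulation trial lasts 40 s
t = np.arange(int(fs * trial_s)) / fs

carrier = np.sin(2 * np.pi * 5_000 * t)          # 5 kHz carrier tone
envelope = ((t % 0.025) < 0.001).astype(float)   # 40 Hz pulse train: 1 ms on per 25 ms cycle (4% duty)
stimulus = carrier * envelope

# Session layout: six 40 s trials interleaved with five 20 s silences
session_s = 6 * 40 + 5 * 20                      # 340 s of recorded EEG
```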
These preprocessing steps were performed using the EEGLAB MATLAB toolbox. > Instructions During the experiment, participants were seated comfortably with open eyes in a quiet room. They were instructed to relax their bodies to avoid muscle artifacts and to move their heads as little as possible. ## Dataset Information | Dataset ID | `DS003800` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory Gamma Entrainment | | Author (year) | `Lahijanian2021_Auditory` | | Canonical | — | | Importable as | `DS003800`, `Lahijanian2021_Auditory` | | Year | 2021 | | Authors | Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan, Zahra Vahabi | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003800.v1.0.0](https://doi.org/10.18112/openneuro.ds003800.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003800) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) | [Source URL](https://openneuro.org/datasets/ds003800) | ### Copy-paste BibTeX ```bibtex @dataset{ds003800, title = {Auditory Gamma Entrainment}, author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan and Zahra Vahabi}, doi = {10.18112/openneuro.ds003800.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003800.v1.0.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 24 - Tasks: 2 - Channels: 19 - Sampling rate (Hz): 250.0 - Duration (hours): 1.41 - Pathology: Dementia - Modality: Auditory - Type: Clinical/Intervention - Size on disk: 189.3 MB - File count: 24 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003800.v1.0.0 - Source: openneuro - OpenNeuro: [ds003800](https://openneuro.org/datasets/ds003800) - NeMAR: [ds003800](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) ## API Reference Use the `DS003800` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS003800(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Gamma Entrainment * **Study:** `ds003800` (OpenNeuro) * **Author (year):** `Lahijanian2021_Auditory` * **Canonical:** — Also importable as: `DS003800`, `Lahijanian2021_Auditory`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 13; recordings: 24; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003800](https://openneuro.org/datasets/ds003800) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003800](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) DOI: [https://doi.org/10.18112/openneuro.ds003800.v1.0.0](https://doi.org/10.18112/openneuro.ds003800.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003800 >>> dataset = DS003800(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003800) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003801: eeg dataset, 20 subjects *Neural Tracking to go* Access recordings and metadata through EEGDash. **Citation:** Lisa Straetmans, Bjoern Holtze, Stefan Debener, Manuela Jaeger, Bojana Mirkovic (2021). *Neural Tracking to go*. 
[10.18112/openneuro.ds003801.v1.0.0](https://doi.org/10.18112/openneuro.ds003801.v1.0.0) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003801 dataset = DS003801(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003801(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003801( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003801, title = {Neural Tracking to go}, author = {Lisa Straetmans and Bjoern Holtze and Stefan Debener and Manuela Jaeger and Bojana Mirkovic}, doi = {10.18112/openneuro.ds003801.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003801.v1.0.0}, } ``` ## About This Dataset This mobile EEG auditory attention experiment consists of 20 participants. 
In a two-competing-speaker paradigm, subjects either sat on a chair or walked a route indoors. Attention was disrupted by salient environmental events from in front of the participant. - Lisa Straetmans (Sep 2021) ## Dataset Information | Dataset ID | `DS003801` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neural Tracking to go | | Author (year) | `Straetmans2021` | | Canonical | — | | Importable as | `DS003801`, `Straetmans2021` | | Year | 2021 | | Authors | Lisa Straetmans, Bjoern Holtze, Stefan Debener, Manuela Jaeger, Bojana Mirkovic | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003801.v1.0.0](https://doi.org/10.18112/openneuro.ds003801.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003801) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) | [Source URL](https://openneuro.org/datasets/ds003801) | ### Copy-paste BibTeX ```bibtex @dataset{ds003801, title = {Neural Tracking to go}, author = {Lisa Straetmans and Bjoern Holtze and Stefan Debener and Manuela Jaeger and Bojana Mirkovic}, doi = {10.18112/openneuro.ds003801.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003801.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 24 - Sampling rate (Hz): 250.0 - Duration (hours): 13.69 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.1 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003801.v1.0.0 - Source: openneuro - OpenNeuro: [ds003801](https://openneuro.org/datasets/ds003801) - NeMAR: [ds003801](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) ## API Reference Use the `DS003801` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural Tracking to go * **Study:** `ds003801` (OpenNeuro) * **Author (year):** `Straetmans2021` * **Canonical:** — Also importable as: `DS003801`, `Straetmans2021`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003801](https://openneuro.org/datasets/ds003801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003801](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) DOI: [https://doi.org/10.18112/openneuro.ds003801.v1.0.0](https://doi.org/10.18112/openneuro.ds003801.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003801 >>> dataset = DS003801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003801) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003805: eeg dataset, 1 subject *Multisensory Gamma Entrainment* Access recordings and metadata through EEGDash. **Citation:** Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan (2021). *Multisensory Gamma Entrainment*. 
[10.18112/openneuro.ds003805.v1.0.0](https://doi.org/10.18112/openneuro.ds003805.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003805 dataset = DS003805(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003805(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003805( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003805, title = {Multisensory Gamma Entrainment}, author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan}, doi = {10.18112/openneuro.ds003805.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003805.v1.0.0}, } ``` ## About This Dataset > Introduction This experiment was designed to study the effects of different sensory modalities (auditory, visual, and audio-visual) on brain entrainment. The EEG data was collected from a young healthy volunteer (a 23-year-old male). Recently, gamma entrainment based on individual (auditory or visual) sensory stimulation, as well as simultaneous auditory and visual stimulation, has been proposed and shown effective in improving several symptoms of Alzheimer’s Disease (AD) in mice and humans. The aim of this study is to investigate the effect of different modalities in producing synchronized brain oscillations. The task is composed of three epochs of auditory, visual, and audio-visual stimulation, respectively, each lasting for 40sec in one session. 
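The single session thus contains three 40sec stimulation epochs. A small sketch of the per-epoch sample count at the 500Hz sampling rate reported later on this page; epoch onsets are deliberately not computed, since the participant could rest between epochs and those gaps are not specified:

```python
sfreq = 500.0          # sampling rate reported for this dataset
epoch_s = 40.0         # each stimulation epoch lasts 40 s
conditions = ["auditory", "visual", "audio-visual"]

# Samples per epoch; onsets must come from the recording's event annotations.
n_samples = int(sfreq * epoch_s)
print({c: n_samples for c in conditions})  # 20000 samples each
```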
> Auditory stimulation Two speakers were placed in front of the participant 50cm apart from each other and directly pointed at the participant’s ears at a distance of 50cm. The sound intensity was set to around -40dB. Before starting the task, the participant was asked if the volume was loud enough and the sound volume was set at a comfortable level for him. The auditory stimulus was a 5kHz carrier tone amplitude modulated with a 40Hz rectangular wave (40Hz On and Off cycles). Since a 40Hz audio signal cannot be easily heard, the 5kHz carrier frequency was used to render the 40Hz pulse train audible. In order to minimize the effect of the carrier sound, the duty cycle of the modulating 40Hz waveform was set to 4% (1ms of the 25ms cycle was On). The auditory stimulus was generated in MATLAB and played as a .wav file. This file consisted of 40sec of stimulus. > Visual stimulation The visual stimulus was a 20Hz flickering white light produced by an array of LEDs and reflected from a white wall at 50cm distance in front of the participant (open eyes) with 50% On cycles (duty cycle = 50%) flickering for 40sec. Due to the presence of harmonic frequencies in the pulse train of the stimulus, the 20Hz stimulus is able to drive 40Hz oscillations in the brain. > EEG recording and preprocessing The EEG data were recorded using 19 monopolar channels in the standard 10/20 system referenced to the earlobes, sampled at 500Hz, and the impedance of the electrodes was kept under 20kOhm. Data from all three epochs were preprocessed identically following Makoto’s preprocessing pipeline: Highpass filtering above 1Hz; removal of line noise; rejecting potential bad channels; interpolating rejected channels; re-referencing data to the average; Artifact Subspace Reconstruction (ASR); re-referencing data to the average again; estimating the brain source activity using independent component analysis (ICA); dipole fitting; rejecting bad dipoles (sources) to further clean the data. 
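The auditory stimulus design above (a 5 kHz carrier gated by a 40 Hz rectangular wave with a 4% duty cycle) can be sketched in pure Python. The sampling rate, function name, and gating logic here are illustrative assumptions, not the dataset's actual MATLAB code:

```python
import math

def am_stimulus(duration_s=1.0, fs=44100, carrier_hz=5000.0,
                mod_hz=40.0, duty=0.04):
    """Amplitude-modulate a sine carrier with a rectangular pulse train.

    Each 1/mod_hz cycle (25 ms at 40 Hz) is "On" for duty * cycle length
    (1 ms at a 4% duty cycle), as in the description above.
    """
    samples = []
    for n in range(int(duration_s * fs)):
        t = n / fs
        phase_in_cycle = (t * mod_hz) % 1.0  # position within the 25 ms cycle
        gate = 1.0 if phase_in_cycle < duty else 0.0
        samples.append(gate * math.sin(2 * math.pi * carrier_hz * t))
    return samples

sig = am_stimulus(duration_s=0.1)
# Fraction of non-silent samples should be close to the 4% duty cycle
on_fraction = sum(1 for s in sig if s != 0.0) / len(sig)
```

Writing `samples` to a .wav file (as the authors did) would then only require scaling to the target bit depth.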
These preprocessing steps were performed using the EEGLAB MATLAB toolbox. > Instructions During the experiment, the participant was seated comfortably with open eyes in a quiet room. He was instructed to relax his body to avoid muscle artifacts and to move his head as little as possible. The participant was free to take a rest after each epoch but the EEG cap was not taken off. ## Dataset Information | Dataset ID | `DS003805` | |----------------|-----------------| | Title | Multisensory Gamma Entrainment | | Author (year) | `Lahijanian2021_Multisensory` | | Canonical | — | | Importable as | `DS003805`, `Lahijanian2021_Multisensory` | | Year | 2021 | | Authors | Mojtaba Lahijanian, Mohammad Javad Sedghizadeh, Hamid Aghajan | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003805.v1.0.0](https://doi.org/10.18112/openneuro.ds003805.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003805) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) | [Source URL](https://openneuro.org/datasets/ds003805) | ### Copy-paste BibTeX ```bibtex @dataset{ds003805, title = {Multisensory Gamma Entrainment}, author = {Mojtaba Lahijanian and Mohammad Javad Sedghizadeh and Hamid Aghajan}, doi = {10.18112/openneuro.ds003805.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003805.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 500.0 - Duration (hours): 0.0333333333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.8 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003805.v1.0.0 - Source: openneuro - OpenNeuro: [ds003805](https://openneuro.org/datasets/ds003805) - NeMAR: [ds003805](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) ## API Reference Use the `DS003805` 
class to access this dataset programmatically. ### *class* eegdash.dataset.DS003805(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Gamma Entrainment * **Study:** `ds003805` (OpenNeuro) * **Author (year):** `Lahijanian2021_Multisensory` * **Canonical:** — Also importable as: `DS003805`, `Lahijanian2021_Multisensory`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
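The MongoDB-style semantics noted above (plain values match by equality, operator documents such as `$in` match by membership, and all fields are ANDed together) can be illustrated with a minimal matcher. This is a sketch of the filter semantics only, not EEGDash's actual query engine:

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: a plain value tests equality,
    {"$in": [...]} tests membership; all fields are ANDed together."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical metadata records, for illustration only
records = [{"dataset": "ds003805", "subject": "01"},
           {"dataset": "ds003805", "subject": "03"}]
query = {"dataset": "ds003805", "subject": {"$in": ["01", "02"]}}
hits = [r for r in records if matches(r, query)]  # only subject "01" matches
```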
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003805](https://openneuro.org/datasets/ds003805) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003805](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) DOI: [https://doi.org/10.18112/openneuro.ds003805.v1.0.0](https://doi.org/10.18112/openneuro.ds003805.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003805 >>> dataset = DS003805(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003805) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003810: eeg dataset, 10 subjects *Motor Imagery vs Rest - Low-Cost EEG System* Access recordings and metadata through EEGDash. **Citation:** Victoria Peterson, Catalina Maria Galvan, Hugo Sacha Hernadez, Ruben Spies (2021). *Motor Imagery vs Rest - Low-Cost EEG System*. 
[10.18112/openneuro.ds003810.v2.0.2](https://doi.org/10.18112/openneuro.ds003810.v2.0.2) Modality: eeg Subjects: 10 Recordings: 50 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003810 dataset = DS003810(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003810(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003810( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003810, title = {Motor Imagery vs Rest - Low-Cost EEG System}, author = {Victoria Peterson and Catalina Maria Galvan and Hugo Sacha Hernadez and Ruben Spies}, doi = {10.18112/openneuro.ds003810.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003810.v2.0.2}, } ``` ## About This Dataset This dataset consists of electroencephalography (EEG) signals acquired with a low-cost consumer-grade device. The 10 participants had no previous BCI experience. The BCI protocol consisted of two conditions, namely the kinesthetic imagination of a grasping movement (MI) of the dominant hand and a rest/idle condition. The user was asked to perform five protocol runs. The first run, called RUN0, involved real grasping movement in order to better explain the protocol and to help the subject focus on the sensation of making the movement. The rest of the runs (RUN1-RUN4) were identical, consisting of MI vs. rest conditions. The EMG signals of the dominant hand were acquired for protocol control. 
During acquisition, the EEG signals were filtered between 0.5 and 45 Hz with a 3rd-order Butterworth band-pass filter. ## Dataset Information | Dataset ID | `DS003810` | |----------------|-----------------| | Title | Motor Imagery vs Rest - Low-Cost EEG System | | Author (year) | `Peterson2021_Motor_Imagery_vs` | | Canonical | — | | Importable as | `DS003810`, `Peterson2021_Motor_Imagery_vs` | | Year | 2021 | | Authors | Victoria Peterson, Catalina Maria Galvan, Hugo Sacha Hernadez, Ruben Spies | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003810.v2.0.2](https://doi.org/10.18112/openneuro.ds003810.v2.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003810) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) | [Source URL](https://openneuro.org/datasets/ds003810) | ### Copy-paste BibTeX ```bibtex @dataset{ds003810, title = {Motor Imagery vs Rest - Low-Cost EEG System}, author = {Victoria Peterson and Catalina Maria Galvan and Hugo Sacha Hernadez and Ruben Spies}, doi = {10.18112/openneuro.ds003810.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003810.v2.0.2}, } ``` ## Technical Details - Subjects: 10 - Recordings: 50 - Tasks: 1 - Channels: 15 - Sampling rate (Hz): 125.0 - Duration (hours): 5.188611111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 69.0 MB - File count: 50 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003810.v2.0.2 - Source: openneuro - OpenNeuro: [ds003810](https://openneuro.org/datasets/ds003810) - NeMAR: [ds003810](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) ## API Reference Use the `DS003810` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery vs Rest - Low-Cost EEG System * **Study:** `ds003810` (OpenNeuro) * **Author (year):** `Peterson2021_Motor_Imagery_vs` * **Canonical:** — Also importable as: `DS003810`, `Peterson2021_Motor_Imagery_vs`. Modality: `eeg`. Subjects: 10; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003810](https://openneuro.org/datasets/ds003810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003810](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) DOI: [https://doi.org/10.18112/openneuro.ds003810.v2.0.2](https://doi.org/10.18112/openneuro.ds003810.v2.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003810 >>> dataset = DS003810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003810) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003816: eeg dataset, 48 subjects *The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect* Access recordings and metadata through EEGDash. **Citation:** SUN, Rui, Ven WONG, Goon Fui, GAO, Jungling (—). *The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect*. 
[10.18112/openneuro.ds003816.v1.0.1](https://doi.org/10.18112/openneuro.ds003816.v1.0.1) Modality: eeg Subjects: 48 Recordings: 1077 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003816 dataset = DS003816(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003816(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003816( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003816, title = {The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect}, author = {SUN, Rui and Ven WONG, Goon Fui and GAO, Jungling}, doi = {10.18112/openneuro.ds003816.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003816.v1.0.1}, } ``` ## About This Dataset This dataset contains EEG and ECG recordings from 48 participants in the following eyes-closed states: Pre-rest; Post-rest; Radiating LKM to Self; Radiating LKM to Other; Visualize Self; and Visualize Other. Of the 48 participants, 15 took part as long-term practitioners and completed EEG and ECG recording sessions more than 10 times within two months; the remaining participants were recorded only once. High-density EEG and one-channel ECG were collected simultaneously by a bio-signal amplifier (actiCHamp, Brain Products, Germany) from the 48 participants during the whole LKM training session with a sampling frequency of 1000 Hz. 
128 EEG electrodes were fixed on the participant’s scalp according to the International 10-20 System. One ECG electrode was fixed on the V3 lead. All electrodes’ impedance was kept under 20 kOhm to maintain a good signal-to-noise ratio. ## Dataset Information | Dataset ID | `DS003816` | |----------------|-----------------| | Title | The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect | | Author (year) | `Sun2024` | | Canonical | — | | Importable as | `DS003816`, `Sun2024` | | Year | — | | Authors | SUN, Rui, Ven WONG, Goon Fui, GAO, Jungling | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003816.v1.0.1](https://doi.org/10.18112/openneuro.ds003816.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003816) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) | [Source URL](https://openneuro.org/datasets/ds003816) | ### Copy-paste BibTeX ```bibtex @dataset{ds003816, title = {The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect}, author = {SUN, Rui and Ven WONG, Goon Fui and GAO, Jungling}, doi = {10.18112/openneuro.ds003816.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003816.v1.0.1}, } ``` ## Technical Details - Subjects: 48 - Recordings: 1077 - Tasks: 8 - Channels: 128 - Sampling rate (Hz): 1000.0 - Duration (hours): 161.7142777777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 54.0 GB - File count: 1077 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003816.v1.0.1 - Source: openneuro - OpenNeuro: [ds003816](https://openneuro.org/datasets/ds003816) - NeMAR: [ds003816](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) ## API Reference Use the `DS003816` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect * **Study:** `ds003816` (OpenNeuro) * **Author (year):** `Sun2024` * **Canonical:** — Also importable as: `DS003816`, `Sun2024`. Modality: `eeg`. Subjects: 48; recordings: 1077; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003816](https://openneuro.org/datasets/ds003816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003816](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) DOI: [https://doi.org/10.18112/openneuro.ds003816.v1.0.1](https://doi.org/10.18112/openneuro.ds003816.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS003816 >>> dataset = DS003816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003816) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003822: eeg dataset, 25 subjects *EEG: Probabilistic Learning with Affective Feedback: Exp #1* Access recordings and metadata through EEGDash. **Citation:** Darin R. Brown, Trevor Jackson, James F Cavanagh (2021). *EEG: Probabilistic Learning with Affective Feedback: Exp #1*. 
[10.18112/openneuro.ds003822.v1.1.0](https://doi.org/10.18112/openneuro.ds003822.v1.1.0) Modality: eeg Subjects: 25 Recordings: 25 License: CC0 Source: openneuro Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003822 dataset = DS003822(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003822(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003822( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003822, title = {EEG: Probabilistic Learning with Affective Feedback: Exp #1}, author = {Darin R. Brown and Trevor Jackson and James F Cavanagh}, doi = {10.18112/openneuro.ds003822.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003822.v1.1.0}, } ``` ## About This Dataset RL task in N=25 college age participants. Data collected circa 2018 in the CRCL at UNM. The paper [Brown, D.R., Jackson, T.J. & Cavanagh, J.F. The Reward Positivity is sensitive to affective liking] is now coming out in Cognitive, Affective, & Behavioral Neuroscience. Your best bet for understanding this task would be to read that paper first. I’ve included additional scripts to help understand stimulus triggers etc. These additional scripts were for a secondary analysis: they were not the scripts used for the paper above. So they are slightly different and have some interesting (unfinished) tangents. 
- James F Cavanagh 09/29/2021 ## Dataset Information | Dataset ID | `DS003822` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Probabilistic Learning with Affective Feedback: Exp #1 | | Author (year) | `Brown2021_Probabilistic_Learning` | | Canonical | — | | Importable as | `DS003822`, `Brown2021_Probabilistic_Learning` | | Year | 2021 | | Authors | Darin R. Brown, Trevor Jackson, James F Cavanagh | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003822.v1.1.0](https://doi.org/10.18112/openneuro.ds003822.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003822) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) | [Source URL](https://openneuro.org/datasets/ds003822) | ### Copy-paste BibTeX ```bibtex @dataset{ds003822, title = {EEG: Probabilistic Learning with Affective Feedback: Exp #1}, author = {Darin R. Brown and Trevor Jackson and James F Cavanagh}, doi = {10.18112/openneuro.ds003822.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003822.v1.1.0}, } ``` ## Technical Details - Subjects: 25 - Recordings: 25 - Tasks: 1 - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): 12.87734722222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.8 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003822.v1.1.0 - Source: openneuro - OpenNeuro: [ds003822](https://openneuro.org/datasets/ds003822) - NeMAR: [ds003822](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) ## API Reference Use the `DS003822` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003822(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp #1 * **Study:** `ds003822` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic_Learning` * **Canonical:** — Also importable as: `DS003822`, `Brown2021_Probabilistic_Learning`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003822](https://openneuro.org/datasets/ds003822) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003822](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) ### Examples ```pycon >>> from eegdash.dataset import DS003822 >>> dataset = DS003822(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003822) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) # DS003825: eeg dataset, 50 subjects *Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts* Access recordings and metadata through EEGDash. **Citation:** Grootswagers, Tijl, Zhou, Ivy, Robinson, Amanda, Hebart, Martin, Carlson, Thomas (2021). *Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts*. 
[10.18112/openneuro.ds003825.v1.1.0](https://doi.org/10.18112/openneuro.ds003825.v1.1.0) Modality: eeg Subjects: 50 Recordings: 50 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003825 dataset = DS003825(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003825(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003825( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003825, title = {Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts}, author = {Grootswagers, Tijl and Zhou, Ivy and Robinson, Amanda and Hebart, Martin and Carlson, Thomas}, doi = {10.18112/openneuro.ds003825.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003825.v1.1.0}, } ``` ## About This Dataset Experiment Details Human electroencephalography recordings from 50 subjects for 1,854 concepts and 22,248 images in the THINGS stimulus database. Images were presented in rapid serial visual presentation streams at 10Hz rates. Participants performed an orthogonal fixation colour change detection task. 
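As a back-of-envelope check on the design above, presenting 22,248 images at a 10 Hz RSVP rate amounts to roughly 37 minutes of stimulation per subject, which fits comfortably within the reported 1-hour session. This is simple arithmetic on the figures above, not values from the BIDS metadata:

```python
n_images = 22_248  # images from the THINGS stimulus database
rate_hz = 10       # rapid serial visual presentation rate

stim_seconds = n_images / rate_hz  # total stimulation time in seconds
stim_minutes = stim_seconds / 60   # roughly 37 minutes
```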
Experiment length: 1 hour ## Dataset Information | Dataset ID | `DS003825` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts | | Author (year) | `Grootswagers2021` | | Canonical | `THINGS`, `THINGS_EEG` | | Importable as | `DS003825`, `Grootswagers2021`, `THINGS`, `THINGS_EEG` | | Year | 2021 | | Authors | Grootswagers, Tijl, Zhou, Ivy, Robinson, Amanda, Hebart, Martin, Carlson, Thomas | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003825.v1.1.0](https://doi.org/10.18112/openneuro.ds003825.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003825) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) | [Source URL](https://openneuro.org/datasets/ds003825) | ### Copy-paste BibTeX ```bibtex @dataset{ds003825, title = {Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts}, author = {Grootswagers, Tijl and Zhou, Ivy and Robinson, Amanda and Hebart, Martin and Carlson, Thomas}, doi = {10.18112/openneuro.ds003825.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds003825.v1.1.0}, } ``` ## Technical Details - Subjects: 50 - Recordings: 50 - Tasks: 1 - Channels: 63 (48), 128 (2) - Sampling rate (Hz): 1000.0 - Duration (hours): 46.32270555555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 41.2 GB - File count: 50 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003825.v1.1.0 - Source: openneuro - OpenNeuro: [ds003825](https://openneuro.org/datasets/ds003825) - NeMAR: [ds003825](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) ## API Reference Use the `DS003825` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003825(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts * **Study:** `ds003825` (OpenNeuro) * **Author (year):** `Grootswagers2021` * **Canonical:** `THINGS`, `THINGS_EEG` Also importable as: `DS003825`, `Grootswagers2021`, `THINGS`, `THINGS_EEG`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003825](https://openneuro.org/datasets/ds003825) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003825](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) DOI: [https://doi.org/10.18112/openneuro.ds003825.v1.1.0](https://doi.org/10.18112/openneuro.ds003825.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003825 >>> dataset = DS003825(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003825) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003838: eeg dataset, 65 subjects *EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest* Access recordings and metadata through EEGDash. **Citation:** Yuri G. Pavlov, Dauren Kasanov, Alexandra I. Kosachenko, Alexander I. Kotyusov (2021). *EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest*. 
[10.18112/openneuro.ds003838.v1.0.6](https://doi.org/10.18112/openneuro.ds003838.v1.0.6) Modality: eeg Subjects: 65 Recordings: 130 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003838 dataset = DS003838(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003838(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003838( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003838, title = {EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest}, author = {Yuri G. Pavlov and Dauren Kasanov and Alexandra I. Kosachenko and Alexander I. Kotyusov}, doi = {10.18112/openneuro.ds003838.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds003838.v1.0.6}, } ``` ## About This Dataset This dataset consists of raw 64-channel EEG, cardiovascular (electrocardiography and photoplethysmography), and pupillometry data from 86 human participants during 4 minutes of eyes-closed resting and during performance of a classic working memory task – digit span task with serial recall. The participants either memorized (memory) or just listened to (control condition) sequences of 5, 9, or 13 digits presented auditorily with 2 second stimulus onset asynchrony. 
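The "Advanced query" pattern in the quickstart uses standard MongoDB operators; the same mechanism can express exclusions with operators such as `$nin`. A minimal sketch (the subject IDs are illustrative, and the `DS003838` call is commented out so the snippet runs without network access):

```python
# Build a MongoDB-style query that excludes certain subjects,
# then pass it to the dataset constructor via `query=`.
# The exclusion list below is illustrative, not exhaustive.
subjects_to_exclude = ["013", "014", "037", "066"]

query = {"subject": {"$nin": subjects_to_exclude}}

# The merged query must not contain the key "dataset";
# the class adds its own dataset filter.
# from eegdash.dataset import DS003838
# dataset = DS003838(cache_dir="./data", query=query)
print(query)
```

The resulting dictionary is combined (AND) with the dataset filter, as described in the class Notes.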
The dataset can be used for (1) developing algorithms for cognitive load discrimination and detection of cognitive overload; (2) studying neural (event-related potentials and brain oscillations) and peripheral physiological (electrocardiography, photoplethysmography, and pupillometry) signals during encoding and maintenance of each sequentially presented memory item in a fine time scale; (3) correlating cognitive load and individual differences in working memory to neural and peripheral physiology, and studying the relationship between the physiological signals; (4) integration of the physiological findings with the vast knowledge coming from behavioral studies of verbal working memory in simple span paradigms. EEG, pupillometry, ECG and photoplethysmography, and behavioral data are stored separately in corresponding folders. Each data record can consist of four data folders:

- beh - behavioral data: correctness of the recall in the memory trials
- ecg - electrocardiography (ECG) and photoplethysmography (PPG) data
- eeg - EEG data
- pupil - pupillometry and eye-tracking data

Some of the participants had some physiological data missing:

- sub-017, sub-094 have no pupillometry data
- sub-017, sub-037, sub-066 have no ECG and PPG data
- sub-013, sub-014, sub-015, sub-016, sub-017, sub-018, sub-019, sub-020, sub-021, sub-022, sub-023, sub-024, sub-025, sub-026, sub-027, sub-028, sub-029, sub-030, sub-031, sub-037, sub-066 have no EEG data

## Dataset Information | Dataset ID | `DS003838` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest | | Author (year) | `Pavlov2021_pupillometry` | | Canonical | — | | Importable as | `DS003838`, `Pavlov2021_pupillometry` | | Year | 2021 | | Authors | Yuri G. 
Pavlov, Dauren Kasanov, Alexandra I. Kosachenko, Alexander I. Kotyusov | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003838.v1.0.6](https://doi.org/10.18112/openneuro.ds003838.v1.0.6) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003838) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) | [Source URL](https://openneuro.org/datasets/ds003838) | ### Copy-paste BibTeX ```bibtex @dataset{ds003838, title = {EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest}, author = {Yuri G. Pavlov and Dauren Kasanov and Alexandra I. Kosachenko and Alexander I. Kotyusov}, doi = {10.18112/openneuro.ds003838.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds003838.v1.0.6}, } ``` ## Technical Details - Subjects: 65 - Recordings: 130 - Tasks: 2 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 142.7015113888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 100.2 GB - File count: 130 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003838.v1.0.6 - Source: openneuro - OpenNeuro: [ds003838](https://openneuro.org/datasets/ds003838) - NeMAR: [ds003838](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) ## API Reference Use the `DS003838` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003838(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest * **Study:** `ds003838` (OpenNeuro) * **Author (year):** `Pavlov2021_pupillometry` * **Canonical:** — Also importable as: `DS003838`, `Pavlov2021_pupillometry`. Modality: `eeg`. Subjects: 65; recordings: 130; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003838](https://openneuro.org/datasets/ds003838) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003838](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) DOI: [https://doi.org/10.18112/openneuro.ds003838.v1.0.6](https://doi.org/10.18112/openneuro.ds003838.v1.0.6) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003838 >>> dataset = DS003838(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003838) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003844: ieeg dataset, 6 subjects *Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG* Access recordings and metadata through EEGDash. **Citation:** Zweiphenning W., Demuru M., van Blooijs D., Leijten F, Zijlmans M. (2021). *Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG*. [10.18112/openneuro.ds003844.v1.0.1](https://doi.org/10.18112/openneuro.ds003844.v1.0.1) Modality: ieeg Subjects: 6 Recordings: 38 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003844 dataset = DS003844(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003844(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003844( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003844, title = {Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG}, author = {Zweiphenning W. and Demuru M. and van Blooijs D. 
and Leijten F and Zijlmans M.}, doi = {10.18112/openneuro.ds003844.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003844.v1.0.1}, } ``` ## About This Dataset Dataset description This dataset is part of a bigger dataset of intracranial EEG (iEEG) called RESPect (Registry for Epilepsy Surgery Patients), a dataset recorded at the University Medical Center of Utrecht, the Netherlands. It consists of 12 patients: six patients recorded intraoperatively using electrocorticography (acute ECoG), six patients with long-term recordings (3 patients recorded with ECoG and 3 patients recorded with stereo-encephalography SEEG). For a detailed description see (Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. “A Practical Workflow for Organizing Clinical Intraoperative and Long-Term iEEG data in BIDS”.). This data is organized according to the Brain Imaging Data Structure specification, a community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each patient has their own folder (e.g., `sub-RESP0280`) which contains the iEEG recording data for that patient, as well as the metadata needed to understand the raw data and event timing. Two different implementations of the BIDS structure were used according to the different types of recordings (i.e. intraoperative or long-term). Intraoperative ECoG Surgery with intraoperative ECoG is composed of three main situations that can be logically grouped into BIDS sessions: \* Pre-resection sessions, consisting of all recordings (with different configurations of the grid and strips/depth) carried out before the surgeon has started the planned resection. \* Intermediate sessions, consisting of all subsequent recordings performed before any iterative extension of the resection area. 
\* Post-resection sessions, consisting of all the recordings performed after the last resection. 
Each situation is labelled with an increasing number starting from 1, indicative of the period in time respective to the surgical resection, and a consecutive letter (starting from A) indicative of the position of the grid and strip/depth for a given session. As an example see patient RESP0280 who had 4 sessions recorded: two pre-resection sessions, one intermediate session and one post-resection session. The first session is SITUATION1A consisting of the first recording, then the grid was moved to another position, resulting in SITUATION1B. After that, the surgeon resected part of the brain and then there was another recording (SITUATION2A). Finally the surgeon applied a resection for the last time and the recording after that was defined as SITUATION3A. Long-term iEEG In long-term recordings, data that are recorded within one monitoring period are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording in the monitoring period. If extra electrodes were added/removed during this period, the session was divided into different sessions (e.g. ses-1A and ses-1B). We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient’s state during the recording of this file. Different tasks have been defined, such as “rest” when a patient is awake but not doing a specific task, “sleep” when a patient is sleeping the majority of the file, or “SPESclin” when the clinical SPES protocol has been performed in this file. Other task definitions can be found in the annotation syntax ([https://github.com/UMCU-EpiLAB/umcuEpi_longterm_ieeg_respect_bids/master/manuals/IFU_annotatingtrc_ECoG](https://github.com/UMCU-EpiLAB/umcuEpi_longterm_ieeg_respect_bids/master/manuals/IFU_annotatingtrc_ECoG)). 
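The run encoding described above (e.g. `run-021315` meaning day 2 at 13:15) can be unpacked with a small helper. This is an illustrative sketch, not an official EEGDash or BIDS utility; the two-digit-day plus four-digit-time format is inferred from the README's example:

```python
def parse_run_key(run: str) -> tuple[int, str]:
    """Split a RESPect run key like '021315' into (day, 'HH:MM').

    Format assumed from the README example: two digits for the day
    after implantation, then four digits for the 24h start time.
    """
    day = int(run[:2])
    time = f"{run[2:4]}:{run[4:6]}"
    return day, time

# 'run-021315' -> day 2 after implantation, starting at 13:15
print(parse_run_key("021315"))  # (2, '13:15')
```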
License This dataset is made available under the Public Domain Dedication and License CC v1.0, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)); in particular, while not legally required, we hope that all users of the data will acknowledge it in any publications by citing Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group, “A Practical Workflow for Organizing Clinical Intraoperative and Long-Term iEEG data in BIDS”, submitted to Neuroinformatics. Code available at: [https://github.com/UMCU-EpiLAB](https://github.com/UMCU-EpiLAB). Acknowledgements We would like to thank the patients for providing their data for this dataset, and the RESPect team of the University Medical Center of Utrecht for the acquisition of the dataset. Please cite Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group, “A Practical Workflow for Organizing Clinical Intraoperative and Long-Term iEEG data in BIDS”, submitted to Neuroinformatics, in any publications. ## Dataset Information | Dataset ID | `DS003844` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG | | Author (year) | `Zweiphenning2021` | | Canonical | `RESPect_intraop` | | Importable as | `DS003844`, `Zweiphenning2021`, `RESPect_intraop` | | Year | 2021 | | Authors | Zweiphenning W., Demuru M., van Blooijs D., Leijten F, Zijlmans M. 
| | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003844.v1.0.1](https://doi.org/10.18112/openneuro.ds003844.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003844) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) | [Source URL](https://openneuro.org/datasets/ds003844) | ### Copy-paste BibTeX ```bibtex @dataset{ds003844, title = {Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG}, author = {Zweiphenning W. and Demuru M. and van Blooijs D. and Leijten F and Zijlmans M.}, doi = {10.18112/openneuro.ds003844.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003844.v1.0.1}, } ``` ## Technical Details - Subjects: 6 - Recordings: 38 - Tasks: 1 - Channels: 33 (24), 64 (9), 32 (5) - Sampling rate (Hz): 2048.0 (33), 256.0 (5) - Duration (hours): 2.779091118706597 - Pathology: Epilepsy - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 2.6 GB - File count: 38 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003844.v1.0.1 - Source: openneuro - OpenNeuro: [ds003844](https://openneuro.org/datasets/ds003844) - NeMAR: [ds003844](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) ## API Reference Use the `DS003844` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG * **Study:** `ds003844` (OpenNeuro) * **Author (year):** `Zweiphenning2021` * **Canonical:** `RESPect_intraop` Also importable as: `DS003844`, `Zweiphenning2021`, `RESPect_intraop`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003844](https://openneuro.org/datasets/ds003844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003844](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) DOI: [https://doi.org/10.18112/openneuro.ds003844.v1.0.1](https://doi.org/10.18112/openneuro.ds003844.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003844 >>> dataset = DS003844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003844) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS003846: eeg dataset, 19 subjects *Prediction Error* Access recordings and metadata through EEGDash. **Citation:** Lukas Gehrke, Sezen Akman, Albert Chen, Pedro Lopes, Klaus Gramann (2021). *Prediction Error*. [10.18112/openneuro.ds003846.v2.0.2](https://doi.org/10.18112/openneuro.ds003846.v2.0.2) Modality: eeg Subjects: 19 Recordings: 50 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003846 dataset = DS003846(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003846(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003846( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003846, title = {Prediction Error}, author = {Lukas Gehrke and Sezen Akman and Albert Chen and Pedro Lopes and Klaus Gramann}, doi = {10.18112/openneuro.ds003846.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003846.v2.0.2}, } ``` ## About This Dataset **Readme** In case of any questions, please contact: Lukas Gehrke, [lukas.gehrke@tu-berlin.de](mailto:lukas.gehrke@tu-berlin.de), orcid: 0000-0003-3661-1973 **Overview** Cyber-Physical Systems: Prediction Error These data were collected at [https://www.tu.berlin/bpn](https://www.tu.berlin/bpn). Data collection occurred either between 10:00 and 12:00 or between 14:00 and 18:00. To learn about the task, independent-, dependent-, and control variables, please consult the methods sections of the following two publications: [https://dl.acm.org/doi/abs/10.1145/3290605.3300657](https://dl.acm.org/doi/abs/10.1145/3290605.3300657) [https://iopscience.iop.org/article/10.1088/1741-2552/ac69bc/meta](https://iopscience.iop.org/article/10.1088/1741-2552/ac69bc/meta) - Contents of the dataset: Output from BIDS-validator Summary 324 Files, 9.76GB 19 - Subjects 5 - Sessions Available Tasks PredictionError Available Modalities EEG - Quality assessment of the data: Link to data paper, once done **Methods** **Subjects** The study sample consists of 19 participants (participant_id 1 to 19) with ages ranging from 18 to 34 years and varying cap sizes from 54 to 60. 
Stimulation is delivered in three blocks: Block_1, Block_2, and Block_3, utilizing different combinations of Visual, Vibro, and EMS. Participant Information:

- Age: Ranges from 18 to 34 years.
- Cap Size: Varied, with sizes ranging from 54 to 60.
- Stimulation Blocks: Block_1 and Block_2 include Visual, Visual + Vibro, and Visual + Vibro + EMS. Block_3 primarily involves Visual + Vibro + EMS.

Usage of Stimulation Blocks:

- Most participants experience Visual stimulation in all blocks.
- Visual + Vibro is common in Block_1 and Block_2.
- Visual + Vibro + EMS is prevalent in Block_3.
- Some participants did not experience certain blocks (indicated by “0”).

Other Observations:

- Cap size variation doesn’t show a clear pattern in relation to stimulation blocks.
- Participants exhibit diverse stimulation patterns, showcasing individualized experiences.

**Task, Environment and Variables** This set of variables outlines key parameters in a neuroscience experiment involving a haptic task. Here’s a summary:

- box: Represents the target object to be touched following its spawn. Units: String (presumably indicating the characteristics or identity of the object).
- normal_or_conflict: Describes the behavior of the target object in the current trial, distinguishing between oddball and non-oddball conditions. Units: String (presumably indicating the nature of the trial).
- condition: Indicates the level of haptic realism in the experiment. Units: String (presumably representing different levels of realism).
- cube: Specifies the position of the target object, whether it is located on the left, right, or center. Units: String (presumably indicating spatial orientation).
- trial_nr: Denotes the number of the current trial in the experiment. Units: Integer. 
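The per-trial variables above map naturally onto a small record type. The sketch below is hypothetical: the `Trial` class, its field types, and the sample values are assumptions inferred from the README's variable list, not part of the dataset or the EEGDash API:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One trial's event metadata, following the README's variable list.

    Field names come from the README; types and example values are assumed.
    """
    box: str                 # target object to be touched following its spawn
    normal_or_conflict: str  # oddball vs. non-oddball trial behavior
    condition: str           # level of haptic realism
    cube: str                # target position: left, right, or center
    trial_nr: int            # number of the current trial

# Hypothetical example values for illustration only
t = Trial(box="box", normal_or_conflict="normal",
          condition="Visual", cube="left", trial_nr=1)
print(t.trial_nr)  # 1
```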
**Apparatus** Here’s a summary of the recording environment: - *EEG Stream Name:* BrainVision - *EEG Reference and Ground:* FCz and AFz, respectively - *EEG Channel Locations:* 63 channels with specific names (e.g., Fp1, Fz, Pz) and types (EEG) - *Additional Channels:* 1 EOG (Electrooculogram) - *Power Line Frequency:* 50 Hz - *Manufacturer:* Brain Products - *Manufacturer’s Model Name:* BrainAmp DC - *Cap Manufacturer:* EasyCap - *Cap Model Name:* actiCap 64ch CACS-64 - *EEG Placement Scheme:* Positions chosen from a 10% system - *Channel Counts:* > - EEG Channels: 63 > - EOG Channels: 1 > - ECG Channels: 0 > - EMG Channels: 0 > - Miscellaneous Channels: 0 > - Trigger Channels: 0 This configuration indicates a high-density EEG setup with specific electrode placements, utilizing Brain Products’ BrainAmp DC model. The electrode cap is manufactured by EasyCap, with the specific model name actiCap 64ch CACS-64. The EEG data is sampled at an unspecified frequency, and the system is designed to capture electrical brain activity across a comprehensive set of channels. The recording includes an additional channel for recording eye movements (EOG). Overall, the setup appears suitable for detailed EEG investigations in neurophysiological research. The motion capture recording environment uses two devices: “rigid_head” and “rigid_handr,” which correspond to “HTCViveHead” and “HTCViveRightHand” in the BIDS (Brain Imaging Data Structure) naming convention. The tracked points include “Head” and “handR.” The motion data is captured using quaternions with channels named “quat_X,” “quat_Y,” “quat_Z,” and “quat_W.” Positional data includes channels “_X,” “_Y,” and “_Z.” The system is manufactured by HTC, with the model name “Vive,” and the recording has a sampling frequency of 90 Hz. Additional information such as software versions is not provided. 
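Since the motion streams store orientation as `quat_X`, `quat_Y`, `quat_Z`, and `quat_W` channels, converting a sample to a rotation matrix is a common first analysis step. A minimal sketch in plain Python; the (x, y, z, w) channel order is an assumption based on the channel names:

```python
def quat_to_matrix(x: float, y: float, z: float, w: float) -> list[list[float]]:
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ]

# The identity quaternion (0, 0, 0, 1) yields the identity rotation
print(quat_to_matrix(0.0, 0.0, 0.0, 1.0))
```

For real recordings the quaternions should be normalized first; at the stream's 90 Hz sampling rate this conversion is cheap enough to apply per sample.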
## Dataset Information | Dataset ID | `DS003846` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Prediction Error | | Author (year) | `Gehrke2021` | | Canonical | — | | Importable as | `DS003846`, `Gehrke2021` | | Year | 2021 | | Authors | Lukas Gehrke, Sezen Akman, Albert Chen, Pedro Lopes, Klaus Gramann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003846.v2.0.2](https://doi.org/10.18112/openneuro.ds003846.v2.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003846) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) | [Source URL](https://openneuro.org/datasets/ds003846) | ### Copy-paste BibTeX ```bibtex @dataset{ds003846, title = {Prediction Error}, author = {Lukas Gehrke and Sezen Akman and Albert Chen and Pedro Lopes and Klaus Gramann}, doi = {10.18112/openneuro.ds003846.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds003846.v2.0.2}, } ``` ## Technical Details - Subjects: 19 - Recordings: 50 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 22.727667222222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.8 GB - File count: 50 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003846.v2.0.2 - Source: openneuro - OpenNeuro: [ds003846](https://openneuro.org/datasets/ds003846) - NeMAR: [ds003846](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) ## API Reference Use the `DS003846` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003846(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Prediction Error * **Study:** `ds003846` (OpenNeuro) * **Author (year):** `Gehrke2021` * **Canonical:** — Also importable as: `DS003846`, `Gehrke2021`. 
Modality: `eeg`. Subjects: 19; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003846](https://openneuro.org/datasets/ds003846) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003846](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) DOI: [https://doi.org/10.18112/openneuro.ds003846.v2.0.2](https://doi.org/10.18112/openneuro.ds003846.v2.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003846 >>> dataset = DS003846(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003846) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003848: ieeg dataset, 6 subjects *Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG* Access recordings and metadata through EEGDash. **Citation:** van Blooijs D., Demuru M., Zweiphenning W, Hermes D., Leijten F., Zijlmans M. (2021). *Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG*. [10.18112/openneuro.ds003848.v1.0.3](https://doi.org/10.18112/openneuro.ds003848.v1.0.3) Modality: ieeg Subjects: 6 Recordings: 22 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003848 dataset = DS003848(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003848(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003848( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003848, title = {Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG}, author = {van Blooijs D. and Demuru M. and Zweiphenning W and Hermes D. and Leijten F. 
and Zijlmans M.}, doi = {10.18112/openneuro.ds003848.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003848.v1.0.3}, } ``` ## About This Dataset Dataset description This dataset is part of a bigger dataset of intracranial EEG (iEEG) called RESPect (Registry for Epilepsy Surgery Patients), a dataset recorded at the University Medical Center of Utrecht, the Netherlands. It consists of 12 patients: six patients recorded intraoperatively using electrocorticography (acute ECoG), and six patients with long-term recordings (3 patients recorded with ECoG and 3 patients recorded with stereo-encephalography, SEEG). For a detailed description, see Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. “A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS”, submitted to NeuroInformatics in 2020. This data is organized according to the Brain Imaging Data Structure specification, a community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each patient has their own folder (e.g., `sub-RESP0280`) which contains the iEEG recording data for that patient, as well as the metadata needed to understand the raw data and event timing. Two different implementations of the BIDS structure were used, according to the type of recording (i.e., intraoperative or long-term). Intraoperative ECoG Surgery with intraoperative ECoG is composed of three main situations that can be logically grouped into BIDS sessions: \* Pre-resection sessions, consisting of all recordings (with different configurations of the grid and strips/depth) carried out before the surgeon has started the planned resection. 
\* Intermediate sessions, consisting of all subsequent recordings performed before any iterative extension of the resection area. \* Post-resection sessions, consisting of all the recordings performed after the last resection. Each situation is labelled with an increasing number starting from 1, indicative of the period in time relative to the surgical resection, and a consecutive letter (starting from A) indicative of the position of the grid and strip/depth for a given session. As an example, see patient RESP0280, who had four sessions recorded: two pre-resection sessions, one intermediate session, and one post-resection session. The first session is SITUATION1A, consisting of the first recording; then the grid was moved to another position, resulting in SITUATION1B. After that, the surgeon resected part of the brain, and then there was another recording (SITUATION2A). Finally, the surgeon performed the last resection, and the recording after it was defined as SITUATION3A. In long-term recordings, data that are recorded within one monitoring period are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording in the monitoring period. If extra electrodes were added/removed during this period, the session was divided into different sessions (e.g. ses-1A and ses-1b). We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient's state during the recording of this file. Different tasks have been defined, such as “rest” when a patient is awake but not doing a specific task, “sleep” when a patient is sleeping the majority of the file, or “SPESclin” when the clinical SPES protocol has been performed in this file. 
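The run key-value convention described above (day of implantation followed by the recording start time, e.g. `run-021315` for day 2 at 13:15) can be sketched in a few lines of Python. The helper below is hypothetical, for illustration only; it is not part of EEGDash or the dataset's own tooling:

```python
def parse_longterm_run(run_label: str) -> tuple[int, str]:
    """Split a RESPect long-term run label of the form run-DDHHMM into
    the day after implantation (int) and the start time "HH:MM".

    Hypothetical helper illustrating the naming scheme described in the
    dataset README; not an EEGDash API.
    """
    value = run_label.removeprefix("run-")
    day = int(value[:2])          # day after implantation
    start = f"{value[2:4]}:{value[4:6]}"  # recording start time
    return day, start

day, start = parse_longterm_run("run-021315")
print(day, start)  # 2 13:15  (day 2 after implantation, starting at 13:15)
```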
Other task definitions can be found in the annotation syntax ([https://github.com/UMCU-EpiLAB/umcuEpi_longterm_ieeg_respect_bids/master/manuals/IFU_annotatingtrc_ECoG](https://github.com/UMCU-EpiLAB/umcuEpi_longterm_ieeg_respect_bids/master/manuals/IFU_annotatingtrc_ECoG)). License This dataset is made available under the CC0 1.0 Universal Public Domain Dedication, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)); in particular, while not legally required, we hope that all users of the data will acknowledge it by citing Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. “A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS”, submitted to NeuroInformatics in 2020, in any publications. Code available at: [https://github.com/UMCU-EpiLAB](https://github.com/UMCU-EpiLAB). Acknowledgements We would like to thank the patients for providing their data for this dataset, and the RESPect team of the University Medical Center of Utrecht for the acquisition of the dataset. Please cite Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. “A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS”, submitted to NeuroInformatics in 2020, in any publications. 
## Dataset Information | Dataset ID | `DS003848` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG | | Author (year) | `Blooijs2021` | | Canonical | `RESPect_longterm` | | Importable as | `DS003848`, `Blooijs2021`, `RESPect_longterm` | | Year | 2021 | | Authors | van Blooijs D., Demuru M., Zweiphenning W, Hermes D., Leijten F., Zijlmans M. | | License | CC0 | | Citation / DOI | [10.18112/openneuro.ds003848.v1.0.3](https://doi.org/10.18112/openneuro.ds003848.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003848) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) | [Source URL](https://openneuro.org/datasets/ds003848) | ### Copy-paste BibTeX ```bibtex @dataset{ds003848, title = {Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG}, author = {van Blooijs D. and Demuru M. and Zweiphenning W and Hermes D. and Leijten F. and Zijlmans M.}, doi = {10.18112/openneuro.ds003848.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds003848.v1.0.3}, } ``` ## Technical Details - Subjects: 6 - Recordings: 22 - Tasks: 6 - Channels: 133 (18), 68 (4) - Sampling rate (Hz): 2048.0 (21), 512.0 - Duration (hours): 20.275720486111112 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 65.0 GB - File count: 22 - Format: BIDS - License: CC0 - DOI: 10.18112/openneuro.ds003848.v1.0.3 - Source: openneuro - OpenNeuro: [ds003848](https://openneuro.org/datasets/ds003848) - NeMAR: [ds003848](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) ## API Reference Use the `DS003848` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG * **Study:** `ds003848` (OpenNeuro) * **Author (year):** `Blooijs2021` * **Canonical:** `RESPect_longterm` Also importable as: `DS003848`, `Blooijs2021`, `RESPect_longterm`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 22; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003848](https://openneuro.org/datasets/ds003848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003848](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) DOI: [https://doi.org/10.18112/openneuro.ds003848.v1.0.3](https://doi.org/10.18112/openneuro.ds003848.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003848 >>> dataset = DS003848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003848) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS003876: ieeg dataset, 39 subjects *Epilepsy-iEEG-Interictal-Multicenter-Dataset* Access recordings and metadata through EEGDash. **Citation:** Gunnarsdottir, Kristin, Li, Adam, Smith, Rachel, Kang, Joon, Korzeniewska, Anna, Crone, Nathan, Rouse, Adam, Cheng, Jennifer, Kinsman, Michael, Landazuri, Patrick, Uysal, Utku, Ulloa, Carol, Cameron, Nathaniel, Cajigas, Iahn, Jagid, Jonathan, Kanner, Andres, Elarjani, Turki, Bicchi, Manuel, Inati, Sara, Zaghloul, Kareem, Boerwinkle, Varina, Wyckoff, Sarah, Barot, Niravkumar, Gonzalez-Martinez, Jorge, Sarma, Sridevi (2021). 
*Epilepsy-iEEG-Interictal-Multicenter-Dataset*. [10.18112/openneuro.ds003876.v1.0.2](https://doi.org/10.18112/openneuro.ds003876.v1.0.2) Modality: ieeg Subjects: 39 Recordings: 54 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003876 dataset = DS003876(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003876(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003876( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003876, title = {Epilepsy-iEEG-Interictal-Multicenter-Dataset}, author = {Gunnarsdottir, Kristin and Li, Adam and Smith, Rachel and Kang, Joon and Korzeniewska, Anna and Crone, Nathan and Rouse, Adam and Cheng, Jennifer and Kinsman, Michael and Landazuri, Patrick and Uysal, Utku and Ulloa, Carol and Cameron, Nathaniel and Cajigas, Iahn and Jagid, Jonathan and Kanner, Andres and Elarjani, Turki and Bicchi, Manuel and Inati, Sara and Zaghloul, Kareem and Boerwinkle, Varina and Wyckoff, Sarah and Barot, Niravkumar and Gonzalez-Martinez, Jorge and Sarma, Sridevi}, doi = {10.18112/openneuro.ds003876.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003876.v1.0.2}, } ``` ## About This Dataset **Epilepsy Interictal Dataset** This dataset was updated and prepared for release as part of a manuscript by Bernabei & Li et al. (in preparation). A subset of the data has been featured in [1] and [2]. 
**Summary** This dataset comprises de-identified subjects with interictal iEEG recordings, in some cases with sleep or awake state annotated. The subjects come from the following centers: - National Institutes of Health (NIH) - Johns Hopkins Hospital (JHH) - University of Miami Florida Jackson Memorial Hospital (UMF) In the actual study, there is also data from Kansas University Medical Center (KUMC), the University of Pittsburgh Medical Center, and the Cleveland Clinic, whose data are not shared due to restrictions imposed by those centers. Some subjects, namely those with the `rns` prefix in their subject ID, were treated with RNS rather than surgical resection/ablation. **Derivatives** The processed data corresponding to the `source-sink` analysis and `hfo` comparisons are provided in the `derivatives/` folder. The HFO analysis consists of two folders: one is an RMS detector and the other is a Hilbert detector. See the paper for details. **Ties to Other Datasets** The NIH `pt1, pt2, pt3` and JHH `jh103, jh105` subjects are also datasets in `https://openneuro.org/datasets/ds003029`, where the ictal snapshots are stored. These correspond to the following: - pt1: pt01 - pt2: pt2 - pt3: pt3 - jh103: jh103 - jh105: jh105 Moreover, the cclinic subjects are used in that study, but are not open-access due to data-sharing limitations at the Cleveland Clinic. Those ictal datasets were analyzed in [https://www.nature.com/articles/s41593-021-00901-w](https://www.nature.com/articles/s41593-021-00901-w). 
**References** [1] Li, A., Huynh, C., Fitzgerald, Z. et al. Neural fragility as an EEG marker of the seizure onset zone. Nat Neurosci 24, 1465–1474 (2021). [https://doi.org/10.1038/s41593-021-00901-w](https://doi.org/10.1038/s41593-021-00901-w) [2] Kristin M. Gunnarsdottir, Adam Li, Rachel J. Smith, Joon-Yi Kang, Nathan E. Crone, Anna Korzeniewska, Adam Rouse, Nathaniel Cameron, Iahn Cajigas, Sara Inati, Kareem A. Zaghloul, Varina L. Boerwinkle, Sarah Wyckoff, Nirav Barot, Jorge Gonzalez-Martinez, Sridevi V. Sarma. Source-sink connectivity: a novel resting-state EEG marker of the epileptogenic zone. bioRxiv 2021.10.15.464594; doi: [https://doi.org/10.1101/2021.10.15.464594](https://doi.org/10.1101/2021.10.15.464594) [3] Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) [4] Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. 
[https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS003876` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Epilepsy-iEEG-Interictal-Multicenter-Dataset | | Author (year) | `Gunnarsdottir2021` | | Canonical | — | | Importable as | `DS003876`, `Gunnarsdottir2021` | | Year | 2021 | | Authors | Gunnarsdottir, Kristin, Li, Adam, Smith, Rachel, Kang, Joon, Korzeniewska, Anna, Crone, Nathan, Rouse, Adam, Cheng, Jennifer, Kinsman, Michael, Landazuri, Patrick, Uysal, Utku, Ulloa, Carol, Cameron, Nathaniel, Cajigas, Iahn, Jagid, Jonathan, Kanner, Andres, Elarjani, Turki, Bicchi, Manuel, Inati, Sara, Zaghloul, Kareem, Boerwinkle, Varina, Wyckoff, Sarah, Barot, Niravkumar, Gonzalez-Martinez, Jorge, Sarma, Sridevi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003876.v1.0.2](https://doi.org/10.18112/openneuro.ds003876.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003876) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) | [Source URL](https://openneuro.org/datasets/ds003876) | ### Copy-paste BibTeX ```bibtex @dataset{ds003876, title = {Epilepsy-iEEG-Interictal-Multicenter-Dataset}, author = {Gunnarsdottir, Kristin and Li, Adam and Smith, Rachel and Kang, Joon and Korzeniewska, Anna and Crone, Nathan and Rouse, Adam and Cheng, Jennifer and Kinsman, Michael and Landazuri, Patrick and Uysal, Utku and Ulloa, Carol and Cameron, Nathaniel and Cajigas, Iahn and Jagid, Jonathan and Kanner, Andres and Elarjani, Turki and Bicchi, Manuel and Inati, Sara 
and Zaghloul, Kareem and Boerwinkle, Varina and Wyckoff, Sarah and Barot, Niravkumar and Gonzalez-Martinez, Jorge and Sarma, Sridevi}, doi = {10.18112/openneuro.ds003876.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds003876.v1.0.2}, } ``` ## Technical Details - Subjects: 39 - Recordings: 54 - Tasks: 3 - Channels: 128 (10), 129 (8), 86 (4), 135 (4), 98 (4), 111 (2), 101 (2), 47 (2), 110 (2), 182, 118, 114, 170, 168, 95, 146, 121, 107, 46, 186, 125, 134, 193, 190, 147 - Sampling rate (Hz): 1000.0 (25), 2000.0 (7), 999.4121105232217 (6), 1024.0 (5), 999.9999999999999 (4), 499.7071044492829 (2), 1024.5997950800408 (2), 500.0 (2), 512.0 - Duration (hours): 5.757709854501917 - Pathology: Epilepsy - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 5.0 GB - File count: 54 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003876.v1.0.2 - Source: openneuro - OpenNeuro: [ds003876](https://openneuro.org/datasets/ds003876) - NeMAR: [ds003876](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) ## API Reference Use the `DS003876` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Interictal-Multicenter-Dataset * **Study:** `ds003876` (OpenNeuro) * **Author (year):** `Gunnarsdottir2021` * **Canonical:** — Also importable as: `DS003876`, `Gunnarsdottir2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 39; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003876](https://openneuro.org/datasets/ds003876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003876](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) DOI: [https://doi.org/10.18112/openneuro.ds003876.v1.0.2](https://doi.org/10.18112/openneuro.ds003876.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003876 >>> dataset = DS003876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003876) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS003885: eeg dataset, 24 subjects *Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1* Access recordings and metadata through EEGDash. **Citation:** Shatek, Sophia M., Robinson, Amanda K., Grootswagers, Tijl, Carlson, Thomas A. (2021). *Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1*. [10.18112/openneuro.ds003885.v1.0.7](https://doi.org/10.18112/openneuro.ds003885.v1.0.7) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003885 dataset = DS003885(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003885(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003885( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003885, title = {Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1}, author = {Shatek, Sophia M. and Robinson, Amanda K. and Grootswagers, Tijl and Carlson, Thomas A.}, doi = {10.18112/openneuro.ds003885.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds003885.v1.0.7}, } ``` ## About This Dataset **Overview** This data is from the paper “Capacity for movement is a major organisational principle in object representations”. This is the data of Experiment 1 (EEG: aliveness). The preprint is here: [https://doi.org/10.31234/osf.io/3x2qh](https://doi.org/10.31234/osf.io/3x2qh) Abstract: The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans. In this experiment, participants completed two tasks - classification and passive viewing. 
In the classification task, participants classified single images that appeared on the screen as “alive” or “not alive”. This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid (RSVP) streams of images, and pressed a button to indicate when the fixation cross changed colour. Contents of the dataset: > - Raw EEG data is available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data is available in the derivatives folders in EEGlab (.set, .fdt) and cosmoMVPA dataset (.mat) format. This experiment has 24 subjects. > - Scripts for data analysis and running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate analyses. > - Stimuli are also available (400 CC0 images) > - Results of decoding analyses are available in the derivatives folder. Further notes: The code is designed to run analyses for this data and its partner data (experiments 2 and 3 of the paper). Copies in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script). **Further explanations of the code:** 1. Run pre-processing of EEG (analyse_EEG_preprocessing.m), and behavioural data (analyse_behavioural_EEG.m) 2. Ensure that the MTurk data has been run (analyse_behavioural_MTurk.m), from the Experiment 1 folder. 3. 
Run RSA (analyse_rsa.m; reliant on behavioural data and pre-processed EEG data), and run decoding (analyse_decoding.m; reliant on pre-processed EEG data) 4. Run GLMs (analyse_glms.m; reliant on RSA, behavioural) To look only at the results: the output of each of these analyses is already saved in the derivatives folder, so there is no need to run any of them again. Each file named plot_X.m will create a graph as in the paper. Each is reliant on saved data from the above analyses, which are saved in the derivatives folder. **Citing this dataset** If using this data, please cite the associated paper: Preprint - [https://doi.org/10.31234/osf.io/3x2qh](https://doi.org/10.31234/osf.io/3x2qh) **Contact** Contact Sophia Shatek ([sophia.shatek@sydney.edu.au](mailto:sophia.shatek@sydney.edu.au)) for additional information. ORCID: 0000-0002-7787-1379 ## Dataset Information | Dataset ID | `DS003885` | |----------------|---------------------------------------------------------------------------------------------------------------| | Title | Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1 | | Author (year) | `Shatek2021_E1` | | Canonical | — | | Importable as | `DS003885`, `Shatek2021_E1` | | Year | 2021 | | Authors | Shatek, Sophia M., Robinson, Amanda K., Grootswagers, Tijl, Carlson, Thomas A. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003885.v1.0.7](https://doi.org/10.18112/openneuro.ds003885.v1.0.7) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003885) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) | [Source URL](https://openneuro.org/datasets/ds003885) | ### Copy-paste BibTeX ```bibtex @dataset{ds003885, title = {Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1}, author = {Shatek, Sophia M. and Robinson, Amanda K. 
and Grootswagers, Tijl and Carlson, Thomas A.}, doi = {10.18112/openneuro.ds003885.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds003885.v1.0.7}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 1000.0 - Duration (hours): 27.061438888888887 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 46.1 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003885.v1.0.7 - Source: openneuro - OpenNeuro: [ds003885](https://openneuro.org/datasets/ds003885) - NeMAR: [ds003885](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) ## API Reference Use the `DS003885` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1 * **Study:** `ds003885` (OpenNeuro) * **Author (year):** `Shatek2021_E1` * **Canonical:** — Also importable as: `DS003885`, `Shatek2021_E1`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003885](https://openneuro.org/datasets/ds003885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003885](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) DOI: [https://doi.org/10.18112/openneuro.ds003885.v1.0.7](https://doi.org/10.18112/openneuro.ds003885.v1.0.7) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003885 >>> dataset = DS003885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003885) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003887: eeg dataset, 24 subjects *Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2* Access recordings and metadata through EEGDash. **Citation:** Shatek, Sophia M., Robinson, Amanda K., Grootswagers, Tijl, Carlson, Thomas A. 
(2021). *Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2*. [10.18112/openneuro.ds003887.v1.2.2](https://doi.org/10.18112/openneuro.ds003887.v1.2.2) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003887 dataset = DS003887(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003887(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003887( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003887, title = {Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2}, author = {Shatek, Sophia M. and Robinson, Amanda K. and Grootswagers, Tijl and Carlson, Thomas A.}, doi = {10.18112/openneuro.ds003887.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds003887.v1.2.2}, } ``` ## About This Dataset **Overview** This data is from the paper “Capacity for movement is a major organisational principle in object representations”. This is the data of Experiment 3 (EEG: movement). Access the preprint here: [https://psyarxiv.com/3x2qh/](https://psyarxiv.com/3x2qh/) Abstract: The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. 
However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans. In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as “can move” or “still”. This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid (RSVP) streams of images, and pressed a button to indicate when the fixation cross changed colour. Contents of the dataset: > - Raw EEG data is available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data is available in the derivatives folders in EEGlab (.set, .fdt) and cosmoMVPA dataset (.mat) format. This experiment has 24 subjects. > - Scripts for data analysis and running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate analyses. 
> - Stimuli are also available (400 CC0 images)
> - Results of decoding analyses are available in the derivatives folder.

Further notes: Note that the code is designed to run analyses for this data and its partner data (experiments 2 and 3 of the paper). Copies in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script).

**Further explanations of the code:**

1. Run pre-processing of EEG (analyse_EEG_preprocessing.m), and behavioural data (analyse_behavioural_EEG.m)
2. Ensure that the MTurk data has been run (analyse_behavioural_MTurk.m), from the Experiment 1 folder.
3. Run RSA (analyse_rsa.m; reliant on behavioural data and pre-processed EEG data), and run decoding (analyse_decoding.m; reliant on pre-processed EEG data)
4. Run GLMs (analyse_glms.m; reliant on RSA, behavioural)

If you only want to inspect the results, the output of each analysis is already saved in the derivatives folder, so there is no need to rerun anything. Each file named plot_X.m will recreate a figure from the paper; each relies on the saved outputs of the analyses above. **Citing this dataset** If using this data, please cite the associated paper. **Contact** Contact Sophia Shatek ([sophia.shatek@sydney.edu.au](mailto:sophia.shatek@sydney.edu.au)) for additional information.
ORCID: 0000-0002-7787-1379 ## Dataset Information | Dataset ID | `DS003887` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2 | | Author (year) | `Shatek2021_E2` | | Canonical | — | | Importable as | `DS003887`, `Shatek2021_E2` | | Year | 2021 | | Authors | Shatek, Sophia M., Robinson, Amanda K., Grootswagers, Tijl, Carlson, Thomas A. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003887.v1.2.2](https://doi.org/10.18112/openneuro.ds003887.v1.2.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003887) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) | [Source URL](https://openneuro.org/datasets/ds003887) | ### Copy-paste BibTeX ```bibtex @dataset{ds003887, title = {Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2}, author = {Shatek, Sophia M. and Robinson, Amanda K. and Grootswagers, Tijl and Carlson, Thomas A.}, doi = {10.18112/openneuro.ds003887.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds003887.v1.2.2}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.81515555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 45.7 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003887.v1.2.2 - Source: openneuro - OpenNeuro: [ds003887](https://openneuro.org/datasets/ds003887) - NeMAR: [ds003887](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) ## API Reference Use the `DS003887` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003887(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2 * **Study:** `ds003887` (OpenNeuro) * **Author (year):** `Shatek2021_E2` * **Canonical:** — Also importable as: `DS003887`, `Shatek2021_E2`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003887](https://openneuro.org/datasets/ds003887) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003887](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) DOI: [https://doi.org/10.18112/openneuro.ds003887.v1.2.2](https://doi.org/10.18112/openneuro.ds003887.v1.2.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003887 >>> dataset = DS003887(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003887) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003922: meg dataset, 14 subjects *Multisensory Correlation Detector* Access recordings and metadata through EEGDash. **Citation:** Pesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. (2021). *Multisensory Correlation Detector*. 
[10.18112/openneuro.ds003922.v1.0.1](https://doi.org/10.18112/openneuro.ds003922.v1.0.1) Modality: meg Subjects: 14 Recordings: 164 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003922 dataset = DS003922(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003922(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003922( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003922, title = {Multisensory Correlation Detector}, author = {Pesnot Lerousseau, J. and Parise, C. and Ernst, MO. and van Wassenhove, V.}, doi = {10.18112/openneuro.ds003922.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003922.v1.0.1}, } ``` ## About This Dataset **DESCRIPTION** Magnetoencephalography (MEG) dataset recorded during the presentation of audiovisual sequences with a causality judgment task and temporal order judgment task. This MEG dataset was prepared in the Brain Imaging Data Structure (MEG-BIDS, Niso et al. 2018) format using MNE-BIDS (Appelhoff et al. 2019). **PUBLISHED IN** Pesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. (2022). Multisensory correlation computations in the human brain identified by a time-resolved encoding model. \*Nature Communications\*. 
[http://doi.org/10.1038/s41467-022-29687-6](http://doi.org/10.1038/s41467-022-29687-6) **PARTICIPANTS** The dataset contains 13 participants (Ab140232, Jl150443, Mm150194, Al150424, Mp110340, Rt160359, Cb140229, Cc160310, Lb160367, Mb160304, Mk150295, Sl160372, Mp150285). **EXPERIMENT** The experiment consisted of 10 consecutive recording blocks of 8 minutes each, whose order was counterbalanced across participants. Three blocks tested participants on a Causality judgement, and three blocks tested participants with a Temporal order judgement. Importantly, the same audiovisual sequences were used in both tasks in order to maintain a constant flow of feedforward multisensory inputs while manipulating the endogenous task requirements. Each block was composed of 25 repetitions of the 6 possible audiovisual sequences. A total of 75 presentations of each stimulus sequence were thus tested in each task. Four additional recording blocks consisted of participants passively hearing (auditory localizer, 2 blocks) or viewing (visual localizer, 2 blocks) one constitutive modality of the audiovisual sequence.
Each localizer block was composed of 25 repetitions of the 6 possible stimuli (the auditory or visual part of each stimulus), yielding a total of 50 presentations of each auditory and visual stimulus (2 tasks x 3 blocks x 25 repetitions x 6 sequences + 2 modalities x 25 repetitions x 2 blocks x 6 sequences = 1500 trials in total). **STIMULI** Six audiovisual sequences were presented (DD, DC, CC, AA, AV, VV). **BLOCKS** Ten blocks were presented (3 Causality, 3 Temporal, 2 Auditory, 2 Visual). **EVENTS**

- ‘Causality/DD’: 11
- ‘Causality/DC’: 12
- ‘Causality/CC’: 13
- ‘Causality/AA’: 14
- ‘Causality/AV’: 15
- ‘Causality/VV’: 16
- ‘Temporal/DD’: 21
- ‘Temporal/DC’: 22
- ‘Temporal/CC’: 23
- ‘Temporal/AA’: 24
- ‘Temporal/AV’: 25
- ‘Temporal/VV’: 26
- ‘Auditory/DD’: 41
- ‘Auditory/DC’: 42
- ‘Auditory/CC’: 43
- ‘Auditory/AA’: 44
- ‘Auditory/AV’: 45
- ‘Auditory/VV’: 46
- ‘Visual/DD’: 51
- ‘Visual/DC’: 52
- ‘Visual/CC’: 53
- ‘Visual/AA’: 54
- ‘Visual/AV’: 55
- ‘Visual/VV’: 56

**MEG** Brain magnetic fields were recorded in a magnetically shielded room (MSR) using a 306-channel MEG system (Neuromag Elekta LTD, Helsinki). MEG recordings were sampled at 1 kHz and band-pass filtered between 0.03 Hz and 330 Hz. Four head position coils (HPI) measured the head position of participants before each block; three fiducial markers (nasion and pre-auricular points) were used for digitization and anatomical MRI (aMRI) immediately following MEG acquisition. Electrooculograms (EOG, horizontal and vertical eye movements) and electrocardiogram (ECG) were simultaneously recorded. Prior to the session, 2 min of empty-room recordings were acquired for the computation of the noise covariance matrix. Bad MEG channels were marked manually. **MRI** The T1-weighted aMRI was recorded using a 3-T Siemens Trio MRI scanner.
Parameters of the sequence were: voxel size: 1.0 × 1.0 × 1.1 mm; acquisition time: 466 s; repetition time TR = 2300 ms; and echo time TE = 2.98 ms **BEHAVIOR** File sourcedata/behavioral_data.txt **REFERENCES** Pesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. (2022). Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nature Communications. [http://doi.org/10.1038/s41467-022-29687-6](http://doi.org/10.1038/s41467-022-29687-6) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [http://doi.org/10.1038/sdata.2018.110](http://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS003922` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multisensory Correlation Detector | | Author (year) | `Lerousseau2021` | | Canonical | — | | Importable as | `DS003922`, `Lerousseau2021` | | Year | 2021 | | Authors | Pesnot Lerousseau, J., Parise, C., Ernst, MO., van Wassenhove, V. 
| | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003922.v1.0.1](https://doi.org/10.18112/openneuro.ds003922.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003922) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) | [Source URL](https://openneuro.org/datasets/ds003922) | ### Copy-paste BibTeX ```bibtex @dataset{ds003922, title = {Multisensory Correlation Detector}, author = {Pesnot Lerousseau, J. and Parise, C. and Ernst, MO. and van Wassenhove, V.}, doi = {10.18112/openneuro.ds003922.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003922.v1.0.1}, } ``` ## Technical Details - Subjects: 14 - Recordings: 164 - Tasks: 3 - Channels: 342 (128), 323 (23) - Sampling rate (Hz): 1000.0 - Duration (hours): 16.487458055555557 - Pathology: Healthy - Modality: Multisensory - Type: Perception - Size on disk: 75.7 GB - File count: 164 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003922.v1.0.1 - Source: openneuro - OpenNeuro: [ds003922](https://openneuro.org/datasets/ds003922) - NeMAR: [ds003922](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) ## API Reference Use the `DS003922` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003922(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Correlation Detector * **Study:** `ds003922` (OpenNeuro) * **Author (year):** `Lerousseau2021` * **Canonical:** — Also importable as: `DS003922`, `Lerousseau2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 14; recordings: 164; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003922](https://openneuro.org/datasets/ds003922) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003922](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) DOI: [https://doi.org/10.18112/openneuro.ds003922.v1.0.1](https://doi.org/10.18112/openneuro.ds003922.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003922 >>> dataset = DS003922(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003922) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS003944: eeg dataset, 82 subjects *EEG: First Episode Psychosis vs. Control Resting Task 1* Access recordings and metadata through EEGDash. **Citation:** Dean Salisbury, Dylan Seebold, Brian Coffman (2021). *EEG: First Episode Psychosis vs. Control Resting Task 1*. [10.18112/openneuro.ds003944.v1.0.1](https://doi.org/10.18112/openneuro.ds003944.v1.0.1) Modality: eeg Subjects: 82 Recordings: 82 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003944 dataset = DS003944(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003944(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003944( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003944, title = {EEG: First Episode Psychosis vs. 
Control Resting Task 1}, author = {Dean Salisbury and Dylan Seebold and Brian Coffman}, doi = {10.18112/openneuro.ds003944.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003944.v1.0.1}, } ``` ## About This Dataset Resting EEG and MEG data was gathered for two independently collected samples of healthy and First Episode Psychosis (FEP) individuals. To obtain resting data, EEG channels were recorded for 5 minutes using an Elekta Neuromag Vectorview system. EEG was recorded using a low-impedance 10-10 system 60-channel cap. The first collected sample of EEG data is provided here. This sample includes a portion of subjects from the second acquisition (EEG: First Episode Psychosis vs. Control Resting Task 2), since they were collected using the same montage. The subjects from Task 2 that have been included here are: sub-2140A, sub-2170A, sub-2174A, sub-2176A, sub-2177A, sub-2184A, sub-2193A, sub-2214A, sub-2217A, sub-2221A. The phenotype directory contains clinical assessment results and data divided by type for all subjects. The assessment results were categorized as follows: BPRS - Brief Psychiatric Rating Scale, SANS - Scale for Assessment of Negative Symptoms, SAPS - Scale for Assessment of Positive Symptoms, GAFGAS - Global Assessment of Functioning, SFS - Social Functioning Scale, MATRICS - MATRICS Consensus Cognitive Battery, WASI - Wechsler Abbreviated Scale of Intelligence, Hollingshead - Hollingshead Four-Factor Index of Socioeconomic Status, Medications - Chlorpromazine equivalency of prescribed medication at time of EEG scan. Values/scores that were not collected and questions without given responses are denoted by n/a. ## Dataset Information | Dataset ID | `DS003944` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: First Episode Psychosis vs. 
Control Resting Task 1 | | Author (year) | `Salisbury2021_First` | | Canonical | — | | Importable as | `DS003944`, `Salisbury2021_First` | | Year | 2021 | | Authors | Dean Salisbury, Dylan Seebold, Brian Coffman | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003944.v1.0.1](https://doi.org/10.18112/openneuro.ds003944.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003944) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) | [Source URL](https://openneuro.org/datasets/ds003944) | ### Copy-paste BibTeX ```bibtex @dataset{ds003944, title = {EEG: First Episode Psychosis vs. Control Resting Task 1}, author = {Dean Salisbury and Dylan Seebold and Brian Coffman}, doi = {10.18112/openneuro.ds003944.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003944.v1.0.1}, } ``` ## Technical Details - Subjects: 82 - Recordings: 82 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 (81), 3000.00030000003 - Duration (hours): 6.999305547125 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 6.2 GB - File count: 82 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003944.v1.0.1 - Source: openneuro - OpenNeuro: [ds003944](https://openneuro.org/datasets/ds003944) - NeMAR: [ds003944](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) ## API Reference Use the `DS003944` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. Control Resting Task 1 * **Study:** `ds003944` (OpenNeuro) * **Author (year):** `Salisbury2021_First` * **Canonical:** — Also importable as: `DS003944`, `Salisbury2021_First`. Modality: `eeg`. Subjects: 82; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003944](https://openneuro.org/datasets/ds003944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003944](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) DOI: [https://doi.org/10.18112/openneuro.ds003944.v1.0.1](https://doi.org/10.18112/openneuro.ds003944.v1.0.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003944 >>> dataset = DS003944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003944) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003947: eeg dataset, 61 subjects *EEG: First Episode Psychosis vs. Control Resting Task 2* Access recordings and metadata through EEGDash. **Citation:** Dean Salisbury, Dylan Seebold, Brian Coffman (2021). *EEG: First Episode Psychosis vs. Control Resting Task 2*. [10.18112/openneuro.ds003947.v1.0.1](https://doi.org/10.18112/openneuro.ds003947.v1.0.1) Modality: eeg Subjects: 61 Recordings: 61 License: CC0 Source: openneuro Citations: 8.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003947 dataset = DS003947(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003947(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003947( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds003947, title = {EEG: First Episode Psychosis vs. 
Control Resting Task 2}, author = {Dean Salisbury and Dylan Seebold and Brian Coffman}, doi = {10.18112/openneuro.ds003947.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003947.v1.0.1}, } ``` ## About This Dataset Resting EEG and MEG data was gathered for two independently collected samples of healthy and First Episode Psychosis (FEP) individuals. To obtain resting data, EEG channels were recorded for 5 minutes using an Elekta Neuromag Vectorview system. EEG was recorded using a low-impedance 10-10 system 60-channel cap. The second collected sample of EEG data is provided here. This sample excludes a portion of subjects that have been included with the first acquisition (EEG: First Episode Psychosis vs. Control Resting Task 1), since they were collected using the same montage. The subjects that have been excluded here and are included in Task 1 are: sub-2140A, sub-2170A, sub-2174A, sub-2176A, sub-2177A, sub-2184A, sub-2193A, sub-2214A, sub-2217A, sub-2221A. The phenotype directory contains clinical assessment results and data divided by type for all subjects. The assessment results were categorized as follows: BPRS - Brief Psychiatric Rating Scale, SANS - Scale for Assessment of Negative Symptoms, SAPS - Scale for Assessment of Positive Symptoms, GAFGAS - Global Assessment of Functioning, SFS - Social Functioning Scale, MATRICS - MATRICS Consensus Cognitive Battery, WASI - Wechsler Abbreviated Scale of Intelligence, Hollingshead - Hollingshead Four-Factor Index of Socioeconomic Status, Medications - Chlorpromazine equivalency of prescribed medication at time of EEG scan. Values/scores that were not collected and questions without given responses are denoted by n/a. ## Dataset Information | Dataset ID | `DS003947` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: First Episode Psychosis vs. 
Control Resting Task 2 | | Author (year) | `Salisbury2021_First_Episode` | | Canonical | — | | Importable as | `DS003947`, `Salisbury2021_First_Episode` | | Year | 2021 | | Authors | Dean Salisbury, Dylan Seebold, Brian Coffman | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003947.v1.0.1](https://doi.org/10.18112/openneuro.ds003947.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003947) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) | [Source URL](https://openneuro.org/datasets/ds003947) | ### Copy-paste BibTeX ```bibtex @dataset{ds003947, title = {EEG: First Episode Psychosis vs. Control Resting Task 2}, author = {Dean Salisbury and Dylan Seebold and Brian Coffman}, doi = {10.18112/openneuro.ds003947.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds003947.v1.0.1}, } ``` ## Technical Details - Subjects: 61 - Recordings: 61 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 3000.00030000003 (54), 1000.0 (7) - Duration (hours): 5.265971754930556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 12.5 GB - File count: 61 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003947.v1.0.1 - Source: openneuro - OpenNeuro: [ds003947](https://openneuro.org/datasets/ds003947) - NeMAR: [ds003947](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) ## API Reference Use the `DS003947` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003947(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. Control Resting Task 2 * **Study:** `ds003947` (OpenNeuro) * **Author (year):** `Salisbury2021_First_Episode` * **Canonical:** — Also importable as: `DS003947`, `Salisbury2021_First_Episode`. Modality: `eeg`. Subjects: 61; recordings: 61; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003947](https://openneuro.org/datasets/ds003947) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003947](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) DOI: [https://doi.org/10.18112/openneuro.ds003947.v1.0.1](https://doi.org/10.18112/openneuro.ds003947.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003947 >>> dataset = DS003947(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003947) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003969: eeg dataset, 98 subjects *Meditation vs thinking task* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme, Claire Braboszcz (2021). *Meditation vs thinking task*. [10.18112/openneuro.ds003969.v1.0.0](https://doi.org/10.18112/openneuro.ds003969.v1.0.0) Modality: eeg Subjects: 98 Recordings: 392 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003969 dataset = DS003969(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003969(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003969( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003969, title = {Meditation vs thinking task}, author = {Arnaud Delorme and Claire Braboszcz}, doi = {10.18112/openneuro.ds003969.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003969.v1.0.0}, } ``` ## About This Dataset Data collection took place at the Meditation Research Institute (MRI) in Rishikesh, India, under the supervision of Arnaud Delorme, Ph.D. The project was approved by the local MRI Indian ethical committee and the ethical committee of the University of California San Diego (IRB project # 090731). Participants sat either on a blanket on the floor or on a chair for both experimental periods depending on their personal preference. Participants were asked to keep their eyes closed, and all lighting in the room was turned off during data collection. An intercom allowed communication between the experimental and the recording room. Participants performed four blocks: two meditation blocks interleaved with two thinking blocks (in which they were instructed to think actively). Half of the participants started with a meditation block, and half started with a thinking block. The first meditation block is a breath counting meditation for all participants. The second block is a tradition-specific meditation - except for the control group, for which it is a breath counting meditation. 
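Because each of the four blocks is stored as its own task, loading can be restricted to a single block type. A minimal sketch, assuming the `task` entity filter behaves like the `subject` filter shown in the quickstart; the task label used here is a placeholder, so check `dataset.description` for the actual names:

```python
# Build keyword arguments for a task-restricted dataset. The label
# "medb" is hypothetical -- inspect dataset.description for the real
# task names before relying on it.
def task_kwargs(cache_dir, task_label):
    return {"cache_dir": cache_dir, "task": task_label}

kwargs = task_kwargs("./data", "medb")
# from eegdash.dataset import DS003969
# dataset = DS003969(**kwargs)   # would fetch only matching records
print(kwargs)
```

The same pattern applies to any BIDS entity the dataset classes accept as a keyword filter.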
## Dataset Information | Dataset ID | `DS003969` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Meditation vs thinking task | | Author (year) | `Delorme2021` | | Canonical | — | | Importable as | `DS003969`, `Delorme2021` | | Year | 2021 | | Authors | Arnaud Delorme, Claire Braboszcz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003969.v1.0.0](https://doi.org/10.18112/openneuro.ds003969.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003969) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) | [Source URL](https://openneuro.org/datasets/ds003969) | ### Copy-paste BibTeX ```bibtex @dataset{ds003969, title = {Meditation vs thinking task}, author = {Arnaud Delorme and Claire Braboszcz}, doi = {10.18112/openneuro.ds003969.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003969.v1.0.0}, } ``` ## Technical Details - Subjects: 98 - Recordings: 392 - Tasks: 4 - Channels: 79 (294), 72 (98) - Sampling rate (Hz): 1024.0 (386), 2048.0 (6) - Duration (hours): 66.53361111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 54.5 GB - File count: 392 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003969.v1.0.0 - Source: openneuro - OpenNeuro: [ds003969](https://openneuro.org/datasets/ds003969) - NeMAR: [ds003969](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) ## API Reference Use the `DS003969` class to access this dataset programmatically. ### *class* eegdash.dataset.DS003969(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meditation vs thinking task * **Study:** `ds003969` (OpenNeuro) * **Author (year):** `Delorme2021` * **Canonical:** — Also importable as: `DS003969`, `Delorme2021`. 
Modality: `eeg`. Subjects: 98; recordings: 392; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003969](https://openneuro.org/datasets/ds003969) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003969](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) DOI: [https://doi.org/10.18112/openneuro.ds003969.v1.0.0](https://doi.org/10.18112/openneuro.ds003969.v1.0.0) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003969 >>> dataset = DS003969(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003969) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS003987: eeg dataset, 23 subjects *EEG: Amphetamine trials 5CCPT and Probabilistic Learning* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Greg Light, Neal Swerdlow, Jonathan Brigman, Jared Young (2022). *EEG: Amphetamine trials 5CCPT and Probabilistic Learning*. [10.18112/openneuro.ds003987.v1.0.0](https://doi.org/10.18112/openneuro.ds003987.v1.0.0) Modality: eeg Subjects: 23 Recordings: 69 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS003987 dataset = DS003987(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS003987(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS003987( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds003987, title = {EEG: Amphetamine trials 5CCPT and Probabilistic Learning}, author = {James F Cavanagh and Greg Light and Neal Swerdlow and Jonathan Brigman and Jared Young}, doi = {10.18112/openneuro.ds003987.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003987.v1.0.0}, } ``` ## About This Dataset Two different tasks. From: “Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms” Second phase (UH3 phase): Amphetamine trials. Data for both Bhakta et al. 5CCPT paper (humans only) and Cavanagh et al. PLT paper (humans and mice). N=23 humans. 3 drug conditions: placebo, 10mg, 20mg. N=28 mice in code folder. 4 drug conditions: placebo, 0.1, 0.3, 1.0 mg/kg. EEG triggers were odd binary recombinations that were re-translated into 0-255 in Matlab. See .m scripts and Trigger Translator.xls **\*\*\*\*\*\*\*\*\*OK! LISTEN! The .bdf files were too big to import using this function. So I imported them in EEGLab, downsampled to 500 Hz, then saved them as .set files. THEN I ran the import script on these .set files. So you do not need to re-downsample in STEP1 if you run anything from the code folder. \*\*\*\*\*\*\*\*\*** Data collected circa 2016-2019 in San Diego. Data analyzed circa 2017-2021 in New Mexico. 
- James F Cavanagh 06/16/2021 ## Dataset Information | Dataset ID | `DS003987` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Amphetamine trials 5CCPT and Probabilistic Learning | | Author (year) | `Cavanagh2022_Amphetamine_trials_5` | | Canonical | — | | Importable as | `DS003987`, `Cavanagh2022_Amphetamine_trials_5` | | Year | 2022 | | Authors | James F Cavanagh, Greg Light, Neal Swerdlow, Jonathan Brigman, Jared Young | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds003987.v1.0.0](https://doi.org/10.18112/openneuro.ds003987.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds003987) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) | [Source URL](https://openneuro.org/datasets/ds003987) | ### Copy-paste BibTeX ```bibtex @dataset{ds003987, title = {EEG: Amphetamine trials 5CCPT and Probabilistic Learning}, author = {James F Cavanagh and Greg Light and Neal Swerdlow and Jonathan Brigman and Jared Young}, doi = {10.18112/openneuro.ds003987.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds003987.v1.0.0}, } ``` ## Technical Details - Subjects: 23 - Recordings: 69 - Tasks: 1 - Channels: 71 - Sampling rate (Hz): 500.0930232558139 - Duration (hours): 52.076385814525466 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 25.6 GB - File count: 69 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds003987.v1.0.0 - Source: openneuro - OpenNeuro: [ds003987](https://openneuro.org/datasets/ds003987) - NeMAR: [ds003987](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) ## API Reference Use the `DS003987` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS003987(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Amphetamine trials 5CCPT and Probabilistic Learning * **Study:** `ds003987` (OpenNeuro) * **Author (year):** `Cavanagh2022_Amphetamine_trials_5` * **Canonical:** — Also importable as: `DS003987`, `Cavanagh2022_Amphetamine_trials_5`. Modality: `eeg`. Subjects: 23; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
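The Notes above state that `query` accepts MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS`. A hedged sketch of composing such a filter before passing it to the class; the field names `subject` and `task` are assumed to be allowed here, and the reserved key `dataset` must never appear:

```python
# Compose a MongoDB-style query dict. Operators such as "$in" match
# any of several values; plain values match exactly. Do not include
# the key "dataset" -- the class adds its own dataset filter.
def build_query(subjects, task):
    return {"subject": {"$in": list(subjects)}, "task": task}

query = build_query(["01", "02", "03"], "rest")
# from eegdash.dataset import DS003987
# dataset = DS003987(cache_dir="./data", query=query)
print(query)
```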
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003987](https://openneuro.org/datasets/ds003987) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003987](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) DOI: [https://doi.org/10.18112/openneuro.ds003987.v1.0.0](https://doi.org/10.18112/openneuro.ds003987.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003987 >>> dataset = DS003987(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds003987) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004000: eeg dataset, 43 subjects *Fribourg Ultimatum Game in Schizophrenia Study* Access recordings and metadata through EEGDash. **Citation:** Anna Padée, Pascal Missonnier, Anne Prévot, Grégoire Favre, Isabelle Gothuey, Marco Merlo, Jonas Richiardi (2022). *Fribourg Ultimatum Game in Schizophrenia Study*. 
[10.18112/openneuro.ds004000.v1.0.0](https://doi.org/10.18112/openneuro.ds004000.v1.0.0) Modality: eeg Subjects: 43 Recordings: 86 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004000 dataset = DS004000(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004000(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004000( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004000, title = {Fribourg Ultimatum Game in Schizophrenia Study}, author = {Anna Padée and Pascal Missonnier and Anne Prévot and Grégoire Favre and Isabelle Gothuey and Marco Merlo and Jonas Richiardi}, doi = {10.18112/openneuro.ds004000.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004000.v1.0.0}, } ``` ## About This Dataset This is a study of the ultimatum game (UG) task in schizophrenia. Participants were asked to play the UG in both roles, as responder and as proposer, while 128-electrode EEG was recorded. 19 patients with psychotic episodes and 24 healthy controls took part. The dataset was recorded at the University of Fribourg in Switzerland. The project was approved by the Ethics Committee of the University of Fribourg (reference number: 054/13-CER-FR). Participants sat in a comfortable chair in a shielded room and played the game while EEG was recorded. For each role, participants performed three blocks of 30 repetitions each. 
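With two tasks per subject (responder and proposer), a common first step is to bucket recordings by role before comparing conditions. A sketch on synthetic metadata records; the real records returned by EEGDash may use different field names:

```python
# Group recording metadata by task label, e.g. to compare the two
# UG roles. The record dicts below are synthetic stand-ins for the
# metadata EEGDash exposes per recording.
from collections import defaultdict

records = [
    {"subject": "01", "task": "responder"},
    {"subject": "01", "task": "proposer"},
    {"subject": "02", "task": "responder"},
]

by_task = defaultdict(list)
for rec in records:
    by_task[rec["task"]].append(rec["subject"])

print(dict(by_task))  # {'responder': ['01', '02'], 'proposer': ['01']}
```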
## Dataset Information | Dataset ID | `DS004000` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Fribourg Ultimatum Game in Schizophrenia Study | | Author (year) | `Padee2022` | | Canonical | — | | Importable as | `DS004000`, `Padee2022` | | Year | 2022 | | Authors | Anna Padée, Pascal Missonnier, Anne Prévot, Grégoire Favre, Isabelle Gothuey, Marco Merlo, Jonas Richiardi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004000.v1.0.0](https://doi.org/10.18112/openneuro.ds004000.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004000) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) | [Source URL](https://openneuro.org/datasets/ds004000) | ### Copy-paste BibTeX ```bibtex @dataset{ds004000, title = {Fribourg Ultimatum Game in Schizophrenia Study}, author = {Anna Padée and Pascal Missonnier and Anne Prévot and Grégoire Favre and Isabelle Gothuey and Marco Merlo and Jonas Richiardi}, doi = {10.18112/openneuro.ds004000.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004000.v1.0.0}, } ``` ## Technical Details - Subjects: 43 - Recordings: 86 - Tasks: 2 - Channels: 132 - Sampling rate (Hz): 2048.0 - Duration (hours): 12.335 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 22.5 GB - File count: 86 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004000.v1.0.0 - Source: openneuro - OpenNeuro: [ds004000](https://openneuro.org/datasets/ds004000) - NeMAR: [ds004000](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) ## API Reference Use the `DS004000` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004000(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fribourg Ultimatum Game in Schizophrenia Study * **Study:** `ds004000` (OpenNeuro) * **Author (year):** `Padee2022` * **Canonical:** — Also importable as: `DS004000`, `Padee2022`. Modality: `eeg`. Subjects: 43; recordings: 86; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004000](https://openneuro.org/datasets/ds004000) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004000](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) DOI: [https://doi.org/10.18112/openneuro.ds004000.v1.0.0](https://doi.org/10.18112/openneuro.ds004000.v1.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004000 >>> dataset = DS004000(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004000) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004010: eeg dataset, 24 subjects *MAVIS* Access recordings and metadata through EEGDash. **Citation:** Leonhard Waschke, Thomas Donoghue, Lorenz Fiedler, Sydney Smith, Douglas Garrett, Bradley Voytek, Jonas Obleser (2022). *MAVIS*. 
[10.18112/openneuro.ds004010.v1.0.0](https://doi.org/10.18112/openneuro.ds004010.v1.0.0) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004010 dataset = DS004010(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004010(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004010( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004010, title = {MAVIS}, author = {Leonhard Waschke and Thomas Donoghue and Lorenz Fiedler and Sydney Smith and Douglas Garrett and Bradley Voytek and Jonas Obleser}, doi = {10.18112/openneuro.ds004010.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004010.v1.0.0}, } ``` ## About This Dataset EEG data from 24 healthy participants performing a multisensory detection task was collected to investigate the dynamics of EEG activity during varying selective attention and the processing of sensory stimuli with distinct features. Participants detected targets in simultaneous audio-visual noise. 
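Before training a PyTorch model on continuous recordings like these, the signal is typically cut into fixed-length windows. A minimal numpy sketch on synthetic data standing in for `raw.get_data()`, using the 1000 Hz sampling rate listed in the Technical Details; for real pipelines, braindecode's windowing utilities cover this step:

```python
# Cut a (channels, samples) array into non-overlapping 2-second
# windows shaped (n_windows, channels, window_samples), the layout
# most EEG models expect. Data here are synthetic.
import numpy as np

sfreq = 1000                     # Hz, per the Technical Details above
win = 2 * sfreq                  # 2-second windows -> 2000 samples
n_ch, n_samp = 64, 10_000
data = np.random.randn(n_ch, n_samp)

n_win = n_samp // win
windows = data[:, : n_win * win].reshape(n_ch, n_win, win).transpose(1, 0, 2)
print(windows.shape)  # (5, 64, 2000)
```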
## Dataset Information | Dataset ID | `DS004010` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MAVIS | | Author (year) | `Waschke2022` | | Canonical | `MAVIS` | | Importable as | `DS004010`, `Waschke2022`, `MAVIS` | | Year | 2022 | | Authors | Leonhard Waschke, Thomas Donoghue, Lorenz Fiedler, Sydney Smith, Douglas Garrett, Bradley Voytek, Jonas Obleser | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004010.v1.0.0](https://doi.org/10.18112/openneuro.ds004010.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004010) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) | [Source URL](https://openneuro.org/datasets/ds004010) | ### Copy-paste BibTeX ```bibtex @dataset{ds004010, title = {MAVIS}, author = {Leonhard Waschke and Thomas Donoghue and Lorenz Fiedler and Sydney Smith and Douglas Garrett and Bradley Voytek and Jonas Obleser}, doi = {10.18112/openneuro.ds004010.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004010.v1.0.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.45728861111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 23.1 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004010.v1.0.0 - Source: openneuro - OpenNeuro: [ds004010](https://openneuro.org/datasets/ds004010) - NeMAR: [ds004010](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) ## API Reference Use the `DS004010` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004010(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MAVIS * **Study:** `ds004010` (OpenNeuro) * **Author (year):** `Waschke2022` * **Canonical:** `MAVIS` Also importable as: `DS004010`, `Waschke2022`, `MAVIS`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
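The Notes above say that `query` accepts MongoDB-style filters. As a rough illustration of the matching semantics only, here is a plain-Python sketch covering equality and the `$in` operator (this is not eegdash's implementation, which evaluates queries against its metadata database):

```python
def matches(record: dict, query: dict) -> bool:
    """Toy MongoDB-style matcher: plain equality and the $in operator only."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            for op, arg in cond.items():
                if op == "$in":
                    if value not in arg:
                        return False
                else:
                    raise NotImplementedError(f"operator {op!r} not covered here")
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds004010", "subject": "01"},
    {"dataset": "ds004010", "subject": "02"},
    {"dataset": "ds004010", "subject": "03"},
]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # → ['01', '02']
```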
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004010](https://openneuro.org/datasets/ds004010) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004010](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) DOI: [https://doi.org/10.18112/openneuro.ds004010.v1.0.0](https://doi.org/10.18112/openneuro.ds004010.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004010 >>> dataset = DS004010(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004010) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004011: meg dataset, 22 subjects *The nature of neural object representations during dynamic occlusion* Access recordings and metadata through EEGDash. **Citation:** Lina Teichmann, Denise Moerel, Anina Rich, Chris Baker (2022). *The nature of neural object representations during dynamic occlusion*. 
[10.18112/openneuro.ds004011.v1.0.3](https://doi.org/10.18112/openneuro.ds004011.v1.0.3) Modality: meg Subjects: 22 Recordings: 132 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004011 dataset = DS004011(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004011(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004011( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004011, title = {The nature of neural object representations during dynamic occlusion}, author = {Lina Teichmann and Denise Moerel and Anina Rich and Chris Baker}, doi = {10.18112/openneuro.ds004011.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004011.v1.0.3}, } ``` ## About This Dataset The main folder contains the raw MEG data for all participants in standard BIDS format (see references). The ‘sourcedata’ folder contains the behavioural data collected during the MEG session as well as the eyetracking data. It is organized as follows:

- `sourcedata/`
  - `beh/`
    - `sub-[participant number]/`
      - `sub-[participant number]_task-occlusion_run-[run number]_events.csv`: all events for each trial in the MEG session, detailing what was shown on the screen.
      - `sub-[participant number]_task-occlusion_run-[run number]_occframes.csv`: all stimulus positions for each occlusion trial in the MEG session.
      - `sub-[participant number]_task-occlusion_run-[run number]_disframes.csv`: all stimulus positions for each disappearance trial in the MEG session.
  - `eyetracking/`
    - `sub-[participant number]_Occ.edf`: EDF file containing the eye positions during the MEG session.

The ‘derivatives’ folder contains the pre-processed MEG data for each participant, organized as follows:

- `derivatives/`
  - `preprocessed/`
    - `cosmo_p[participant number].mat`: CoSMoMVPA-formatted file with the pre-processed data, epoched for each trial, containing the following variables:
      - `ds_diss`: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (MEG channels)
      - `ds_occ`: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (MEG channels)
      - `ds_loc`: cosmo data struct containing the unpredictable position stream trials epoched relative to stimulus onset (MEG channels)
      - `ds_eyes_diss`: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)
      - `ds_eyes_occ`: cosmo data struct containing the disappearance trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)
      - `ds_eyes_loc`: cosmo data struct containing the unpredictable position stream trials epoched relative to stimulus onset (eye-x, eye-y, pupil size)
    - `cosmo_p[participant number]_position_epochs.mat`: CoSMoMVPA-formatted file with the pre-processed data, epoched relative to each position change, containing the following variables:
      - `ds_tiny`: cell with two entries. The first entry contains the disappearance trials epoched relative to position change; the second entry contains the occlusion trials epoched relative to position change. (MEG channels)
      - `ds_tiny_eyes`: cell with two entries. The first entry contains the disappearance trials epoched relative to position change; the second entry contains the occlusion trials epoched relative to position change. (eye-x, eye-y, pupil size)

References: Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004011` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The nature of neural object representations during dynamic occlusion | | Author (year) | `Teichmann2022` | | Canonical | — | | Importable as | `DS004011`, `Teichmann2022` | | Year | 2022 | | Authors | Lina Teichmann, Denise Moerel, Anina Rich, Chris Baker | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004011.v1.0.3](https://doi.org/10.18112/openneuro.ds004011.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004011) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) | [Source URL](https://openneuro.org/datasets/ds004011) | ### Copy-paste BibTeX ```bibtex @dataset{ds004011, title = {The nature of neural object representations during dynamic occlusion}, author = {Lina Teichmann and Denise Moerel and Anina Rich and Chris
Baker}, doi = {10.18112/openneuro.ds004011.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004011.v1.0.3}, } ``` ## Technical Details - Subjects: 22 - Recordings: 132 - Tasks: 1 - Channels: 309 - Sampling rate (Hz): 1200.0 - Duration (hours): 39.53718888888889 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 198.1 GB - File count: 132 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004011.v1.0.3 - Source: openneuro - OpenNeuro: [ds004011](https://openneuro.org/datasets/ds004011) - NeMAR: [ds004011](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) ## API Reference Use the `DS004011` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004011(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The nature of neural object representations during dynamic occlusion * **Study:** `ds004011` (OpenNeuro) * **Author (year):** `Teichmann2022` * **Canonical:** — Also importable as: `DS004011`, `Teichmann2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 22; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004011](https://openneuro.org/datasets/ds004011) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004011](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) DOI: [https://doi.org/10.18112/openneuro.ds004011.v1.0.3](https://doi.org/10.18112/openneuro.ds004011.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004011 >>> dataset = DS004011(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004011) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004012: meg dataset, 30 subjects *BRAR_NQ* Access recordings and metadata through EEGDash. **Citation:** Nur Syairah Ab Rani, Nurfaizatul Aisyah Ab Aziz, Mohammed Farouq Reza, Muzaimi Mustapha (2022). *BRAR_NQ*. 
[10.18112/openneuro.ds004012.v1.0.0](https://doi.org/10.18112/openneuro.ds004012.v1.0.0) Modality: meg Subjects: 30 Recordings: 294 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004012 dataset = DS004012(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004012(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004012( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004012, title = {BRAR_NQ}, author = {Nur Syairah Ab Rani and Nurfaizatul Aisyah Ab Aziz and Mohammed Farouq Reza and Muzaimi Mustapha}, doi = {10.18112/openneuro.ds004012.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004012.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. 
[https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004012` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BRAR_NQ | | Author (year) | `Rani2022` | | Canonical | `Rani2019` | | Importable as | `DS004012`, `Rani2022`, `Rani2019` | | Year | 2022 | | Authors | Nur Syairah Ab Rani, Nurfaizatul Aisyah Ab Aziz, Mohammed Farouq Reza, Muzaimi Mustapha | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004012.v1.0.0](https://doi.org/10.18112/openneuro.ds004012.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004012) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) | [Source URL](https://openneuro.org/datasets/ds004012) | ### Copy-paste BibTeX ```bibtex @dataset{ds004012, title = {BRAR_NQ}, author = {Nur Syairah Ab Rani and Nurfaizatul Aisyah Ab Aziz and Mohammed Farouq Reza and Muzaimi Mustapha}, doi = {10.18112/openneuro.ds004012.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004012.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 294 - Tasks: 10 - Channels: 383 - Sampling rate (Hz): 1000.0 - Duration (hours): 15.016307222222222 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 78.3 GB - File count: 294 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004012.v1.0.0 - Source: openneuro - OpenNeuro: [ds004012](https://openneuro.org/datasets/ds004012) - NeMAR: [ds004012](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) ## API Reference Use the `DS004012` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BRAR_NQ * **Study:** `ds004012` (OpenNeuro) * **Author (year):** `Rani2022` * **Canonical:** `Rani2019` Also importable as: `DS004012`, `Rani2022`, `Rani2019`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 30; recordings: 294; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
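With 294 recordings spread over 30 subjects and 10 tasks, it can help to group per-recording metadata before loading any data. A minimal sketch over illustrative record dicts (the field names here are assumptions, not eegdash's schema; real metadata comes from the dataset's records and `dataset.description`):

```python
from collections import defaultdict

# Illustrative per-recording metadata; real records come from the dataset's
# metadata (the field names here are assumptions, not eegdash's schema).
records = [
    {"subject": "01", "task": "rest"},
    {"subject": "01", "task": "listening"},
    {"subject": "02", "task": "rest"},
]

by_subject = defaultdict(list)
for rec in records:
    by_subject[rec["subject"]].append(rec["task"])

for subject, tasks in sorted(by_subject.items()):
    print(subject, tasks)
# → 01 ['rest', 'listening']
# → 02 ['rest']
```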
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004012](https://openneuro.org/datasets/ds004012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004012](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) DOI: [https://doi.org/10.18112/openneuro.ds004012.v1.0.0](https://doi.org/10.18112/openneuro.ds004012.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004012 >>> dataset = DS004012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004012) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004015: eeg dataset, 36 subjects *Attended speaker paradigm (cEEGrid data)* Access recordings and metadata through EEGDash. **Citation:** Bjoern Holtze, Marc Rosenkranz, Manuela Jaeger, Stefan Debener, Bojana Mirkovic (2022). *Attended speaker paradigm (cEEGrid data)*. 
[10.18112/openneuro.ds004015.v1.0.2](https://doi.org/10.18112/openneuro.ds004015.v1.0.2) Modality: eeg Subjects: 36 Recordings: 36 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004015 dataset = DS004015(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004015(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004015( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004015, title = {Attended speaker paradigm (cEEGrid data)}, author = {Bjoern Holtze and Marc Rosenkranz and Manuela Jaeger and Stefan Debener and Bojana Mirkovic}, doi = {10.18112/openneuro.ds004015.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004015.v1.0.2}, } ``` ## About This Dataset Within this study cEEGrid data from two previous studies were pooled. 15 participants from Jaeger et al. (2020) and 21 from Holtze et al. (2021) were included. Participants performed a two-competing speaker paradigm in both original studies. Participants were instructed to either attend to the left or right audio book. The paradigm consisted of six (Jaeger et al. 2020) or five (Holtze et al. 2021) 10-minute blocks of audio book presentation. In Jaeger et al. (2020) both audio books were always presented equally loud. In Holtze et al. 
(2021), a 10-minute block could either be presented in the omnidirectional condition (both audio books were presented equally loud) or in the beamforming condition (the to-be-attended audio book was louder than the to-be-ignored audio book). The first 10-minute block was always presented in the omnidirectional condition, whereas the conditions alternated over the latter four blocks, with one half of the participants starting with the omnidirectional condition and the other half starting with the beamforming condition. The article ([https://doi.org/10.3389/fnins.2022.869426](https://doi.org/10.3389/fnins.2022.869426)) contains all methodological details. - Björn Holtze (February 2022) ## Dataset Information | Dataset ID | `DS004015` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Attended speaker paradigm (cEEGrid data) | | Author (year) | `Holtze2022_Attended` | | Canonical | — | | Importable as | `DS004015`, `Holtze2022_Attended` | | Year | 2022 | | Authors | Bjoern Holtze, Marc Rosenkranz, Manuela Jaeger, Stefan Debener, Bojana Mirkovic | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004015.v1.0.2](https://doi.org/10.18112/openneuro.ds004015.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004015) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) | [Source URL](https://openneuro.org/datasets/ds004015) | ### Copy-paste BibTeX ```bibtex @dataset{ds004015, title = {Attended speaker paradigm (cEEGrid data)}, author = {Bjoern Holtze and Marc Rosenkranz and Manuela Jaeger and Stefan Debener and Bojana Mirkovic}, doi = {10.18112/openneuro.ds004015.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004015.v1.0.2}, } ``` ## Technical Details - Subjects: 36 - Recordings: 36 - Tasks: 1 - Channels: 18 - Sampling rate (Hz): 500.0 - Duration (hours):
47.28973222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 6.0 GB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004015.v1.0.2 - Source: openneuro - OpenNeuro: [ds004015](https://openneuro.org/datasets/ds004015) - NeMAR: [ds004015](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) ## API Reference Use the `DS004015` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004015(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Attended speaker paradigm (cEEGrid data) * **Study:** `ds004015` (OpenNeuro) * **Author (year):** `Holtze2022_Attended` * **Canonical:** — Also importable as: `DS004015`, `Holtze2022_Attended`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
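The Technical Details above (500 Hz, 18 channels, roughly 47.3 hours, 6.0 GB on disk) can be sanity-checked with a back-of-envelope size estimate, assuming samples are held as 32-bit floats (the on-disk BIDS encoding may differ):

```python
# Back-of-envelope check of the Technical Details above: total samples =
# duration x sampling rate; the size estimate assumes 32-bit floats, while
# the on-disk BIDS files may use a different encoding.
hours = 47.29         # total duration listed for the dataset
sfreq = 500.0         # sampling rate in Hz
n_channels = 18
bytes_per_sample = 4  # float32

total_samples = hours * 3600 * sfreq
size_gb = total_samples * n_channels * bytes_per_sample / 1e9
print(f"{size_gb:.1f} GB")  # → 6.1 GB, close to the listed 6.0 GB
```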
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004015](https://openneuro.org/datasets/ds004015) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004015](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) DOI: [https://doi.org/10.18112/openneuro.ds004015.v1.0.2](https://doi.org/10.18112/openneuro.ds004015.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004015 >>> dataset = DS004015(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004015) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004017: eeg dataset, 21 subjects *Embodied Learning for Literacy EEG* Access recordings and metadata through EEGDash. **Citation:** Linn Damsgaard, Marta Topor, Anne-Mette Veber Nielsen, Anne Kær Gejl, Anne Sofie Bøgh Malling, Mark Schram Christensen, Rasmus Ahmt Hansen, Søren Kildahl, Jacob Wienecke (2022). *Embodied Learning for Literacy EEG*. 
[10.18112/openneuro.ds004017.v1.0.3](https://doi.org/10.18112/openneuro.ds004017.v1.0.3) Modality: eeg Subjects: 21 Recordings: 63 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004017 dataset = DS004017(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004017(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004017( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004017, title = {Embodied Learning for Literacy EEG}, author = {Linn Damsgaard and Marta Topor and Anne-Mette Veber Nielsen and Anne Kær Gejl and Anne Sofie Bøgh Malling and Mark Schram Christensen and Rasmus Ahmt Hansen and Søren Kildahl and Jacob Wienecke}, doi = {10.18112/openneuro.ds004017.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004017.v1.0.3}, } ``` ## About This Dataset There are three files per participant, collected for each of the three stages of the procedure:

- Stage 1 (before measurement): a two-alternative forced-choice discrimination task including the letters “b” and “d”.
- Stage 2 (intervention measurement): a simple visual search task including a target letter (either b or d) and three distractor letters chosen at random (p or q).
- Stage 3 (after measurement): a two-alternative forced-choice discrimination task including the letters “b” and “d”.

Participants were assigned to two groups.
Participants in the intervention group were: sub-04, sub-05, sub-06, sub-07, sub-09, sub-10, sub-12, sub-14, sub-15, sub-16, sub-21 Participants in the control group were: sub-01, sub-02, sub-03, sub-08, sub-11, sub-13, sub-17, sub-18, sub-19, sub-20 Events in all recordings correspond to stimulus presentation. The value of 100 represents letter b stimuli and 200 represents letter d stimuli. Events marked with 10 (b) and 20 (d) represent practice trials. The detailed description of the tasks and the procedure can be found in this preprint: For questions about the tasks and the data please email Jacob Wienecke at [wienecke@nexs.ku.dk](mailto:wienecke@nexs.ku.dk). Marta Topor 17/03/2022 ## Dataset Information | Dataset ID | `DS004017` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Embodied Learning for Literacy EEG | | Author (year) | `Damsgaard2022` | | Canonical | — | | Importable as | `DS004017`, `Damsgaard2022` | | Year | 2022 | | Authors | Linn Damsgaard, Marta Topor, Anne-Mette Veber Nielsen, Anne Kær Gejl, Anne Sofie Bøgh Malling, Mark Schram Christensen, Rasmus Ahmt Hansen, Søren Kildahl, Jacob Wienecke | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004017.v1.0.3](https://doi.org/10.18112/openneuro.ds004017.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004017) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) | [Source URL](https://openneuro.org/datasets/ds004017) | ### Copy-paste BibTeX ```bibtex @dataset{ds004017, title = {Embodied Learning for Literacy EEG}, author = {Linn Damsgaard and Marta Topor and Anne-Mette Veber Nielsen and Anne Kær Gejl and Anne Sofie Bøgh Malling and Mark Schram Christensen and Rasmus Ahmt Hansen and Søren Kildahl and Jacob Wienecke}, doi = {10.18112/openneuro.ds004017.v1.0.3}, url = 
{https://doi.org/10.18112/openneuro.ds004017.v1.0.3}, } ``` ## Technical Details - Subjects: 21 - Recordings: 63 - Tasks: — - Channels: 65 - Sampling rate (Hz): 2048.0 - Duration (hours): 7.723333333333334 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 20.9 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004017.v1.0.3 - Source: openneuro - OpenNeuro: [ds004017](https://openneuro.org/datasets/ds004017) - NeMAR: [ds004017](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) ## API Reference Use the `DS004017` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004017(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Embodied Learning for Literacy EEG * **Study:** `ds004017` (OpenNeuro) * **Author (year):** `Damsgaard2022` * **Canonical:** — Also importable as: `DS004017`, `Damsgaard2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 63; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004017](https://openneuro.org/datasets/ds004017) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004017](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) DOI: [https://doi.org/10.18112/openneuro.ds004017.v1.0.3](https://doi.org/10.18112/openneuro.ds004017.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004017 >>> dataset = DS004017(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004017) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004018: eeg dataset, 16 subjects *EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz* Access recordings and metadata through EEGDash. **Citation:** Grootswagers, T., Robinson, A.K., Carlson, T.A. (2022). *EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz*. 
[10.18112/openneuro.ds004018.v2.0.0](https://doi.org/10.18112/openneuro.ds004018.v2.0.0) Modality: eeg Subjects: 16 Recordings: 32 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004018 dataset = DS004018(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004018(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004018( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004018, title = {EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz}, author = {Grootswagers, T. and Robinson, A.K. and Carlson, T.A.}, doi = {10.18112/openneuro.ds004018.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004018.v2.0.0}, } ``` ## About This Dataset Grootswagers T.\*, Robinson A.K.\*, Carlson T.A. (2019). The representational dynamics of visual objects in rapid serial visual processing streams. 
NeuroImage, 188, 668-679 [https://doi.org/10.1016/j.neuroimage.2018.12.046](https://doi.org/10.1016/j.neuroimage.2018.12.046) See also [https://osf.io/a7knv/](https://osf.io/a7knv/) ## Dataset Information | Dataset ID | `DS004018` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz | | Author (year) | `Grootswagers2022_RSVP` | | Canonical | — | | Importable as | `DS004018`, `Grootswagers2022_RSVP` | | Year | 2022 | | Authors | Grootswagers, T., Robinson, A.K., Carlson, T.A. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004018.v2.0.0](https://doi.org/10.18112/openneuro.ds004018.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004018) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) | [Source URL](https://openneuro.org/datasets/ds004018) | ### Copy-paste BibTeX ```bibtex @dataset{ds004018, title = {EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz}, author = {Grootswagers, T. and Robinson, A.K. and Carlson, T.A.}, doi = {10.18112/openneuro.ds004018.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004018.v2.0.0}, } ``` ## Technical Details - Subjects: 16 - Recordings: 32 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 0.7925055555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.6 GB - File count: 32 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004018.v2.0.0 - Source: openneuro - OpenNeuro: [ds004018](https://openneuro.org/datasets/ds004018) - NeMAR: [ds004018](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) ## API Reference Use the `DS004018` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz * **Study:** `ds004018` (OpenNeuro) * **Author (year):** `Grootswagers2022_RSVP` * **Canonical:** — Also importable as: `DS004018`, `Grootswagers2022_RSVP`. Modality: `eeg`. Subjects: 16; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
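As a mental model for the MongoDB-style filters mentioned in the Notes, the sketch below shows how a `$in` condition selects metadata records. It is purely illustrative: the `matches` helper and the sample records are invented here, and EEGDash evaluates the real query against its metadata service rather than in local Python.

```python
# Illustrative only: how a MongoDB-style {"field": {"$in": [...]}} filter
# selects records. This tiny matcher mimics the semantics of the `query`
# argument for intuition; it is NOT EEGDash's internal implementation.
def matches(record, query):
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

# Hypothetical metadata records for two recordings of ds004018
records = [
    {"dataset": "ds004018", "subject": "01"},
    {"dataset": "ds004018", "subject": "05"},
]
# User filter ANDed with the dataset filter, as described in the Notes
query = {"dataset": "ds004018", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01']
```

Only subject "01" survives, because "05" fails the `$in` condition while both records satisfy the dataset filter.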
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004018](https://openneuro.org/datasets/ds004018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004018](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) DOI: [https://doi.org/10.18112/openneuro.ds004018.v2.0.0](https://doi.org/10.18112/openneuro.ds004018.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004018 >>> dataset = DS004018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004018) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004019: eeg dataset, 62 subjects *Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study* Access recordings and metadata through EEGDash. **Citation:** Graciela C. Alatorre-Cruz, Heather Downs, Darcy Hagood, Seth T. Sorensen, D. Keith Williams, Linda Larson-Prior (2022). *Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study*. 
[10.18112/openneuro.ds004019.v1.0.0](https://doi.org/10.18112/openneuro.ds004019.v1.0.0) Modality: eeg Subjects: 62 Recordings: 62 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004019 dataset = DS004019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004019, title = {Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study}, author = {Graciela C. Alatorre-Cruz and Heather Downs and Darcy Hagood and Seth T. Sorensen and D. Keith Williams and Linda Larson-Prior}, doi = {10.18112/openneuro.ds004019.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004019.v1.0.0}, } ``` ## About This Dataset Introduction This EEG dataset contains the electrophysiological signal from sixty-two obese and non-obese preteens during a delayed-verification math task. The stimuli were designed and administered using E-Prime software (Version 2) at the Arkansas Children's Nutrition Center (ACNC), Little Rock, Arkansas. The University of Arkansas for Medical Sciences (UAMS) approved the study protocol. This research was supported by USDA/Agricultural Research Service Project 6026-51000-012-06S.
> Raw data files The data was acquired with a Geodesic Net Amps 300 system running Netstation 4.5.2 software using the 128-channel HydroCel Geodesic Sensor Net™ (Magstim EGI., Eugene OR, USA). No operations have been performed on the data. Participant data The *Participants.tsv* file contains age, gender, body mass index (BMI), and performance. How to cite All use of this dataset in a publication context requires the following paper to be cited: Alatorre-Cruz, G.C., Downs, H., Hagood, D., Sorensen, S.T., Williams, D.K., Larson-Prior, L. (2022). Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study. Frontiers in Human Neuroscience, In press. Contact Questions regarding the EEG data may be addressed to Catalina Alatorre-Cruz ([gcalatorrecruz@uams.edu](mailto:gcalatorrecruz@uams.edu)). Questions regarding the project, in general, may be addressed to Linda Larson-Prior ([ljlarsonprior@uams.edu](mailto:ljlarsonprior@uams.edu)). ## Dataset Information | Dataset ID | `DS004019` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study | | Author (year) | `AlatorreCruz2022_Effect` | | Canonical | — | | Importable as | `DS004019`, `AlatorreCruz2022_Effect` | | Year | 2022 | | Authors | Graciela C. Alatorre-Cruz, Heather Downs, Darcy Hagood, Seth T. Sorensen, D.
Keith Williams, Linda Larson-Prior | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004019.v1.0.0](https://doi.org/10.18112/openneuro.ds004019.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) | [Source URL](https://openneuro.org/datasets/ds004019) | ### Copy-paste BibTeX ```bibtex @dataset{ds004019, title = {Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study}, author = {Graciela C. Alatorre-Cruz and Heather Downs and Darcy Hagood and Seth T. Sorensen and D. Keith Williams and Linda Larson-Prior}, doi = {10.18112/openneuro.ds004019.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004019.v1.0.0}, } ``` ## Technical Details - Subjects: 62 - Recordings: 62 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 500.0 - Duration (hours): Not calculated - Pathology: Obese - Modality: Visual - Type: Other - Size on disk: 17.3 GB - File count: 62 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004019.v1.0.0 - Source: openneuro - OpenNeuro: [ds004019](https://openneuro.org/datasets/ds004019) - NeMAR: [ds004019](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) ## API Reference Use the `DS004019` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004019(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study * **Study:** `ds004019` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect` * **Canonical:** — Also importable as: `DS004019`, `AlatorreCruz2022_Effect`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Obese`. Subjects: 62; recordings: 62; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004019](https://openneuro.org/datasets/ds004019) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004019](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) DOI: [https://doi.org/10.18112/openneuro.ds004019.v1.0.0](https://doi.org/10.18112/openneuro.ds004019.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004019 >>> dataset = DS004019(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004022: eeg dataset, 7 subjects *Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment* Access recordings and metadata through EEGDash. **Citation:** Seho Lee, Hee Ra Jung, In-Nea Wang, Min-Kyung Jung, Hakseung Kim, Dong-Joo Kim (2022). *Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment*. [10.18112/openneuro.ds004022.v1.0.0](https://doi.org/10.18112/openneuro.ds004022.v1.0.0) Modality: eeg Subjects: 7 Recordings: 21 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004022 dataset = DS004022(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004022(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004022( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004022, title = {Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment}, author = {Seho Lee and Hee Ra Jung and In-Nea Wang and Min-Kyung Jung and Hakseung Kim and Dong-Joo Kim}, doi = {10.18112/openneuro.ds004022.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004022.v1.0.0}, } ``` ## About This Dataset This dataset consists of raw 18-channel EEG and functional near-infrared spectroscopy (fNIRS) recordings from 7 human participants with orthopedic impairment during motor imagery (MI). The participants performed a series of MI-related trials across three sessions. These sessions comprised 40 trials, in which four different MI tasks were presented in random order (e.g., Reach → Twist → Lift → Reach → Grasp → Grasp → Twist → Reach → Lift → Reach). Each trial began with 3 s of a fixation cross. The monitor then displayed a 4 s visual cue, followed by 3 s of letters indicating the ready state with a gray screen to eliminate the afterimage. The participants were then instructed to perform the imaginary movement for 5 s in the given order.
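The trial structure described above (3 s fixation, 4 s cue, 3 s ready screen, 5 s motor imagery) implies a fixed within-trial timeline, which can be computed directly. The segment names and durations below are restated from the text, not read from the dataset's event files, so treat the resulting window as a sketch rather than ground truth for epoching.

```python
# Per-trial timeline for the paradigm described above:
# fixation (3 s) -> visual cue (4 s) -> ready screen (3 s) -> motor imagery (5 s).
segments = [("fixation", 3.0), ("cue", 4.0), ("ready", 3.0), ("imagery", 5.0)]

onset = 0.0
timeline = {}
for name, duration in segments:
    timeline[name] = (onset, onset + duration)  # (start, end) in seconds from trial onset
    onset += duration

print(timeline["imagery"])       # (10.0, 15.0): the 5 s MI window starts 10 s into the trial
print(f"trial length: {onset:.0f} s")  # trial length: 15 s
```

This kind of offset table is what you would feed to an epoching routine when cutting the 5 s imagery window out of each trial.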
## Dataset Information | Dataset ID | `DS004022` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment | | Author (year) | `Lee2022` | | Canonical | — | | Importable as | `DS004022`, `Lee2022` | | Year | 2022 | | Authors | Seho Lee, Hee Ra Jung, In-Nea Wang, Min-Kyung Jung, Hakseung Kim, Dong-Joo Kim | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004022.v1.0.0](https://doi.org/10.18112/openneuro.ds004022.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004022) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) | [Source URL](https://openneuro.org/datasets/ds004022) | ### Copy-paste BibTeX ```bibtex @dataset{ds004022, title = {Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment}, author = {Seho Lee and Hee Ra Jung and In-Nea Wang and Min-Kyung Jung and Hakseung Kim and Dong-Joo Kim}, doi = {10.18112/openneuro.ds004022.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004022.v1.0.0}, } ``` ## Technical Details - Subjects: 7 - Recordings: 21 - Tasks: 1 - Channels: 18 (19), 16 (2) - Sampling rate (Hz): 500.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 616.6 MB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004022.v1.0.0 - Source: openneuro - OpenNeuro: [ds004022](https://openneuro.org/datasets/ds004022) - NeMAR: [ds004022](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) ## API Reference Use the `DS004022` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004022(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment * **Study:** `ds004022` (OpenNeuro) * **Author (year):** `Lee2022` * **Canonical:** — Also importable as: `DS004022`, `Lee2022`. Modality: `eeg`. Subjects: 7; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004022](https://openneuro.org/datasets/ds004022) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004022](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) DOI: [https://doi.org/10.18112/openneuro.ds004022.v1.0.0](https://doi.org/10.18112/openneuro.ds004022.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004022 >>> dataset = DS004022(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004022) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004024: eeg dataset, 13 subjects *TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL)* Access recordings and metadata through EEGDash. **Citation:** Julio Cesar Hernandez Pavon, Nils Schneider Garces, John Patrick Begnoche, Lee Miller, Tommi Raij (2022). *TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL)*. 
[10.18112/openneuro.ds004024.v1.0.1](https://doi.org/10.18112/openneuro.ds004024.v1.0.1) Modality: eeg Subjects: 13 Recordings: 497 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004024 dataset = DS004024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004024, title = {TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL)}, author = {Julio Cesar Hernandez Pavon and Nils Schneider Garces and John Patrick Begnoche and Lee Miller and Tommi Raij}, doi = {10.18112/openneuro.ds004024.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004024.v1.0.1}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004024` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL) | | Author (year) | `Pavon2022` | | Canonical | — | | Importable as | `DS004024`, `Pavon2022` | | Year | 2022 | | Authors | Julio Cesar Hernandez Pavon, Nils Schneider Garces, John Patrick Begnoche, Lee Miller, Tommi Raij | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004024.v1.0.1](https://doi.org/10.18112/openneuro.ds004024.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) | [Source URL](https://openneuro.org/datasets/ds004024) | ### Copy-paste BibTeX ```bibtex @dataset{ds004024, title = {TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL)}, author = {Julio Cesar Hernandez Pavon and Nils Schneider Garces and John Patrick Begnoche and Lee Miller and Tommi Raij}, doi = {10.18112/openneuro.ds004024.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004024.v1.0.1}, } ``` ## Technical Details - Subjects: 13 - Recordings: 497 - Tasks: 3 - Channels: 69 - Sampling rate (Hz): 20000.0 - Duration (hours): 54.50040104166667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1021.2 GB - File count: 497 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004024.v1.0.1 - Source: openneuro - OpenNeuro: [ds004024](https://openneuro.org/datasets/ds004024) - NeMAR: [ds004024](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) ## API Reference Use the `DS004024` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004024(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL) * **Study:** `ds004024` (OpenNeuro) * **Author (year):** `Pavon2022` * **Canonical:** — Also importable as: `DS004024`, `Pavon2022`. Modality: `eeg`. Subjects: 13; recordings: 497; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
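The Technical Details above (69 channels at 20 kHz for roughly 54.5 hours) explain why this dataset weighs in around a terabyte. A back-of-envelope estimate, assuming 4-byte float32 samples (the true on-disk size depends on the storage format, so this is order-of-magnitude only):

```python
# Rough raw-size sanity check using the figures from Technical Details:
# 69 channels, 20 kHz sampling, ~54.5 hours, assuming float32 (4 bytes/sample).
n_channels = 69
sfreq_hz = 20_000
hours = 54.5
bytes_per_sample = 4

total_bytes = n_channels * sfreq_hz * bytes_per_sample * hours * 3600
print(f"{total_bytes / 1e9:.0f} GB")  # ~1083 GB, same ballpark as the listed 1021.2 GB
```

Running a similar estimate before instantiating a dataset with a local `cache_dir` is a cheap way to check whether you have the disk space for it.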
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004024](https://openneuro.org/datasets/ds004024) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004024](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) DOI: [https://doi.org/10.18112/openneuro.ds004024.v1.0.1](https://doi.org/10.18112/openneuro.ds004024.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004024 >>> dataset = DS004024(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004033: eeg dataset, 18 subjects *Electrode walking study* Access recordings and metadata through EEGDash. **Citation:** Joanna Scanlon, Nadine Jacobsen, Marike Maack, Stefan Debener (2022). *Electrode walking study*. 
[10.18112/openneuro.ds004033.v1.0.0](https://doi.org/10.18112/openneuro.ds004033.v1.0.0) Modality: eeg Subjects: 18 Recordings: 36 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004033 dataset = DS004033(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004033(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004033( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004033, title = {Electrode walking study}, author = {Joanna Scanlon and Nadine Jacobsen and Marike Maack and Stefan Debener}, doi = {10.18112/openneuro.ds004033.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004033.v1.0.0}, } ```

## About This Dataset

The electrode and synchronized walking study: participants performed three tasks outdoors: (1) eyes open/eyes closed while standing, (2) an oddball task while standing, walking alone, and walking with the experimenter, and (3) three walking-with-experimenter conditions. Only part of the second task (the standing and walking oddball) was used for the first paper. Each of the 18 participants performed the tasks using both active (session 1) and passive (session 2) electrodes, in counterbalanced order.

**Task 1: Eyes open / Eyes closed.** The participant stands facing a wall with eyes open (or closed) for 1 min, then 1 min with eyes closed (or open). The order is counterbalanced and each type is repeated twice.

**Task 2: Oddball task (standing / walking alone / walking together).** Participants listened to tones through headphones and silently counted the deviant tones. The tones were 800 and 1000 Hz, with standard/target status counterbalanced across participants. During the walking conditions, participants walked clockwise around an outdoor (roofed) basketball arena, following pylons. Each block lasted about 5-6 minutes; blocks were counterbalanced and each was repeated twice.

**Task 3: Walking together (Natural / Control / Synchronize).** Participants walked with the experimenter for 6 minutes in each of three conditions. The experimenter listened to a metronome while walking and synchronized their steps with it (this was also true during the walking-together oddball task). In Natural walking, the participant was simply asked to walk with the experimenter, with no other instruction. In Control, the participant was "blinded" with side-blinders that blocked their view of the experimenter. In Synchronize, participants tried to synchronize their steps with the experimenter.

All walking and oddball conditions started with a countdown (there is a dedicated trigger for this in the oddball conditions, but not in the Task 3 conditions).
It plays during the first ~ 12 seconds of the 6 min trial - Joanna Scanlon (Feb 2022) ## Dataset Information | Dataset ID | `DS004033` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Electrode walking study | | Author (year) | `Scanlon2022` | | Canonical | — | | Importable as | `DS004033`, `Scanlon2022` | | Year | 2022 | | Authors | Joanna Scanlon, Nadine Jacobsen, Marike Maack, Stefan Debener | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004033.v1.0.0](https://doi.org/10.18112/openneuro.ds004033.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004033) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) | [Source URL](https://openneuro.org/datasets/ds004033) | ### Copy-paste BibTeX ```bibtex @dataset{ds004033, title = {Electrode walking study}, author = {Joanna Scanlon and Nadine Jacobsen and Marike Maack and Stefan Debener}, doi = {10.18112/openneuro.ds004033.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004033.v1.0.0}, } ``` ## Technical Details - Subjects: 18 - Recordings: 36 - Tasks: 2 - Channels: 67 - Sampling rate (Hz): 500.0 - Duration (hours): 42.64460166666667 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 19.8 GB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004033.v1.0.0 - Source: openneuro - OpenNeuro: [ds004033](https://openneuro.org/datasets/ds004033) - NeMAR: [ds004033](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) ## API Reference Use the `DS004033` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrode walking study * **Study:** `ds004033` (OpenNeuro) * **Author (year):** `Scanlon2022` * **Canonical:** — Also importable as: `DS004033`, `Scanlon2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 36; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004033](https://openneuro.org/datasets/ds004033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004033](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) DOI: [https://doi.org/10.18112/openneuro.ds004033.v1.0.0](https://doi.org/10.18112/openneuro.ds004033.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004033 >>> dataset = DS004033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004033) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004040: eeg dataset, 13 subjects *Trance channeling EEG study* Access recordings and metadata through EEGDash. **Citation:** Cedric Cannard, Arnaud Delorme, Helane Wahbeh (2022). *Trance channeling EEG study*. 
[10.18112/openneuro.ds004040.v1.0.0](https://doi.org/10.18112/openneuro.ds004040.v1.0.0) Modality: eeg Subjects: 13 Recordings: 26 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004040 dataset = DS004040(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004040(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004040( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004040, title = {Trance channeling EEG study}, author = {Cedric Cannard and Arnaud Delorme and Helane Wahbeh}, doi = {10.18112/openneuro.ds004040.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004040.v1.0.0}, } ``` ## About This Dataset This group contains 13 participants who went through a thorough screening and completed 2 sessions (on different days) each. The experiment consisted of alternating 5-minute blocks of trance channeling and resting state (3 periods per session for each condition).
## Dataset Information | Dataset ID | `DS004040` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Trance channeling EEG study | | Author (year) | `Cannard2022` | | Canonical | — | | Importable as | `DS004040`, `Cannard2022` | | Year | 2022 | | Authors | Cedric Cannard, Arnaud Delorme, Helane Wahbeh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004040.v1.0.0](https://doi.org/10.18112/openneuro.ds004040.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004040) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) | [Source URL](https://openneuro.org/datasets/ds004040) | ### Copy-paste BibTeX ```bibtex @dataset{ds004040, title = {Trance channeling EEG study}, author = {Cedric Cannard and Arnaud Delorme and Helane Wahbeh}, doi = {10.18112/openneuro.ds004040.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004040.v1.0.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 26 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512.0 - Duration (hours): 25.56194444444445 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 11.6 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004040.v1.0.0 - Source: openneuro - OpenNeuro: [ds004040](https://openneuro.org/datasets/ds004040) - NeMAR: [ds004040](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) ## API Reference Use the `DS004040` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Trance channeling EEG study * **Study:** `ds004040` (OpenNeuro) * **Author (year):** `Cannard2022` * **Canonical:** — Also importable as: `DS004040`, `Cannard2022`. Modality: `eeg`. 
Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004040](https://openneuro.org/datasets/ds004040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004040](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) DOI: [https://doi.org/10.18112/openneuro.ds004040.v1.0.0](https://doi.org/10.18112/openneuro.ds004040.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004040 >>> dataset = DS004040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004040) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004043: eeg dataset, 20 subjects *The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes* Access recordings and metadata through EEGDash. **Citation:** Moerel, Denise, Grootswagers, Tijl, Robinson, Amanda, Shatek, Sophia, Woolgar, Alexandra, Carlson, Thomas, Rich, Anina (2022). *The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes*. [10.18112/openneuro.ds004043.v1.1.0](https://doi.org/10.18112/openneuro.ds004043.v1.1.0) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004043 dataset = DS004043(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004043(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004043( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004043, title = {The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes}, author = {Moerel, Denise and Grootswagers, Tijl and Robinson, Amanda and Shatek, Sophia and Woolgar, Alexandra and Carlson, Thomas and Rich, Anina}, doi = {10.18112/openneuro.ds004043.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004043.v1.1.0}, } ``` ## About This Dataset **Experiment Details** Human electroencephalography recordings from 20 participants. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a “target” grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored, cued by colour) and temporal expectation (stimulus onset timing was either predictable or not). Experiment length: 1 hour. More information: [https://doi.org/10.17605/OSF.IO/5B8K6](https://doi.org/10.17605/OSF.IO/5B8K6) (OSF repository with more information and example analysis code). Moerel, D., Grootswagers, T., Robinson, A. K., Shatek, S. M., Woolgar, A., Carlson, T. A., & Rich, A. N. (2021). Undivided attention: The temporal effects of attention dissociated from decision, memory, and expectation. bioRxiv.
doi: [https://doi.org/10.1101/2021.05.24.445376](https://doi.org/10.1101/2021.05.24.445376) ## Dataset Information | Dataset ID | `DS004043` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes | | Author (year) | `Moerel2022_time` | | Canonical | — | | Importable as | `DS004043`, `Moerel2022_time` | | Year | 2022 | | Authors | Moerel, Denise, Grootswagers, Tijl, Robinson, Amanda, Shatek, Sophia, Woolgar, Alexandra, Carlson, Thomas, Rich, Anina | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004043.v1.1.0](https://doi.org/10.18112/openneuro.ds004043.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004043) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) | [Source URL](https://openneuro.org/datasets/ds004043) | ### Copy-paste BibTeX ```bibtex @dataset{ds004043, title = {The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes}, author = {Moerel, Denise and Grootswagers, Tijl and Robinson, Amanda and Shatek, Sophia and Woolgar, Alexandra and Carlson, Thomas and Rich, Anina}, doi = {10.18112/openneuro.ds004043.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004043.v1.1.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 18.234355555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.4 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004043.v1.1.0 - Source: openneuro - OpenNeuro: [ds004043](https://openneuro.org/datasets/ds004043) - NeMAR: [ds004043](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) ## API Reference Use the 
`DS004043` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004043(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes * **Study:** `ds004043` (OpenNeuro) * **Author (year):** `Moerel2022_time` * **Canonical:** — Also importable as: `DS004043`, `Moerel2022_time`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
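Because each item in the dataset is a recording (see Notes above), machine-learning evaluation on data like this is usually split at the subject level so that no subject appears in both training and test sets. A minimal stdlib sketch under the assumption of zero-padded subject labels like those in the query examples ("01", "02", ...):

```python
import random

# Subject-wise train/test split (illustrative; subject labels are
# assumed to be "01".."20" as in this 20-subject dataset).
subjects = [f"{i:02d}" for i in range(1, 21)]
rng = random.Random(0)          # fixed seed for reproducibility
rng.shuffle(subjects)

n_test = len(subjects) // 5     # hold out 20% of subjects
test_subjects = set(subjects[:n_test])
train_subjects = set(subjects[n_test:])

# The resulting sets could then drive two dataset instances, e.g.
# DS004043(cache_dir="./data",
#          query={"subject": {"$in": sorted(train_subjects)}})
print(len(train_subjects), len(test_subjects))  # 16 train, 4 test
```

Splitting on subject IDs before constructing the datasets keeps the held-out subjects entirely out of the training selection.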
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004043](https://openneuro.org/datasets/ds004043) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004043](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) DOI: [https://doi.org/10.18112/openneuro.ds004043.v1.1.0](https://doi.org/10.18112/openneuro.ds004043.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004043 >>> dataset = DS004043(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004043) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004067: eeg dataset, 80 subjects *Moral conviction and metacognitive ability shape multiple stages of information processing* Access recordings and metadata through EEGDash. **Citation:** Yoder, Keith J, Decety, Jean (2022). *Moral conviction and metacognitive ability shape multiple stages of information processing*. 
[10.18112/openneuro.ds004067.v1.0.1](https://doi.org/10.18112/openneuro.ds004067.v1.0.1) Modality: eeg Subjects: 80 Recordings: 84 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004067 dataset = DS004067(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004067(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004067( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004067, title = {Moral conviction and metacognitive ability shape multiple stages of information processing}, author = {Yoder, Keith J and Decety, Jean}, doi = {10.18112/openneuro.ds004067.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004067.v1.0.1}, } ``` ## About This Dataset **Experiment Details** Human electroencephalography recordings from 80 participants. Participants first provided their attitudes about a set of sociopolitical issues, then viewed photographs of protests that were ostensibly about those same issues. Prior to each photo, they saw a pie chart indicating social support for the issue (low, medium, or high). After each photo, they indicated their support for the protestors.
Other data and analysis scripts can be found on OSF (DOI 10.17605/OSF.IO/32DAS) or at the github repository for the project ([https://github.com/Social-Cognitive-Neuroscience-Lab/EEGMoralization](https://github.com/Social-Cognitive-Neuroscience-Lab/EEGMoralization)) ## Dataset Information | Dataset ID | `DS004067` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Moral conviction and metacognitive ability shape multiple stages of information processing | | Author (year) | `Yoder2022` | | Canonical | — | | Importable as | `DS004067`, `Yoder2022` | | Year | 2022 | | Authors | Yoder, Keith J, Decety, Jean | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004067.v1.0.1](https://doi.org/10.18112/openneuro.ds004067.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004067) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) | [Source URL](https://openneuro.org/datasets/ds004067) | ### Copy-paste BibTeX ```bibtex @dataset{ds004067, title = {Moral conviction and metacognitive ability shape multiple stages of information processing}, author = {Yoder, Keith J and Decety, Jean}, doi = {10.18112/openneuro.ds004067.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004067.v1.0.1}, } ``` ## Technical Details - Subjects: 80 - Recordings: 84 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 2000.0 - Duration (hours): 59.642625 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 100.8 GB - File count: 84 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004067.v1.0.1 - Source: openneuro - OpenNeuro: [ds004067](https://openneuro.org/datasets/ds004067) - NeMAR: [ds004067](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) ## API Reference Use the `DS004067` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004067(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Moral conviction and metacognitive ability shape multiple stages of information processing * **Study:** `ds004067` (OpenNeuro) * **Author (year):** `Yoder2022` * **Canonical:** — Also importable as: `DS004067`, `Yoder2022`. Modality: `eeg`. Subjects: 80; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
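The Notes above state that recording-level metadata are available via `dataset.description`. The kind of selection this enables can be sketched with plain records; the field names below (`subject`, `session`) and the `select` helper are illustrative assumptions, not part of the eegdash API:

```python
# Illustrative recording-level metadata; the actual fields exposed by
# `dataset.description` depend on the dataset's BIDS metadata.
records = [
    {"subject": "01", "session": "1"},
    {"subject": "01", "session": "2"},
    {"subject": "02", "session": "1"},
]

def select(records, **criteria):
    """Keep records whose fields match all of the given criteria."""
    return [
        r for r in records
        if all(r.get(k) == v for k, v in criteria.items())
    ]

# All recordings for subject "01" (both sessions)
print(select(records, subject="01"))
```

Matching on every criterion at once mirrors how the class ANDs query filters together when selecting recordings.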
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004067](https://openneuro.org/datasets/ds004067) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004067](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) DOI: [https://doi.org/10.18112/openneuro.ds004067.v1.0.1](https://doi.org/10.18112/openneuro.ds004067.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004067 >>> dataset = DS004067(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004067) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004075: eeg dataset, 29 subjects *what_are_we_talking_about* Access recordings and metadata through EEGDash. **Citation:** Adam Boncz, Brigitta Toth, István Winkler (2022). *what_are_we_talking_about*. 
[10.18112/openneuro.ds004075.v1.0.0](https://doi.org/10.18112/openneuro.ds004075.v1.0.0) Modality: eeg Subjects: 29 Recordings: 116 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004075 dataset = DS004075(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004075(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004075( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004075, title = {what_are_we_talking_about}, author = {Adam Boncz and Brigitta Toth and István Winkler}, doi = {10.18112/openneuro.ds004075.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004075.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS004075` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | what_are_we_talking_about | | Author (year) | `Boncz2022` | | Canonical | — | | Importable as | `DS004075`, `Boncz2022` | | Year | 2022 | | Authors | Adam Boncz, Brigitta Toth, István Winkler | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004075.v1.0.0](https://doi.org/10.18112/openneuro.ds004075.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004075) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) | [Source URL](https://openneuro.org/datasets/ds004075) | ### Copy-paste BibTeX ```bibtex @dataset{ds004075, title = {what_are_we_talking_about}, author = {Adam Boncz and Brigitta Toth and István Winkler}, doi = {10.18112/openneuro.ds004075.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004075.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 116 - Tasks: 4 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 12.554977777777776 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.4 GB - File count: 116 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004075.v1.0.0 - Source: openneuro - OpenNeuro: [ds004075](https://openneuro.org/datasets/ds004075) - NeMAR: [ds004075](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) ## API Reference Use the `DS004075` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004075(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) what_are_we_talking_about * **Study:** `ds004075` (OpenNeuro) * **Author (year):** `Boncz2022` * **Canonical:** — Also importable as: `DS004075`, `Boncz2022`. 
Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 29; recordings: 116; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004075](https://openneuro.org/datasets/ds004075) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004075](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) DOI: [https://doi.org/10.18112/openneuro.ds004075.v1.0.0](https://doi.org/10.18112/openneuro.ds004075.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004075 >>> dataset = DS004075(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004075) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004078: meg dataset, 12 subjects *A synchronized multimodal neuroimaging dataset to study brain language processing* Access recordings and metadata through EEGDash. **Citation:** Shaonan Wang, Xiaohan Zhang, Jiajun Zhang, Chengqing Zong (2022). *A synchronized multimodal neuroimaging dataset to study brain language processing*. [10.18112/openneuro.ds004078.v1.0.4](https://doi.org/10.18112/openneuro.ds004078.v1.0.4) Modality: meg Subjects: 12 Recordings: 720 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004078 dataset = DS004078(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004078(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004078( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004078, title = {A synchronized multimodal neuroimaging dataset to study brain language processing}, author = {Shaonan Wang and Xiaohan Zhang and Jiajun Zhang and Chengqing Zong}, doi = {10.18112/openneuro.ds004078.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004078.v1.0.4}, } ``` ## About This Dataset **Overview** This synchronized multimodal neuroimaging dataset for studying brain language processing (SMN4Lang) contains: 1. fMRI and MEG data collected on the same 12 participants while they were listening to 6 hours of naturalistic stories; 2. high-resolution structural (T1, T2), diffusion MRI and resting-state fMRI data for each participant; 3. rich linguistic annotations for the stimuli, including word frequencies, part-of-speech tags, syntactic tree structures, time-aligned characters and words, various types of word and character embeddings. More details about the dataset are described as follows. ### View full README **Overview** This synchronized multimodal neuroimaging dataset for studying brain language processing (SMN4Lang) contains: 1. fMRI and MEG data collected on the same 12 participants while they were listening to 6 hours of naturalistic stories; 2. high-resolution structural (T1, T2), diffusion MRI and resting-state fMRI data for each participant; 3. rich linguistic annotations for the stimuli, including word frequencies, part-of-speech tags, syntactic tree structures, time-aligned characters and words, various types of word and character embeddings. More details about the dataset are described as follows. **Participants** All 12 participants were recruited from universities in Beijing, of whom 4 were female and 8 were male, with an age range of 23-30 years. 
They completed both fMRI and MEG visits (fMRI first, then MEG, with a gap of at least 1 month). All participants were right-handed adults with Mandarin Chinese as their native language who reported normal hearing and no history of neurological disorders. They were paid and gave written informed consent. The study was conducted under the approval of the Institutional Review Board of Peking University. **Experimental Procedures** Before each scanning session, participants first completed a simple information survey form and an informed consent form. During both fMRI and MEG scanning, participants were instructed to listen and pay attention to the story stimulus, remain still, and answer questions on the screen after each audio finished. Stimulus presentation was implemented using Psychtoolbox-3. Specifically, at the beginning of each run, the instruction “Waiting for the scanning” appeared on the screen, followed by an 8-second blank. Then the instruction became “This audio is about to start, please listen carefully”, which lasted for 2.65 seconds before the audio played; during audio playback, a centrally located fixation cross was presented; finally, two questions about the story were presented, each with four answers to choose from, with timing controlled by the participants. Auditory story stimuli were delivered via S14 insert earphones for fMRI studies (with headphones or foam padding placed over the earphones to reduce scanner noise) and Elekta matching insert earphones for MEG studies. The fMRI recording was split into 7 visits, each lasting 1.5 hours: the T1, T2, and resting MRI were collected on the first visit, fMRI with listening tasks was collected across visits 1 to 6, and the diffusion MRI was collected on the last visit. During MRI scanning (T1, T2, diffusion and resting), participants were instructed to lie relaxed and still in the machine. The MEG recording was split into 6 visits, each lasting 1.5 hours. 
**Stimuli** Stimuli are 60 story audios, each 4 to 7 minutes long, covering various topics such as education and culture. All audios were downloaded from the Renmin Daily Review website and are read by the same male broadcaster. The corresponding texts were also downloaded from the Renmin Daily Review website, and errors were manually corrected to ensure audio and text are aligned. **Annotations** Rich annotations of audios and texts are provided in the derivatives/annotations folder, including: 1. Speech-to-text alignment: The onset and offset times of each character and word in the audio are provided in the “stimuli/time_align” folder. Note that the onset and offset times were shifted by 10.65 seconds to align with the time of the fMRI images, because the fMRI scan started 10.65 seconds before the audio played. 2. Frequency: Character and word frequencies in the “stimuli/frequency” folder were calculated from the Xinhua news corpus and then log-transformed. 3. Textual embeddings: Text embeddings computed by different pre-trained language models (including Word2Vec, BERT, and GPT2) are provided in the “stimuli/embeddings” folder. Character- and word-level embeddings are provided for Word2Vec and BERT, and word-level embeddings for GPT2. 4. Syntactic annotations: The POS tag of each word, the constituent tree structure, and the dependency tree structure are provided in the “stimuli/syntactic_annotations” folder. The POS tags were annotated by experts following the Peking Chinese Treebank criteria. The constituent tree structure was manually annotated by linguistics students following the PKU Chinese Treebank criteria with the TreeEditor tools, and all results were double-checked by different experts. The dependency tree structure was transformed from the constituent tree using Stanford CoreNLP tools. 
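The 10.65 s alignment shift described above can be undone with a few lines of Python. This is an illustrative sketch (the helper name and sample values are hypothetical; real onsets live in the “stimuli/time_align” annotation files):

```python
# Undo the 10.65 s scanner lead-in described above to recover
# audio-relative onsets from fMRI-aligned annotation times.
FMRI_LEAD_IN_S = 10.65

def to_audio_time(fmri_aligned_onset_s: float) -> float:
    """Onset relative to audio start, given an fMRI-aligned onset."""
    return fmri_aligned_onset_s - FMRI_LEAD_IN_S

fmri_onsets = [10.65, 11.20, 12.05]  # hypothetical word onsets (s)
print([round(to_audio_time(t), 2) for t in fmri_onsets])
```

The same constant, applied in the other direction, maps audio-relative annotation times onto the fMRI timeline.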
**Preprocessing** The MRI data, including the structural, functional, resting and diffusion images, were preprocessed using the “minimal preprocessing pipelines (HCP)”. The MEG data were first preprocessed using the temporal Signal Space Separation (tSSS) method, and bad channels were excluded. Independent component analysis (ICA) was then applied with the MNE software to remove ocular artefacts. **Usage Notes** For the MEG data of sub-08_run-16 and sub-09_run-7, the stimuli-starting triggers were not recorded due to technical problems. The first trigger in these two runs was the stimulus-ending trigger, and the starting time can be computed by subtracting the stimulus duration from the time point of the first trigger. ## Dataset Information | Dataset ID | `DS004078` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A synchronized multimodal neuroimaging dataset to study brain language processing | | Author (year) | `Wang2022_StudyBRAIN` | | Canonical | — | | Importable as | `DS004078`, `Wang2022_StudyBRAIN` | | Year | 2022 | | Authors | Shaonan Wang, Xiaohan Zhang, Jiajun Zhang, Chengqing Zong | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004078.v1.0.4](https://doi.org/10.18112/openneuro.ds004078.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004078) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) | [Source URL](https://openneuro.org/datasets/ds004078) | ### Copy-paste BibTeX ```bibtex @dataset{ds004078, title = {A synchronized multimodal neuroimaging dataset to study brain language processing}, author = {Shaonan Wang and Xiaohan Zhang and Jiajun Zhang and Chengqing Zong}, doi = {10.18112/openneuro.ds004078.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004078.v1.0.4}, } ``` ## Technical Details - Subjects: 12 - 
Recordings: 720 - Tasks: 1 - Channels: 328 - Sampling rate (Hz): 1000.0 - Duration (hours): 68.09214861111111 - Pathology: Healthy - Modality: Auditory - Type: Other - Size on disk: 631.1 GB - File count: 720 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004078.v1.0.4 - Source: openneuro - OpenNeuro: [ds004078](https://openneuro.org/datasets/ds004078) - NeMAR: [ds004078](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) ## API Reference Use the `DS004078` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A synchronized multimodal neuroimaging dataset to study brain language processing * **Study:** `ds004078` (OpenNeuro) * **Author (year):** `Wang2022_StudyBRAIN` * **Canonical:** — Also importable as: `DS004078`, `Wang2022_StudyBRAIN`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 12; recordings: 720; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004078](https://openneuro.org/datasets/ds004078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004078](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) DOI: [https://doi.org/10.18112/openneuro.ds004078.v1.0.4](https://doi.org/10.18112/openneuro.ds004078.v1.0.4) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004078 >>> dataset = DS004078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004078) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004080: ieeg dataset, 74 subjects *CCEP ECoG dataset across age 4-51* Access recordings and metadata through EEGDash. **Citation:** D. van Blooijs, M.A. van den Boom, J.F. van der Aar, G.J.M. Huiskamp, G. Castegnaro, M. Demuru, W.J.E.M. Zweiphenning, P. van Eijsden, K. J. Miller, F.S.S. Leijten, D. Hermes (2022). *CCEP ECoG dataset across age 4-51*. 
[10.18112/openneuro.ds004080.v1.2.4](https://doi.org/10.18112/openneuro.ds004080.v1.2.4) Modality: ieeg Subjects: 74 Recordings: 117 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004080 dataset = DS004080(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004080(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004080( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004080, title = {CCEP ECoG dataset across age 4-51}, author = {D. van Blooijs and M.A. van den Boom and J.F. van der Aar and G.J.M. Huiskamp and G. Castegnaro and M. Demuru and W.J.E.M. Zweiphenning and P. van Eijsden and K. J. Miller and F.S.S. Leijten and D. Hermes}, doi = {10.18112/openneuro.ds004080.v1.2.4}, url = {https://doi.org/10.18112/openneuro.ds004080.v1.2.4}, } ``` ## About This Dataset **Dataset description** This dataset consists of 74 patients age 4-51 years old where Cortico-Cortical Evoked Potentials (CCEPs) were measured with Electro-CorticoGraphy (ECoG) during single pulse electrical stimulation. For a detailed description see: - Developmental trajectory of transmission speed in the human brain. D. van Blooijs¹, M.A. van den Boom¹, J.F. van der Aar, G.J.M. Huiskamp, G. Castegnaro, M. Demuru, W.J.E.M. Zweiphenning, P. van Eijsden, K. J. Miller, F.S.S. Leijten, D. Hermes, Nature Neuroscience, 2023, [https://doi.org/10.1038/s41593-023-01272-0](https://doi.org/10.1038/s41593-023-01272-0) > ¹ these authors contributed equally. 
This dataset is part of the RESPect (Registry for Epilepsy Surgery Patients) database, a dataset recorded at the University Medical Center of Utrecht, the Netherlands. The study was approved by the Medical Ethical Committee from the UMC Utrecht. **Contact** ### View full README **Dataset description** This dataset consists of 74 patients age 4-51 years old where Cortico-Cortical Evoked Potentials (CCEPs) were measured with Electro-CorticoGraphy (ECoG) during single pulse electrical stimulation. For a detailed description see: - Developmental trajectory of transmission speed in the human brain. D. van Blooijs¹, M.A. van den Boom¹, J.F. van der Aar, G.J.M. Huiskamp, G. Castegnaro, M. Demuru, W.J.E.M. Zweiphenning, P. van Eijsden, K. J. Miller, F.S.S. Leijten, D. Hermes, Nature Neuroscience, 2023, [https://doi.org/10.1038/s41593-023-01272-0](https://doi.org/10.1038/s41593-023-01272-0) > ¹ these authors contributed equally. This dataset is part of the RESPect (Registry for Epilepsy Surgery Patients) database, a dataset recorded at the University Medical Center of Utrecht, the Netherlands. The study was approved by the Medical Ethical Committee from the UMC Utrecht. **Contact** - Dorien van Blooijs: [D.vanBlooijs@umcutrecht.nl](mailto:D.vanBlooijs@umcutrecht.nl) - Frans Leijten: [F.S.S.leijten@umcutrecht.nl](mailto:F.S.S.leijten@umcutrecht.nl) - Dora Hermes: [hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu) **Data organization** This data is organized according to the Brain Imaging Data Structure specification. A community-driven specification for organizing neurophysiology data along with its metadata. 
For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each patient has their own folder (e.g., `sub-ccepAgeUMCU01` to `sub-ccepAgeUMCU74`) which contains the iEEG recording data for that patient, as well as the metadata needed to understand the raw data and event timing. Data are logically grouped in the same BIDS session and stored across runs indicating the day and time of recording during the monitoring period. If extra electrodes were added or removed during this period, the session was split into separate sessions (e.g. ses-1a and ses-1b). We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient’s state during the recording of this file. The task label is “SPESclin” since these files contain data collected during clinical single pulse electrical stimulation (SPES). Electrode positions include Destrieux atlas labels that were estimated by running Freesurfer on the individual subject MRI scan and taking the most common surface label within a sphere around the electrode. All shared electrode positions were then converted to MNI152 space using the Freesurfer surface based non-linear transformation. We note that this surface based transformation distorts the dimensions of the grids, but maintains the gyral anatomy. **License** This dataset is made available under the Public Domain Dedication and License CC v1.0, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). 
We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)); in particular, while not legally required, we hope that all users of the data will acknowledge by citing the following in any publication: Developmental trajectory of transmission speed in the human brain, D. van Blooijs, M.A. van den Boom, J.F. van der Aar, G.J.M. Huiskamp, G. Castegnaro, M. Demuru, W.J.E.M. Zweiphenning, P. van Eijsden, K. J. Miller, F.S.S. Leijten, D. Hermes, Nature Neuroscience, 2023, [https://doi.org/10.1038/s41593-023-01272-0](https://doi.org/10.1038/s41593-023-01272-0) **Code** Code to analyse these data is available at: [https://github.com/MultimodalNeuroimagingLab/mnl_ccepAge](https://github.com/MultimodalNeuroimagingLab/mnl_ccepAge) **Acknowledgements** We thank the SEIN-UMCU RESPect database group (C.J.J. van Asch, L. van de Berg, S. Blok, M.D. Bourez, K.P.J. Braun, J.W. Dankbaar, C.H. Ferrier, T.A. Gebbink, P.H. Gosselaar, R. van Griethuysen, M.G.G. Hobbelink, F.W.A. Hoefnagels, N.E.C. van Klink, M.A. van ‘t Klooster, G.A.P. deKort, M.H.M. Mantione, A. Muhlebner, J.M. Ophorst, P.C. van Rijen, S.M.A. van der Salm, E.V. Schaft, M.M.J. van Schooneveld, H. Smeding, D. Sun, A. Velders, M.J.E. van Zandvoort, G.J.M. Zijlmans, E. Zuidhoek and J. Zwemmer) for their contributions and help in collecting the data, and G. Ojeda Valencia for proofreading the manuscript. **Funding** Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH122258 (DH, FSSL, the content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health), the EpilepsieNL under Award Number NEF17-07 (DvB) and the UMC Utrecht Alexandre Suerman MD/PhD Stipendium 2015 (WZ). 
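The run label convention described in the README above (e.g. `run-021315` = day 02 at 13:15) can be unpacked in a few lines. The helper below is purely illustrative and not part of EEGDash; it assumes the DDHHMM digit layout stated in the README:

```python
from datetime import time

def parse_run_label(run: str) -> tuple[int, time]:
    """Split a label like 'run-021315' into (implantation day, start time).

    Assumes the DDHHMM convention described in the README; this is an
    illustrative helper, not an EEGDash API.
    """
    digits = run.removeprefix("run-")
    day = int(digits[:2])
    start = time(hour=int(digits[2:4]), minute=int(digits[4:6]))
    return day, start

day, start = parse_run_label("run-021315")
print(day, start.strftime("%H:%M"))
```

Note that the day counts from implantation, so day 2 after implantation is day 1 of the monitoring period.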
## Dataset Information | Dataset ID | `DS004080` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CCEP ECoG dataset across age 4-51 | | Author (year) | `Blooijs2023_CCEP_ECoG` | | Canonical | `RESPect_CCEP` | | Importable as | `DS004080`, `Blooijs2023_CCEP_ECoG`, `RESPect_CCEP` | | Year | 2022 | | Authors | D. van Blooijs, M.A. van den Boom, J.F. van der Aar, G.J.M. Huiskamp, G. Castegnaro, M. Demuru, W.J.E.M. Zweiphenning, P. van Eijsden, K. J. Miller, F.S.S. Leijten, D. Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004080.v1.2.4](https://doi.org/10.18112/openneuro.ds004080.v1.2.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004080) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) | [Source URL](https://openneuro.org/datasets/ds004080) | ### Copy-paste BibTeX ```bibtex @dataset{ds004080, title = {CCEP ECoG dataset across age 4-51}, author = {D. van Blooijs and M.A. van den Boom and J.F. van der Aar and G.J.M. Huiskamp and G. Castegnaro and M. Demuru and W.J.E.M. Zweiphenning and P. van Eijsden and K. J. Miller and F.S.S. Leijten and D. 
Hermes}, doi = {10.18112/openneuro.ds004080.v1.2.4}, url = {https://doi.org/10.18112/openneuro.ds004080.v1.2.4}, } ``` ## Technical Details - Subjects: 74 - Recordings: 117 - Tasks: 1 - Channels: 133 (70), 68 (18), 130 (13), 98 (4), 131 (4), 96 (4), 64 (2), 94, 93 - Sampling rate (Hz): 2048.0 (112), 512.0 (5) - Duration (hours): 89.39102945963542 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 269.1 GB - File count: 117 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004080.v1.2.4 - Source: openneuro - OpenNeuro: [ds004080](https://openneuro.org/datasets/ds004080) - NeMAR: [ds004080](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) ## API Reference Use the `DS004080` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004080(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CCEP ECoG dataset across age 4-51 * **Study:** `ds004080` (OpenNeuro) * **Author (year):** `Blooijs2023_CCEP_ECoG` * **Canonical:** `RESPect_CCEP` Also importable as: `DS004080`, `Blooijs2023_CCEP_ECoG`, `RESPect_CCEP`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 74; recordings: 117; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004080](https://openneuro.org/datasets/ds004080) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004080](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) DOI: [https://doi.org/10.18112/openneuro.ds004080.v1.2.4](https://doi.org/10.18112/openneuro.ds004080.v1.2.4) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004080 >>> dataset = DS004080(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004080) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004100: ieeg dataset, 57 subjects *HUP iEEG Epilepsy Dataset* Access recordings and metadata through EEGDash. **Citation:** John M. Bernabei, Adam Li, Andrew Y. Revell, Rachel J. 
Smith, Kristin M. Gunnarsdottir, Ian Z. Ong, Kathryn A. Davis, Nishant Sinha, Sridevi Sarma, Brian Litt (2022). *HUP iEEG Epilepsy Dataset*. [10.18112/openneuro.ds004100.v1.1.3](https://doi.org/10.18112/openneuro.ds004100.v1.1.3) Modality: ieeg Subjects: 57 Recordings: 319 License: CC0 Source: openneuro Citations: 21.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004100 dataset = DS004100(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004100(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004100( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004100, title = {HUP iEEG Epilepsy Dataset}, author = {John M. Bernabei and Adam Li and Andrew Y. Revell and Rachel J. Smith and Kristin M. Gunnarsdottir and Ian Z. Ong and Kathryn A. Davis and Nishant Sinha and Sridevi Sarma and Brian Litt}, doi = {10.18112/openneuro.ds004100.v1.1.3}, url = {https://doi.org/10.18112/openneuro.ds004100.v1.1.3}, } ``` ## About This Dataset

**HUP iEEG dataset**

This dataset was prepared for release as part of a manuscript by Bernabei & Li et al. (in preparation). A subset of the data has been featured in Kini & Bernabei et al., Brain (2019) [1], and Bernabei & Sinha et al., Brain (2022) [2].

**Dataset description**

These files contain de-identified patient data collected as part of surgical treatment for drug resistant epilepsy at the Hospital of the University of Pennsylvania. Each of the 58 subjects underwent intracranial EEG with subdural grid, strip, and depth electrodes (ECoG) or purely stereotactically-placed depth electrodes (SEEG). Each patient also underwent subsequent treatment with surgical resection or laser ablation. Electrophysiologic data for both interictal and ictal periods is available, as are electrode localizations in ICBM152 MNI space. Furthermore, clinically-determined seizure onset channels are provided, as are channels which overlap with the resection/ablation zone, which was rigorously determined by segmenting the resection cavity.

**BIDS Conversion**

MNE-BIDS was used to convert the dataset into BIDS format.

**References**

[1] Kini L.\*, Bernabei J.M.\*, Mikhail F., Hadar P., Shah P., Khambhati A., Oechsel K., Archer R., Boccanfuso J.A., Conrad E., Stein J., Das S., Kheder A., Lucas T.H., Davis K.A., Bassett D.S., Litt B., Virtual resection predicts surgical outcome for drug resistant epilepsy. Brain, 2019. [2] Bernabei J.M.\*, Sinha N.\*, Arnold T.C., Conrad E., Ong I., Pattnaik A.R., Stein J.M., Shinohara R.T., Lucas T.H., Bassett D.S., Davis K.A., Litt B., Normative intracranial EEG maps epileptogenic tissues in focal epilepsy. Brain, 2022 [3] Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) [4] Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS004100` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HUP iEEG Epilepsy Dataset | | Author (year) | `Bernabei2022` | | Canonical | `HUPiEEG` | | Importable as | `DS004100`, `Bernabei2022`, `HUPiEEG` | | Year | 2022 | | Authors | John M. Bernabei, Adam Li, Andrew Y. Revell, Rachel J. Smith, Kristin M. Gunnarsdottir, Ian Z. Ong, Kathryn A. 
Davis, Nishant Sinha, Sridevi Sarma, Brian Litt | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004100.v1.1.3](https://doi.org/10.18112/openneuro.ds004100.v1.1.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004100) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) | [Source URL](https://openneuro.org/datasets/ds004100) | ### Copy-paste BibTeX ```bibtex @dataset{ds004100, title = {HUP iEEG Epilepsy Dataset}, author = {John M. Bernabei and Adam Li and Andrew Y. Revell and Rachel J. Smith and Kristin M. Gunnarsdottir and Ian Z. Ong and Kathryn A. Davis and Nishant Sinha and Sridevi Sarma and Brian Litt}, doi = {10.18112/openneuro.ds004100.v1.1.3}, url = {https://doi.org/10.18112/openneuro.ds004100.v1.1.3}, } ``` ## Technical Details - Subjects: 57 - Recordings: 319 - Tasks: 2 - Channels: 122 (21), 128 (18), 118 (17), 172 (15), 126 (14), 104 (13), 82 (12), 127 (12), 180 (12), 96 (12), 92 (7), 80 (7), 190 (7), 108 (7), 74 (7), 121 (7), 136 (7), 109 (7), 117 (7), 102 (7), 174 (7), 149 (7), 120 (7), 163 (6), 98 (6), 63 (5), 186 (5), 162 (5), 100 (5), 164 (5), 88 (5), 59 (5), 116 (5), 52 (5), 71 (5), 105 (4), 90 (4), 61 (4), 85 (3), 94 (2), 192 (2), 232 - Sampling rate (Hz): 512.0 (165), 1024.0 (78), 500.0 (69), 256.0 (7) - Duration (hours): 25.717898949652778 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 13.2 GB - File count: 319 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004100.v1.1.3 - Source: openneuro - OpenNeuro: [ds004100](https://openneuro.org/datasets/ds004100) - NeMAR: [ds004100](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) ## API Reference Use the `DS004100` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004100(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HUP iEEG Epilepsy Dataset * **Study:** `ds004100` (OpenNeuro) * **Author (year):** `Bernabei2022` * **Canonical:** `HUPiEEG` Also importable as: `DS004100`, `Bernabei2022`, `HUPiEEG`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 57; recordings: 319; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004100](https://openneuro.org/datasets/ds004100) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004100](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) DOI: [https://doi.org/10.18112/openneuro.ds004100.v1.1.3](https://doi.org/10.18112/openneuro.ds004100.v1.1.3) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS004100 >>> dataset = DS004100(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004100) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004105: eeg dataset, 17 subjects *BCIT Auditory Cueing* Access recordings and metadata through EEGDash. **Citation:** Javier Garcia (data), Justin Brooks (data), Scott Kerick (data), Tony Johnson (data and curation), Tim Mullen (data), Jean Vettel (data), Jonathan Touryan (curation), Kay Robbins (curation) (2022). *BCIT Auditory Cueing*. 
[10.18112/openneuro.ds004105.v1.0.0](https://doi.org/10.18112/openneuro.ds004105.v1.0.0) Modality: eeg Subjects: 17 Recordings: 34 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004105 dataset = DS004105(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004105(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004105( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004105, title = {BCIT Auditory Cueing}, author = {Javier Garcia (data) and Justin Brooks (data) and Scott Kerick (data) and Tony Johnson (data and curation) and Tim Mullen (data) and Jean Vettel (data) and Jonathan Touryan (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004105.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004105.v1.0.0}, } ``` ## About This Dataset
**Introduction** *Overview:* Subjects in the Auditory Cueing study performed a long-duration simulated driving task with perturbations and audio stimuli in a visually sparse environment. The purpose of this effort was to supplement and extend the related driving research to collect prolonged time-on-task measurements of subjects performing a driving task in a simulated environment in order to assess fatigue-based performance through novel biomarkers. Similar to the Baseline Driving study, the Auditory Cueing study was intended to identify periods of driver fatigue via predictive algorithms formulated from the analysis of driver EEG data, in comparison to the objective performance measures, and in contrast with the (non-fatigued) Calibration driving session for the subject. Auditory Cueing extended the Baseline Driving paradigm by adding predictive and non-predictive (random) pre-perturbation onset audio cues and increasing the frequency and magnitude of perturbation events vs. baseline driving. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements. *Apparatus:* Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=2048 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). *Initial setup:* Upon arrival to the lab, subjects were given an introduction to the primary study for which they were recruited and provided informed consent and demographic information.
This was followed by a practice session, to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization within the study:* Subjects always began recording sessions by performing a Calibration Driving task, which was a 15-minute drive where the subject controlled only the steering (and speed was controlled by the simulator). Following this, subjects would perform Auditory Cueing condition A and Auditory Cueing condition B, with counter-balancing used across subjects as to which of them came first. This study only contains the Auditory Cueing portion of the study. *Auditory cueing task details:* Auditory Cueing A was 45 minutes of continuous driving, with subjects responsible for steering and maintaining speed, while a tone was played periodically at random. Auditory Cueing B was similar, but the tones were correlated with the onset of a perturbation event. Both driving tasks were conducted on the same simulated long, straight road. In each case, the subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits. The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane; and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* Auditory Cue (randomly presented before perturbation vs. 
predictive) *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), reaction times to target vehicles (police), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). Note: Questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Location:* Teledyne Corporation, Durham, NC. *Note:* This dataset has a corresponding dataset in the BCIT Calibration Driving ds004118 which has the 15 minute driving task performed prior to this one. ## Dataset Information | Dataset ID | `DS004105` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Auditory Cueing | | Author (year) | `Garcia2022` | | Canonical | `BCIT_Auditory_Cueing` | | Importable as | `DS004105`, `Garcia2022`, `BCIT_Auditory_Cueing` | | Year | 2022 | | Authors | Javier Garcia (data), Justin Brooks (data), Scott Kerick (data), Tony Johnson (data and curation), Tim Mullen (data), Jean Vettel (data), Jonathan Touryan (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004105.v1.0.0](https://doi.org/10.18112/openneuro.ds004105.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004105) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) | [Source URL](https://openneuro.org/datasets/ds004105) | ### Copy-paste BibTeX ```bibtex @dataset{ds004105, title = {BCIT Auditory Cueing}, author = {Javier Garcia (data) and Justin Brooks (data) and Scott Kerick (data) and Tony Johnson (data and curation) and Tim Mullen (data) and Jean Vettel 
(data) and Jonathan Touryan (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004105.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004105.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 34 - Tasks: 1 - Channels: 74 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 20.4 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004105.v1.0.0 - Source: openneuro - OpenNeuro: [ds004105](https://openneuro.org/datasets/ds004105) - NeMAR: [ds004105](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) ## API Reference Use the `DS004105` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Auditory Cueing * **Study:** `ds004105` (OpenNeuro) * **Author (year):** `Garcia2022` * **Canonical:** `BCIT_Auditory_Cueing` Also importable as: `DS004105`, `Garcia2022`, `BCIT_Auditory_Cueing`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004105](https://openneuro.org/datasets/ds004105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004105](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) DOI: [https://doi.org/10.18112/openneuro.ds004105.v1.0.0](https://doi.org/10.18112/openneuro.ds004105.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004105 >>> dataset = DS004105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004105) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004106: eeg dataset, 27 subjects *BCIT Advanced Guard Duty* Access recordings and metadata through EEGDash. 
**Citation:** Jonathan Touryan (data and curation), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Advanced Guard Duty*. [10.18112/openneuro.ds004106.v1.0.0](https://doi.org/10.18112/openneuro.ds004106.v1.0.0) Modality: eeg Subjects: 27 Recordings: 29 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004106 dataset = DS004106(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004106(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004106( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004106, title = {BCIT Advanced Guard Duty}, author = {Jonathan Touryan (data and curation) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004106.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004106.v1.0.0}, } ``` ## About This Dataset
**Introduction** *Overview:* The Advanced Guard Duty study was designed to measure sustained vigilance in realistic settings by having subjects verify information on replica ID badges. The task was performed in conjunction with two other tasks: a calibration driving task and a baseline driving task. The data collected for the two driving tasks is not included in this dataset. Another study (Basic Guard Duty) not included in this collection had a similar set-up but a different experimental design and a different subject pool. In the Basic Guard Duty study the rate of ID presentation varied among tasks. In the Advanced Guard Duty study both the rate of ID presentation and the criteria for verification varied among blocks. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements.
*Apparatus:* Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=1024 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). *Initial setup:* Upon arrival to the lab, subjects were given an introduction to the primary study for which they were recruited and provided informed consent and demographic information. This was followed by a practice session, to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* Subjects always began recording sessions by performing a Calibration Driving task, which was a 15-minute drive where the subject controlled only the steering (and speed was controlled by the simulator). Following this, subjects would perform the Baseline Driving task and the Guard Duty task, with counter-balancing used across subjects as to which of them came first. This dataset only contains the Guard Duty task. The Baseline Driving run was 60 minutes of driving, performed in 6 blocks of 10 minutes each, with subjects responsible for speed and steering control. The Calibration and Baseline driving tasks were conducted on the same simulated long, straight road in a visually sparse environment. The subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits.
The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane; and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Guard duty task details:* The guard duty task entailed a serial presentation of replica identification (ID) cards (750 x 450 pixels) paired with a reference image (300 x 400 pixels). The replica ID cards had eight components or fields in addition to a common background. These components were: photo, name, date of birth (DOB), date of issue, date of expiration, area access, ID number, bar code and watermark. The reference images consisted of color photographs of faces. Both the ID photo and reference image were chosen from the Multi-PIE database (Gross, Matthews, Cohn, Kanade, & Baker, 2010). This database consists of color photographs (forward facing head shots) of individuals taken at different points in time. Therefore, while the ID photo and reference image were of the same individual, the images were not identical (e.g., different hair style, different clothes, different lighting). The task was divided into ten blocks of five minutes each. At the beginning of each block, participants were instructed that they were guarding a restricted area that required a particular letter designation on the ID card for access (e.g., area C access required). Participants were asked to determine if the individual in the image, paired with the corresponding ID card, should have access to their restricted area. Some of the ID cards were valid and some were not (e.g., expiration date passed, incorrect access area, or photos did not match). Participants were instructed to press either an *allow* or *deny* button for each image-ID pairing. The two-alternative forced-choice response was self-paced with a maximum time limit of 20 s.
If the participants chose to deny access, they were subsequently asked to provide a reason. Reasons for denied access were selected from a numerical list of five options: 1:incorrect access, 2:expired ID, 3:suspicious DOB, 4:face mismatch, 5:no watermark. If the participant did not respond within the allotted time, the computer forced a deny decision. The restricted area (area A-E) assigned at the beginning of each block was randomly chosen without replacement such that all participants completed two blocks guarding each of the five areas. To maintain consistency across participants, expiration dates were automatically generated at the beginning of the experiment to have a symmetrical distribution around the current date. This distribution was such that the majority of IDs had expiration dates temporally close to the current date (i.e., in the near future or recent past). In each block, the image-ID pairings were presented at one of six different stochastic queuing rates, ranging from 1 to 25 per minute (1, 2.5, 10, 15, 20, and 25 per minute). The queuing rate varied within each block according to a predefined profile. The rate profile had randomly permuted epochs of each queuing rate. Each epoch lasted 30 s with approximately twice as many low-rate epochs (1 and 2.5 image-IDs per minute) as high. The rate profiles were shifted for each participant (Latin square design) so that each rate profile was assigned to every block for at least two participants. The current rate was indicated through a processing queue, on the extreme right-hand side of the display, notifying each participant how many IDs were waiting to be checked. For slow rates, most participants were able to process all IDs in their queue and had periods where they were waiting for the next ID (i.e., blank screen). For fast rates, most participants were not able to process IDs as quickly as they were added to the queue, increasing the size of the processing queue.
IDs in the queue persisted until they were processed by the participant or the block ended. At the beginning of the experiment, participants were instructed to correctly process each image-ID while keeping the queue as short as possible. The stochastic queuing rate was used to increase task realism, incorporating periods of high and low task demand; the dynamic rate itself was not explicitly considered an independent factor in the present study. All blocks contained the same ratio of valid and invalid image-ID pairings (82% valid, 18% invalid). The majority of invalid IDs were due to incorrect access (6%) and expiration (6%) whereas the rest were invalid for the other reasons: suspicious DOB (2%), face mismatch (2%), no watermark (2%). This second group of invalid IDs served as catch trials to verify that participants were examining all fields of the ID. *Independent variables:* ID presentation rate and verification criteria (varied by block). *Dependent variables:* ID disposition accuracy and processing times, Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). Note: Questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Location:* Science Applications International Corporation, Louisville, CO.
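The rate-profile scheme described above (randomly permuted 30 s epochs drawn from six queuing rates, with roughly twice as many low-rate epochs as high) can be sketched in plain Python. The exact epoch counts per block are an assumption chosen to satisfy the "approximately twice as many" constraint in a ten-epoch (5-minute) block; the study's actual generator is not published here:

```python
import random

RATES_LOW = [1, 2.5]           # image-ID pairings per minute
RATES_HIGH = [10, 15, 20, 25]

def rate_profile(rng):
    """One 5-minute block as ten randomly permuted 30 s epochs.

    Assumed counts: each low rate appears 3 times, each high rate once,
    giving 6 low vs 4 high epochs (roughly the stated 2:1 ratio).
    """
    epochs = RATES_LOW * 3 + RATES_HIGH
    rng.shuffle(epochs)
    return epochs

rng = random.Random(0)
block = rate_profile(rng)
# Expected IDs queued in this block: each epoch is 0.5 min long
expected_ids = sum(rate * 0.5 for rate in block)
print(block, expected_ids)
```

Shifting such profiles across blocks per participant (the Latin square design mentioned above) would then just rotate the list of block profiles by one position per participant.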
## Dataset Information | Dataset ID | `DS004106` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Advanced Guard Duty | | Author (year) | `Touryan2022` | | Canonical | `BCITAdvancedGuardDuty` | | Importable as | `DS004106`, `Touryan2022`, `BCITAdvancedGuardDuty` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004106.v1.0.0](https://doi.org/10.18112/openneuro.ds004106.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004106) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) | [Source URL](https://openneuro.org/datasets/ds004106) | ### Copy-paste BibTeX ```bibtex @dataset{ds004106, title = {BCIT Advanced Guard Duty}, author = {Jonathan Touryan (data and curation) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004106.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004106.v1.0.0}, } ``` ## Technical Details - Subjects: 27 - Recordings: 29 - Tasks: 1 - Channels: 262 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 67.6 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004106.v1.0.0 - Source: openneuro - OpenNeuro: [ds004106](https://openneuro.org/datasets/ds004106) - NeMAR: [ds004106](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) ## API Reference Use the `DS004106` class to access this dataset programmatically.
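The technical details above list the size on disk (67.6 GB for 29 recordings of 262 channels at 1024 Hz) but no duration. A back-of-envelope estimate is possible under the assumption that the files are mostly raw 24-bit BioSemi (BDF) samples; this is a rough sketch only, since the byte depth is an assumption and BIDS sidecars and file headers are ignored:

```python
size_bytes = 67.6e9       # reported size on disk
n_recordings = 29
n_channels = 262
sfreq = 1024.0            # Hz
bytes_per_sample = 3      # assumed: 24-bit BioSemi BDF samples

bytes_per_second = n_channels * sfreq * bytes_per_sample
seconds = size_bytes / n_recordings / bytes_per_second
print(f"~{seconds / 60:.0f} min per recording")  # roughly 48 minutes
```

An estimate near 50 minutes is at least consistent with the task description above (ten 5-minute guard-duty blocks).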
### *class* eegdash.dataset.DS004106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Advanced Guard Duty * **Study:** `ds004106` (OpenNeuro) * **Author (year):** `Touryan2022` * **Canonical:** `BCITAdvancedGuardDuty` Also importable as: `DS004106`, `Touryan2022`, `BCITAdvancedGuardDuty`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 27; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004106](https://openneuro.org/datasets/ds004106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004106](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) DOI: [https://doi.org/10.18112/openneuro.ds004106.v1.0.0](https://doi.org/10.18112/openneuro.ds004106.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004106 >>> dataset = DS004106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004106) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004107: meg dataset, 9 subjects *MIND DATA* Access recordings and metadata through EEGDash. **Citation:** M.P. Weisend, F.M. Hanlon, R. Montano, S.P. Ahlfors, A.C. Leuthold, D. Pantazis, J.C. Mosher, A.P. Georgopoulos, M.S. Hamalainen, C.J. Aine (2022). *MIND DATA*. 
[10.18112/openneuro.ds004107.v1.0.0](https://doi.org/10.18112/openneuro.ds004107.v1.0.0) Modality: meg Subjects: 9 Recordings: 89 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004107 dataset = DS004107(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004107(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004107( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004107, title = {MIND DATA}, author = {M.P. Weisend and F.M. Hanlon and R. Montano and S.P. Ahlfors and A.C. Leuthold and D. Pantazis and J.C. Mosher and A.P. Georgopoulos and M.S. Hamalainen and C.J. Aine}, doi = {10.18112/openneuro.ds004107.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004107.v1.0.0}, } ``` ## About This Dataset This dataset was part of the study: M.P. Weisend, F.M. Hanlon, R. Montaño, S.P. Ahlfors, A.C. Leuthold, D. Pantazis, J.C. Mosher, A.P. Georgopoulos, M.S. Hämäläinen, C.J. Aine (2007). Paving the way for cross-site pooling of magnetoencephalography (MEG) data. International Congress Series, Volume 1300, Pages 615-618. It was converted to BIDS with MNE-BIDS: Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44): 1896.
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Following the MEG-BIDS format: Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004107` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MIND DATA | | Author (year) | `Weisend2022` | | Canonical | `Weisend2007` | | Importable as | `DS004107`, `Weisend2022`, `Weisend2007` | | Year | 2022 | | Authors | M.P. Weisend, F.M. Hanlon, R. Montano, S.P. Ahlfors, A.C. Leuthold, D. Pantazis, J.C. Mosher, A.P. Georgopoulos, M.S. Hamalainen, C.J. Aine | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004107.v1.0.0](https://doi.org/10.18112/openneuro.ds004107.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004107) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) | [Source URL](https://openneuro.org/datasets/ds004107) | ### Copy-paste BibTeX ```bibtex @dataset{ds004107, title = {MIND DATA}, author = {M.P. Weisend and F.M. Hanlon and R. Montano and S.P. Ahlfors and A.C. Leuthold and D. Pantazis and J.C. Mosher and A.P. Georgopoulos and M.S. Hamalainen and C.J. 
Aine}, doi = {10.18112/openneuro.ds004107.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004107.v1.0.0}, } ``` ## Technical Details - Subjects: 9 - Recordings: 89 - Tasks: 6 - Channels: 318 (84), 317 (5) - Sampling rate (Hz): 1792.89 (57), 1250.0 (32) - Duration (hours): 23.12 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 77.2 GB - File count: 89 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004107.v1.0.0 - Source: openneuro - OpenNeuro: [ds004107](https://openneuro.org/datasets/ds004107) - NeMAR: [ds004107](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) ## API Reference Use the `DS004107` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MIND DATA * **Study:** `ds004107` (OpenNeuro) * **Author (year):** `Weisend2022` * **Canonical:** `Weisend2007` Also importable as: `DS004107`, `Weisend2022`, `Weisend2007`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 9; recordings: 89; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004107](https://openneuro.org/datasets/ds004107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004107](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) DOI: [https://doi.org/10.18112/openneuro.ds004107.v1.0.0](https://doi.org/10.18112/openneuro.ds004107.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004107 >>> dataset = DS004107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004107) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004117: eeg dataset, 23 subjects *Sternberg Working Memory* Access recordings and metadata through EEGDash. **Citation:** Julie Onton (data), Scott Makeig (data and curation), Arnaud Delorme (data and curation), Dung Truong (curation), Kay Robbins (curation) (2022). 
*Sternberg Working Memory*. [10.18112/openneuro.ds004117.v1.0.1](https://doi.org/10.18112/openneuro.ds004117.v1.0.1) Modality: eeg Subjects: 23 Recordings: 85 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004117 dataset = DS004117(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004117(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004117( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004117, title = {Sternberg Working Memory}, author = {Julie Onton (data) and Scott Makeig (data and curation) and Arnaud Delorme (data and curation) and Dung Truong (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004117.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004117.v1.0.1}, } ``` ## About This Dataset **Modified Sternberg Working Memory Experiment** *Project name:* EEG and working memory *Years the project ran:* 2004-05 *Brief overview of experiment task:* The purpose of this Modified Sternberg task study was to explore source-resolved EEG brain dynamics associated with selectively committing a series of letters to memory, then after a brief maintenance period responding by button press either yes or no to the question of whether a presented query letter had been in the just-presented set of to-be-memorized letters. 
The task is a modified version of the classic Sternberg working memory task, with two added features: ### View full README **Modified Sternberg Working Memory Experiment** *Project name:* EEG and working memory *Years the project ran:* 2004-05 *Brief overview of experiment task:* The purpose of this Modified Sternberg task study was to explore source-resolved EEG brain dynamics associated with selectively committing a series of letters to memory, then after a brief maintenance period responding by button press either yes or no to the question of whether a presented query letter had been in the just-presented set of to-be-memorized letters. The task is a modified version of the classic Sternberg working memory task, with two added features: (1) interspersing the sequence of presented (black) letters to be memorized with (green) letters to be ignored, and (2) delivering auditory feedback on each trial as to the correctness of the participant response (beep = correct, buzz = incorrect). *Data collection:* Scalp EEG data were collected from 71 scalp electrode channels, each referred to a right mastoid electrode, at a sampling rate of 250 Hz/channel within an analog passband of 0.1 to 100 Hz. *Contact person:* Julie Onton, ORCID: 0000-0002-5602-3557. *Access information:* Contributed to OpenNeuro.org and NEMAR.org in BIDS format following annotation using HED 8.0.0 in April 2022. *Independent variables:* Letter category (to_memorize, to_ignore); numbers of presented letters to_memorize/to_ignore (3/5, 5/3, 7/1); probe letter category (in/not in the presented set). Note: only to-be-memorized letters appear as in-set probe letters. *Dependent variables:* EEG; button press response latency; participant response (correct/incorrect). *Participant pool:* The dataset includes data collected from 23 healthy young adult subjects (7 male, 6 female, 11 unidentified) between 19 and 40 years of age. *Apparatus:* A Neurobehavioral Systems, Inc.
EEG system running under Windows 98 acquired the data. The experiment control program was Presentation (Neurobehavioral Systems, Inc.). *Initial setup:* EEG data were collected from 71 channels (69 scalp and two periocular electrodes, all referred to right mastoid) with an analog pass band of 0.01 to 100 Hz (SA Instrumentation, San Diego). Input impedances were brought under 5 kOhms by careful scalp preparation. Data for subjects 1-12 were acquired at a sampling rate of 250 Hz. The data for subject 14 were acquired at 1000 Hz, and the data for subjects 15-24 were acquired at a 500 Hz sampling rate. *Task organization:* Data were organized into runs of 25 trials, each followed by a rest. Each block was a separate run in the BIDS dataset. *Task details:* Each trial consisted of the following sequence of events: **[Trial initiation]**. After a self-selected, variable delay, the subject initiated the next trial by pressing either response button, triggering the reappearance of the fixation cross. **[Letter sequence presentation]**. In these experiments, following a 5s presentation of a central fixation cross cue, a series of 8 visual letters (~2 deg of visual angle) were presented at screen center for 1.2s followed by a 0.2s ISI: - Either 3, 5, or 7 of these were colored black. - The participant was to memorize the letters in this set. - The other 5, 3, or 1 letters in the sequence were colored green, and participants were to ignore these. - The letters were drawn without substitution from the English alphabet (omitting only A, E, I, O, and U). - The presentation order of black and green letters was pseudo-random. **[Memory maintenance]**. In place of a ninth letter, a dash appeared on the screen to signal the beginning of a Memory Maintenance period lasting between 2 and 4 s. During this period subjects were to silently rehearse the identities of the memorized letters. **[Memory probe]**.
A (red) probe letter then appeared, prompting the subject to respond by pressing one of two buttons (with the thumb or index finger of their dominant hand) to indicate whether or not the probe letter had been in the trial's to-be-memorized letter set. **[Response feedback]**. An auditory feedback signal (a confirmatory beep or cautionary buzz), presented beginning 400 ms after the button press, informed the subject whether their response was correct or incorrect. Note: responses in the task were largely correct. **[Session time structure]**. Each task session comprised 3 or 4 task blocks of 25 trials each, separated into individual run files. **Experiment location**: Swartz Center for Computational Neuroscience (SCCN), University of California San Diego, La Jolla CA (USA). **Note 1**: Results presented in Onton, J., Delorme, A. and Makeig, S., 2005. Frontal midline EEG dynamics during working memory. Neuroimage, 27(2), pp.341-356. **Note 2**: This paradigm is one of 20 event-related EEG task paradigms selected for replication by the EEGManyLabs project. For details, see [https://psyarxiv.com/528nr/](https://psyarxiv.com/528nr/). Contact: Yuri Pavlov <[pavlovug@gmail.com](mailto:pavlovug@gmail.com)>. **Note 3**: Participant 5 did not have feedback events in the trials. **Note 4**: The code subdirectory has several auxiliary files that were produced during the curation process. The curation was done using a series of Jupyter notebooks that are available, as run, in the code/curation_notebooks subdirectory. While these curation notebooks ran, status information was logged using the HEDLogger. The output of the logging process is in code/curation_logs.
Updated versions of the curation notebooks can be found at: [https://github.com/hed-standard/hed-examples/tree/main/hedcode/jupyter_notebooks/dataset_specific_processing/sternberg](https://github.com/hed-standard/hed-examples/tree/main/hedcode/jupyter_notebooks/dataset_specific_processing/sternberg) ## Dataset Information | Dataset ID | `DS004117` | |----------------|----------------| | Title | Sternberg Working Memory | | Author (year) | `Onton2022` | | Canonical | — | | Importable as | `DS004117`, `Onton2022` | | Year | 2022 | | Authors | Julie Onton (data), Scott Makeig (data and curation), Arnaud Delorme (data and curation), Dung Truong (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004117.v1.0.1](https://doi.org/10.18112/openneuro.ds004117.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004117) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) | [Source URL](https://openneuro.org/datasets/ds004117) | ### Copy-paste BibTeX ```bibtex @dataset{ds004117, title = {Sternberg Working Memory}, author = {Julie Onton (data) and Scott Makeig (data and curation) and Arnaud Delorme (data and curation) and Dung Truong (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004117.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004117.v1.0.1}, } ``` ## Technical Details - Subjects: 23 - Recordings: 85 - Tasks: 1 - Channels: 71 - Sampling rate (Hz): 250.0 (47), 500.0 (24), 500.059 (11), 1000.0 (3) - Duration (hours): 15.46 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.8 GB - File count: 85 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004117.v1.0.1 - Source: openneuro - OpenNeuro: [ds004117](https://openneuro.org/datasets/ds004117) - NeMAR:
[ds004117](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) ## API Reference Use the `DS004117` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sternberg Working Memory * **Study:** `ds004117` (OpenNeuro) * **Author (year):** `Onton2022` * **Canonical:** — Also importable as: `DS004117`, `Onton2022`. Modality: `eeg`. Subjects: 23; recordings: 85; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004117](https://openneuro.org/datasets/ds004117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004117](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) DOI: [https://doi.org/10.18112/openneuro.ds004117.v1.0.1](https://doi.org/10.18112/openneuro.ds004117.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004117 >>> dataset = DS004117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004117) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004118: eeg dataset, 156 subjects *BCIT Calibration Driving* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Calibration Driving*. 
[10.18112/openneuro.ds004118.v1.0.1](https://doi.org/10.18112/openneuro.ds004118.v1.0.1) Modality: eeg Subjects: 156 Recordings: 247 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004118 dataset = DS004118(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004118(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004118( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004118, title = {BCIT Calibration Driving}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004118.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004118.v1.0.1}, } ``` ## About This Dataset **BCIT Calibration Driving** **Introduction** *Overview:* The Calibration Driving study was intended to provide calibration data for applying fatigue-based driver performance prediction algorithms. Calibration data sets were designed to be the first component of every recording session within the BCIT program, which featured multiple studies investigating fatigue. ### View full README **BCIT Calibration Driving** **Introduction** *Overview:* The Calibration Driving study was intended to provide calibration data for applying fatigue-based driver performance prediction algorithms. 
Calibration data sets were designed to be the first component of every recording session within the BCIT program, which featured multiple studies investigating fatigue. Collectively, the Calibration Driving recordings comprise a ‘virtual’ study, in which driving performance at the calibration level can be analyzed. When analyzed with other same-subject data, involving much longer tasks, the calibration data sets can be used as the basis for non-fatigue state performance. Further information is available on request from [cancta.net](https://cancta.net). The task was performed using identical systems at three different sites: - Army Research Laboratory, Aberdeen MD (T1) - Teledyne Corporation, Durham, NC (T2) - Science Applications International Corporation (SAIC), Louisville, CO (T3) All sites used identical driving simulator setups. Sites T1 and T2 used a 64-channel Biosemi EEG headset, while site T3 used a 256-channel Biosemi EEG headset. Data from site T1 has legacy subject IDs in the range 1000 to 1999. Data from site T2 has legacy subject IDs in the range 2000 to 2999. Data from site T3 has legacy subject IDs in the range 3000 to 3999. Legacy subject IDs are unique across the entire BCIT program. **Methods** *Subjects:* Subjects at Aberdeen Proving Grounds were recruited on a voluntary basis from among the scientists and engineers working at APG. Subjects recruited by Teledyne and SAIC were found via advertising and community outreach efforts, and primarily consisted of local college students. *Apparatus:* Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR = 1024 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250).
*Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* The Calibration study featured a 15-minute trial, requiring the driver to control the steering of a simulated vehicle on a long, straight road in a visually sparse environment. With the vehicle speed controlled by the driving simulator, the only task for the subject was to maintain the vehicle position in the center of the lane. The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane, and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* None. *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). *Note:* Questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Locations:* Army Research Laboratory, Aberdeen MD (site T1); Teledyne Corporation, Durham, NC (site T2); Science Applications International Corporation (SAIC), Louisville, CO (site T3).
*Note:* This 15-minute task was performed prior to every run in the BCIT experimental series. Thus, the runs have corresponding runs in one or more of BCIT Advanced Guard Duty (ds004106), BCIT Basic Guard Duty (ds004119), BCIT Baseline Driving (ds004120), BCIT Mind Wandering (ds004121), BCIT Speed Control (ds004122) and Traffic Complexity (ds004123) that were conducted on the same subject during the same session. The Calibration Driving run was always conducted first. ## Dataset Information | Dataset ID | `DS004118` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Calibration Driving | | Author (year) | `Touryan2022_BCIT_Calibration` | | Canonical | `Touryan1999` | | Importable as | `DS004118`, `Touryan2022_BCIT_Calibration`, `Touryan1999` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004118.v1.0.1](https://doi.org/10.18112/openneuro.ds004118.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004118) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) | [Source URL](https://openneuro.org/datasets/ds004118) | ### Copy-paste BibTeX ```bibtex @dataset{ds004118, title = {BCIT Calibration Driving}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004118.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004118.v1.0.1}, } ``` ## Technical Details - Subjects: 156 - Recordings: 247 - Tasks: 1 - Channels: 266 (128), 74 (119) - 
Sampling rate (Hz): 1024.0 (226), 2048.0 (21) - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 124.3 GB - File count: 247 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004118.v1.0.1 - Source: openneuro - OpenNeuro: [ds004118](https://openneuro.org/datasets/ds004118) - NeMAR: [ds004118](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) ## API Reference Use the `DS004118` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Calibration Driving * **Study:** `ds004118` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Calibration` * **Canonical:** `Touryan1999` Also importable as: `DS004118`, `Touryan2022_BCIT_Calibration`, `Touryan1999`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 156; recordings: 247; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004118](https://openneuro.org/datasets/ds004118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004118](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) DOI: [https://doi.org/10.18112/openneuro.ds004118.v1.0.1](https://doi.org/10.18112/openneuro.ds004118.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004118 >>> dataset = DS004118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004118) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004119: eeg dataset, 21 subjects *BCIT Basic Guard Duty* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Basic Guard Duty*. 
[10.18112/openneuro.ds004119.v1.0.0](https://doi.org/10.18112/openneuro.ds004119.v1.0.0) Modality: eeg Subjects: 21 Recordings: 22 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004119 dataset = DS004119(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004119(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004119( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004119, title = {BCIT Basic Guard Duty}, author = {Jonathan Touryan (data and curation) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004119.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004119.v1.0.0}, } ``` ## About This Dataset **BCIT Basic Guard Duty** **Introduction** *Overview:* The Basic Guard Duty study was designed to measure sustained vigilance in realistic settings by having subjects verify information on replica ID badges. The task was performed in conjunction with two other tasks: a calibration driving task and a baseline driving task. The data collected for the two driving tasks is not included in this dataset. ### View full README **BCIT Basic Guard Duty** **Introduction** *Overview:* The Basic Guard Duty study was designed to measure sustained vigilance in realistic settings by having subjects verify information on replica ID badges.
The task was performed in conjunction with two other tasks: a calibration driving task and a baseline driving task. The data collected for the two driving tasks is not included in this dataset. Another study (Advanced Guard Duty), which included a similar set-up but a different experimental design and a different subject pool, is not included in this dataset. In the Basic Guard Duty study the rate of ID presentation varied among tasks. In the Advanced Guard Duty study both the rate of ID presentation and the criteria for verification varied among blocks. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements. *Apparatus:* Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR = 1024 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). *Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* Subjects always began recording sessions by performing a Calibration Driving task, which was a 15-minute drive where the subject controlled only the steering (and speed was controlled by the simulator).
Following this, subjects would perform the Baseline Driving task and the Guard Duty task, with counter-balancing used across subjects as to which of them came first. The Baseline Driving and Calibration Driving tasks are not included in this dataset. *Guard duty task details:* The guard duty task entailed a serial presentation of replica identification (ID) cards (750 × 450 pixels) paired with a reference image (300 × 400 pixels). The replica ID cards had eight components or fields in addition to a common background. These components were: photo, name, date of birth (DOB), date of issue, date of expiration, area access, ID number, bar code and watermark. The reference images consisted of color photographs of faces. Both the ID photo and reference image were chosen from the Multi-PIE database (Gross, Matthews, Cohn, Kanade, & Baker, 2010). This database consists of color photographs (forward facing head shots) of individuals taken at different points in time. Therefore, while the ID photo and reference image were of the same individual, the images were not identical (e.g., different hair style, different clothes, different lighting). The task was divided into ten blocks of five minutes each. At the beginning of each block, participants were instructed that they were guarding a restricted area that required a particular letter designation on the ID card for access (e.g., area C access required). Participants were asked to determine if the individual in the image, paired with the corresponding ID card, should have access to their restricted area. Some of the ID cards were valid and some were not (e.g., expiration date passed, incorrect access area, or photos did not match). Participants were instructed to press either an *allow* or *deny* button for each image-ID pairing. The two-alternative forced-choice response was self-paced with a maximum time limit of 20s. If the participants chose to deny access, they were subsequently asked to provide a reason.
Reasons for denied access were selected from a numerical list of five options: 1: incorrect access, 2: expired ID, 3: suspicious DOB, 4: face mismatch, 5: no watermark. If the participant did not respond within the allotted time, the computer forced a deny decision. The restricted area (area A-E) assigned at the beginning of each block was randomly chosen without replacement such that all participants completed two blocks guarding each of the five areas. To maintain consistency across participants, expiration dates were automatically generated at the beginning of the experiment to have a symmetrical distribution around the current date. This distribution was such that the majority of IDs had expiration dates temporally close to the current date (i.e., in the near future or recent past). In each block, the image-ID pairings were presented at one of six different stochastic queuing rates, ranging from 1 to 25 per minute (1, 2.5, 10, 15, 20, and 25 per minute). The queuing rate varied within each block according to a predefined profile. The rate profile had randomly permuted epochs of each queuing rate. Each epoch lasted 30s, with approximately twice as many low rate epochs (1 and 2.5 image-IDs per minute) as high. The rate profiles were shifted for each participant (Latin square design) so that each rate profile was assigned to every block for at least two participants. The current rate was indicated through a processing queue, on the extreme right-hand side of the display, notifying each participant how many IDs were waiting to be checked. For slow rates, most participants were able to process all IDs in their queue and had periods where they were waiting for the next ID (i.e., blank screen). For fast rates, most participants were not able to process IDs as quickly as they were added to the queue, increasing the size of the processing queue. IDs in the queue persisted until they were processed by the participant or the block ended.
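The per-block rate-profile construction described above can be sketched in a few lines. This is an illustration, not the original stimulus code: a five-minute block is treated as ten 30-second epochs, and the split of one epoch per high rate versus three per low rate is an assumption chosen to approximate "twice as many low rate epochs as high".

```python
import random

# Illustrative sketch only: the exact epoch counts per rate are not given in
# the README; each of the four high rates gets one epoch and each of the two
# low rates gets three, filling a 5 min block with ten 30 s epochs.
HIGH_RATES = [10, 15, 20, 25]  # image-ID presentations per minute
LOW_RATES = [1, 2.5]

def make_rate_profile(rng):
    """Return a randomly permuted list of ten per-epoch queuing rates."""
    epochs = list(HIGH_RATES) + [r for r in LOW_RATES for _ in range(3)]
    rng.shuffle(epochs)
    return epochs

profile = make_rate_profile(random.Random(0))
print(sorted(profile))  # -> [1, 1, 1, 2.5, 2.5, 2.5, 10, 15, 20, 25]
```

Shifting such a profile across blocks per participant would then give the Latin-square assignment mentioned above.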
At the beginning of the experiment, participants were instructed to correctly process each image-ID while keeping the queue as short as possible. Whereas the stochastic queuing rate was used to increase task realism, incorporating periods of high and low task demand, the dynamic rate itself was not explicitly considered an independent factor in the present study. All blocks contained the same ratio of valid and invalid image-ID pairings (82% valid, 18% invalid). The majority of invalid IDs were due to incorrect access (6%) and expiration (6%), whereas the rest were invalid for the other reasons: suspicious DOB (2%), face mismatch (2%), no watermark (2%). This second group of invalid IDs served as catch trials to verify that participants were examining all fields of the ID. *Independent variables:* ID presentation rate (varied by block) *Dependent variables:* ID disposition accuracy and processing times, Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). Note: The questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Location:* Science Applications International Corporation, Louisville, CO. *Note 1:* This dataset has corresponding runs in BCIT Calibration Driving (ds004118), during which the 15-minute driving task was performed prior to this one. *Note 2:* This dataset has corresponding runs in BCIT Baseline Driving (ds004120), which were conducted on the same subjects during the same session, counterbalanced with these.
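The stated validity mix can be turned into exact per-block trial counts. The percentages below come from the text; the largest-remainder allocation itself is an illustrative assumption, since the README does not describe how trials were allocated within a block.

```python
# Per-block mix of image-ID validity stated above: 82% valid, 18% invalid,
# with the invalid share split 6/6/2/2/2 across deny reasons.
MIX_PERCENT = {
    "valid": 82,
    "incorrect access": 6,
    "expired ID": 6,
    "suspicious DOB": 2,
    "face mismatch": 2,
    "no watermark": 2,
}
assert sum(MIX_PERCENT.values()) == 100

def allocate(n_trials):
    """Round each category's share to whole trials (largest-remainder rule).

    The rounding rule is an assumption for illustration, not from the README.
    """
    counts = {k: (p * n_trials) // 100 for k, p in MIX_PERCENT.items()}
    remainders = {k: (p * n_trials) % 100 for k, p in MIX_PERCENT.items()}
    leftover = n_trials - sum(counts.values())
    for k in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        counts[k] += 1
    return counts

print(allocate(50))
# -> {'valid': 41, 'incorrect access': 3, 'expired ID': 3,
#     'suspicious DOB': 1, 'face mismatch': 1, 'no watermark': 1}
```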
## Dataset Information | Dataset ID | `DS004119` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Basic Guard Duty | | Author (year) | `Touryan2022_BCIT_Basic` | | Canonical | `BCIT` | | Importable as | `DS004119`, `Touryan2022_BCIT_Basic`, `BCIT` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004119.v1.0.0](https://doi.org/10.18112/openneuro.ds004119.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004119) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) | [Source URL](https://openneuro.org/datasets/ds004119) | ### Copy-paste BibTeX ```bibtex @dataset{ds004119, title = {BCIT Basic Guard Duty}, author = {Jonathan Touryan (data and curation) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004119.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004119.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 22 - Tasks: 1 - Channels: 262 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 55.1 GB - File count: 22 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004119.v1.0.0 - Source: openneuro - OpenNeuro: [ds004119](https://openneuro.org/datasets/ds004119) - NeMAR: [ds004119](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) ## API Reference Use the `DS004119` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Basic Guard Duty * **Study:** `ds004119` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Basic` * **Canonical:** `BCIT` Also importable as: `DS004119`, `Touryan2022_BCIT_Basic`, `BCIT`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
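The query-merging behaviour described in the Notes can be illustrated with a standalone helper. This `merge_query` function is hypothetical (the real logic lives inside `EEGDashDataset`); it only mirrors the two documented rules: the user-supplied MongoDB-style filter is ANDed onto the fixed dataset selector, and it must not itself contain the key `dataset`.

```python
# Standalone sketch (not the eegdash implementation) of the documented
# query behaviour for dataset classes such as DS004119.
def merge_query(dataset_id, user_query=None):
    """AND a user-supplied MongoDB-style filter with the dataset selector."""
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **(user_query or {})}

print(merge_query("ds004119", {"subject": {"$in": ["01", "02"]}}))
# -> {'dataset': 'ds004119', 'subject': {'$in': ['01', '02']}}
```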
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004119](https://openneuro.org/datasets/ds004119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004119](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) DOI: [https://doi.org/10.18112/openneuro.ds004119.v1.0.0](https://doi.org/10.18112/openneuro.ds004119.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004119 >>> dataset = DS004119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004119) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004120: eeg dataset, 109 subjects *BCIT Baseline Driving* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Baseline Driving*. 
[10.18112/openneuro.ds004120.v1.0.0](https://doi.org/10.18112/openneuro.ds004120.v1.0.0) Modality: eeg Subjects: 109 Recordings: 131 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004120 dataset = DS004120(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004120(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004120( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004120, title = {BCIT Baseline Driving}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004120.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004120.v1.0.0}, } ``` ## About This Dataset **BCIT Baseline Driving** **Introduction** *Overview:* The Baseline Driving study was designed to collect extended time-on-task measurements of subjects performing a driving task in a simulated environment in order to assess fatigue-based performance through novel biomarkers. 
The Baseline Driving study was intended to identify periods of driver fatigue via predictive algorithms formulated from the analysis of driver EEG data, in comparison to the objective performance measures, and in contrast with the (non-fatigued) Calibration driving session for the subject. Baseline driving data sets were designed to be the second component of every recording session within the BCIT program, which featured multiple studies investigating fatigue. Collectively, the Baseline Driving recordings comprise a virtual study, in which long time-on-task driving performance can be analyzed for fatigue-related EEG biomarkers based on measured driving performance degradation. Further information is available on request from [cancta.net](https://cancta.net). The task was performed using identical systems at three different sites: - Army Research Laboratory, Aberdeen MD (T1) - Teledyne Corporation, Durham, NC (T2) - Science Applications International Corporation (SAIC), Louisville, CO (T3) All sites used identical driving simulator setups. The data collected at site T1 used a 64-channel Biosemi EEG headset as did the data collected at site T2, while site T3 used a 256-channel Biosemi EEG headset. Data from site T1 has legacy subject IDs in the range 1000 to 1999. Data from site T2 has legacy subject IDs in the range 2000 to 2999. Data from site T3 has legacy subject IDs in the range 3000 to 3999. Legacy subject IDs are unique across the entire BCIT program.
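Since the legacy ID ranges are disjoint and unique across the program, a recording's collection site can be recovered from its legacy subject ID. The helper below is hypothetical (not part of eegdash) and simply encodes the ranges listed above.

```python
# Hypothetical helper mapping a legacy BCIT subject ID to its collection
# site, using the documented ranges (1000-1999 T1, 2000-2999 T2, 3000-3999 T3).
SITE_RANGES = {
    "T1 (ARL, Aberdeen MD)": range(1000, 2000),
    "T2 (Teledyne, Durham NC)": range(2000, 3000),
    "T3 (SAIC, Louisville CO)": range(3000, 4000),
}

def site_for_legacy_id(subject_id):
    for site, ids in SITE_RANGES.items():
        if subject_id in ids:
            return site
    raise ValueError(f"{subject_id} is outside the documented BCIT ID ranges")

print(site_for_legacy_id(2417))  # -> T2 (Teledyne, Durham NC)
```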
**Methods** *Subjects:* Subjects at Aberdeen Proving Grounds were recruited on a voluntary basis from among the scientists and engineers working at APG. Subjects recruited by Teledyne and SAIC were found via advertising and community outreach efforts, and primarily consisted of local college students. *Apparatus:* Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 256 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=1024 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). Eye tracking data is not included in this dataset. *Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* Subjects always began recording sessions by performing a Calibration Driving task, which was a 15-minute drive where the subject controlled only the steering (speed was controlled by the simulator). Following this, subjects would perform the Baseline Driving task and the Guard Duty task, with counter-balancing used across subjects as to which of them came first. The Baseline Driving run was 60 minutes of driving, performed in 6 blocks of 10 minutes each, with subjects responsible for speed and steering control. The subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits.
The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane, and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* For T1 (ARL) and T3 (SAIC) there were no independent variables. For T2 data sets (Teledyne), independent variables were Visual Complexity (high vs. low) and Perturbation Frequency (high vs. low). *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). Note: questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Locations:* Army Research Laboratory, Aberdeen MD (site T1); Teledyne Corporation, Durham, NC (site T2); Science Applications International Corporation (SAIC), Louisville, CO (site T3). *Note 1:* This dataset has a corresponding dataset in BCIT Calibration Driving (ds004118), in which the 15-minute driving task was performed prior to this one. *Note 2:* Some of the subjects in this dataset performed either the BCIT Basic Guard Duty Task (ds004119) or the BCIT Advanced Guard Duty Task (ds004106) counterbalanced during the same session.
## Dataset Information | Dataset ID | `DS004120` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Baseline Driving | | Author (year) | `Touryan2022_BCIT_Baseline` | | Canonical | `BCITBaselineDriving` | | Importable as | `DS004120`, `Touryan2022_BCIT_Baseline`, `BCITBaselineDriving` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004120.v1.0.0](https://doi.org/10.18112/openneuro.ds004120.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004120) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) | [Source URL](https://openneuro.org/datasets/ds004120) | ### Copy-paste BibTeX ```bibtex @dataset{ds004120, title = {BCIT Baseline Driving}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004120.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004120.v1.0.0}, } ``` ## Technical Details - Subjects: 109 - Recordings: 131 - Tasks: 1 - Channels: 266 (81), 74 (50) - Sampling rate (Hz): 1024.0 (109), 2048.0 (22) - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 302.5 GB - File count: 131 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004120.v1.0.0 - Source: openneuro - OpenNeuro: [ds004120](https://openneuro.org/datasets/ds004120) - NeMAR: [ds004120](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) ## API Reference Use the `DS004120` class to 
access this dataset programmatically. ### *class* eegdash.dataset.DS004120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Baseline Driving * **Study:** `ds004120` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Baseline` * **Canonical:** `BCITBaselineDriving` Also importable as: `DS004120`, `Touryan2022_BCIT_Baseline`, `BCITBaselineDriving`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 109; recordings: 131; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004120](https://openneuro.org/datasets/ds004120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004120](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) DOI: [https://doi.org/10.18112/openneuro.ds004120.v1.0.0](https://doi.org/10.18112/openneuro.ds004120.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004120 >>> dataset = DS004120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004120) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004121: eeg dataset, 21 subjects *BCIT Mind Wandering* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Mind Wandering*. 
[10.18112/openneuro.ds004121.v1.0.0](https://doi.org/10.18112/openneuro.ds004121.v1.0.0) Modality: eeg Subjects: 21 Recordings: 60 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004121 dataset = DS004121(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004121(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004121( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004121, title = {BCIT Mind Wandering}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004121.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004121.v1.0.0}, } ``` ## About This Dataset **BCIT Mind Wandering** **Introduction** *Overview:* Subjects in the Mind Wandering study performed a long-duration simulated driving task with perturbations and audio stimuli in a visually complex environment.
The purpose of this effort was to supplement and extend the related driving research to collect prolonged time-on-task measurements of subjects performing a driving task in a simulated environment in order to assess fatigue-based performance through novel biomarkers. Similar to the Baseline Driving study, the Mind Wandering study was intended to identify periods of driver fatigue via predictive algorithms formulated from the analysis of driver EEG data, in comparison to the objective performance measures, and in contrast with the (non-fatigued) Calibration driving session for the subject. Mind Wandering extended the paradigm by adding different types of background audio (task relevant, non-task relevant, internal focus) and a vigilance task (identify police vehicles), in addition to increasing perturbation magnitude and frequency vs. baseline driving. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements. *Apparatus:* Driving simulator with steering wheel and brake / foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR=2048 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). *Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition.
*Task organization within the study:* Subjects always began recording sessions by performing a Calibration Driving task, which was a 15-minute drive where the subject controlled only the steering (speed was controlled by the simulator). *Mind wandering task details:* Subjects would perform Mind Wandering conditions A, B, and C, with counter-balancing used across subjects as to which of them came first. Mind Wandering A was 30 minutes of continuous driving, with subjects responsible for steering and maintaining speed, while task relevant audio (traffic safety) played in the background. Subjects were instructed to look for police vehicles and respond by pressing a button on the steering wheel. Mind Wandering B and C were similar, with non-task relevant audio (e.g. a sports broadcast) in B and internal focus audio (a mindfulness breathing exercise) in C. All conditions were conducted on the same long, straight simulated road, which contained a mix of regular traffic and police vehicles. In each case, the subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits. The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane, and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* Background Audio (task relevant vs. non-task relevant vs. internal focus). *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), reaction times to target vehicles (police), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). Note: questionnaire data is available upon request from [cancta.net](https://cancta.net).
*Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Location:* Teledyne Corporation, Durham, NC. *Note:* This dataset has a corresponding dataset in BCIT Calibration Driving (ds004118), in which the 15-minute driving task was performed prior to this one. ## Dataset Information | Dataset ID | `DS004121` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Mind Wandering | | Author (year) | `Touryan2022_BCIT_Mind` | | Canonical | `BCITMindWandering` | | Importable as | `DS004121`, `Touryan2022_BCIT_Mind`, `BCITMindWandering` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004121.v1.0.0](https://doi.org/10.18112/openneuro.ds004121.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004121) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) | [Source URL](https://openneuro.org/datasets/ds004121) | ### Copy-paste BibTeX ```bibtex @dataset{ds004121, title = {BCIT Mind Wandering}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004121.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004121.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 60 - Tasks: 1 - Channels: 74 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy -
Modality: Multisensory - Type: Attention - Size on disk: 23.9 GB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004121.v1.0.0 - Source: openneuro - OpenNeuro: [ds004121](https://openneuro.org/datasets/ds004121) - NeMAR: [ds004121](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) ## API Reference Use the `DS004121` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Mind Wandering * **Study:** `ds004121` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Mind` * **Canonical:** `BCITMindWandering` Also importable as: `DS004121`, `Touryan2022_BCIT_Mind`, `BCITMindWandering`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004121](https://openneuro.org/datasets/ds004121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004121](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) DOI: [https://doi.org/10.18112/openneuro.ds004121.v1.0.0](https://doi.org/10.18112/openneuro.ds004121.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004121 >>> dataset = DS004121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004121) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004122: eeg dataset, 32 subjects *BCIT Speed Control* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Speed Control*. 
[10.18112/openneuro.ds004122.v1.0.0](https://doi.org/10.18112/openneuro.ds004122.v1.0.0) Modality: eeg Subjects: 32 Recordings: 63 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004122 dataset = DS004122(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004122(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004122( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004122, title = {BCIT Speed Control}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004122.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004122.v1.0.0}, } ``` ## About This Dataset **BCIT Speed Control** **Introduction** *Overview:* The Speed Control study was designed to collect extended time-on-task measurements of subjects performing a driving task in a simulated environment in order to assess fatigue-based performance through novel biomarkers. 
Similar to the Baseline Driving study, the Speed Control study was intended to identify periods of driver fatigue via predictive algorithms formulated from the analysis of driver EEG data, in comparison to the objective performance measures, and in contrast with the (non-fatigued) Calibration driving session for the subject. Speed Control extended the paradigm by modulating driver control of the throttle and increasing the frequency and magnitude of perturbation events vs. Baseline Driving. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements. *Apparatus:* Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR = 2048 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). Eye tracking data is not included with this dataset. *Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator.
The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported. Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* Subjects performed the Speed Control condition A and Speed Control condition B, with counter-balancing used across subjects as to which of them came first. Speed Control A was 45 minutes of continuous driving, with subjects responsible for steering control, with the simulator maintaining a constant speed automatically. Speed Control B was similar, but the subject was responsible for both steering and maintaining speed. Both driving tasks were conducted on the same simulated long, straight road. In each case, the subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits. The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane; and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* Speed Control (cruise vs. manual). *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). *Note:* questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Locations:* Teledyne Corporation, Durham, NC. *Note:* This dataset has a corresponding dataset in the BCIT Calibration Driving ds004118 which has the 15 minute driving task performed prior to this one. 
## Dataset Information | Dataset ID | `DS004122` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Speed Control | | Author (year) | `Touryan2022_BCIT_Speed` | | Canonical | — | | Importable as | `DS004122`, `Touryan2022_BCIT_Speed` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004122.v1.0.0](https://doi.org/10.18112/openneuro.ds004122.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004122) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) | [Source URL](https://openneuro.org/datasets/ds004122) | ### Copy-paste BibTeX ```bibtex @dataset{ds004122, title = {BCIT Speed Control}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004122.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004122.v1.0.0}, } ``` ## Technical Details - Subjects: 32 - Recordings: 63 - Tasks: 1 - Channels: 74 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 36.2 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004122.v1.0.0 - Source: openneuro - OpenNeuro: [ds004122](https://openneuro.org/datasets/ds004122) - NeMAR: [ds004122](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) ## API Reference Use the `DS004122` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS004122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Speed Control * **Study:** `ds004122` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Speed` * **Canonical:** — Also importable as: `DS004122`, `Touryan2022_BCIT_Speed`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 32; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
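The query-merging behavior described in the notes above can be sketched in plain Python. This is an illustrative model only, not EEGDash's actual implementation; `merge_query` and `matches` are hypothetical helpers showing how a user query is AND-ed with the fixed dataset filter and how a `$in` condition is evaluated:

```python
# Illustrative sketch of MongoDB-style query merging -- NOT eegdash internals.
def merge_query(dataset_id, user_query):
    """AND a user query with the fixed dataset filter."""
    if "dataset" in (user_query or {}):
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **(user_query or {})}


def matches(record, query):
    """Minimal matcher supporting equality and the $in operator."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True


q = merge_query("ds004122", {"subject": {"$in": ["01", "02"]}})
records = [
    {"dataset": "ds004122", "subject": "01"},
    {"dataset": "ds004122", "subject": "07"},
    {"dataset": "ds004121", "subject": "01"},
]
kept = [r for r in records if matches(r, q)]  # only the first record survives
```

The point of the sketch is that the dataset filter always applies: a record from another dataset is excluded even if it matches the user's subject filter.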
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004122](https://openneuro.org/datasets/ds004122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004122](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) DOI: [https://doi.org/10.18112/openneuro.ds004122.v1.0.0](https://doi.org/10.18112/openneuro.ds004122.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004122 >>> dataset = DS004122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004122) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004123: eeg dataset, 29 subjects *BCIT Traffic Complexity* Access recordings and metadata through EEGDash. **Citation:** Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) (2022). *BCIT Traffic Complexity*. 
[10.18112/openneuro.ds004123.v1.0.0](https://doi.org/10.18112/openneuro.ds004123.v1.0.0) Modality: eeg Subjects: 29 Recordings: 30 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004123 dataset = DS004123(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004123(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004123( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004123, title = {BCIT Traffic Complexity}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004123.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004123.v1.0.0}, } ``` ## About This Dataset **BCIT Traffic Complexity** **Introduction** *Overview:* The Traffic Complexity study was designed to collect extended time-on-task measurements of subjects performing a driving task in a simulated environment in order to assess fatigue-based performance through novel biomarkers. 
Similar to the Baseline Driving study, the Traffic Complexity study was intended to identify periods of driver fatigue via predictive algorithms formulated from the analysis of driver EEG data, in comparison to the objective performance measures, and in contrast with the (non-fatigued) Calibration driving session for the subject. Traffic Complexity extended the paradigm by modulating the visual complexity and the frequency of perturbation events vs. Baseline Driving. Further information is available on request from [cancta.net](https://cancta.net). **Methods** *Subjects:* Volunteers from the local community recruited through advertisements. *Apparatus:* Driving simulator with steering wheel and brake/foot pedals (Real Time Technologies; Dearborn, MI); Video Refresh Rate (VRR) = 900 Hz; Vehicle data log file Sampling Rate (SR) = 100 Hz; EEG (BioSemi 64 (+8) channel systems with 4 eye and 2 mastoid channels recorded; SR = 2048 Hz); Eye Tracking (Sensomotoric Instruments (SMI); REDEYE250). *Initial setup:* Upon arrival at the lab, subjects were given an introduction to the primary study for which they were recruited, provided informed consent, and provided demographic information. This was followed by a practice session to acclimate the subject to the driving simulator. The driving practice task lasted 10-15 min, until asymptotic performance in steering and speed control was demonstrated and lack of motion sickness was reported.
Subjects were then outfitted and prepped for eye tracking and EEG acquisition. *Task organization:* Subjects would perform the Baseline Driving task and the Traffic Complexity task, with counter-balancing used across subjects as to which of them came first. The Baseline Driving run was 45 minutes of continuous driving, with subjects responsible for speed and steering control. Both driving tasks were conducted on the same simulated long, straight road. The Baseline run was done in a visually sparse environment, and the Traffic Complexity runs included pedestrians and other traffic. In each case, the subject was instructed to stay within the boundaries of the right-most lane, and to drive at the posted speed limits. The vehicle was periodically subject to lateral perturbing forces, which could be applied to either side of the vehicle, pushing the vehicle out of the center of the lane; and the subject was instructed to execute corrective steering actions to return the vehicle to the center of the lane. *Independent variables:* Visual Complexity (high vs. low), Perturbation Frequency (high vs. low). *Dependent variables:* Reaction times to perturbations, continuous performance based on vehicle log (steering wheel angle, lane position, heading error, etc.), Task-Induced Fatigue Scale (TIFS), Karolinska Sleepiness Scale (KSS), Visual Analog Scale of Fatigue (VAS-F). *Note:* Questionnaire data is available upon request from [cancta.net](https://cancta.net). *Additional data acquired:* Participant Enrollment Questionnaire, Subject Questionnaire for Current Session, Simulator Sickness Questionnaire. *Experimental Locations:* Teledyne Corporation, Durham, NC. *Note 1:* This dataset has a corresponding dataset in the BCIT Calibration Driving ds004118 which has the 15 minute driving task performed prior to this one. *Note 2:* This dataset has a corresponding dataset in the BCIT Baseline Driving ds004120 which was a longer driving task in a sparse environment. 
## Dataset Information | Dataset ID | `DS004123` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIT Traffic Complexity | | Author (year) | `Touryan2022_BCIT_Traffic` | | Canonical | `BCIT_Traffic_Complexity` | | Importable as | `DS004123`, `Touryan2022_BCIT_Traffic`, `BCIT_Traffic_Complexity` | | Year | 2022 | | Authors | Jonathan Touryan (data and curation), Greg Apker (data), Brent Lance (data), Scott Kerick (data), Anthony Ries (data), Justin Brooks (data), Kaleb McDowell (data), Tony Johnson (curation), Kay Robbins (curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004123.v1.0.0](https://doi.org/10.18112/openneuro.ds004123.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004123) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) | [Source URL](https://openneuro.org/datasets/ds004123) | ### Copy-paste BibTeX ```bibtex @dataset{ds004123, title = {BCIT Traffic Complexity}, author = {Jonathan Touryan (data and curation) and Greg Apker (data) and Brent Lance (data) and Scott Kerick (data) and Anthony Ries (data) and Justin Brooks (data) and Kaleb McDowell (data) and Tony Johnson (curation) and Kay Robbins (curation)}, doi = {10.18112/openneuro.ds004123.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004123.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 30 - Tasks: 1 - Channels: 74 - Sampling rate (Hz): 1024.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 17.5 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004123.v1.0.0 - Source: openneuro - OpenNeuro: [ds004123](https://openneuro.org/datasets/ds004123) - NeMAR: [ds004123](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) ## API 
Reference Use the `DS004123` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Traffic Complexity * **Study:** `ds004123` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Traffic` * **Canonical:** `BCIT_Traffic_Complexity` Also importable as: `DS004123`, `Touryan2022_BCIT_Traffic`, `BCIT_Traffic_Complexity`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 29; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
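The `data_dir` attribute documented above is simply the cache directory joined with the dataset ID. A minimal sketch, assuming nothing beyond `pathlib` (the layout follows the attribute description, not eegdash internals):

```python
from pathlib import Path

# data_dir is derived as cache_dir / dataset_id, per the attribute docs above.
cache_dir = Path("./data")
dataset_id = "ds004123"
data_dir = cache_dir / dataset_id
print(data_dir)  # e.g. data/ds004123 on POSIX
```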
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004123](https://openneuro.org/datasets/ds004123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004123](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) DOI: [https://doi.org/10.18112/openneuro.ds004123.v1.0.0](https://doi.org/10.18112/openneuro.ds004123.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004123 >>> dataset = DS004123(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004123) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004127: ieeg dataset, 8 subjects *Somatosensory Cortex Rat DISC Data* Access recordings and metadata through EEGDash. **Citation:** Amada Abrego, Wasif Khan, John P Seymour (2022). *Somatosensory Cortex Rat DISC Data*. 
[10.18112/openneuro.ds004127.v3.0.0](https://doi.org/10.18112/openneuro.ds004127.v3.0.0) Modality: ieeg Subjects: 8 Recordings: 73 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004127 dataset = DS004127(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004127(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004127( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004127, title = {Somatosensory Cortex Rat DISC Data}, author = {Amada Abrego and Wasif Khan and John P Seymour}, doi = {10.18112/openneuro.ds004127.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004127.v3.0.0}, } ``` ## About This Dataset Project Title: DISC Validation in Rat Somatosensory Cortex Project ID: 000 Expected experimentation period: Start date: N/A End date: N/A Project Description: DISC devices were implanted in rat somatosensory cortex. While the animals were anesthetized, their whiskers were stimulated using an air puffer. The task name corresponds to the ID of the whisker being stimulated.
For more information about the task, see the bioRxiv preprint: [https://www.biorxiv.org/content/10.1101/2021.09.20.460996v3](https://www.biorxiv.org/content/10.1101/2021.09.20.460996v3) Participant categories: N/A Trigger channels: N/A Events: N/A ## Dataset Information | Dataset ID | `DS004127` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Somatosensory Cortex Rat DISC Data | | Author (year) | `Abrego2022` | | Canonical | — | | Importable as | `DS004127`, `Abrego2022` | | Year | 2022 | | Authors | Amada Abrego, Wasif Khan, John P Seymour | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004127.v3.0.0](https://doi.org/10.18112/openneuro.ds004127.v3.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004127) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) | [Source URL](https://openneuro.org/datasets/ds004127) | ### Copy-paste BibTeX ```bibtex @dataset{ds004127, title = {Somatosensory Cortex Rat DISC Data}, author = {Amada Abrego and Wasif Khan and John P Seymour}, doi = {10.18112/openneuro.ds004127.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004127.v3.0.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 73 - Tasks: 11 - Channels: 128 (28), 110 (9), 102 (9), 104 (9), 112 (9), 105 (6), 106 (2), 111 - Sampling rate (Hz): 20000.0 - Duration (hours): 6.083819444444444 - Pathology: Other - Modality: Tactile - Type: Other - Size on disk: 187.5 GB - File count: 73 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004127.v3.0.0 - Source: openneuro - OpenNeuro: [ds004127](https://openneuro.org/datasets/ds004127) - NeMAR: [ds004127](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) ## API Reference Use the `DS004127` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory Cortex Rat DISC Data * **Study:** `ds004127` (OpenNeuro) * **Author (year):** `Abrego2022` * **Canonical:** — Also importable as: `DS004127`, `Abrego2022`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 8; recordings: 73; tasks: 11. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004127](https://openneuro.org/datasets/ds004127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004127](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) DOI: [https://doi.org/10.18112/openneuro.ds004127.v3.0.0](https://doi.org/10.18112/openneuro.ds004127.v3.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004127 >>> dataset = DS004127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004127) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004147: eeg dataset, 12 subjects *Average Task Value* Access recordings and metadata through EEGDash. **Citation:** Cameron D. Hassall, Laurence T. Hunt, Clay B. Holroyd (2022). *Average Task Value*. 
[10.18112/openneuro.ds004147.v1.0.2](https://doi.org/10.18112/openneuro.ds004147.v1.0.2) Modality: eeg Subjects: 12 Recordings: 12 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004147 dataset = DS004147(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004147(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004147( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004147, title = {Average Task Value}, author = {Cameron D. Hassall and Laurence T. Hunt and Clay B. Holroyd}, doi = {10.18112/openneuro.ds004147.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004147.v1.0.2}, } ``` ## About This Dataset **Average Task Value** Twelve participants completed three learning tasks. In each task the goal was to learn cue-response mappings for six cues. The cues were various coloured shapes. The possible responses were left (‘d’ key) or right (‘k’ key). There were two types of cues. Low-value cues had a feedback validity of 0.5 (i.e., a coin toss). High-value cues had a feedback validity of 0.8 (80% chance of a win if the correct action was chosen). The low-value task contained only low-value cues. The high-value task contained only high-value cues. The mid-value task contained three low-value cues and three high-value cues. Participants completed 144 trials of each task. 
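To make the notion of feedback validity concrete, the following hypothetical simulation (not part of the study's analysis code) counts win feedback for a participant who always chooses the correct response over the 144 trials of a task:

```python
import random


def simulate_wins(validity, n_trials=144, seed=0):
    """Count win feedback when the correct response is chosen on every trial."""
    rng = random.Random(seed)
    return sum(rng.random() < validity for _ in range(n_trials))


low_value = simulate_wins(0.5)   # low-value cue: ~72 wins expected (coin toss)
high_value = simulate_wins(0.8)  # high-value cue: ~115 wins expected
```

Even with perfect responding, a low-value cue yields win feedback on only about half the trials, which is what makes the two cue types differ in average task value.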
Preprint: [https://doi.org/10.1101/2021.09.16.460600](https://doi.org/10.1101/2021.09.16.460600) Analysis code: [https://github.com/chassall/averagetaskvalue](https://github.com/chassall/averagetaskvalue) ## Dataset Information | Dataset ID | `DS004147` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Average Task Value | | Author (year) | `Hassall2022_Average` | | Canonical | — | | Importable as | `DS004147`, `Hassall2022_Average` | | Year | 2022 | | Authors | Cameron D. Hassall, Laurence T. Hunt, Clay B. Holroyd | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004147.v1.0.2](https://doi.org/10.18112/openneuro.ds004147.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004147) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) | [Source URL](https://openneuro.org/datasets/ds004147) | ### Copy-paste BibTeX ```bibtex @dataset{ds004147, title = {Average Task Value}, author = {Cameron D. Hassall and Laurence T. Hunt and Clay B. Holroyd}, doi = {10.18112/openneuro.ds004147.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004147.v1.0.2}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.60806111111111 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 4.0 GB - File count: 12 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004147.v1.0.2 - Source: openneuro - OpenNeuro: [ds004147](https://openneuro.org/datasets/ds004147) - NeMAR: [ds004147](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) ## API Reference Use the `DS004147` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Average Task Value * **Study:** `ds004147` (OpenNeuro) * **Author (year):** `Hassall2022_Average` * **Canonical:** — Also importable as: `DS004147`, `Hassall2022_Average`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004147](https://openneuro.org/datasets/ds004147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004147](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) DOI: [https://doi.org/10.18112/openneuro.ds004147.v1.0.2](https://doi.org/10.18112/openneuro.ds004147.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004147 >>> dataset = DS004147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004147) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004148: eeg dataset, 60 subjects *A test-retest resting and cognitive state EEG dataset* Access recordings and metadata through EEGDash. **Citation:** Yulin Wang, Wei Duan, Lihong Ding, Debo Dong, Xu Lei (2022). *A test-retest resting and cognitive state EEG dataset*. 
[10.18112/openneuro.ds004148.v1.0.0](https://doi.org/10.18112/openneuro.ds004148.v1.0.0) Modality: eeg Subjects: 60 Recordings: 900 License: CC0 Source: openneuro Citations: 12.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004148 dataset = DS004148(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004148(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004148( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004148, title = {A test-retest resting and cognitive state EEG dataset}, author = {Yulin Wang and Wei Duan and Lihong Ding and Debo Dong and Xu Lei}, doi = {10.18112/openneuro.ds004148.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004148.v1.0.0}, } ``` ## About This Dataset **General information** This dataset contains resting-state (eyes closed, eyes open) and cognitive-state (subtraction, music, memory) EEG recordings from 60 participants across three experimental sessions, together with sleep, emotion, mental health, and mind-wandering related measures. **Dataset** **Presentation** > The data collection was initiated in September 2019 and terminated in April 2021. A detailed description of the dataset is being prepared by Yulin Wang and will be submitted to Scientific Data for publication.
**EEG acquisition** > * EEG system (Brain Products GmbH, Steingrabenstr., Germany, 64 electrodes) > * Sampling frequency: 500 Hz > * Impedances were kept below 5 kΩ **Contact** > * If you have any questions or comments, please contact: > * Xu Lei: [xlei@swu.edu.cn](mailto:xlei@swu.edu.cn) > * Yulin Wang: [yulin.wang90.swu@gmail.com](mailto:yulin.wang90.swu@gmail.com) ## Dataset Information | Dataset ID | `DS004148` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A test-retest resting and cognitive state EEG dataset | | Author (year) | `Wang2022_test_retest_resting` | | Canonical | — | | Importable as | `DS004148`, `Wang2022_test_retest_resting` | | Year | 2022 | | Authors | Yulin Wang, Wei Duan, Lihong Ding, Debo Dong, Xu Lei | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004148.v1.0.0](https://doi.org/10.18112/openneuro.ds004148.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004148) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004148) | [Source URL](https://openneuro.org/datasets/ds004148) | ### Copy-paste BibTeX ```bibtex @dataset{ds004148, title = {A test-retest resting and cognitive state EEG dataset}, author = {Yulin Wang and Wei Duan and Lihong Ding and Debo Dong and Xu Lei}, doi = {10.18112/openneuro.ds004148.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004148.v1.0.0}, } ``` ## Technical Details - Subjects: 60 - Recordings: 900 - Tasks: 5 - Channels: 61 - Sampling rate (Hz): 500.0 - Duration (hours): 75.0 - Pathology: Healthy - Modality: Other - Type: Other - Size on disk: 30.7 GB - File count: 900 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004148.v1.0.0 - Source: openneuro - OpenNeuro: [ds004148](https://openneuro.org/datasets/ds004148) - NeMAR: [ds004148](https://nemar.org/dataexplorer/detail?dataset_id=ds004148)
## API Reference Use the `DS004148` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A test-retest resting and cognitive state EEG dataset * **Study:** `ds004148` (OpenNeuro) * **Author (year):** `Wang2022_test_retest_resting` * **Canonical:** — Also importable as: `DS004148`, `Wang2022_test_retest_resting`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 60; recordings: 900; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
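To make the MongoDB-style filtering described in the notes concrete, the sketch below shows how an `$in` condition selects metadata records. The records and the `matches` helper are illustrative stand-ins, not EEGDash internals:

```python
def matches(record: dict, query: dict) -> bool:
    """Return True if a metadata record satisfies a flat query.

    Supports equality and the MongoDB-style {"$in": [...]} operator,
    which is enough to illustrate the queries shown on this page.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical records for illustration only.
records = [
    {"dataset": "ds004148", "subject": "01", "session": "1"},
    {"dataset": "ds004148", "subject": "02", "session": "2"},
    {"dataset": "ds004148", "subject": "03", "session": "1"},
]
query = {"subject": {"$in": ["01", "03"]}, "session": "1"}
print([r["subject"] for r in records if matches(r, query)])  # ['01', '03']
```

Conditions in a query are AND-ed, mirroring how an extra `query` argument combines with the dataset filter.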
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004148](https://openneuro.org/datasets/ds004148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004148](https://nemar.org/dataexplorer/detail?dataset_id=ds004148) DOI: [https://doi.org/10.18112/openneuro.ds004148.v1.0.0](https://doi.org/10.18112/openneuro.ds004148.v1.0.0) NEMAR citation count: 12 ### Examples ```pycon >>> from eegdash.dataset import DS004148 >>> dataset = DS004148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004148) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004148) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004151: eeg dataset, 57 subjects *Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study* Access recordings and metadata through EEGDash. **Citation:** Graciela C. Alatorre-Cruz, Heather Downs, Darcy Hagood, Seth T. Sorensen, D. Keith Williams, Linda Larson-Prior (2022). *Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study*. 
[10.18112/openneuro.ds004151.v1.0.0](https://doi.org/10.18112/openneuro.ds004151.v1.0.0) Modality: eeg Subjects: 57 Recordings: 57 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004151 dataset = DS004151(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004151(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004151( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004151, title = {Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study}, author = {Graciela C. Alatorre-Cruz and Heather Downs and Darcy Hagood and Seth T. Sorensen and D. Keith Williams and Linda Larson-Prior}, doi = {10.18112/openneuro.ds004151.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004151.v1.0.0}, } ``` ## About This Dataset Introduction This EEG dataset contains the electrophysiological signal from fifty-seven obese and non-obese preteens during a stop-signal task. The stimuli were designed and administered using E-Prime software (Version 2) at Arkansas Children Nutrition Center (ACNC), Little Rock, Arkansas. The University of Arkansas for Medical Sciences (UAMS) approved the study protocol. This research was supported by USDA/Agricultural Research Service Project 6026-51000-012-06S. Raw data files The data were acquired with a Geodesic Net Amps 300 system running Netstation 4.5.2 software using the 128-channel Geodesic Hydrocell Sensor Net™ (Magstim EGI., Eugene OR, USA). 
No operations have been performed on the data. Participant data The *Participants.tsv* file contains age, gender, body mass index (BMI), and weight status (WS). How to cite All use of this dataset in a publication context requires the following paper to be cited: Alatorre-Cruz GC, Downs H, Hagood D, Sorensen ST, Williams DK, Larson-Prior L. Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study. Int J Psychophysiol. 2021 Jul;165:56-67. doi: 10.1016/j.ijpsycho.2021.04.003 Contact Questions regarding the EEG data may be addressed to Catalina Alatorre-Cruz ([gcalatorrecruz@uams.edu](mailto:gcalatorrecruz@uams.edu)). Questions regarding the project in general may be addressed to Linda Larson-Prior ([ljlarsonprior@uams.edu](mailto:ljlarsonprior@uams.edu)). ## Dataset Information | Dataset ID | `DS004151` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study | | Author (year) | `AlatorreCruz2022_Effect_obesity` | | Canonical | — | | Importable as | `DS004151`, `AlatorreCruz2022_Effect_obesity` | | Year | 2022 | | Authors | Graciela C. Alatorre-Cruz, Heather Downs, Darcy Hagood, Seth T. Sorensen, D. Keith Williams, Linda Larson-Prior | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004151.v1.0.0](https://doi.org/10.18112/openneuro.ds004151.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004151) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) | [Source URL](https://openneuro.org/datasets/ds004151) | ### Copy-paste BibTeX ```bibtex @dataset{ds004151, title = {Effect of obesity on inhibitory control in preadolescents during stop-signal task.
An event-related potentials study}, author = {Graciela C. Alatorre-Cruz and Heather Downs and Darcy Hagood and Seth T. Sorensen and D. Keith Williams and Linda Larson-Prior}, doi = {10.18112/openneuro.ds004151.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004151.v1.0.0}, } ``` ## Technical Details - Subjects: 57 - Recordings: 57 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 500.0 - Duration (hours): Not calculated - Pathology: Obese - Modality: Visual - Type: Attention - Size on disk: 23.1 GB - File count: 57 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004151.v1.0.0 - Source: openneuro - OpenNeuro: [ds004151](https://openneuro.org/datasets/ds004151) - NeMAR: [ds004151](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) ## API Reference Use the `DS004151` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study * **Study:** `ds004151` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect_obesity` * **Canonical:** — Also importable as: `DS004151`, `AlatorreCruz2022_Effect_obesity`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Obese`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004151](https://openneuro.org/datasets/ds004151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004151](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) DOI: [https://doi.org/10.18112/openneuro.ds004151.v1.0.0](https://doi.org/10.18112/openneuro.ds004151.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004151 >>> dataset = DS004151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004151) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004152: eeg dataset, 21 subjects *Drum Trainer* Access recordings and metadata through EEGDash. **Citation:** Cameron D. 
Hassall, Yan Yan, Laurence T. Hunt (2022). *Drum Trainer*. [10.18112/openneuro.ds004152.v1.1.2](https://doi.org/10.18112/openneuro.ds004152.v1.1.2) Modality: eeg Subjects: 21 Recordings: 21 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004152 dataset = DS004152(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004152(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004152( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004152, title = {Drum Trainer}, author = {Cameron D. Hassall and Yan Yan and Laurence T. Hunt}, doi = {10.18112/openneuro.ds004152.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004152.v1.1.2}, } ``` ## About This Dataset **Drum Trainer** Twenty-one participants learned to play two drumming patterns (pattern 1: AABA, pattern 2: AAABAA) at three different tempos (fast: 150 bpm, medium: 100 bpm, slow: 60 bpm). Responses were recorded using the f and j keys on a standard keyboard. Visual feedback in the form of a coloured circle coincided with each button press and indicated whether the response was early, on time, or late. Feedback was determined by comparing the response time, relative to the previous response, to a window around the target duration. The window was adapted trial-by-trial to ensure that roughly half the outcomes were “on time”. Participant 12 should be excluded from event-locked analyses due to bad triggers (trigger cable was partially disconnected). 
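The condition codes and trigger modifiers listed below combine additively (event code = condition code + modifier, with modifiers in steps of 20), so a recorded event code can be split back into its parts. A minimal decoding sketch, assuming event codes are stored as plain integers; `decode_trigger` is a hypothetical helper, not part of EEGDash:

```python
def decode_trigger(code: int) -> tuple[int, int]:
    """Split a Drum Trainer event code into (condition, modifier).

    Condition codes run 1-12 and trigger modifiers are added in steps
    of 20 (0, 20, 40, 60, 80, 100, 120), so the modifier is the largest
    multiple of 20 strictly below the code.
    """
    modifier = ((code - 1) // 20) * 20
    condition = code - modifier
    return condition, modifier

# Example: 87 = condition 7 ("Medium, pattern 2, left-hand start")
# plus modifier 80 ("on time response").
print(decode_trigger(87))  # (7, 80)
```

The same helper recovers, e.g., `(12, 120)` from code 132 (a "Slow, pattern 2, right-hand start" trial with a wrong-key Red X).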
**Timing** Repeat for 72 trials: fixation dot (until response) -> feedback circle (50 ms). **Condition Codes** 1: “Fast, pattern 1, left-hand start”; 2: “Fast, pattern 1, right-hand start”; 3: “Fast, pattern 2, left-hand start”; 4: “Fast, pattern 2, right-hand start”; 5: “Medium, pattern 1, left-hand start”; 6: “Medium, pattern 1, right-hand start”; 7: “Medium, pattern 2, left-hand start”; 8: “Medium, pattern 2, right-hand start”; 9: “Slow, pattern 1, left-hand start”; 10: “Slow, pattern 1, right-hand start”; 11: “Slow, pattern 2, left-hand start”; 12: “Slow, pattern 2, right-hand start”. **Trigger Modifiers** Add 0: Metronome beat (pre-block); Add 20: Ready screen (pre-block); Add 40: First response (can’t be early, on time, or late); Add 60: Early response; Add 80: On time response; Add 100: Late response; Add 120: Red X (wrong key pressed). ## Dataset Information | Dataset ID | `DS004152` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Drum Trainer | | Author (year) | `Hassall2022_Drum` | | Canonical | — | | Importable as | `DS004152`, `Hassall2022_Drum` | | Year | 2022 | | Authors | Cameron D. Hassall, Yan Yan, Laurence T. Hunt | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004152.v1.1.2](https://doi.org/10.18112/openneuro.ds004152.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004152) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) | [Source URL](https://openneuro.org/datasets/ds004152) | ### Copy-paste BibTeX ```bibtex @dataset{ds004152, title = {Drum Trainer}, author = {Cameron D. Hassall and Yan Yan and Laurence T.
Hunt}, doi = {10.18112/openneuro.ds004152.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004152.v1.1.2}, } ``` ## Technical Details - Subjects: 21 - Recordings: 21 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.451405555555557 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 4.8 GB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004152.v1.1.2 - Source: openneuro - OpenNeuro: [ds004152](https://openneuro.org/datasets/ds004152) - NeMAR: [ds004152](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) ## API Reference Use the `DS004152` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Drum Trainer * **Study:** `ds004152` (OpenNeuro) * **Author (year):** `Hassall2022_Drum` * **Canonical:** — Also importable as: `DS004152`, `Hassall2022_Drum`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004152](https://openneuro.org/datasets/ds004152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004152](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) DOI: [https://doi.org/10.18112/openneuro.ds004152.v1.1.2](https://doi.org/10.18112/openneuro.ds004152.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004152 >>> dataset = DS004152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004152) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004166: eeg dataset, 71 subjects *Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial* Access recordings and metadata through EEGDash. 
**Citation:** Yang Li (data and curation), Wenjin Fu (data), Qiumei Zhang (data), Xiongying Chen (data), Xiaohong Li (data), Boqi Du (data), Xiaoxiang Deng (data), Feng Ji (curation), Qi Dong (curation), Feng Ji (curation), Susanne M. Jaeggi (curation), Chuansheng Chen (curation), Jun Li (data and curation) (2022). *Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial*. [10.18112/openneuro.ds004166.v1.0.0](https://doi.org/10.18112/openneuro.ds004166.v1.0.0) Modality: eeg Subjects: 71 Recordings: 213 License: CC0 Source: openneuro Citations: 1.0 Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004166 dataset = DS004166(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004166(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004166( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004166, title = {Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial}, author = {Yang Li (data and curation) and Wenjin Fu (data) and Qiumei Zhang (data) and Xiongying Chen (data) and Xiaohong Li (data) and Boqi Du (data) and Xiaoxiang Deng (data) and Feng Ji (curation) and Qi Dong (curation) and Feng Ji (curation) and Susanne M. 
Jaeggi (curation) and Chuansheng Chen (curation) and Jun Li (data and curation)}, doi = {10.18112/openneuro.ds004166.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004166.v1.0.0}, } ``` ## About This Dataset **Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial** **Introduction** *Overview:* Both forward and backward working memory span tasks have been used in cognitive training, but no study has been conducted to test whether the two types of trainings are equally effective. Based on data from a larger randomized controlled trial, this study tested the effects of backward span training, forward span training, and no intervention. Event-related potential (ERP) signals were recorded at the pre-, mid-, and post-tests while the subjects were performing a distractor version of the change detection task, which included three conditions (2 targets and 0 distractors [2T0D]; 4 targets and 0 distractors [4T0D]; and 2 targets and 2 distractors [2T2D]). Behavioral data were collected from two additional tasks: a multi-object version of the change detection task, and a suppress task. Compared to no intervention, both forward and backward span trainings led to significantly greater improvement in working memory maintenance, based on indices from both behavioral (Kmax) and ERP data (CDA_2T0D and CDA_4T0D).
Backward span training also improved interference control based on the ERP data (CDA_filtering efficiency) to a greater extent than did forward span training and no intervention, but the three groups did not differ in terms of behavioral indices of interference control. These results have potential implications for optimizing current cognitive training of working memory. **Methods** *Subjects:* University volunteers recruited through advertisements. *Apparatus:* At all three time points (pre-, mid-, and post-tests), we used a 64-channel Synamps RT system (Neuroscan, El Paso, USA) to record the electroencephalogram (EEG) signals. Subjects were required to sit in a comfortable chair inside a darkened, electrically shielded recording chamber during the EEG recording. The electrode impedance was kept below 5 kΩ. The reference electrode was on the left mastoid. Electrodes were placed below and above the right eye to record the vertical electrooculogram (EOG). Electrodes were placed at the outer canthi of each eye to record the horizontal EOG. *EEG dataset:* Backward group (sub-01~sub020); Forward group (sub-101~sub120); Control group (sub-201~sub220); Sudoku group (sub-301~sub320).
Pre-test (ses-01); Mid-test (ses-01); Post-test (ses-01). ## Dataset Information | Dataset ID | `DS004166` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial | | Author (year) | `Li2022` | | Canonical | — | | Importable as | `DS004166`, `Li2022` | | Year | 2022 | | Authors | Yang Li (data and curation), Wenjin Fu (data), Qiumei Zhang (data), Xiongying Chen (data), Xiaohong Li (data), Boqi Du (data), Xiaoxiang Deng (data), Feng Ji (curation), Qi Dong (curation), Feng Ji (curation), Susanne M. Jaeggi (curation), Chuansheng Chen (curation), Jun Li (data and curation) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004166.v1.0.0](https://doi.org/10.18112/openneuro.ds004166.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004166) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) | [Source URL](https://openneuro.org/datasets/ds004166) | ### Copy-paste BibTeX ```bibtex @dataset{ds004166, title = {Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial}, author = {Yang Li (data and curation) and Wenjin Fu (data) and Qiumei Zhang (data) and Xiongying Chen (data) and Xiaohong Li (data) and Boqi Du (data) and Xiaoxiang Deng (data) and Feng Ji (curation) and Qi Dong (curation) and Feng Ji (curation) and Susanne M.
Jaeggi (curation) and Chuansheng Chen (curation) and Jun Li (data and curation)}, doi = {10.18112/openneuro.ds004166.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004166.v1.0.0}, } ``` ## Technical Details - Subjects: 71 - Recordings: 213 - Tasks: 1 - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 77.4 GB - File count: 213 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004166.v1.0.0 - Source: openneuro - OpenNeuro: [ds004166](https://openneuro.org/datasets/ds004166) - NeMAR: [ds004166](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) ## API Reference Use the `DS004166` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial * **Study:** `ds004166` (OpenNeuro) * **Author (year):** `Li2022` * **Canonical:** — Also importable as: `DS004166`, `Li2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 71; recordings: 213; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004166](https://openneuro.org/datasets/ds004166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004166](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) DOI: [https://doi.org/10.18112/openneuro.ds004166.v1.0.0](https://doi.org/10.18112/openneuro.ds004166.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004166 >>> dataset = DS004166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004166) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004194: ieeg dataset, 14 subjects *Visual ECoG dataset* Access recordings and metadata through EEGDash. 
**Citation:** Iris Groen, Kenichi Yuasa, Amber Brands, Giovanni Piantoni, Stephanie Montenegro, Adeen Flinker, Sasha Devore, Orrin Devinsky, Werner Doyle, Patricia Dugan, Daniel Friedman, Nick Ramsey, Natalia Petridou, Jonathan Winawer (2022). *Visual ECoG dataset*. [10.18112/openneuro.ds004194.v3.0.0](https://doi.org/10.18112/openneuro.ds004194.v3.0.0) Modality: ieeg Subjects: 14 Recordings: 209 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004194 dataset = DS004194(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004194(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004194( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004194, title = {Visual ECoG dataset}, author = {Iris Groen and Kenichi Yuasa and Amber Brands and Giovanni Piantoni and Stephanie Montenegro and Adeen Flinker and Sasha Devore and Orrin Devinsky and Werner Doyle and Patricia Dugan and Daniel Friedman and Nick Ramsey and Natalia Petridou and Jonathan Winawer}, doi = {10.18112/openneuro.ds004194.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004194.v3.0.0}, } ``` ## About This Dataset **Details related to access to the data** - Contact person Please contact Iris Groen ([i.i.a.groen@uva.nl](mailto:i.i.a.groen@uva.nl), [https://orcid.org/0000-0002-5536-6128](https://orcid.org/0000-0002-5536-6128)) for more information. 
Please see the following papers for more details on the data collection and preprocessing: Groen IIA, Piantoni G, Montenegro S, Flinker A, Devore S, Devinsky O, Doyle W, Dugan P, Friedman D, Ramsey N, Petridou N, Winawer JA (2022) Temporal dynamics of neural responses in human visual cortex. The Journal of Neuroscience 42(40):7562-7580 ([https://doi.org/10.1523/JNEUROSCI.1812-21.2022](https://doi.org/10.1523/JNEUROSCI.1812-21.2022)) Yuasa K, Groen IIA, Piantoni G, Montenegro S, Flinker A, Devore S, Devinsky O, Doyle W, Dugan P, Friedman D, Ramsey N, Petridou N, Winawer JA. Precise Spatial Tuning of Visually Driven Alpha Oscillations in Human Visual Cortex. eLife 12:RP90387 https://doi.org/10.7554/eLife.90387.1 Brands AM, Devore S, Devinsky O, Doyle W, Flinker A, Friedman D, Dugan P, Winawer JA, Groen IIA (2024). Temporal dynamics of short-term neural adaptation in human visual cortex. https://doi.org/10.1101/2023.09.13.557378 - Practical information to access the data Processed data and model fits reported in Groen et al. (2022) are available in derivatives/Groenetal2022TemporalDynamicsECoG as Matlab .mat files. Matlab code to load, process and plot these data (including 3D renderings of the participants’ surface reconstructions and electrode positions) is available in [https://github.com/WinawerLab/ECoG_utils](https://github.com/WinawerLab/ECoG_utils) and [https://github.com/irisgroen/temporalECoG](https://github.com/irisgroen/temporalECoG). These repositories have dependencies on other Matlab toolboxes (e.g., FieldTrip). See instructions on GitHub for relevant links and guidelines. Processed data and model fits reported in Yuasa et al. (2023) are available in the GitHub repositories described in the paper. Processed data and model fits reported in Brands et al. (2024) are available in derivatives/Brandsetal2024TemporalAdaptationECoGCategories as Python .py files. Python code to process and analyze these data is available in the GitHub repositories described in the paper. **Overview** - Project name Visual ECoG dataset - Years that the project ran Data were collected between 2017-2020. Exact recording dates have been scrubbed for anonymization purposes. - Brief overview of the tasks in the experiment Participants sub-p01 to sub-p11 viewed grayscale visual pattern stimuli that were varied in temporal or spatial properties. Participants sub-p11 to sub-p14 additionally saw color images of different image classes (faces, bodies, buildings, objects, scenes, and scrambled) that were varied in temporal properties. See ‘Independent Variables’ below for more details.
In all tasks, participants were instructed to fixate a cross or point in the center of the screen and monitor it for a color change, i.e. to perform a stimulus-orthogonal task (see the task-specific \_events.json files, e.g., task-prf_events.json, for further details). - Description of the contents of the dataset The data consist of cortical iEEG recordings from 14 epilepsy patients in response to visual stimulation. Patients were implanted with standard clinical surface (grid) and depth electrodes. Two patients were additionally implanted with a high-density research grid. In addition to the iEEG recordings, pre-implantation MRI T1 scans are provided for the purpose of localizing electrodes. Participants performed a varying number of tasks and runs. - Independent variables The data are divided into 6 different sets of stimulus types or events: 1. prf: grayscale, oriented bar stimuli consisting of curved, band-pass filtered lines that were swept across the screen (up to ~16 degrees of visual angle) in a fixed order for the purpose of estimating spatial population receptive fields (pRFs). 2. spatialpattern: grayscale, centrally presented pattern stimuli (~16 degrees of visual angle in diameter) consisting of curved, band-pass filtered lines that were systematically varied in level of contrast and density, as well as various oriented grating stimuli. 3. temporalpattern: grayscale, centrally presented pattern stimuli (~16 degrees of visual angle in diameter) consisting of curved, band-pass filtered lines that were systematically varied in temporal duration and interval. 4. soc: combination of the spatialpattern and temporalpattern stimuli. 5. sixcatloctemporal: color images of six stimulus classes: faces, bodies (hands/feet only), buildings, objects, scenes and scrambled, systematically varied in temporal duration and interval, whereby interval stimuli consisted of direct repeats of the identical image. 6.
sixcatlocisidiff/sixcatlocdiffisi: color images of six stimulus classes: faces, bodies (hands/feet only), buildings, objects, scenes and scrambled, systematically varied in temporal duration and interval, whereby the first interval stimulus was followed by images from either the same or a different category (but not the identical image). Participant-, task- and run-specific stimuli are provided in the /stimuli folder as Matlab .mat files. - Dependent variables The main BIDS folder contains the raw voltage data, split up into individual task runs. The /derivatives/ECoGCAR folder contains a common-average-referenced version of the data. The /derivatives/ECoGBroadband folder contains time-varying broadband responses estimated by band-pass filtering the common-average-referenced voltage data and taking the average power envelope. The /derivatives/ECoGPreprocessed folder contains epoched trials used in Brands et al. (2024). The /derivatives/freesurfer folder contains surface reconstructions of each participant’s T1, along with retinotopic atlas files. The /derivatives/Groen2022TemporalDynamicsECoG folder contains preprocessed data and model fits that can be used to reproduce the results reported in Groen et al. (2022). The /derivatives/Brands2024TemporalAdaptationECoG folder contains preprocessed data and model fits that can be used to reproduce the results reported in Brands et al. (2024). - Quality assessment of the data Data quality and the number of trials per subject vary considerably across patients, for various reasons. First, for each recording session, attempts were made to optimize the environment for running visual experiments; e.g. room illumination was stabilized as much as possible by closing blinds when available, the visual display was calibrated (for most patients), and interference from medical staff or visitors was minimized. However, it was not possible to equate this with great precision across patients and sessions/runs.
Second, implantations were determined based on clinical needs and electrode locations therefore vary across participants. The strength and robustness of the neural responses vary greatly with the electrode location (e.g. early vs higher-level visual cortex), as well as with uncontrolled factors such as how well the electrode made contact with the cortex and whether it was primarily situated on grey matter (surface/grid electrodes) or could be located in white matter (some depth electrodes). Electrodes that were marked as containing epileptic activity by clinicians, or that did not have good signal based on visual inspection of the raw data, are marked as ‘bad’ in the channels.tsv files. Third, patients varied greatly in their cognitive abilities and mental/medical state, which affected their ability to follow task instructions, e.g. to remain alert and maintain fixation. Some patients were able to perform repeated runs of multiple tasks across multiple sessions, while others only managed to do a few runs. All patients included in this dataset have sufficiently good responses in some electrodes/tasks as judged by Groen et al. (2022) and Brands et al. (2024). However, when using this dataset to address further research questions, it is advisable to set stringent requirements on electrode and trial selection. See Groen et al. (2022) and the associated code repository for an example preprocessing pipeline that selected for robust visual responses to temporally- and contrast-varying stimuli. **Methods** - Subjects All participants were intractable epilepsy patients who were undergoing ECoG for the purpose of monitoring seizures. Participants were included if their implantation covered parts of visual cortex and if they consented to participate in research. - Apparatus Data were collected in a clinical setting, i.e. at bedside in the patient’s hospital room. Information about the iEEG recording apparatus is provided in the metadata for each patient.
Information about the visual stimulation equipment and behavioral response recordings are provided in Groen et al., (2022), Yuasa et al., (2023) and Brands et al., (2024). - Experimental location Data were collected at NYU University Langone Hospital (New York, USA) or at University Medical Center Utrecht (The Netherlands). - Missing data Stimulus files are missing for a few runs of sub-02. These are marked as N/A in the associated event files. **Notes** Further participant-specific notes: - For sub-03 and sub-04 the spatial pattern and temporal pattern stimuli are combined in the soc task runs, for the remaining participants these are split across the spatialpattern and temporalpattern task runs. - The pRF task from sub-04 has different prf parameters (bar duration and gap). - The first two runs of the pRF task from sub-05 are not of good quality (participant repeatedly broke fixation). In addition, the triggers in all pRF runs from sub-05 are not correct due to a stimulus coding problem and will need to be re-interpolated if one wishes to use these data. - Participants sub-10 and sub-11 have high density grids in addition to clinical grids. - Note that all stimuli and stimulus parameters can be found in the participant-specific stimulus \*.mat files. 
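The quality-assessment notes above recommend stringent electrode selection, with unusable channels flagged as ‘bad’ in each recording’s channels.tsv. A minimal sketch of that selection step, using an illustrative BIDS-style channels table (the channel names and rows here are made up, not taken from this dataset):

```python
import csv
import io

# Illustrative BIDS channels.tsv content; real files carry more columns
# (type, units, ...) and dataset-specific channel names.
channels_tsv = "name\tstatus\nGRID01\tgood\nGRID02\tbad\nGRID03\tgood\n"

# Keep only channels whose status column is "good".
rows = csv.DictReader(io.StringIO(channels_tsv), delimiter="\t")
good_channels = [row["name"] for row in rows if row["status"] == "good"]
print(good_channels)  # ['GRID01', 'GRID03']
```

With a loaded recording, the analogous MNE-Python step would be to restrict the `Raw` object to `good_channels` (e.g. via `raw.pick`); treat this as a sketch under those assumptions, not the published preprocessing pipeline of Groen et al. (2022).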
## Dataset Information | Dataset ID | `DS004194` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual ECoG dataset | | Author (year) | `Groen2022` | | Canonical | — | | Importable as | `DS004194`, `Groen2022` | | Year | 2022 | | Authors | Iris Groen, Kenichi Yuasa, Amber Brands, Giovanni Piantoni, Stephanie Montenegro, Adeen Flinker, Sasha Devore, Orrin Devinsky, Werner Doyle, Patricia Dugan, Daniel Friedman, Nick Ramsey, Natalia Petridou, Jonathan Winawer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004194.v3.0.0](https://doi.org/10.18112/openneuro.ds004194.v3.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004194) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) | [Source URL](https://openneuro.org/datasets/ds004194) | ### Copy-paste BibTeX ```bibtex @dataset{ds004194, title = {Visual ECoG dataset}, author = {Iris Groen and Kenichi Yuasa and Amber Brands and Giovanni Piantoni and Stephanie Montenegro and Adeen Flinker and Sasha Devore and Orrin Devinsky and Werner Doyle and Patricia Dugan and Daniel Friedman and Nick Ramsey and Natalia Petridou and Jonathan Winawer}, doi = {10.18112/openneuro.ds004194.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004194.v3.0.0}, } ``` ## Technical Details - Subjects: 14 - Recordings: 209 - Tasks: 7 - Channels: 265 (66), 133 (20), 173 (20), 124 (14), 140 (12), 74 (12), 206 (12), 160 (10), 129 (10), 111 (10), 91 (10), 106 (7), 61 (6) - Sampling rate (Hz): 512.0 (197), 2048.0 (12) - Duration (hours): 6.3436859809027775 - Pathology: Epilepsy - Modality: Visual - Type: Perception - Size on disk: 7.8 GB - File count: 209 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004194.v3.0.0 - Source: openneuro - OpenNeuro: 
[ds004194](https://openneuro.org/datasets/ds004194) - NeMAR: [ds004194](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) ## API Reference Use the `DS004194` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual ECoG dataset * **Study:** `ds004194` (OpenNeuro) * **Author (year):** `Groen2022` * **Canonical:** — Also importable as: `DS004194`, `Groen2022`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 14; recordings: 209; tasks: 7. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
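As a conceptual illustration of the query behavior described in the Notes (not the library’s actual implementation), the user-supplied query is AND-ed with a fixed dataset filter and must not itself set `dataset`; `merge_query` below is a hypothetical helper sketching that contract:

```python
# Hypothetical helper mirroring the documented constraint that `query`
# must not contain the key "dataset"; not an eegdash API.
def merge_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # The dataset filter is ANDed with the user's MongoDB-style filters.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds004194", {"subject": {"$in": ["01", "02"]}})
print(merged)  # {'dataset': 'ds004194', 'subject': {'$in': ['01', '02']}}
```

The resulting dict has the same shape as the "Advanced query" example in the Quickstart, with the dataset selection pinned by the class itself.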
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004194](https://openneuro.org/datasets/ds004194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004194](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) DOI: [https://doi.org/10.18112/openneuro.ds004194.v3.0.0](https://doi.org/10.18112/openneuro.ds004194.v3.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004194 >>> dataset = DS004194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004194) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004196: eeg dataset, 4 subjects *Bimodal dataset on Inner speech* Access recordings and metadata through EEGDash. **Citation:** Foteini Liwicki, Vibha Gupta, Rajkumar Saini, Kanjar De, Nosheen Abid, Sumit Rakesh, Scott Wellington, Holly Wilson, Marcus Liwicki, Johan Eriksson (2022). *Bimodal dataset on Inner speech*. 
[10.18112/openneuro.ds004196.v2.0.2](https://doi.org/10.18112/openneuro.ds004196.v2.0.2) Modality: eeg Subjects: 4 Recordings: 4 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004196 dataset = DS004196(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004196(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004196( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004196, title = {Bimodal dataset on Inner speech}, author = {Foteini Liwicki and Vibha Gupta and Rajkumar Saini and Kanjar De and Nosheen Abid and Sumit Rakesh and Scott Wellington and Holly Wilson and Marcus Liwicki and Johan Eriksson}, doi = {10.18112/openneuro.ds004196.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds004196.v2.0.2}, } ``` ## About This Dataset Bimodal dataset on Inner Speech Code available: [https://github.com/LTU-Machine-Learning/Inner_Speech_EEG_FMRI](https://github.com/LTU-Machine-Learning/Inner_Speech_EEG_FMRI) Publication available: [https://www.nature.com/articles/s41597-023-02286-w](https://www.nature.com/articles/s41597-023-02286-w) Abstract: The recognition of inner speech, which could give a ‘voice’ to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition.
Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the temporal resolution of electroencephalography (EEG), and therefore are promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or numerical category. Each of the 8 word stimuli was assessed with 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses. Short dataset description: The dataset consists of 1280 trials in each modality (EEG, fMRI). The stimuli contain 8 words, selected from 2 different categories (social, numeric): Social: child, daughter, father, wife Numeric: four, three, ten, six There are 4 subjects in total: sub-01, sub-02, sub-03, sub-05. Initially, there were 5 participants; however, sub-04's data were rejected due to high fluctuations. Details of valid data are available in the file participants.tsv.
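Since only sub-01, sub-02, sub-03, and sub-05 contain valid data, a subject filter in the style of the Quickstart's "Advanced query" can restrict loading to those participants. A sketch (the commented-out call assumes the `DS004196` class shown above; building the query dict is the point):

```python
# Only four participants have valid data; sub-04 was rejected due to
# high fluctuations (see the dataset description).
valid_subjects = ["01", "02", "03", "05"]
query = {"subject": {"$in": valid_subjects}}

# from eegdash.dataset import DS004196
# dataset = DS004196(cache_dir="./data", query=query)  # requires eegdash
print(query)  # {'subject': {'$in': ['01', '02', '03', '05']}}
```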
For questions please contact: [foteini.liwicki@ltu.se](mailto:foteini.liwicki@ltu.se) ## Dataset Information | Dataset ID | `DS004196` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Bimodal dataset on Inner speech | | Author (year) | `Liwicki2022` | | Canonical | — | | Importable as | `DS004196`, `Liwicki2022` | | Year | 2022 | | Authors | Foteini Liwicki, Vibha Gupta, Rajkumar Saini, Kanjar De, Nosheen Abid, Sumit Rakesh, Scott Wellington, Holly Wilson, Marcus Liwicki, Johan Eriksson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004196.v2.0.2](https://doi.org/10.18112/openneuro.ds004196.v2.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004196) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) | [Source URL](https://openneuro.org/datasets/ds004196) | ### Copy-paste BibTeX ```bibtex @dataset{ds004196, title = {Bimodal dataset on Inner speech}, author = {Foteini Liwicki and Vibha Gupta and Rajkumar Saini and Kanjar De and Nosheen Abid and Sumit Rakesh and Scott Wellington and Holly Wilson and Marcus Liwicki and Johan Eriksson}, doi = {10.18112/openneuro.ds004196.v2.0.2}, url = {https://doi.org/10.18112/openneuro.ds004196.v2.0.2}, } ``` ## Technical Details - Subjects: 4 - Recordings: 4 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512.0 - Duration (hours): 1.511111111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.3 GB - File count: 4 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004196.v2.0.2 - Source: openneuro - OpenNeuro: [ds004196](https://openneuro.org/datasets/ds004196) - NeMAR: [ds004196](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) ## API Reference Use the `DS004196` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bimodal dataset on Inner speech * **Study:** `ds004196` (OpenNeuro) * **Author (year):** `Liwicki2022` * **Canonical:** — Also importable as: `DS004196`, `Liwicki2022`. Modality: `eeg`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004196](https://openneuro.org/datasets/ds004196) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004196](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) DOI: [https://doi.org/10.18112/openneuro.ds004196.v2.0.2](https://doi.org/10.18112/openneuro.ds004196.v2.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004196 >>> dataset = DS004196(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004196) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004200: eeg dataset, 20 subjects *Temporal Scaling* Access recordings and metadata through EEGDash. **Citation:** Cameron D. Hassall, Jack Harley, Nils Kolling, Laurence T. Hunt (2022). *Temporal Scaling*. 
[10.18112/openneuro.ds004200.v1.0.1](https://doi.org/10.18112/openneuro.ds004200.v1.0.1) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004200 dataset = DS004200(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004200(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004200( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004200, title = {Temporal Scaling}, author = {Cameron D. Hassall and Jack Harley and Nils Kolling and Laurence T. Hunt}, doi = {10.18112/openneuro.ds004200.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004200.v1.0.1}, } ``` ## About This Dataset **Temporal Scaling** Twenty participants learned three temporal intervals. There were two subtasks, randomly interleaved. In the Production task, participants produced either a short, medium, or long temporal interval. In the Perception task, participants judged a computer-produced interval as correct or incorrect (again, for a short, medium, or long temporal interval). In both tasks participants received visual feedback (a checkmark or x). 
Preprint: [https://doi.org/10.1101/2020.12.11.421180](https://doi.org/10.1101/2020.12.11.421180) ## Dataset Information | Dataset ID | `DS004200` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Temporal Scaling | | Author (year) | `Hassall2022_Temporal` | | Canonical | — | | Importable as | `DS004200`, `Hassall2022_Temporal` | | Year | 2022 | | Authors | Cameron D. Hassall, Jack Harley, Nils Kolling, Laurence T. Hunt | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004200.v1.0.1](https://doi.org/10.18112/openneuro.ds004200.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004200) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) | [Source URL](https://openneuro.org/datasets/ds004200) | ### Copy-paste BibTeX ```bibtex @dataset{ds004200, title = {Temporal Scaling}, author = {Cameron D. Hassall and Jack Harley and Nils Kolling and Laurence T. Hunt}, doi = {10.18112/openneuro.ds004200.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004200.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 37 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.122722222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.2 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004200.v1.0.1 - Source: openneuro - OpenNeuro: [ds004200](https://openneuro.org/datasets/ds004200) - NeMAR: [ds004200](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) ## API Reference Use the `DS004200` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Temporal Scaling * **Study:** `ds004200` (OpenNeuro) * **Author (year):** `Hassall2022_Temporal` * **Canonical:** — Also importable as: `DS004200`, `Hassall2022_Temporal`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004200](https://openneuro.org/datasets/ds004200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004200](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) DOI: [https://doi.org/10.18112/openneuro.ds004200.v1.0.1](https://doi.org/10.18112/openneuro.ds004200.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004200 >>> dataset = DS004200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004200) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004212: meg dataset, 5 subjects *THINGS-MEG* Access recordings and metadata through EEGDash. **Citation:** Martin N. Hebart, Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, Chris I. Baker (2022). *THINGS-MEG*. 
[10.18112/openneuro.ds004212.v3.0.0](https://doi.org/10.18112/openneuro.ds004212.v3.0.0) Modality: meg Subjects: 5 Recordings: 500 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004212 dataset = DS004212(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004212(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004212( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004212, title = {THINGS-MEG}, author = {Martin N. Hebart and Oliver Contier and Lina Teichmann and Adam H. Rockter and Charles Zheng and Alexis Kidder and Anna Corriveau and Maryam Vaziri-Pashkam and Chris I. Baker}, doi = {10.18112/openneuro.ds004212.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004212.v3.0.0}, } ``` ## About This Dataset **THINGS-MEG** Understanding the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. This densely sampled MEG dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless novel hypotheses at scale while assessing the reproducibility of previous findings. 
The multimodal data allows for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provides the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the [THINGS initiative](https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience. **Dataset overview** We collected extensively sampled object representations using magnetoencephalography (MEG). To this end, we drew on the THINGS database [(Hebart et al., 2019)](https://doi.org/10.1371/journal.pone.0223792), a richly annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually curated naturalistic object images. During the MEG experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=4, 22,448 unique images of 1,854 objects). Images were shown in fast succession (1.5±0.2 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially generated images. A subset of images (n=200) was shown repeatedly in each session. Beyond the core functional imaging data in response to THINGS images, we acquired T1-weighted MRI scans to allow for cortical source localization. Eye movements were monitored in the MEG to ensure participants maintained central fixation. 
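The presentation numbers above imply a rough per-session trial count and duration. A back-of-envelope check, assuming the nominal 1.5 s trial spacing and ignoring breaks and oddball catch trials (these assumptions are ours, not exact figures from the BIDS events files):

```python
# Back-of-envelope numbers derived from the dataset description:
# unique images, number of sessions, per-session repeats, and the
# nominal presentation rate. A sanity check, not an exact trial count.

unique_images = 22448
sessions = 12
repeats_per_session = 200   # images shown repeatedly in each session
trial_duration_s = 1.5      # nominal spacing (1.5 ± 0.2 s)

unique_per_session = unique_images / sessions
trials_per_session = round(unique_per_session) + repeats_per_session
minutes_per_session = trials_per_session * trial_duration_s / 60

print(f"~{trials_per_session} trials/session, "
      f"~{minutes_per_session:.0f} min of stimulation per session")
```

About 2,000 trials and under an hour of stimulation per session, which is consistent with the ~45 recorded hours reported for the full dataset.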
## Dataset Information | Dataset ID | `DS004212` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | THINGS-MEG | | Author (year) | `Hebart2022` | | Canonical | `THINGS_MEG`, `THINGSMEG` | | Importable as | `DS004212`, `Hebart2022`, `THINGS_MEG`, `THINGSMEG` | | Year | 2022 | | Authors | Martin N. Hebart, Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, Chris I. Baker | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004212.v3.0.0](https://doi.org/10.18112/openneuro.ds004212.v3.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004212) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) | [Source URL](https://openneuro.org/datasets/ds004212) | ### Copy-paste BibTeX ```bibtex @dataset{ds004212, title = {THINGS-MEG}, author = {Martin N. Hebart and Oliver Contier and Lina Teichmann and Adam H. Rockter and Charles Zheng and Alexis Kidder and Anna Corriveau and Maryam Vaziri-Pashkam and Chris I. Baker}, doi = {10.18112/openneuro.ds004212.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds004212.v3.0.0}, } ``` ## Technical Details - Subjects: 5 - Recordings: 500 - Tasks: 1 - Channels: 310 - Sampling rate (Hz): 1200.0 - Duration (hours): 45.239891666666665 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 237.7 GB - File count: 500 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004212.v3.0.0 - Source: openneuro - OpenNeuro: [ds004212](https://openneuro.org/datasets/ds004212) - NeMAR: [ds004212](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) ## API Reference Use the `DS004212` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-MEG * **Study:** `ds004212` (OpenNeuro) * **Author (year):** `Hebart2022` * **Canonical:** `THINGS_MEG`, `THINGSMEG` Also importable as: `DS004212`, `Hebart2022`, `THINGS_MEG`, `THINGSMEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 500; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
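Because `query` only supports fields in `ALLOWED_QUERY_FIELDS`, it can help to validate a filter before constructing the dataset. A minimal sketch under assumptions: `validate_query` and the field set below are hypothetical illustrations, not eegdash's actual names or values, so consult the library's `ALLOWED_QUERY_FIELDS` for the real list.

```python
# Hypothetical pre-validation of a MongoDB-style query. The field set
# here is an assumption for illustration only; the real allowed fields
# are defined by eegdash's ALLOWED_QUERY_FIELDS.

ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}  # hypothetical subset

def validate_query(query: dict) -> dict:
    """Reject filters on fields the backend would not understand."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise KeyError(f"unsupported query fields: {sorted(unknown)}")
    return query

q = validate_query({"subject": {"$in": ["01", "02"]}})
```

Failing early like this surfaces a typo in a field name before any metadata lookup or download is attempted.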
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004212](https://openneuro.org/datasets/ds004212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004212](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) DOI: [https://doi.org/10.18112/openneuro.ds004212.v3.0.0](https://doi.org/10.18112/openneuro.ds004212.v3.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004212 >>> dataset = DS004212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004212) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004229: meg dataset, 2 subjects *amnoise* Access recordings and metadata through EEGDash. **Citation:** Maria Mittag, Eric Larson, Maggie Clarke, Samu Taulu, Patricia K. Kuhl (2022). *amnoise*. 
[10.18112/openneuro.ds004229.v1.0.3](https://doi.org/10.18112/openneuro.ds004229.v1.0.3) Modality: meg Subjects: 2 Recordings: 3 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004229 dataset = DS004229(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004229(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004229( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004229, title = {amnoise}, author = {Maria Mittag and Eric Larson and Maggie Clarke and Samu Taulu and Patricia K. Kuhl}, doi = {10.18112/openneuro.ds004229.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004229.v1.0.3}, } ``` ## About This Dataset **ILABS amnoise MEG BIDS dataset** This dataset contains MEG data from a single infant subject. For more information, see the following publications, which should be cited if you use this data: - Mittag, M., Larson, E., Clarke, M., Taulu, S., & Kuhl, P. K. (2021). Auditory deficits in infants at risk for dyslexia during a linguistic sensitive period predict future language. NeuroImage: Clinical, 30, 102578. [https://doi.org/10.1016/j.nicl.2021.102578](https://doi.org/10.1016/j.nicl.2021.102578) - Mittag, M., Larson, E., Taulu, S., Clarke, M., & Kuhl, P. K. (2022). Reduced Theta Sampling in Infants at Risk for Dyslexia across the Sensitive Period of Native Phoneme Learning. International Journal of Environmental Research and Public Health, 19(3), 1180. 
[https://doi.org/10.3390/ijerph19031180](https://doi.org/10.3390/ijerph19031180) The data were converted with MNE-BIDS: - Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) - Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004229` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | amnoise | | Author (year) | `Mittag2022` | | Canonical | — | | Importable as | `DS004229`, `Mittag2022` | | Year | 2022 | | Authors | Maria Mittag, Eric Larson, Maggie Clarke, Samu Taulu, Patricia K. Kuhl | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004229.v1.0.3](https://doi.org/10.18112/openneuro.ds004229.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004229) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) | [Source URL](https://openneuro.org/datasets/ds004229) | ### Copy-paste BibTeX ```bibtex @dataset{ds004229, title = {amnoise}, author = {Maria Mittag and Eric Larson and Maggie Clarke and Samu Taulu and Patricia K. 
Kuhl}, doi = {10.18112/openneuro.ds004229.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004229.v1.0.3}, } ``` ## Technical Details - Subjects: 2 - Recordings: 3 - Tasks: 2 - Channels: 332 - Sampling rate (Hz): 1200.0 - Duration (hours): 0.3313884259259259 - Pathology: Dyslexia - Modality: Auditory - Type: Perception - Size on disk: 1.8 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004229.v1.0.3 - Source: openneuro - OpenNeuro: [ds004229](https://openneuro.org/datasets/ds004229) - NeMAR: [ds004229](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) ## API Reference Use the `DS004229` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) amnoise * **Study:** `ds004229` (OpenNeuro) * **Author (year):** `Mittag2022` * **Canonical:** — Also importable as: `DS004229`, `Mittag2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Dyslexia`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004229](https://openneuro.org/datasets/ds004229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004229](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) DOI: [https://doi.org/10.18112/openneuro.ds004229.v1.0.3](https://doi.org/10.18112/openneuro.ds004229.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004229 >>> dataset = DS004229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004229) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004252: eeg dataset, 1 subjects *Rotation-tolerant representations elucidate the time course of high-level object processing* Access recordings and metadata through EEGDash. **Citation:** Denise Moerel, Tijl Grootswagers, Amanda K. Robinson, Patrick Engeler, Alex O. Holcombe, Thomas A. Carlson (2022). *Rotation-tolerant representations elucidate the time course of high-level object processing*. 
[10.18112/openneuro.ds004252.v1.0.2](https://doi.org/10.18112/openneuro.ds004252.v1.0.2) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004252 dataset = DS004252(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004252(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004252( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004252, title = {Rotation-tolerant representations elucidate the time course of high-level object processing}, author = {Denise Moerel and Tijl Grootswagers and Amanda K. Robinson and Patrick Engeler and Alex O. Holcombe and Thomas A. Carlson}, doi = {10.18112/openneuro.ds004252.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004252.v1.0.2}, } ``` ## About This Dataset Note: only the data for participant 1 has been uploaded. The rest of the dataset will be released upon publication. The pre-print can be found here: [https://doi.org/10.31234/osf.io/wp73u](https://doi.org/10.31234/osf.io/wp73u) The analysis codes, results, and figures can be found on OSF: [https://osf.io/r93es](https://osf.io/r93es). The main folder contains the raw EEG data in standard BIDS format. The ‘derivatives’ folder contains the pre-processed & epoched EEG data, formatted in line with CoSMoMVPA. For codes, results, & figures, see OSF: Engeler, P., Grootswagers, T., Robinson, A. K., Holcombe, A. O., Carlson, T. A., & Moerel, D. (2022, August 17). 
Rotation-tolerant representations elucidate the time course of high-level object processing. Retrieved from osf.io/r93es ## Dataset Information | Dataset ID | `DS004252` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Rotation-tolerant representations elucidate the time course of high-level object processing | | Author (year) | `Moerel2022_Rotation` | | Canonical | — | | Importable as | `DS004252`, `Moerel2022_Rotation` | | Year | 2022 | | Authors | Denise Moerel, Tijl Grootswagers, Amanda K. Robinson, Patrick Engeler, Alex O. Holcombe, Thomas A. Carlson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004252.v1.0.2](https://doi.org/10.18112/openneuro.ds004252.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004252) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) | [Source URL](https://openneuro.org/datasets/ds004252) | ### Copy-paste BibTeX ```bibtex @dataset{ds004252, title = {Rotation-tolerant representations elucidate the time course of high-level object processing}, author = {Denise Moerel and Tijl Grootswagers and Amanda K. Robinson and Patrick Engeler and Alex O. Holcombe and Thomas A. 
Carlson}, doi = {10.18112/openneuro.ds004252.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004252.v1.0.2}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 127 - Sampling rate (Hz): 1000.0 - Duration (hours): 0.7596333333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.3 GB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004252.v1.0.2 - Source: openneuro - OpenNeuro: [ds004252](https://openneuro.org/datasets/ds004252) - NeMAR: [ds004252](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) ## API Reference Use the `DS004252` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004252(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rotation-tolerant representations elucidate the time course of high-level object processing * **Study:** `ds004252` (OpenNeuro) * **Author (year):** `Moerel2022_Rotation` * **Canonical:** — Also importable as: `DS004252`, `Moerel2022_Rotation`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004252](https://openneuro.org/datasets/ds004252) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004252](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) DOI: [https://doi.org/10.18112/openneuro.ds004252.v1.0.2](https://doi.org/10.18112/openneuro.ds004252.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004252 >>> dataset = DS004252(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004252) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004256: eeg dataset, 53 subjects *Encoding of Sound Source Elevation in Human Cortex* Access recordings and metadata through EEGDash. **Citation:** Ole Bialas, Marc Schoenwiesner, Burkhard Maess (2022). *Encoding of Sound Source Elevation in Human Cortex*. 
[10.18112/openneuro.ds004256.v1.0.5](https://doi.org/10.18112/openneuro.ds004256.v1.0.5) Modality: eeg Subjects: 53 Recordings: 53 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004256 dataset = DS004256(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004256(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004256( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004256, title = {Encoding of Sound Source Elevation in Human Cortex}, author = {Ole Bialas and Marc Schoenwiesner and Burkhard Maess}, doi = {10.18112/openneuro.ds004256.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds004256.v1.0.5}, } ``` ## About This Dataset **Overview** The dataset consists of data from two experiments in which subjects were presented bursts of noise from loudspeakers at different elevations. Subjects who participated in either experiment were initially tested in their ability to localize elevated sound sources. Both experiments were conducted in a hemi-anechoic chamber. **Localization Tests** Bursts of pink noise were presented from loudspeakers at different elevations and 10° azimuth (to the listeners right). In the localization test preceding experiment I, these loudspeakers were positioned at elevations of +50°, +25°, 0° and -25° while the localization test preceding experiment II also included a loudspeaker at -50° elevation. 
Localization test data is missing for sub-001, sub-002 and sub-003.

**Deviant Detection (Experiment I)**

Subjects 001-023 participated in this experiment. Subjects heard a long trail of noise from one loudspeaker (adapter), followed by a short burst of noise from another loudspeaker (probe). The elevations of the adapter and probe are encoded in the event values:

- 2: adapter at 37.5°, probe at 12.5°
- 3: adapter at 37.5°, probe at -12.5°
- 4: adapter at 37.5°, probe at -37.5°
- 5: adapter at -37.5°, probe at 37.5°
- 6: adapter at -37.5°, probe at 12.5°
- 7: adapter at -37.5°, probe at -12.5°
- 8: no adapter, any non-target location (deviant)

The behavioral data contains the trial numbers where a deviant was presented and whether the subject responded within one second by pressing a button.

**One-Back (Experiment II)**

Subjects 100-129 participated in this experiment. Subjects heard a long trail of white noise through open headphones, followed by a short burst of noise from one of the loudspeakers. The loudspeaker’s elevation is encoded in the event values:

- 1: 37.5°
- 2: 12.5°
- 3: -23.5°
- 4: -37.5°

Roughly five percent of trials were targets, where subjects heard a beep after the trial, prompting them to localize the previously heard sound. The number of those target trials, as well as the target’s elevation and the subject’s response, can be found in the behavioral data. 
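The deviant-detection event values listed above can be decoded directly with a lookup table. A minimal sketch: the dictionary simply transcribes the listed codes, to be applied to event values the user extracts from the BIDS recordings (e.g., with MNE); nothing here comes from an eegdash API.

```python
# Event-value -> (adapter elevation, probe elevation) in degrees, as
# transcribed from the Experiment I description above. Value 8 marks a
# deviant: no adapter, probe at any non-target location.

ADAPTER_PROBE = {
    2: (37.5, 12.5),
    3: (37.5, -12.5),
    4: (37.5, -37.5),
    5: (-37.5, 37.5),
    6: (-37.5, 12.5),
    7: (-37.5, -12.5),
    8: (None, None),  # no adapter (deviant)
}

adapter, probe = ADAPTER_PROBE[5]
print(adapter, probe)
```

Keeping the mapping as data rather than branching code makes it easy to attach the decoded elevations to an events table for epoching or decoding analyses.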
## Dataset Information | Dataset ID | `DS004256` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Encoding of Sound Source Elevation in Human Cortex | | Author (year) | `Bialas2022` | | Canonical | — | | Importable as | `DS004256`, `Bialas2022` | | Year | 2022 | | Authors | Ole Bialas, Marc Schoenwiesner, Burkhard Maess | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004256.v1.0.5](https://doi.org/10.18112/openneuro.ds004256.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004256) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) | [Source URL](https://openneuro.org/datasets/ds004256) | ### Copy-paste BibTeX ```bibtex @dataset{ds004256, title = {Encoding of Sound Source Elevation in Human Cortex}, author = {Ole Bialas and Marc Schoenwiesner and Burkhard Maess}, doi = {10.18112/openneuro.ds004256.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds004256.v1.0.5}, } ``` ## Technical Details - Subjects: 53 - Recordings: 53 - Tasks: 2 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 42.33703611111111 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 18.2 GB - File count: 53 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004256.v1.0.5 - Source: openneuro - OpenNeuro: [ds004256](https://openneuro.org/datasets/ds004256) - NeMAR: [ds004256](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) ## API Reference Use the `DS004256` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Encoding of Sound Source Elevation in Human Cortex * **Study:** `ds004256` (OpenNeuro) * **Author (year):** `Bialas2022` * **Canonical:** — Also importable as: `DS004256`, `Bialas2022`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 53; recordings: 53; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004256](https://openneuro.org/datasets/ds004256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004256](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) DOI: [https://doi.org/10.18112/openneuro.ds004256.v1.0.5](https://doi.org/10.18112/openneuro.ds004256.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004256 >>> dataset = DS004256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004256) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004262: eeg dataset, 21 subjects *Continuous Feedback Processing* Access recordings and metadata through EEGDash. **Citation:** Cameron D. Hassall, Yan Yan, Laurence T. Hunt (2022). *Continuous Feedback Processing*. 
[10.18112/openneuro.ds004262.v1.0.0](https://doi.org/10.18112/openneuro.ds004262.v1.0.0)

Modality: eeg · Subjects: 21 · Recordings: 21 · License: CC0 · Source: openneuro · Citations: 1.0 · Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS004262

dataset = DS004262(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS004262(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS004262(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds004262,
  title = {Continuous Feedback Processing},
  author = {Cameron D. Hassall and Yan Yan and Laurence T. Hunt},
  doi = {10.18112/openneuro.ds004262.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004262.v1.0.0},
}
```

## About This Dataset

**Continuous Feedback Processing**

Twenty-one participants learned to predict the final level of an animated rising bar. Following the appearance of a fixation cross, participants used the mouse to indicate their guess (i.e., how high they thought the bar would rise). After a delay, participants watched the bar rise to its final level. Points were awarded based on the distance between their guess and the actual level. Each round was cued by the appearance of a gnome (cover story: the gnomes are playing a strongman game while visiting a fair). Cues varied in the degree to which the outcome was predictable (highly predictable, somewhat predictable, unpredictable). Participant 11 was excluded from the analysis due to excessive artifacts.
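The event scheme documented below (under Trigger Modifiers) encodes each marker as a gnome-type condition (1-6) plus a phase offset in tens. A small decoder sketch written from that description (not part of the eegdash API):

```python
# Phase offsets from the "Trigger Modifiers" list on this page (DS004262).
PHASES = {
    0: "Fixation cross",
    10: "Cue (gnome) onset",
    20: "Bar outline appears",
    30: "Participant response",
    40: "Start of animation",
    50: "End of animation",
}

def decode_trigger(code: int) -> tuple[int, str]:
    """Split an event code into (gnome condition 1-6, trial phase)."""
    condition = code % 10
    phase = code - condition
    if condition not in range(1, 7) or phase not in PHASES:
        raise ValueError(f"unexpected trigger code: {code}")
    return condition, PHASES[phase]

print(decode_trigger(34))  # -> (4, 'Participant response')
```

For example, code 34 marks a response in condition 4 (somewhat predictable, usually low).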
**Timing**

fixation cross (400-600 ms) -> gnome cue (1500 ms) -> bar outline (until response) -> animation (1 degree per second until complete) -> final outcome (1000 ms)

**Conditions (Gnome Types)**

1. highly predictable - consistently low
2. highly predictable - consistently high
3. unpredictable - low or high with equal probability
4. somewhat predictable - usually (80%) low, sometimes high
5. somewhat predictable - usually (80%) high, sometimes low
6. unpredictable - random uniform distribution

**Trigger Modifiers**

- Add 0: Fixation cross
- Add 10: Cue (gnome) onset
- Add 20: Bar outline appears
- Add 30: Participant response
- Add 40: Start of animation
- Add 50: End of animation (and start of 1-second delay)

## Dataset Information

| Dataset ID | `DS004262` |
|----------------|------------|
| Title | Continuous Feedback Processing |
| Author (year) | `Hassall2022_Continuous` |
| Canonical | — |
| Importable as | `DS004262`, `Hassall2022_Continuous` |
| Year | 2022 |
| Authors | Cameron D. Hassall, Yan Yan, Laurence T. Hunt |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds004262.v1.0.0](https://doi.org/10.18112/openneuro.ds004262.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds004262) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004262) \| [Source URL](https://openneuro.org/datasets/ds004262) |

### Copy-paste BibTeX

```bibtex
@dataset{ds004262,
  title = {Continuous Feedback Processing},
  author = {Cameron D. Hassall and Yan Yan and Laurence T. Hunt},
  doi = {10.18112/openneuro.ds004262.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004262.v1.0.0},
}
```

## Technical Details

- Subjects: 21
- Recordings: 21
- Tasks: 1
- Channels: 31
- Sampling rate (Hz): 1000.0
- Duration (hours): 8.347866666666667
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 3.5 GB
- File count: 21
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds004262.v1.0.0
- Source: openneuro
- OpenNeuro: [ds004262](https://openneuro.org/datasets/ds004262)
- NeMAR: [ds004262](https://nemar.org/dataexplorer/detail?dataset_id=ds004262)

## API Reference

Use the `DS004262` class to access this dataset programmatically.

### *class* eegdash.dataset.DS004262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Continuous Feedback Processing

* **Study:** `ds004262` (OpenNeuro)
* **Author (year):** `Hassall2022_Continuous`
* **Canonical:** —

Also importable as: `DS004262`, `Hassall2022_Continuous`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004262](https://openneuro.org/datasets/ds004262)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004262](https://nemar.org/dataexplorer/detail?dataset_id=ds004262)

DOI: [https://doi.org/10.18112/openneuro.ds004262.v1.0.0](https://doi.org/10.18112/openneuro.ds004262.v1.0.0)

NeMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS004262
>>> dataset = DS004262(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds004262)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004262)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# DS004264: eeg dataset, 21 subjects

*Steer the Ship*

Access recordings and metadata through EEGDash.

**Citation:** Cameron D. Hassall, Yan Yan, Laurence T. Hunt (2022). *Steer the Ship*.
[10.18112/openneuro.ds004264.v1.1.0](https://doi.org/10.18112/openneuro.ds004264.v1.1.0)

Modality: eeg · Subjects: 21 · Recordings: 21 · License: CC0 · Source: openneuro · Citations: 0.0 · Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS004264

dataset = DS004264(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS004264(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS004264(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds004264,
  title = {Steer the Ship},
  author = {Cameron D. Hassall and Yan Yan and Laurence T. Hunt},
  doi = {10.18112/openneuro.ds004264.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds004264.v1.1.0},
}
```

## About This Dataset

**Steer the Ship**

Twenty-one participants learned to control the trajectory of a ship, represented by a centrally presented rotating arrow. Prior to each round, participants were cued about the degree of controller and environmental noise (“wind”) they would experience. During the round, participants pressed the ‘f’ and ‘j’ keys to apply angular force in either a clockwise or counterclockwise direction. The goal of the task was to keep the ship closely oriented towards a target. Points accumulated depending on the mean distance to the target. The ship would crash if it strayed too far from the target (and the round would end). Each round lasted up to 1 minute. The underlying physics were based on the pole-and-cart problem (i.e., unstable).
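The pole-and-cart-style instability mentioned above means any deviation from the target orientation grows on its own unless corrected. A toy sketch of that behavior, using linearized inverted-pendulum dynamics with Euler integration (all constants are hypothetical; this is not the task's actual simulation code):

```python
def simulate(theta0: float = 0.01, steps: int = 200, dt: float = 0.01,
             gain: float = 9.81) -> float:
    """Integrate theta'' = gain * theta: deviations from upright grow."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega += gain * theta * dt  # unstable restoring term pushes *away*
        theta += omega * dt
    return theta

# Without corrective "button presses", the angle drifts further from zero.
print(simulate())
```

This is why the task requires continuous corrective input: the uncontrolled system diverges rather than settling.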
There were four noise conditions:

1. No noise
2. Environmental noise only (ship occasionally moved on its own)
3. Controller noise only (amount of force varied)
4. Environmental and controller noise

Participant 12 should be excluded from event-locked analyses due to bad triggers (trigger cable was partially disconnected). Also note that the RT for the first button press in each round is not recorded (but is recorded in the participantActions column).

**Trigger Modifiers (added to condition numbers)**

- Add 0: Condition cue
- Add 10: Start of round
- Add 20: Left button press
- Add 30: Right button press
- Add 40: Left button press (computer)
- Add 50: Right button press (computer)
- Add 60: Crash
- Add 70: Success (reached 1 minute of play)
- Add 80: Points displayed

## Dataset Information

| Dataset ID | `DS004264` |
|----------------|------------|
| Title | Steer the Ship |
| Author (year) | `Hassall2022_Steer` |
| Canonical | — |
| Importable as | `DS004264`, `Hassall2022_Steer` |
| Year | 2022 |
| Authors | Cameron D. Hassall, Yan Yan, Laurence T. Hunt |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds004264.v1.1.0](https://doi.org/10.18112/openneuro.ds004264.v1.1.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds004264) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004264) \| [Source URL](https://openneuro.org/datasets/ds004264) |

### Copy-paste BibTeX

```bibtex
@dataset{ds004264,
  title = {Steer the Ship},
  author = {Cameron D. Hassall and Yan Yan and Laurence T. Hunt},
  doi = {10.18112/openneuro.ds004264.v1.1.0},
  url = {https://doi.org/10.18112/openneuro.ds004264.v1.1.0},
}
```

## Technical Details

- Subjects: 21
- Recordings: 21
- Tasks: 1
- Channels: 31
- Sampling rate (Hz): 1000.0
- Duration (hours): 7.460555555555556
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 3.3 GB
- File count: 21
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds004264.v1.1.0
- Source: openneuro
- OpenNeuro: [ds004264](https://openneuro.org/datasets/ds004264)
- NeMAR: [ds004264](https://nemar.org/dataexplorer/detail?dataset_id=ds004264)

## API Reference

Use the `DS004264` class to access this dataset programmatically.

### *class* eegdash.dataset.DS004264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Steer the Ship

* **Study:** `ds004264` (OpenNeuro)
* **Author (year):** `Hassall2022_Steer`
* **Canonical:** —

Also importable as: `DS004264`, `Hassall2022_Steer`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004264](https://openneuro.org/datasets/ds004264)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004264](https://nemar.org/dataexplorer/detail?dataset_id=ds004264)

DOI: [https://doi.org/10.18112/openneuro.ds004264.v1.1.0](https://doi.org/10.18112/openneuro.ds004264.v1.1.0)

NeMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS004264
>>> dataset = DS004264(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds004264)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004264)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# DS004276: meg dataset, 19 subjects

*Auditory single word recognition in MEG*

Access recordings and metadata through EEGDash.

**Citation:** Phoebe Gaston, Christian Brodbeck, Colin Phillips, Ellen Lau (2022). *Auditory single word recognition in MEG*.
[10.18112/openneuro.ds004276.v1.0.0](https://doi.org/10.18112/openneuro.ds004276.v1.0.0)

Modality: meg · Subjects: 19 · Recordings: 19 · License: CC0 · Source: openneuro · Citations: 2.0 · Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS004276

dataset = DS004276(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS004276(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS004276(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds004276,
  title = {Auditory single word recognition in MEG},
  author = {Phoebe Gaston and Christian Brodbeck and Colin Phillips and Ellen Lau},
  doi = {10.18112/openneuro.ds004276.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004276.v1.0.0},
}
```

## About This Dataset

**Auditory single word recognition in MEG**

This dataset is described in Gaston et al. (2022). Stimuli and TextGrids are available from the Massive Auditory Lexical Decision database (Tucker et al., 2019). Converted to BIDS using MNE-BIDS (Appelhoff et al., 2019; Niso et al., 2018).

**References**

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Gaston, P., Brodbeck, C., Phillips, C., & Lau, E. (2022). Auditory word comprehension is less incremental in isolated words. Neurobiology of Language.

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110)

Tucker, B. V., Brenner, D., Danielson, D. K., Kelley, M. C., Nenadić, F., & Sims, M. (2019). The Massive Auditory Lexical Decision (MALD) database. Behavior Research Methods, 51(3), 1187–1204. [https://doi.org/10.3758/s13428-018-1056-1](https://doi.org/10.3758/s13428-018-1056-1)

## Dataset Information

| Dataset ID | `DS004276` |
|----------------|------------|
| Title | Auditory single word recognition in MEG |
| Author (year) | `Gaston2022` |
| Canonical | — |
| Importable as | `DS004276`, `Gaston2022` |
| Year | 2022 |
| Authors | Phoebe Gaston, Christian Brodbeck, Colin Phillips, Ellen Lau |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds004276.v1.0.0](https://doi.org/10.18112/openneuro.ds004276.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds004276) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004276) \| [Source URL](https://openneuro.org/datasets/ds004276) |

### Copy-paste BibTeX

```bibtex
@dataset{ds004276,
  title = {Auditory single word recognition in MEG},
  author = {Phoebe Gaston and Christian Brodbeck and Colin Phillips and Ellen Lau},
  doi = {10.18112/openneuro.ds004276.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds004276.v1.0.0},
}
```

## Technical Details

- Subjects: 19
- Recordings: 19
- Tasks: 2
- Channels: 193
- Sampling rate (Hz): 1000.0
- Duration (hours): 5.2202725
- Pathology: Healthy
- Modality: Auditory
- Type: Perception
- Size on disk: 11.6 GB
- File count: 19
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds004276.v1.0.0
- Source: openneuro
- OpenNeuro: [ds004276](https://openneuro.org/datasets/ds004276)
- NeMAR: [ds004276](https://nemar.org/dataexplorer/detail?dataset_id=ds004276)

## API Reference

Use the `DS004276` class to access this dataset programmatically.

### *class* eegdash.dataset.DS004276(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Auditory single word recognition in MEG

* **Study:** `ds004276` (OpenNeuro)
* **Author (year):** `Gaston2022`
* **Canonical:** —

Also importable as: `DS004276`, `Gaston2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 2.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004276](https://openneuro.org/datasets/ds004276)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004276](https://nemar.org/dataexplorer/detail?dataset_id=ds004276)

DOI: [https://doi.org/10.18112/openneuro.ds004276.v1.0.0](https://doi.org/10.18112/openneuro.ds004276.v1.0.0)

NeMAR citation count: 2

### Examples

```pycon
>>> from eegdash.dataset import DS004276
>>> dataset = DS004276(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds004276)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004276)
* [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md)
* [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md)
* [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md)
* [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md)
* [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md)

# DS004278: meg dataset, 30 subjects

*Sustained Neural Representations of Personally Familiar People and Places During Cued Recall*

Access recordings and metadata through EEGDash.

**Citation:** Alexis Kidder(\*), Anna Corriveau(\*), Lina Teichmann, Susan Wardle, Chris Baker, [(\*) = equal contribution] (2022). *Sustained Neural Representations of Personally Familiar People and Places During Cued Recall*.
[10.18112/openneuro.ds004278.v1.0.1](https://doi.org/10.18112/openneuro.ds004278.v1.0.1)

Modality: meg · Subjects: 30 · Recordings: 30 · License: CC0 · Source: openneuro · Citations: 0.0 · Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS004278

dataset = DS004278(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS004278(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS004278(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds004278,
  title = {Sustained Neural Representations of Personally Familiar People and Places During Cued Recall},
  author = {Alexis Kidder(*) and Anna Corriveau(*) and Lina Teichmann and Susan Wardle and Chris Baker and [(*) = equal contribution]},
  doi = {10.18112/openneuro.ds004278.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds004278.v1.0.1},
}
```

## About This Dataset

**References**

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110)

## Dataset Information

| Dataset ID | `DS004278` |
|----------------|------------|
| Title | Sustained Neural Representations of Personally Familiar People and Places During Cued Recall |
| Author (year) | `Kidder2022` |
| Canonical | `Kidder2024` |
| Importable as | `DS004278`, `Kidder2022`, `Kidder2024` |
| Year | 2022 |
| Authors | Alexis Kidder(\*), Anna Corriveau(\*), Lina Teichmann, Susan Wardle, Chris Baker, [(\*) = equal contribution] |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds004278.v1.0.1](https://doi.org/10.18112/openneuro.ds004278.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds004278) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004278) \| [Source URL](https://openneuro.org/datasets/ds004278) |

### Copy-paste BibTeX

```bibtex
@dataset{ds004278,
  title = {Sustained Neural Representations of Personally Familiar People and Places During Cued Recall},
  author = {Alexis Kidder(*) and Anna Corriveau(*) and Lina Teichmann and Susan Wardle and Chris Baker and [(*) = equal contribution]},
  doi = {10.18112/openneuro.ds004278.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds004278.v1.0.1},
}
```

## Technical Details

- Subjects: 30
- Recordings: 30
- Tasks: 1
- Channels: 306
- Sampling rate (Hz): 1200.0
- Duration (hours): 15.533326388888888
- Pathology: Healthy
- Modality: —
- Type: Memory
- Size on disk: 76.7 GB
- File count: 30
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds004278.v1.0.1
- Source: openneuro
- OpenNeuro: [ds004278](https://openneuro.org/datasets/ds004278)
- NeMAR: [ds004278](https://nemar.org/dataexplorer/detail?dataset_id=ds004278)

## API Reference

Use the `DS004278` class to access this dataset programmatically.

### *class* eegdash.dataset.DS004278(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Sustained Neural Representations of Personally Familiar People and Places During Cued Recall

* **Study:** `ds004278` (OpenNeuro)
* **Author (year):** `Kidder2022`
* **Canonical:** `Kidder2024`

Also importable as: `DS004278`, `Kidder2022`, `Kidder2024`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004278](https://openneuro.org/datasets/ds004278)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004278](https://nemar.org/dataexplorer/detail?dataset_id=ds004278)

DOI: [https://doi.org/10.18112/openneuro.ds004278.v1.0.1](https://doi.org/10.18112/openneuro.ds004278.v1.0.1)

NeMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS004278
>>> dataset = DS004278(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds004278)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004278)
* [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md)
* [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md)
* [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md)
* [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md)
* [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md)

# DS004279: eeg dataset, 56 subjects

*Large Spanish EEG*

Access recordings and metadata through EEGDash.

**Citation:** Carlos Valle Araya, Carolina Mendez-Orellana, Maria Rodriguez-Fernandez (2022). *Large Spanish EEG*.
[10.18112/openneuro.ds004279.v1.1.2](https://doi.org/10.18112/openneuro.ds004279.v1.1.2)

Modality: eeg · Subjects: 56 · Recordings: 60 · License: CC0 · Source: openneuro · Citations: 1.0 · Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS004279

dataset = DS004279(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS004279(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS004279(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds004279,
  title = {Large Spanish EEG},
  author = {Carlos Valle Araya and Carolina Mendez-Orellana and Maria Rodriguez-Fernandez},
  doi = {10.18112/openneuro.ds004279.v1.1.2},
  url = {https://doi.org/10.18112/openneuro.ds004279.v1.1.2},
}
```

## About This Dataset

EEG: silent and perceived speech on 30 Spanish sentences (Large Spanish Speech EEG dataset).

**Authors**

- Carlos Valle
- Carolina Mendez-Orellana
- María Rodríguez-Fernández

**Resources**

- Code available at: [https://github.com/cgvalle/Large_Spanish_EEG](https://github.com/cgvalle/Large_Spanish_EEG)
- Publication: Valle, C., Mendez-Orellana, C., Herff, C., & Rodriguez-Fernandez, M. (2024). Identification of perceived sentences using deep neural networks in EEG. Journal of Neural Engineering, 21(5), 056044.
Abstract: Decoding speech from brain activity can enable communication for individuals with speech disorders. Deep neural networks have shown great potential for speech decoding applications, but the large data sets required for these models are usually not available for neural recordings of speech impaired subjects. Harnessing data from other participants would thus be ideal to create speech neuroprostheses without the need of patient-specific training data. In this study, we recorded 60 sessions from 56 healthy participants using 64 EEG channels and developed a neural network capable of subject-independent classification of perceived sentences. We found that sentence identity can be decoded from subjects without prior training achieving higher accuracy than mixed-subject models. The development of subject-independent models eliminates the need to collect data from a target subject, reducing time and data collection costs during deployment. These results open new avenues for creating speech neuroprostheses when subjects cannot provide training data. Experiment Design: We investigated the neural signals recorded using a 64-channel EEG system during speech perception and silent speech production tasks involving 30 daily use sentences in Spanish. The participants were instructed to listen to a spoken sentence from an audio recording and then silently repeat the sentence without any motor action. The experimental design, a modified version of a previous study (Dash, et al), comprises four segments: rest, perception, preparation, and silent speech production. The rest segment lasted five seconds, presenting a fixation cross (+) before the stimulus onset. During the perception section, the participants listened to the stimulus. Prior to subject S18, the perception section lasted 4 s, with each sentence being repeated 7 times. 
From subject S19 onward, the duration of the perception segment was increased to 5 s to match the duration of the silent speech production segment and the number of repetitions per sentence was decreased to 6 in order to maintain the overall length of the experiment. The preparation segment lasted one second and presented a blank screen, serving as a separation marker between the perception and silent speech production tasks. In the silent speech production segment lasting five seconds, subjects were instructed to silently repeat the previously heard stimulus without motor action only once. It is important to note that this study exclusively focuses on the outcomes derived from the speech perception task. Trials for each block of the 30 stimuli were presented in a randomized order to prevent anticipation and learning biases. Contact: Please contact us at this e-mail address if you have any question: [cgvalle@uc.cl](mailto:cgvalle@uc.cl) ## Dataset Information | Dataset ID | `DS004279` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Large Spanish EEG | | Author (year) | `Araya2022` | | Canonical | — | | Importable as | `DS004279`, `Araya2022` | | Year | 2022 | | Authors | Carlos Valle Araya, Carolina Mendez-Orellana, Maria Rodriguez-Fernandez | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004279.v1.1.2](https://doi.org/10.18112/openneuro.ds004279.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004279) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) | [Source URL](https://openneuro.org/datasets/ds004279) | ### Copy-paste BibTeX ```bibtex @dataset{ds004279, title = {Large Spanish EEG}, author = {Carlos Valle Araya and Carolina Mendez-Orellana and Maria Rodriguez-Fernandez}, doi = {10.18112/openneuro.ds004279.v1.1.2}, url = 
{https://doi.org/10.18112/openneuro.ds004279.v1.1.2}, } ``` ## Technical Details - Subjects: 56 - Recordings: 60 - Tasks: 1 - Channels: 69 - Sampling rate (Hz): 1000.0 - Duration (hours): 53.72865 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 25.2 GB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004279.v1.1.2 - Source: openneuro - OpenNeuro: [ds004279](https://openneuro.org/datasets/ds004279) - NeMAR: [ds004279](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) ## API Reference Use the `DS004279` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Large Spanish EEG * **Study:** `ds004279` (OpenNeuro) * **Author (year):** `Araya2022` * **Canonical:** — Also importable as: `DS004279`, `Araya2022`. Modality: `eeg`. Subjects: 56; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
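As a concrete illustration of the MongoDB-style filtering described above, the sketch below shows how an equality or `$in` condition selects metadata records. This is a plain-Python approximation for illustration only; the `matches` helper and the record dicts are hypothetical, and the real matching is performed by EEGDash against its metadata database.

```python
# Minimal sketch of MongoDB-style record filtering (illustrative only;
# `matches` and the record dicts are hypothetical, not EEGDash API).
def matches(record: dict, query: dict) -> bool:
    for field, condition in query.items():
        if isinstance(condition, dict) and "$in" in condition:
            if record.get(field) not in condition["$in"]:
                return False
        elif record.get(field) != condition:
            return False
    return True

records = [
    {"dataset": "ds004279", "subject": "01"},
    {"dataset": "ds004279", "subject": "02"},
    {"dataset": "ds004279", "subject": "03"},
]
# AND of the fixed dataset filter with a user-supplied $in filter
query = {"dataset": "ds004279", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

The same shape of query can be passed via the `query` argument of the dataset class, restricted to fields in `ALLOWED_QUERY_FIELDS`.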
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004279](https://openneuro.org/datasets/ds004279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004279](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) DOI: [https://doi.org/10.18112/openneuro.ds004279.v1.1.2](https://doi.org/10.18112/openneuro.ds004279.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004279 >>> dataset = DS004279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004279) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004284: eeg dataset, 18 subjects *eeg-neuroforecasting* Access recordings and metadata through EEGDash. **Citation:** Veillette, J., Heald, S., Wittenbrink, B., Nusbaum, H. (2022). *eeg-neuroforecasting*. 
[10.18112/openneuro.ds004284.v1.0.0](https://doi.org/10.18112/openneuro.ds004284.v1.0.0) Modality: eeg Subjects: 18 Recordings: 18 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004284 dataset = DS004284(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004284(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004284( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004284, title = {eeg-neuroforecasting}, author = {Veillette, J. and Heald, S. and Wittenbrink, B. and Nusbaum, H.}, doi = {10.18112/openneuro.ds004284.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004284.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004284` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | eeg-neuroforecasting | | Author (year) | `Veillette2022` | | Canonical | — | | Importable as | `DS004284`, `Veillette2022` | | Year | 2022 | | Authors | Veillette, J., Heald, S., Wittenbrink, B., Nusbaum, H. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004284.v1.0.0](https://doi.org/10.18112/openneuro.ds004284.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004284) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) | [Source URL](https://openneuro.org/datasets/ds004284) | ### Copy-paste BibTeX ```bibtex @dataset{ds004284, title = {eeg-neuroforecasting}, author = {Veillette, J. and Heald, S. and Wittenbrink, B. and Nusbaum, H.}, doi = {10.18112/openneuro.ds004284.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004284.v1.0.0}, } ``` ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.454245 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 16.4 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004284.v1.0.0 - Source: openneuro - OpenNeuro: [ds004284](https://openneuro.org/datasets/ds004284) - NeMAR: [ds004284](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) ## API Reference Use the `DS004284` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) eeg-neuroforecasting * **Study:** `ds004284` (OpenNeuro) * **Author (year):** `Veillette2022` * **Canonical:** — Also importable as: `DS004284`, `Veillette2022`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004284](https://openneuro.org/datasets/ds004284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004284](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) DOI: [https://doi.org/10.18112/openneuro.ds004284.v1.0.0](https://doi.org/10.18112/openneuro.ds004284.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004284 >>> dataset = DS004284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004284) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004295: eeg dataset, 26 subjects *Reward gain and punishment avoidance reversal learning* Access recordings and metadata through EEGDash. **Citation:** Christopher Stolz, Alan Pickering, Erik M. Mueller (2022). *Reward gain and punishment avoidance reversal learning*. 
[10.18112/openneuro.ds004295.v1.0.0](https://doi.org/10.18112/openneuro.ds004295.v1.0.0) Modality: eeg Subjects: 26 Recordings: 26 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004295 dataset = DS004295(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004295(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004295( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004295, title = {Reward gain and punishment avoidance reversal learning}, author = {Christopher Stolz and Alan Pickering and Erik M. Mueller}, doi = {10.18112/openneuro.ds004295.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004295.v1.0.0}, } ``` ## About This Dataset Two reversal learning tasks with different reinforcers (monetary reward vs. a primary threat reinforcer). Positive feedback in the reward task indicated monetary reward (+10 Cent) and negative feedback indicated monetary non-reward (+0 Cent). In the punishment task, positive feedback signaled successful avoidance of a loud white noise burst and negative feedback signaled the application of the noise burst. The white noise burst intensity was titrated to match the monetary reward (+10 Cent) for every participant (81 dB, 84 dB, 87 dB, 90 dB). 
## Dataset Information | Dataset ID | `DS004295` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Reward gain and punishment avoidance reversal learning | | Author (year) | `Stolz2022` | | Canonical | — | | Importable as | `DS004295`, `Stolz2022` | | Year | 2022 | | Authors | Christopher Stolz, Alan Pickering, Erik M. Mueller | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004295.v1.0.0](https://doi.org/10.18112/openneuro.ds004295.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004295) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) | [Source URL](https://openneuro.org/datasets/ds004295) | ### Copy-paste BibTeX ```bibtex @dataset{ds004295, title = {Reward gain and punishment avoidance reversal learning}, author = {Christopher Stolz and Alan Pickering and Erik M. Mueller}, doi = {10.18112/openneuro.ds004295.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004295.v1.0.0}, } ``` ## Technical Details - Subjects: 26 - Recordings: 26 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1024.0 (25), 512.0 - Duration (hours): 34.312777777777775 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 31.5 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004295.v1.0.0 - Source: openneuro - OpenNeuro: [ds004295](https://openneuro.org/datasets/ds004295) - NeMAR: [ds004295](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) ## API Reference Use the `DS004295` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004295(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward gain and punishment avoidance reversal learning * **Study:** `ds004295` (OpenNeuro) * **Author (year):** `Stolz2022` * **Canonical:** — Also importable as: `DS004295`, `Stolz2022`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004295](https://openneuro.org/datasets/ds004295) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004295](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) DOI: [https://doi.org/10.18112/openneuro.ds004295.v1.0.0](https://doi.org/10.18112/openneuro.ds004295.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004295 >>> dataset = DS004295(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004295) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004306: eeg dataset, 12 subjects *EEG Semantic Imagination and Perception Dataset* Access recordings and metadata through EEGDash. **Citation:** Holly Wilson, Mohammad Golbabaee, Michael Proulx, Eamonn O’Neill (2022). *EEG Semantic Imagination and Perception Dataset*. 
[10.18112/openneuro.ds004306.v1.0.2](https://doi.org/10.18112/openneuro.ds004306.v1.0.2) Modality: eeg Subjects: 12 Recordings: 15 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004306 dataset = DS004306(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004306(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004306( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004306, title = {EEG Semantic Imagination and Perception Dataset}, author = {Holly Wilson and Mohammad Golbabaee and Michael Proulx and Eamonn O'Neill}, doi = {10.18112/openneuro.ds004306.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004306.v1.0.2}, } ``` ## About This Dataset This dataset consists of electroencephalography (EEG) signals acquired with a 124-channel ANT-Neuro device. **Participants and Sessions** Thirteen participants are included: ten performed one session and three performed two sessions. All participants had normal or corrected vision and hearing, apart from sub-16. **Task** The task consisted of imagining and perceiving stimuli from three modalities: visual pictorial, visual orthographic (writing), or auditory. Each of the stimuli belonged to one of three categories: guitar, flower, and penguin. These categories were selected because they are semantically dissimilar to one another and because they all have two syllables. 
**Dataset Versions** The dataset provided consists of the raw EEG data, a pre-processed version, and an epoched version. ## Dataset Information | Dataset ID | `DS004306` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Semantic Imagination and Perception Dataset | | Author (year) | `Wilson2022` | | Canonical | — | | Importable as | `DS004306`, `Wilson2022` | | Year | 2022 | | Authors | Holly Wilson, Mohammad Golbabaee, Michael Proulx, Eamonn O’Neill | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004306.v1.0.2](https://doi.org/10.18112/openneuro.ds004306.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004306) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) | [Source URL](https://openneuro.org/datasets/ds004306) | ### Copy-paste BibTeX ```bibtex @dataset{ds004306, title = {EEG Semantic Imagination and Perception Dataset}, author = {Holly Wilson and Mohammad Golbabaee and Michael Proulx and Eamonn O'Neill}, doi = {10.18112/openneuro.ds004306.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004306.v1.0.2}, } ``` ## Technical Details - Subjects: 12 - Recordings: 15 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 1024.0 - Duration (hours): 18.18320014105903 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 654.9 MB - File count: 15 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004306.v1.0.2 - Source: openneuro - OpenNeuro: [ds004306](https://openneuro.org/datasets/ds004306) - NeMAR: [ds004306](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) ## API Reference Use the `DS004306` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004306(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Semantic Imagination and Perception Dataset * **Study:** `ds004306` (OpenNeuro) * **Author (year):** `Wilson2022` * **Canonical:** — Also importable as: `DS004306`, `Wilson2022`. Modality: `eeg`. Subjects: 12; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
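Because this dataset has more recordings (15) than subjects (12), some subjects contributed two sessions, and the note above points to recording-level metadata in `dataset.description`. The sketch below groups recordings by subject, e.g. to build subject-wise train/test splits; the record dicts are hypothetical stand-ins for rows of that metadata, not actual DS004306 contents.

```python
from collections import defaultdict

# Sketch: grouping recording-level metadata by subject so that all
# sessions of one subject stay together (record dicts are hypothetical).
records = [
    {"subject": "01", "session": "1"},
    {"subject": "01", "session": "2"},
    {"subject": "02", "session": "1"},
]
by_subject = defaultdict(list)
for rec in records:
    by_subject[rec["subject"]].append(rec["session"])
print(dict(by_subject))  # {'01': ['1', '2'], '02': ['1']}
```

Grouping this way avoids leaking a subject's second session into a test set when evaluating subject-independent models.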
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004306](https://openneuro.org/datasets/ds004306) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004306](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) DOI: [https://doi.org/10.18112/openneuro.ds004306.v1.0.2](https://doi.org/10.18112/openneuro.ds004306.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004306 >>> dataset = DS004306(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004306) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004315: eeg dataset, 50 subjects *Mood Manipulation and PST, Experiment 1* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Trevor C J Jackson (2022). *Mood Manipulation and PST, Experiment 1*. 
[10.18112/openneuro.ds004315.v1.0.0](https://doi.org/10.18112/openneuro.ds004315.v1.0.0) Modality: eeg Subjects: 50 Recordings: 50 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004315 dataset = DS004315(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004315(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004315( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004315, title = {Mood Manipulation and PST, Experiment 1}, author = {James F Cavanagh and Trevor C J Jackson}, doi = {10.18112/openneuro.ds004315.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004315.v1.0.0}, } ``` ## About This Dataset Reinforcement learning task with 50 healthy controls (25 after a sad mood manipulation, 25 after a neutral mood manipulation). The task has a training section and a testing section and was adapted from [https://doi.org/10.1126/science.1102941](https://doi.org/10.1126/science.1102941). A separate mood manipulation occurred earlier. The task code is included and is written in MATLAB. Data were collected from 2019 to 2021 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. See the .xls sheet in the code folder for more metadata. 
- Trevor CJ Jackson 06/25/2021 ## Dataset Information | Dataset ID | `DS004315` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mood Manipulation and PST, Experiment 1 | | Author (year) | `Cavanagh2022_E1` | | Canonical | — | | Importable as | `DS004315`, `Cavanagh2022_E1` | | Year | 2022 | | Authors | James F Cavanagh, Trevor C J Jackson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004315.v1.0.0](https://doi.org/10.18112/openneuro.ds004315.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004315) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) | [Source URL](https://openneuro.org/datasets/ds004315) | ### Copy-paste BibTeX ```bibtex @dataset{ds004315, title = {Mood Manipulation and PST, Experiment 1}, author = {James F Cavanagh and Trevor C J Jackson}, doi = {10.18112/openneuro.ds004315.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004315.v1.0.0}, } ``` ## Technical Details - Subjects: 50 - Recordings: 50 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 500.0 - Duration (hours): 21.10388888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.8 GB - File count: 50 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004315.v1.0.0 - Source: openneuro - OpenNeuro: [ds004315](https://openneuro.org/datasets/ds004315) - NeMAR: [ds004315](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) ## API Reference Use the `DS004315` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 1 * **Study:** `ds004315` (OpenNeuro) * **Author (year):** `Cavanagh2022_E1` * **Canonical:** — Also importable as: `DS004315`, `Cavanagh2022_E1`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
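The Technical Details above list a 500.0 Hz sampling rate and a total duration in hours; once a recording's `raw` object is loaded, its duration follows directly from the sample count. A small worked example of that arithmetic (the sample count below is an illustrative value, not taken from this dataset):

```python
# Sketch: relating a sample count to recording duration at the
# 500 Hz sampling rate listed above (n_samples is hypothetical).
sfreq = 500.0        # sampling rate in Hz
n_samples = 900_000  # illustrative number of samples in one recording
duration_s = n_samples / sfreq
duration_h = duration_s / 3600.0
print(duration_s, duration_h)  # 1800.0 0.5
```

With real data the same quantities come from `raw.n_times` and `raw.info['sfreq']` in MNE-Python.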
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004315](https://openneuro.org/datasets/ds004315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004315](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) DOI: [https://doi.org/10.18112/openneuro.ds004315.v1.0.0](https://doi.org/10.18112/openneuro.ds004315.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004315 >>> dataset = DS004315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004315) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004317: eeg dataset, 50 subjects *Mood Manipulation and PST, Experiment 2* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Trevor C J Jackson (2022). *Mood Manipulation and PST, Experiment 2*. 
[10.18112/openneuro.ds004317.v1.0.3](https://doi.org/10.18112/openneuro.ds004317.v1.0.3) Modality: eeg Subjects: 50 Recordings: 50 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004317 dataset = DS004317(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004317(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004317( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004317, title = {Mood Manipulation and PST, Experiment 2}, author = {James F Cavanagh and Trevor C J Jackson}, doi = {10.18112/openneuro.ds004317.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004317.v1.0.3}, } ``` ## About This Dataset Reinforcement learning task with 50 healthy controls (25 after a sad mood manipulation, 25 after a happy mood manipulation). The task has a training section and a testing section, adapted from [https://doi.org/10.1126/science.1102941](https://doi.org/10.1126/science.1102941). The mood manipulation occurs before each training block. The task code is included in the Matlab programming language. Data were collected from 2019 to 2021 in the Cognitive Rhythms and Computation Lab at the University of New Mexico. See the .xls sheet in the code folder for more metadata. 
- Trevor CJ Jackson 10/27/2022 ## Dataset Information | Dataset ID | `DS004317` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mood Manipulation and PST, Experiment 2 | | Author (year) | `Cavanagh2022_E2` | | Canonical | — | | Importable as | `DS004317`, `Cavanagh2022_E2` | | Year | 2022 | | Authors | James F Cavanagh, Trevor C J Jackson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004317.v1.0.3](https://doi.org/10.18112/openneuro.ds004317.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004317) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) | [Source URL](https://openneuro.org/datasets/ds004317) | ### Copy-paste BibTeX ```bibtex @dataset{ds004317, title = {Mood Manipulation and PST, Experiment 2}, author = {James F Cavanagh and Trevor C J Jackson}, doi = {10.18112/openneuro.ds004317.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004317.v1.0.3}, } ``` ## Technical Details - Subjects: 50 - Recordings: 50 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 500.0 - Duration (hours): 37.76679166666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 18.3 GB - File count: 50 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004317.v1.0.3 - Source: openneuro - OpenNeuro: [ds004317](https://openneuro.org/datasets/ds004317) - NeMAR: [ds004317](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) ## API Reference Use the `DS004317` class to access this dataset programmatically. 
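Before the class reference, a quick sanity check on the Technical Details above: the listed totals imply an average recording length of about 45 minutes (numbers taken directly from this page).

```python
# Quick sanity arithmetic on the Technical Details above: 50 recordings
# totalling ~37.77 hours average out to about 45 minutes per recording.
total_hours = 37.76679166666667  # "Duration (hours)" from this page
n_recordings = 50                # "Recordings" from this page

avg_minutes = total_hours * 60 / n_recordings
print(f"average recording length: {avg_minutes:.1f} min")
```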
### *class* eegdash.dataset.DS004317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 2 * **Study:** `ds004317` (OpenNeuro) * **Author (year):** `Cavanagh2022_E2` * **Canonical:** — Also importable as: `DS004317`, `Cavanagh2022_E2`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
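The Notes above also state that query keys are restricted to fields in `ALLOWED_QUERY_FIELDS`. The sketch below shows what such a whitelist check could look like; the field set here is a hypothetical subset for illustration only — consult the eegdash source for the real list.

```python
# Hypothetical sketch of the documented field whitelisting. The set
# below is illustrative, NOT EEGDash's actual ALLOWED_QUERY_FIELDS.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}  # hypothetical

def check_query_fields(query):
    """Reject MongoDB-style filters that use non-whitelisted fields."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")

check_query_fields({"subject": {"$in": ["01", "02"]}})  # accepted
```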
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004317](https://openneuro.org/datasets/ds004317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004317](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) DOI: [https://doi.org/10.18112/openneuro.ds004317.v1.0.3](https://doi.org/10.18112/openneuro.ds004317.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004317 >>> dataset = DS004317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004317) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004324: eeg dataset, 26 subjects *ToonFaces* Access recordings and metadata through EEGDash. **Citation:** Luis Alberto Barradas Chacón, Selina C. Wriessnegger (2022). *ToonFaces*. 
[10.18112/openneuro.ds004324.v1.0.0](https://doi.org/10.18112/openneuro.ds004324.v1.0.0) Modality: eeg Subjects: 26 Recordings: 26 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004324 dataset = DS004324(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004324(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004324( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004324, title = {ToonFaces}, author = {Luis Alberto Barradas Chacón and Selina C. Wriessnegger}, doi = {10.18112/openneuro.ds004324.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004324.v1.0.0}, } ``` ## About This Dataset **Images of stylized faces improve ERP features used for emotion detection** For their ease of accessibility and low cost, current Brain-Computer Interfaces (BCI) used to detect subjective emotional and affective states rely largely on electroencephalographic (EEG) signals. Numerous datasets are publicly available for any researcher to design models for affect detection from EEG. However, few designs focus on optimally exploiting the nature of the stimulus elicitation to improve accuracy. We found that artificially enhanced human faces with exaggerated visual features significantly improve some commonly used neural correlates of emotion as measured by event-related potentials (ERPs). These images elicit an enhanced N170 component, well known in facial recognition encoding. 
Our findings suggest that the study of emotion elicitation could exploit consistent stimuli transformations to study the characteristics of ERPs related to specific affective stimuli. Furthermore, this specific result might be useful in the context of affective BCI design, where a higher accuracy in affect detection from EEG can improve the experience of a user. Participant information has been removed for anonymization reasons. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004324` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ToonFaces | | Author (year) | `Chacon2022` | | Canonical | `ToonFaces` | | Importable as | `DS004324`, `Chacon2022`, `ToonFaces` | | Year | 2022 | | Authors | Luis Alberto Barradas Chacón, Selina C. 
Wriessnegger | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004324.v1.0.0](https://doi.org/10.18112/openneuro.ds004324.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004324) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) | [Source URL](https://openneuro.org/datasets/ds004324) | ### Copy-paste BibTeX ```bibtex @dataset{ds004324, title = {ToonFaces}, author = {Luis Alberto Barradas Chacón and Selina C. Wriessnegger}, doi = {10.18112/openneuro.ds004324.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004324.v1.0.0}, } ``` ## Technical Details - Subjects: 26 - Recordings: 26 - Tasks: 1 - Channels: 38 - Sampling rate (Hz): 500.0 - Duration (hours): 19.21581888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.5 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004324.v1.0.0 - Source: openneuro - OpenNeuro: [ds004324](https://openneuro.org/datasets/ds004324) - NeMAR: [ds004324](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) ## API Reference Use the `DS004324` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004324(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ToonFaces * **Study:** `ds004324` (OpenNeuro) * **Author (year):** `Chacon2022` * **Canonical:** `ToonFaces` Also importable as: `DS004324`, `Chacon2022`, `ToonFaces`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004324](https://openneuro.org/datasets/ds004324) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004324](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) DOI: [https://doi.org/10.18112/openneuro.ds004324.v1.0.0](https://doi.org/10.18112/openneuro.ds004324.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004324 >>> dataset = DS004324(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004324) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004330: meg dataset, 30 subjects *The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG)* Access recordings and metadata through EEGDash. **Citation:** Johannes J.D. Singer, Radoslaw M. Cichy, Martin N. Hebart (2022). *The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG)*. [10.18112/openneuro.ds004330.v1.0.0](https://doi.org/10.18112/openneuro.ds004330.v1.0.0) Modality: meg Subjects: 30 Recordings: 270 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004330 dataset = DS004330(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004330(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004330( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004330, title = {The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG)}, author = {Johannes J.D. Singer and Radoslaw M. 
Cichy and Martin N. Hebart}, doi = {10.18112/openneuro.ds004330.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004330.v1.0.0}, } ``` ## About This Dataset This dataset contains the raw MEG data accompanying the paper “The spatiotemporal neural dynamics of object recognition for natural images and line drawings” (Link to preprint: [https://biorxiv.org/cgi/content/short/2022.08.12.503484v1](https://biorxiv.org/cgi/content/short/2022.08.12.503484v1)). Please cite the above paper if you use this data. The dataset includes MEG data for 9 runs for each subject, and events files that contain the onsets, durations, and trial types for each trial in the experiment (excluding catch trials). For a full description of the paradigm and the employed procedures, please see the manuscript. Results for the first-level analyses for this data can be found on OSF ([https://osf.io/vsc6y/](https://osf.io/vsc6y/)). Code for the analysis of the data can be found on GitHub ([https://github.com/Singerjohannes/object_drawing_dynamics/](https://github.com/Singerjohannes/object_drawing_dynamics/)). ## Dataset Information | Dataset ID | `DS004330` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG) | | Author (year) | `Singer2022` | | Canonical | — | | Importable as | `DS004330`, `Singer2022` | | Year | 2022 | | Authors | Johannes J.D. Singer, Radoslaw M. Cichy, Martin N. 
Hebart | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004330.v1.0.0](https://doi.org/10.18112/openneuro.ds004330.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004330) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) | [Source URL](https://openneuro.org/datasets/ds004330) | ### Copy-paste BibTeX ```bibtex @dataset{ds004330, title = {The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG)}, author = {Johannes J.D. Singer and Radoslaw M. Cichy and Martin N. Hebart}, doi = {10.18112/openneuro.ds004330.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004330.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 270 - Tasks: 1 - Channels: 310 - Sampling rate (Hz): 1000.0 - Duration (hours): 36.68270277777778 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 153.7 GB - File count: 270 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004330.v1.0.0 - Source: openneuro - OpenNeuro: [ds004330](https://openneuro.org/datasets/ds004330) - NeMAR: [ds004330](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) ## API Reference Use the `DS004330` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004330(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG) * **Study:** `ds004330` (OpenNeuro) * **Author (year):** `Singer2022` * **Canonical:** — Also importable as: `DS004330`, `Singer2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 270; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004330](https://openneuro.org/datasets/ds004330) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004330](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) DOI: [https://doi.org/10.18112/openneuro.ds004330.v1.0.0](https://doi.org/10.18112/openneuro.ds004330.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004330 >>> dataset = DS004330(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004330) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004346: meg dataset, 1 subject *FLUX: A pipeline for MEG analysis* Access recordings and metadata through EEGDash. **Citation:** Oscar Ferrante, Ling Liu, Tamas Minarik, Urszula Gorska, Tara Ghafari, Huan Luo, Ole Jensen (2022). *FLUX: A pipeline for MEG analysis*. [10.18112/openneuro.ds004346.v1.0.8](https://doi.org/10.18112/openneuro.ds004346.v1.0.8) Modality: meg Subjects: 1 Recordings: 3 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004346 dataset = DS004346(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004346(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004346( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004346, title = {FLUX: A pipeline for MEG analysis}, author = {Oscar Ferrante and Ling Liu and Tamas Minarik and Urszula Gorska and Tara Ghafari and Huan Luo and Ole Jensen}, doi = {10.18112/openneuro.ds004346.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004346.v1.0.8}, } ``` ## About This Dataset **References** Ferrante, O., Liu, L., Minarik, T., Gorska, U., Ghafari, T., Luo, H., & Jensen, O. (2022). FLUX: A pipeline for MEG analysis. NeuroImage, 253, 119047. [https://doi.org/10.1016/j.neuroimage.2022.119047](https://doi.org/10.1016/j.neuroimage.2022.119047) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. 
[https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004346` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FLUX: A pipeline for MEG analysis | | Author (year) | `Ferrante2022` | | Canonical | `FLUX` | | Importable as | `DS004346`, `Ferrante2022`, `FLUX` | | Year | 2022 | | Authors | Oscar Ferrante, Ling Liu, Tamas Minarik, Urszula Gorska, Tara Ghafari, Huan Luo, Ole Jensen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004346.v1.0.8](https://doi.org/10.18112/openneuro.ds004346.v1.0.8) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004346) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) | [Source URL](https://openneuro.org/datasets/ds004346) | ### Copy-paste BibTeX ```bibtex @dataset{ds004346, title = {FLUX: A pipeline for MEG analysis}, author = {Oscar Ferrante and Ling Liu and Tamas Minarik and Urszula Gorska and Tara Ghafari and Huan Luo and Ole Jensen}, doi = {10.18112/openneuro.ds004346.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004346.v1.0.8}, } ``` ## Technical Details - Subjects: 1 - Recordings: 3 - Tasks: 1 - Channels: 343 - Sampling rate (Hz): 1000.0 - Duration (hours): 0.803055 - Pathology: Healthy - Modality: — - Type: Attention - Size on disk: 3.6 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004346.v1.0.8 - Source: openneuro - OpenNeuro: [ds004346](https://openneuro.org/datasets/ds004346) - NeMAR: [ds004346](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) ## API Reference Use the `DS004346` class to access this dataset programmatically. 
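Before the class reference, a back-of-envelope check on the Technical Details above: channel count × sampling rate × duration gives a raw-sample size close to the listed size on disk. The float32 sample width is an assumption; the on-disk BIDS format and sidecar files change the exact figure.

```python
# Back-of-envelope size estimate from the Technical Details above,
# assuming uncompressed float32 samples (an assumption; the actual
# BIDS files and sidecars change the exact figure).
channels = 343             # "Channels"
sfreq_hz = 1000.0          # "Sampling rate (Hz)"
duration_hours = 0.803055  # "Duration (hours)"
bytes_per_sample = 4       # float32 (assumed)

total_bytes = channels * sfreq_hz * duration_hours * 3600 * bytes_per_sample
print(f"~{total_bytes / 1024**3:.2f} GiB")  # close to the listed 3.6 GB
```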
### *class* eegdash.dataset.DS004346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FLUX: A pipeline for MEG analysis * **Study:** `ds004346` (OpenNeuro) * **Author (year):** `Ferrante2022` * **Canonical:** `FLUX` Also importable as: `DS004346`, `Ferrante2022`, `FLUX`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004346](https://openneuro.org/datasets/ds004346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004346](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) DOI: [https://doi.org/10.18112/openneuro.ds004346.v1.0.8](https://doi.org/10.18112/openneuro.ds004346.v1.0.8) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004346 >>> dataset = DS004346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004346) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004347: eeg dataset, 24 subjects *Symmetry perception and affective responses: a combined EEG/EMG study* Access recordings and metadata through EEGDash. **Citation:** Makin, A. D. J, Wilton, M. M, Pecchinenda, A., Bertamini, M. (2022). *Symmetry perception and affective responses: a combined EEG/EMG study*. 
[10.18112/openneuro.ds004347.v1.0.0](https://doi.org/10.18112/openneuro.ds004347.v1.0.0) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004347 dataset = DS004347(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004347(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004347( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004347, title = {Symmetry perception and affective responses: a combined EEG/EMG study}, author = {Makin, A. D. J and Wilton, M. M and Pecchinenda, A. and Bertamini, M.}, doi = {10.18112/openneuro.ds004347.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004347.v1.0.0}, } ``` ## About This Dataset SPN1 Experiment 1 Project 1. After stimulus offset, participants reported whether patterns were regular or random. For full catalogue, see [https://osf.io/2sncj/](https://osf.io/2sncj/) ## Dataset Information | Dataset ID | `DS004347` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Symmetry perception and affective responses: a combined EEG/EMG study | | Author (year) | `Makin2022` | | Canonical | — | | Importable as | `DS004347`, `Makin2022` | | Year | 2022 | | Authors | Makin, A. D. J, Wilton, M. M, Pecchinenda, A., Bertamini, M. 
| | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004347.v1.0.0](https://doi.org/10.18112/openneuro.ds004347.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004347) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) | [Source URL](https://openneuro.org/datasets/ds004347) | ### Copy-paste BibTeX ```bibtex @dataset{ds004347, title = {Symmetry perception and affective responses: a combined EEG/EMG study}, author = {Makin, A. D. J and Wilton, M. M and Pecchinenda, A. and Bertamini, M.}, doi = {10.18112/openneuro.ds004347.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004347.v1.0.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 512.0 - Duration (hours): 6.375555555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.4 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004347.v1.0.0 - Source: openneuro - OpenNeuro: [ds004347](https://openneuro.org/datasets/ds004347) - NeMAR: [ds004347](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) ## API Reference Use the `DS004347` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Symmetry perception and affective responses: a combined EEG/EMG study * **Study:** `ds004347` (OpenNeuro) * **Author (year):** `Makin2022` * **Canonical:** — Also importable as: `DS004347`, `Makin2022`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004347](https://openneuro.org/datasets/ds004347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004347](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) DOI: [https://doi.org/10.18112/openneuro.ds004347.v1.0.0](https://doi.org/10.18112/openneuro.ds004347.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004347 >>> dataset = DS004347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004347) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004348: eeg dataset, 9 subjects *Ear-EEG Sleep Monitoring 2017 (EESM17)* Access recordings and metadata through EEGDash. **Citation:** Kaare B. Mikkelsen, David B. Villadsen, Laura Birch, Marit Otto, Preben Kidmose (2022). *Ear-EEG Sleep Monitoring 2017 (EESM17)*. [10.18112/openneuro.ds004348.v1.0.5](https://doi.org/10.18112/openneuro.ds004348.v1.0.5) Modality: eeg Subjects: 9 Recordings: 18 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004348 dataset = DS004348(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004348(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004348( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004348, title = {Ear-EEG Sleep Monitoring 2017 (EESM17)}, author = {Kaare B. Mikkelsen and David B. 
Villadsen and Laura Birch and Marit Otto and Preben Kidmose}, doi = {10.18112/openneuro.ds004348.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds004348.v1.0.5}, } ``` ## About This Dataset Ear-EEG Sleep Monitoring 2017 (EESM17) data set **Overview** This dataset was collected as part of a research project on ear-EEG sleep monitoring which took place in 2017. The data set contains nightly EEG recordings from 9 healthy participants (‘subjects’). The recordings consist of ‘partial polysomnography’ (PSG) measurements, including EEG, EOG and chin EMG combined with 14 ear-EEG electrodes. **Format** The dataset is formatted according to the Brain Imaging Data Structure. See the ‘dataset_description.json’ file for the specific BIDS version used. The EEG data format chosen is the ‘.set’ format of EEGLAB. For more information, see the following link: [https://bids-specification.readthedocs.io/en/stable/01-introduction.html](https://bids-specification.readthedocs.io/en/stable/01-introduction.html) **Task description** The subjects were instructed to perform two recordings. In the first recording, they had to simply relax in a chair either reading or watching television, prior to going to bed. These recordings are labeled as ‘wake’ task. After this, the real recording started, which took place during the night and began when the subject went to bed. These recordings are labeled as having task ‘sleep’. The recording equipment was mounted in the afternoon, and the recordings took place at the subject’s home. The data set was previously described in the paper: [https://doi.org/10.1186/s12938-017-0400-5](https://doi.org/10.1186/s12938-017-0400-5) When citing this data set, please refer to this paper. Please note that for all subjects, the sleep scoring begins at ‘Lights out’. **Notes** Due to a miscommunication in the original sleep study, two ear-EEG channels, ERB1 and ELB1, were not used. However, they are included in the data set. 
Both electrode positions were very close to the ERB and ELB positions. **Contact** For questions regarding this data set, contact: Kaare Mikkelsen, [Mikkelsen.kaare@ece.au.dk](mailto:Mikkelsen.kaare@ece.au.dk), [https://orcid.org/0000-0002-7360-8629](https://orcid.org/0000-0002-7360-8629) ## Dataset Information | Dataset ID | `DS004348` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Ear-EEG Sleep Monitoring 2017 (EESM17) | | Author (year) | `Mikkelsen2022` | | Canonical | `EESM17` | | Importable as | `DS004348`, `Mikkelsen2022`, `EESM17` | | Year | 2022 | | Authors | Kaare B. Mikkelsen, David B. Villadsen, Laura Birch, Marit Otto, Preben Kidmose | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004348.v1.0.5](https://doi.org/10.18112/openneuro.ds004348.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004348) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) | [Source URL](https://openneuro.org/datasets/ds004348) | ### Copy-paste BibTeX ```bibtex @dataset{ds004348, title = {Ear-EEG Sleep Monitoring 2017 (EESM17)}, author = {Kaare B. Mikkelsen and David B. 
Villadsen and Laura Birch and Marit Otto and Preben Kidmose}, doi = {10.18112/openneuro.ds004348.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds004348.v1.0.5}, } ``` ## Technical Details - Subjects: 9 - Recordings: 18 - Tasks: 2 - Channels: 34 - Sampling rate (Hz): 200.0 - Duration (hours): 17.527765277777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.2 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004348.v1.0.5 - Source: openneuro - OpenNeuro: [ds004348](https://openneuro.org/datasets/ds004348) - NeMAR: [ds004348](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) ## API Reference Use the `DS004348` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2017 (EESM17) * **Study:** `ds004348` (OpenNeuro) * **Author (year):** `Mikkelsen2022` * **Canonical:** `EESM17` Also importable as: `DS004348`, `Mikkelsen2022`, `EESM17`. Modality: `eeg`. Subjects: 9; recordings: 18; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004348](https://openneuro.org/datasets/ds004348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004348](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) DOI: [https://doi.org/10.18112/openneuro.ds004348.v1.0.5](https://doi.org/10.18112/openneuro.ds004348.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004348 >>> dataset = DS004348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004348) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004350: eeg dataset, 24 subjects *Executive Functionning Study for Assessing the Effect of Neurofeedback* Access recordings and metadata through EEGDash. **Citation:** Arnaud Delorme, Tracy Brandmeyer (2022). *Executive Functionning Study for Assessing the Effect of Neurofeedback*. 
[10.18112/openneuro.ds004350.v2.0.0](https://doi.org/10.18112/openneuro.ds004350.v2.0.0) Modality: eeg Subjects: 24 Recordings: 240 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004350 dataset = DS004350(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004350(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004350( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004350, title = {Executive Functionning Study for Assessing the Effect of Neurofeedback}, author = {Arnaud Delorme and Tracy Brandmeyer}, doi = {10.18112/openneuro.ds004350.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004350.v2.0.0}, } ``` ## About This Dataset **Executive Functioning Tasks** The data in this dataset were collected as part of an executive functioning battery consisting of three separate tasks: 1) N-Back (NB) 2) Sustained Attention to Response Task (SART) 3) Local Global (LG) Details of the original experiment in which these tasks were conducted can be found here ([https://doi.org/10.3389/fnhum.2020.00246](https://doi.org/10.3389/fnhum.2020.00246)). *Experiment Design:* Two sessions of each task were conducted on the first and last day of the neurofeedback experiment with 24 participants (mentioned above). **[N-Back (NB)]** Participants performed a visual sequential letter n-back working memory task, with memory load ranging from 1-back to 3-back.
The visual stimuli consisted of a sequence of 4 letters (A, B, C, D) presented in black on a gray background. Participants observed stimuli on a visual display and responded using the spacebar on a provided keyboard. In the 1-back condition, the target was any letter identical to the one presented in the immediately preceding trial. In the 2-back and 3-back conditions, the target was any letter that was presented two or three trials back, respectively. The stimuli were presented on a screen for a duration of 1 s, after which a fixation cross was presented for 500 ms. Participants responded to each stimulus by pressing the spacebar with their right hand upon target presentation. If the spacebar was not pressed within 1500 ms of the stimulus presentation, a new stimulus was presented. Each n-back condition (1, 2, and 3-back) consisted of the presentation of 280 stimuli selected randomly from the 4-letter pool. **[Sustained Attention to Response Task (SART)]** Participants were presented with a series of single numerical digits (randomly selected from 0 to 9; the same digit could not be presented twice in a row) and instructed to press the spacebar for each digit, except when presented with the digit 3. Each number was presented for 400 ms in white on a gray background. The inter-stimulus interval was 2 s irrespective of the button press, and a fixation cross was present at all times except when the digits were presented. Participants performed the SART for approximately 10 minutes, corresponding to 250 digit presentations. **[Local Global (LG)]** Participants were shown large letters (H and T) on a computer screen. The large letters were made up of an aggregate of smaller letters that could be congruent (i.e., a large H made of small Hs or a large T made of small Ts) or incongruent (a large H made of small Ts or a large T made of small Hs) with respect to the large letter. The small letters were 0.8 cm high and the large letters were 8 cm high on the computer screen.
A fixation cross was present at all times except when the stimulus letters were presented. Letters were shown on the computer screen until the subject responded. After each subject’s response, there was a delay of 1 s before the next stimulus was presented. Before each sequence of letters, instructions were shown on a computer screen indicating to participants whether they should respond to the presence of small (local condition) or large (global condition) letters. The participants were instructed to categorize either the large or the small letters and to press the letter H or T on the computer keyboard to indicate their choice. *Data Processing:* Data processing was performed in Matlab and EEGLAB. The EEG data were average referenced and down-sampled from 2048 to 256 Hz. A high-pass filter at 1 Hz using an elliptical non-linear filter was applied, and the data were then average referenced. *Note:* The data files in this dataset were converted into the .set format for EEGLAB. The .bdf files that were converted for each of the tasks can be found in the sourcedata folder. *Exclusion Note:* The second run of NB in session 1 of sub-11 and the run of SART in session 1 of sub-18 were both excluded due to issues with conversion to .set format. However, the .bdf files of these runs can be found in the sourcedata folder.
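The data-processing steps described above (down-sampling from 2048 to 256 Hz, a 1 Hz elliptical high-pass, average referencing) were performed in Matlab/EEGLAB; a rough NumPy/SciPy sketch of an equivalent pipeline on synthetic data follows. The filter order and ripple values here are illustrative guesses, not the original parameters.

```python
import numpy as np
from scipy import signal

def preprocess(eeg, fs_in=2048, fs_out=256, hp_hz=1.0):
    """Approximate the described pipeline on an (n_channels, n_samples) array."""
    # Down-sample 2048 Hz -> 256 Hz (factor 8), with anti-alias filtering.
    eeg = signal.decimate(eeg, fs_in // fs_out, axis=1, zero_phase=True)
    # 1 Hz high-pass with an elliptic IIR filter (order/ripple are
    # illustrative stand-ins for the original elliptical filter settings).
    sos = signal.ellip(4, 0.5, 40, hp_hz, btype="highpass",
                       fs=fs_out, output="sos")
    eeg = signal.sosfiltfilt(sos, eeg, axis=1)
    # Average reference: subtract the mean across channels at each sample.
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 2048 * 10))  # 64 channels, 10 s at 2048 Hz
clean = preprocess(raw)
print(clean.shape)  # (64, 2560)
```

Note that `sosfiltfilt` is zero-phase, whereas the "non-linear" filter mentioned above suggests the original pipeline may have had different phase behaviour.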
## Dataset Information | Dataset ID | `DS004350` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Executive Functionning Study for Assessing the Effect of Neurofeedback | | Author (year) | `Delorme2022` | | Canonical | — | | Importable as | `DS004350`, `Delorme2022` | | Year | 2022 | | Authors | Arnaud Delorme, Tracy Brandmeyer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004350.v2.0.0](https://doi.org/10.18112/openneuro.ds004350.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004350) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) | [Source URL](https://openneuro.org/datasets/ds004350) | ### Copy-paste BibTeX ```bibtex @dataset{ds004350, title = {Executive Functionning Study for Assessing the Effect of Neurofeedback}, author = {Arnaud Delorme and Tracy Brandmeyer}, doi = {10.18112/openneuro.ds004350.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004350.v2.0.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 240 - Tasks: 5 - Channels: 64 - Sampling rate (Hz): 256.0 - Duration (hours): 41.21611111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.4 GB - File count: 240 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004350.v2.0.0 - Source: openneuro - OpenNeuro: [ds004350](https://openneuro.org/datasets/ds004350) - NeMAR: [ds004350](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) ## API Reference Use the `DS004350` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004350(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Executive Functionning Study for Assessing the Effect of Neurofeedback * **Study:** `ds004350` (OpenNeuro) * **Author (year):** `Delorme2022` * **Canonical:** — Also importable as: `DS004350`, `Delorme2022`. Modality: `eeg`. Subjects: 24; recordings: 240; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004350](https://openneuro.org/datasets/ds004350) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004350](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) DOI: [https://doi.org/10.18112/openneuro.ds004350.v2.0.0](https://doi.org/10.18112/openneuro.ds004350.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004350 >>> dataset = DS004350(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004350) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004356: eeg dataset, 22 subjects *Subcortical responses to music and speech are alike while cortical responses diverge* Access recordings and metadata through EEGDash. **Citation:** Tong Shan, Madeline S. Cappelloni, Ross K. Maddox (2022). *Subcortical responses to music and speech are alike while cortical responses diverge*. 
[10.18112/openneuro.ds004356.v2.2.1](https://doi.org/10.18112/openneuro.ds004356.v2.2.1) Modality: eeg Subjects: 22 Recordings: 24 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004356 dataset = DS004356(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004356(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004356( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004356, title = {Subcortical responses to music and speech are alike while cortical responses diverge}, author = {Tong Shan and Madeline S. Cappelloni and Ross K. Maddox}, doi = {10.18112/openneuro.ds004356.v2.2.1}, url = {https://doi.org/10.18112/openneuro.ds004356.v2.2.1}, } ``` ## About This Dataset **README** **Details related to access to the data** Please contact the following authors for further information: - Tong Shan (email: [tshan@ur.rochester.edu](mailto:tshan@ur.rochester.edu)) - Ross K. Maddox (email: [rmaddox@ur.rochester.edu](mailto:rmaddox@ur.rochester.edu)) **Overview** The goal of this study is to derive the Auditory Brainstem Response (ABR) from continuous music and speech stimuli using a deconvolution method.
Data were collected from June to August 2021. The details of the experiment can be found in Shan et al. (2024). There were two phases in this experiment. In the first phase, ten trials of one-minute clicks were presented to the subjects. In the second phase, 12 types (six genres of music and six types of speech) of 12 s stimulus clips were presented. There were 40 trials for each type, in shuffled order. Between trials, there was a 0.5 s pause. The code for stimulus preprocessing and EEG analysis is available on GitHub: [https://github.com/maddoxlab/Music_vs_Speech_abr](https://github.com/maddoxlab/Music_vs_Speech_abr) **Format** This dataset is formatted according to the EEG Brain Imaging Data Structure. It includes EEG recordings from subject 001 to subject 024 (excluding subject 014 and subject 021) in raw BrainVision format (the `.eeg`, `.vhdr`, and `.vmrk` triplet) and stimulus files in `.wav` format. For some subjects (sub-03 & sub-19), there are 2 “runs” of data: the first run (`run-01`) contains only the click phase (phase 1), and the second run includes the data for the ABR analysis. Triggers with values of “1” were recorded at the onset of the stimulus, and shortly after, triggers with values of “4” or “8” were stamped to indicate the stimulus types and the trial number out of 40. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 \*\* (b + 2). Triggers of “999” denote the start of a new segment of EEG. We’ve specified these trial numbers and more metadata of the events in each of the `*_eeg_events.tsv` files, which is sufficient to know which trial corresponded to which type of stimulus and which file. **Subjects** 24 subjects participated in this study. **Subject inclusion criteria** 1. Age between 18 and 40. 2. Normal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz. 3. Speak English as their primary language. 4. Self-reported normal or correctable to normal vision.
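The trigger encoding described under **Format** above can be unpacked in a few lines of Python. This is an illustrative reading of the README's description (one trigger of value 2 \*\* (b + 2) per set bit b of the trial number), not code from the original experiment.

```python
def trial_triggers(trial_number: int) -> list[int]:
    """Trigger values encoding a trial number, per the README's scheme.

    The trial number (1-40) is written in binary; each set bit b
    (b = 0 for the least-significant bit) is stamped as a separate
    trigger with value 2 ** (b + 2), i.e. 4, 8, 16, ...
    """
    return [2 ** (b + 2)
            for b in range(trial_number.bit_length())
            if trial_number & (1 << b)]

print(trial_triggers(1))  # [4]
print(trial_triggers(2))  # [8]
print(trial_triggers(3))  # [4, 8]
```

This matches the "4" and "8" trigger values mentioned in the README for low trial numbers; in practice the `*_eeg_events.tsv` files already spell out the decoded trial numbers.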
**Subject exclusion criteria** 1. Subject 014 self-withdrew partway through the experiment. 2. Subject 021 was excluded because of technical problems during data collection that led to unusable data. Therefore, after excluding the two subjects, there were 22 subjects (11 male and 11 female) with an age of 22.7 ± 5.1 (mean ± SD) years that we included in the analysis. Please see `subjects.tsv` for more demography. **Apparatus** Subjects were seated in a sound-isolating booth on a chair in front of a 24-inch BenQ monitor with a viewing distance of approximately 60 cm. Stimuli were presented at an average level of 65 dB SPL and a sampling rate of 48000 Hz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. The stimulus presentation for the experiment was controlled by a python script using a custom package, `expyfun`. ## Dataset Information | Dataset ID | `DS004356` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Subcortical responses to music and speech are alike while cortical responses diverge | | Author (year) | `Shan2022` | | Canonical | — | | Importable as | `DS004356`, `Shan2022` | | Year | 2022 | | Authors | Tong Shan, Madeline S. Cappelloni, Ross K. Maddox | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004356.v2.2.1](https://doi.org/10.18112/openneuro.ds004356.v2.2.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004356) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) | [Source URL](https://openneuro.org/datasets/ds004356) | ### Copy-paste BibTeX ```bibtex @dataset{ds004356, title = {Subcortical responses to music and speech are alike while cortical responses diverge}, author = {Tong Shan and Madeline S. Cappelloni and Ross K. 
Maddox}, doi = {10.18112/openneuro.ds004356.v2.2.1}, url = {https://doi.org/10.18112/openneuro.ds004356.v2.2.1}, } ``` ## Technical Details - Subjects: 22 - Recordings: 24 - Tasks: 1 - Channels: 34 - Sampling rate (Hz): 10000.0 - Duration (hours): 46.26655555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 213.1 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004356.v2.2.1 - Source: openneuro - OpenNeuro: [ds004356](https://openneuro.org/datasets/ds004356) - NeMAR: [ds004356](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) ## API Reference Use the `DS004356` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Subcortical responses to music and speech are alike while cortical responses diverge * **Study:** `ds004356` (OpenNeuro) * **Author (year):** `Shan2022` * **Canonical:** — Also importable as: `DS004356`, `Shan2022`. Modality: `eeg`. Subjects: 22; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004356](https://openneuro.org/datasets/ds004356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004356](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) DOI: [https://doi.org/10.18112/openneuro.ds004356.v2.2.1](https://doi.org/10.18112/openneuro.ds004356.v2.2.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004356 >>> dataset = DS004356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004356) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004357: eeg dataset, 16 subjects *Features-EEG* Access recordings and metadata through EEGDash. **Citation:** Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas (2022). *Features-EEG*. 
[10.18112/openneuro.ds004357.v1.0.1](https://doi.org/10.18112/openneuro.ds004357.v1.0.1) Modality: eeg Subjects: 16 Recordings: 16 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004357 dataset = DS004357(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004357(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004357( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004357, title = {Features-EEG}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004357.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004357.v1.0.1}, } ``` ## About This Dataset Grootswagers T., Robinson A.K., Shatek S.M., Carlson T.A. (2024). Mapping the Dynamics of Visual Feature Coding: Insights into Perception and Integration. PLoS Computational Biology, 20(1) e1011760 [https://doi.org/10.1371/journal.pcbi.1011760](https://doi.org/10.1371/journal.pcbi.1011760) Experiment Details Electroencephalography recordings from 16 subjects viewing fast streams of Gabor-like stimuli. Images were presented in rapid serial visual presentation streams at 6.67 Hz and 20 Hz rates. Participants performed an orthogonal fixation colour change detection task.
Experiment length: 1 hour ## Dataset Information | Dataset ID | `DS004357` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Features-EEG | | Author (year) | `Grootswagers2022_EEG` | | Canonical | — | | Importable as | `DS004357`, `Grootswagers2022_EEG` | | Year | 2022 | | Authors | Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004357.v1.0.1](https://doi.org/10.18112/openneuro.ds004357.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004357) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) | [Source URL](https://openneuro.org/datasets/ds004357) | ### Copy-paste BibTeX ```bibtex @dataset{ds004357, title = {Features-EEG}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004357.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004357.v1.0.1}, } ``` ## Technical Details - Subjects: 16 - Recordings: 16 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.307033333333331 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 19.3 GB - File count: 16 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004357.v1.0.1 - Source: openneuro - OpenNeuro: [ds004357](https://openneuro.org/datasets/ds004357) - NeMAR: [ds004357](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) ## API Reference Use the `DS004357` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004357(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Features-EEG * **Study:** `ds004357` (OpenNeuro) * **Author (year):** `Grootswagers2022_EEG` * **Canonical:** — Also importable as: `DS004357`, `Grootswagers2022_EEG`. Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
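The `query` parameter accepts MongoDB-style filters. As a rough illustration of the matching semantics only (not eegdash's actual implementation), a record matches when every field condition holds: a bare value tests equality and `{"$in": [...]}` tests membership:

```python
def matches(record: dict, query: dict) -> bool:
    """Illustrative MongoDB-style matcher supporting equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            # {"$in": [...]} matches when the record's value is in the list
            if value not in cond["$in"]:
                return False
        elif value != cond:
            # A bare value is an equality test
            return False
    return True

# Toy metadata records, shaped loosely like the fields shown above
records = [
    {"dataset": "ds004357", "subject": "01"},
    {"dataset": "ds004357", "subject": "03"},
]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
```

In eegdash itself, such a filter is additionally AND-ed with the dataset selection and restricted to fields in `ALLOWED_QUERY_FIELDS`.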
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004357](https://openneuro.org/datasets/ds004357) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004357](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) DOI: [https://doi.org/10.18112/openneuro.ds004357.v1.0.1](https://doi.org/10.18112/openneuro.ds004357.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004357 >>> dataset = DS004357(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004357) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004362: eeg dataset, 109 subjects *EEG Motor Movement/Imagery Dataset* Access recordings and metadata through EEGDash. **Citation:** Gerwin Schalk, Dennis J McFarland, Thilo Hinterberger, Niels Birbaumer, Jonathan R Wolpaw (2022). *EEG Motor Movement/Imagery Dataset*. 
[10.18112/openneuro.ds004362.v1.0.0](https://doi.org/10.18112/openneuro.ds004362.v1.0.0) Modality: eeg Subjects: 109 Recordings: 1526 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004362 dataset = DS004362(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004362(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004362( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004362, title = {EEG Motor Movement/Imagery Dataset}, author = {Gerwin Schalk and Dennis J McFarland and Thilo Hinterberger and Niels Birbaumer and Jonathan R Wolpaw}, doi = {10.18112/openneuro.ds004362.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004362.v1.0.0}, } ``` ## About This Dataset ## Acknowledgements This data set was originally created and contributed to PhysioBank by Gerwin Schalk (schalk at wadsworth dot org) and his colleagues at the BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY. W.A. Sarnacki collected the data. Aditya Joshi compiled the dataset and prepared the documentation. D.J. McFarland and J.R. Wolpaw were responsible for experimental design and project oversight, respectively. This work was supported by grants from NIH/NIBIB (EB006356 (GS) and EB00856 (JRW and GS)). 
**To access the initial publication of this dataset, please visit this link to PhysioBank: https://physionet.org/content/eegmmidb/1.0.0/** **Experiment Protocol** > This data set consists of over 1500 one- and two-minute EEG recordings, obtained from 109 volunteers, as described below. Subjects performed different motor/imagery tasks while 64-channel EEG were recorded using the BCI2000 system ([http://www.bci2000.org](http://www.bci2000.org)). 
Each subject performed 14 experimental runs: two one-minute baseline runs (one with eyes open, one with eyes closed), and three two-minute runs of each of the four following tasks: **[Task 1]** A target appears on either the left or the right side of the screen. The subject opens and closes the corresponding fist until the target disappears. Then the subject relaxes. **[Task 2]** A target appears on either the left or the right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears. Then the subject relaxes. **[Task 3]** A target appears on either the top or the bottom of the screen. The subject opens and closes either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears. Then the subject relaxes. **[Task 4]** A target appears on either the top or the bottom of the screen. The subject imagines opening and closing either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears. Then the subject relaxes. In summary, the experimental runs were: > 1. Baseline, eyes open > 2. Baseline, eyes closed > 3. Task 1 (open and close left or right fist) > 4. Task 2 (imagine opening and closing left or right fist) > 5. Task 3 (open and close both fists or both feet) > 6. Task 4 (imagine opening and closing both fists or both feet) > 7. Task 1 > 8. Task 2 > 9. Task 3 > 10. Task 4 > 11. Task 1 > 12. Task 2 > 13. Task 3 > 14. Task 4 Each event code includes an event type indicator (T0, T1, or T2) that is concatenated to the Task # it belongs with (i.e., TASK1T2). The event type indicators change definition depending on the Task # they are associated with. 
For example, TASK1T2 would correspond to the onset of real motion in the right fist, while TASK3T2 would correspond to onset of real motion in both feet: **[T0]** corresponds to rest **[T1]** corresponds to onset of motion (real or imagined) of: - the left fist (in runs 3, 4, 7, 8, 11, and 12; for Task 1 (real) and Task 2 (imagined)) - both fists (in runs 5, 6, 9, 10, 13, and 14; for Task 3 (real) and Task 4 (imagined)) **[T2]** corresponds to onset of motion (real or imagined) of: - the right fist (in runs 3, 4, 7, 8, 11, and 12; Task 1 (real) and Task 2 (imagined)) - both feet (in runs 5, 6, 9, 10, 13, and 14; for Task 3 (real) and Task 4 (imagined)) *Note:* The data files in this dataset were converted into the .set format for EEGLAB. The event codes in the .set files of this dataset will contain the concatenated event codes above for all event files for clarity purposes. The non-converted .edf files along with the accompanying PhysioBank-compatible annotation files for all the runs of each subject can be found in the sourcedata folder. In the non-converted .edf files the event codes will only be shown as T0, T1, and T2 regardless of task type. All the Matlab scripts used for the .set conversion and renaming of event codes of the PhysioBank .edf files can be found in the code folder. **Montage** The EEGs were recorded from 64 electrodes as per the international 10-10 system (excluding electrodes Nz, F9, F10, FT9, FT10, A1, A2, TP9, TP10, P9, and P10), as shown in the figure in the code folder. The numbers below each electrode name indicate the order in which they appear in the records; note that signals in the records are numbered from 0 to 63, while the numbers in the figure range from 1 to 64. 
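The run schedule and event-code scheme above can be sketched in code. The run-to-task mapping and the T0/T1/T2 meanings below are taken directly from the README; the helper functions themselves are only an illustrative decoder, not part of EEGDash or the dataset's conversion scripts:

```python
def run_to_task(run: int) -> str:
    """Runs 1-2 are baselines; runs 3-14 cycle through Tasks 1-4."""
    if run == 1:
        return "baseline_eyes_open"
    if run == 2:
        return "baseline_eyes_closed"
    return f"task{(run - 3) % 4 + 1}"

def decode_event(run: int, code: str) -> str:
    """Map a T0/T1/T2 indicator to its meaning for a given run.

    T1/T2 mean left/right fist in Task 1 and 2 runs (3, 4, 7, 8, 11, 12),
    and both fists / both feet in Task 3 and 4 runs (5, 6, 9, 10, 13, 14).
    """
    task = run_to_task(run)
    if code == "T0":
        return "rest"
    fists_or_feet = task in ("task3", "task4")
    if code == "T1":
        return "both_fists" if fists_or_feet else "left_fist"
    return "both_feet" if fists_or_feet else "right_fist"
```

For example, `decode_event(3, "T2")` yields the right-fist label for a Task 1 run, matching the README's TASK1T2 example.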
## Dataset Information | Dataset ID | `DS004362` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Motor Movement/Imagery Dataset | | Author (year) | `Schalk2022` | | Canonical | `PhysionetMI`, `EEGMotorMovementImagery` | | Importable as | `DS004362`, `Schalk2022`, `PhysionetMI`, `EEGMotorMovementImagery` | | Year | 2022 | | Authors | Gerwin Schalk, Dennis J McFarland, Thilo Hinterberger, Niels Birbaumer, Jonathan R Wolpaw | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004362.v1.0.0](https://doi.org/10.18112/openneuro.ds004362.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004362) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) | [Source URL](https://openneuro.org/datasets/ds004362) | ### Copy-paste BibTeX ```bibtex @dataset{ds004362, title = {EEG Motor Movement/Imagery Dataset}, author = {Gerwin Schalk and Dennis J McFarland and Thilo Hinterberger and Niels Birbaumer and Jonathan R Wolpaw}, doi = {10.18112/openneuro.ds004362.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004362.v1.0.0}, } ``` ## Technical Details - Subjects: 109 - Recordings: 1526 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 160.0 (1490), 128.0 (36) - Duration (hours): 48.534444444444446 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.8 GB - File count: 1526 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004362.v1.0.0 - Source: openneuro - OpenNeuro: [ds004362](https://openneuro.org/datasets/ds004362) - NeMAR: [ds004362](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) ## API Reference Use the `DS004362` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004362(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Motor Movement/Imagery Dataset * **Study:** `ds004362` (OpenNeuro) * **Author (year):** `Schalk2022` * **Canonical:** `PhysionetMI`, `EEGMotorMovementImagery` Also importable as: `DS004362`, `Schalk2022`, `PhysionetMI`, `EEGMotorMovementImagery`. Modality: `eeg`. Subjects: 109; recordings: 1526; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
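With 109 subjects, writing out a `$in` subject filter by hand is impractical. A list of labels can be generated instead; the three-digit zero-padding here is an assumption about this dataset's BIDS subject labels (check `dataset.description` for the actual format before querying):

```python
def subject_labels(n_subjects: int, width: int = 3) -> list[str]:
    """Hypothetical helper: zero-padded subject labels '001'..'NNN'."""
    return [f"{i:0{width}d}" for i in range(1, n_subjects + 1)]

# Build a MongoDB-style filter covering all 109 subjects
query = {"subject": {"$in": subject_labels(109)}}
```

The resulting dict can be passed as the `query` argument described above.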
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004362](https://openneuro.org/datasets/ds004362) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004362](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) DOI: [https://doi.org/10.18112/openneuro.ds004362.v1.0.0](https://doi.org/10.18112/openneuro.ds004362.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004362 >>> dataset = DS004362(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004362) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004367: eeg dataset, 40 subjects *Meta-rdk: Raw EEG data* Access recordings and metadata through EEGDash. **Citation:** Martin Rouy, Matthieu Roger, Dorian Goueytes, Michael Pereira, Paul Roux, Nathan Faivre (2022). *Meta-rdk: Raw EEG data*. 
[10.18112/openneuro.ds004367.v1.0.2](https://doi.org/10.18112/openneuro.ds004367.v1.0.2) Modality: eeg Subjects: 40 Recordings: 40 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004367 dataset = DS004367(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004367(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004367( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004367, title = {Meta-rdk: Raw EEG data}, author = {Martin Rouy and Matthieu Roger and Dorian Goueytes and Michael Pereira and Paul Roux and Nathan Faivre}, doi = {10.18112/openneuro.ds004367.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004367.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS004367` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Meta-rdk: Raw EEG data | | Author (year) | `Rouy2022_Meta` | | Canonical | — | | Importable as | `DS004367`, `Rouy2022_Meta` | | Year | 2022 | | Authors | Martin Rouy, Matthieu Roger, Dorian Goueytes, Michael Pereira, Paul Roux, Nathan Faivre | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004367.v1.0.2](https://doi.org/10.18112/openneuro.ds004367.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004367) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) | [Source URL](https://openneuro.org/datasets/ds004367) | ### Copy-paste BibTeX ```bibtex @dataset{ds004367, title = {Meta-rdk: Raw EEG data}, author = {Martin Rouy and Matthieu Roger and Dorian Goueytes and Michael Pereira and Paul Roux and Nathan Faivre}, doi = {10.18112/openneuro.ds004367.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004367.v1.0.2}, } ``` ## Technical Details - Subjects: 40 - Recordings: 40 - Tasks: 1 - Channels: 68 - Sampling rate (Hz): 1200.0 - Duration (hours): 24.809597685185185 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 28.0 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004367.v1.0.2 - Source: openneuro - OpenNeuro: [ds004367](https://openneuro.org/datasets/ds004367) - NeMAR: [ds004367](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) ## API Reference Use the `DS004367` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Raw EEG data * **Study:** `ds004367` (OpenNeuro) * **Author (year):** `Rouy2022_Meta` * **Canonical:** — Also importable as: `DS004367`, `Rouy2022_Meta`. Modality: `eeg`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004367](https://openneuro.org/datasets/ds004367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004367](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) DOI: [https://doi.org/10.18112/openneuro.ds004367.v1.0.2](https://doi.org/10.18112/openneuro.ds004367.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004367 >>> dataset = DS004367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004367) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004368: eeg dataset, 39 subjects *Meta-rdk: Preprocessed EEG data* Access recordings and metadata through EEGDash. **Citation:** Martin Rouy, Matthieu Roger, Dorian Goueytes, Michael Pereira, Paul Roux, Nathan Faivre (2022). *Meta-rdk: Preprocessed EEG data*. 
[10.18112/openneuro.ds004368.v1.0.2](https://doi.org/10.18112/openneuro.ds004368.v1.0.2) Modality: eeg Subjects: 39 Recordings: 40 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004368 dataset = DS004368(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004368(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004368( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004368, title = {Meta-rdk: Preprocessed EEG data}, author = {Martin Rouy and Matthieu Roger and Dorian Goueytes and Michael Pereira and Paul Roux and Nathan Faivre}, doi = {10.18112/openneuro.ds004368.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004368.v1.0.2}, } ``` ## About This Dataset The study was approved by the ethical committee Sud Méditerranée II (217 R01). Twenty individuals with a schizophrenia spectrum disorder (schizophrenia or schizoaffective disorder, 16 males, 4 females) and 22 healthy participants (15 males, 7 females) from the general population took part in this study. Schizophrenia and schizoaffective disorders were diagnosed based on the Structured Clinical Interview for assessing the DSM-5 criteria. The control group was screened for current or past psychiatric illness, and individuals were excluded if they met the criteria for a severe and persistent mental disorder. We used a visual discrimination task. Stimuli consisted of 100 moving dots within a circle (3° radius) at the center of the screen. 
On each trial, participants indicated whether the motion direction of the dots was to the left or to the right by reaching and clicking on one of two choice targets (3° radius circle) at the top corners of the screen with a mouse. After 6 seconds without response, a buzz sound rang and a message was displayed inviting the participant to respond quicker. Motion coherence was adapted at the individual level via a 1up/2down staircase procedure in order to match task performance between groups. Following each perceptual decision, participants were asked to report their confidence about their response using a vertical visual analog scale from 0% (Sure incorrect) to 100% (Sure correct), with 50% confidence meaning “Not sure at all”. ## Dataset Information | Dataset ID | `DS004368` | |----------------|----------------| | Title | Meta-rdk: Preprocessed EEG data | | Author (year) | `Rouy2022_Meta_rdk` | | Canonical | — | | Importable as | `DS004368`, `Rouy2022_Meta_rdk` | | Year | 2022 | | Authors | Martin Rouy, Matthieu Roger, Dorian Goueytes, Michael Pereira, Paul Roux, Nathan Faivre | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004368.v1.0.2](https://doi.org/10.18112/openneuro.ds004368.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004368) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) | [Source URL](https://openneuro.org/datasets/ds004368) | ### Copy-paste BibTeX ```bibtex @dataset{ds004368, title = {Meta-rdk: Preprocessed EEG data}, author = {Martin Rouy and Matthieu Roger and Dorian Goueytes and Michael Pereira and Paul Roux and Nathan Faivre}, doi = {10.18112/openneuro.ds004368.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004368.v1.0.2}, } ``` ## Technical Details - Subjects: 39 - Recordings: 40 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 128.0 - Duration (hours): 0.0333333333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 997.1 MB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004368.v1.0.2 - Source: openneuro - OpenNeuro: [ds004368](https://openneuro.org/datasets/ds004368) - NeMAR: [ds004368](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) ## API Reference Use the `DS004368` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004368(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Preprocessed EEG data * **Study:** `ds004368` (OpenNeuro) * **Author (year):** `Rouy2022_Meta_rdk` * **Canonical:** — Also importable as: `DS004368`, `Rouy2022_Meta_rdk`. Modality: `eeg`. Subjects: 39; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004368](https://openneuro.org/datasets/ds004368) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004368](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) DOI: [https://doi.org/10.18112/openneuro.ds004368.v1.0.2](https://doi.org/10.18112/openneuro.ds004368.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004368 >>> dataset = DS004368(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004368) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004369: eeg dataset, 41 subjects *Blink-Pause-Relation (Competing Speaker Paradigm)* Access recordings and metadata through EEGDash. **Citation:** Bjoern Holtze, Marc Rosenkranz, Martin Bleichner, Stefan Debener (2022). *Blink-Pause-Relation (Competing Speaker Paradigm)*. 
[10.18112/openneuro.ds004369.v1.0.1](https://doi.org/10.18112/openneuro.ds004369.v1.0.1) Modality: eeg Subjects: 41 Recordings: 41 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004369 dataset = DS004369(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004369(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004369( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004369, title = {Blink-Pause-Relation (Competing Speaker Paradigm)}, author = {Bjoern Holtze and Marc Rosenkranz and Martin Bleichner and Stefan Debener}, doi = {10.18112/openneuro.ds004369.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004369.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS004369` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Blink-Pause-Relation (Competing Speaker Paradigm) | | Author (year) | `Holtze2022_Blink` | | Canonical | — | | Importable as | `DS004369`, `Holtze2022_Blink` | | Year | 2022 | | Authors | Bjoern Holtze, Marc Rosenkranz, Martin Bleichner, Stefan Debener | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004369.v1.0.1](https://doi.org/10.18112/openneuro.ds004369.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004369) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) | [Source URL](https://openneuro.org/datasets/ds004369) | ### Copy-paste BibTeX ```bibtex @dataset{ds004369, title = {Blink-Pause-Relation (Competing Speaker Paradigm)}, author = {Bjoern Holtze and Marc Rosenkranz and Martin Bleichner and Stefan Debener}, doi = {10.18112/openneuro.ds004369.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004369.v1.0.1}, } ``` ## Technical Details - Subjects: 41 - Recordings: 41 - Tasks: 1 - Channels: 7 - Sampling rate (Hz): 500.0 - Duration (hours): 37.333333333333336 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.0 GB - File count: 41 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004369.v1.0.1 - Source: openneuro - OpenNeuro: [ds004369](https://openneuro.org/datasets/ds004369) - NeMAR: [ds004369](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) ## API Reference Use the `DS004369` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004369(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Blink-Pause-Relation (Competing Speaker Paradigm) * **Study:** `ds004369` (OpenNeuro) * **Author (year):** `Holtze2022_Blink` * **Canonical:** — Also importable as: `DS004369`, `Holtze2022_Blink`. Modality: `eeg`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004369](https://openneuro.org/datasets/ds004369) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004369](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) DOI: [https://doi.org/10.18112/openneuro.ds004369.v1.0.1](https://doi.org/10.18112/openneuro.ds004369.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004369 >>> dataset = DS004369(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004369) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004370: ieeg dataset, 7 subjects *PRIOS* Access recordings and metadata through EEGDash. **Citation:** van Blooijs D, Blok S, Huiskamp GJM, Leijten FSS (2022). *PRIOS*. 
[10.18112/openneuro.ds004370.v1.0.2](https://doi.org/10.18112/openneuro.ds004370.v1.0.2) Modality: ieeg Subjects: 7 Recordings: 15 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004370 dataset = DS004370(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004370(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004370( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004370, title = {PRIOS}, author = {van Blooijs D and Blok S and Huiskamp GJM and Leijten FSS}, doi = {10.18112/openneuro.ds004370.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004370.v1.0.2}, } ``` ## About This Dataset **Dataset description** This dataset consists of 6 patients aged 13–53 years in whom Cortico-Cortical Evoked Potentials (CCEPs) were recorded with Electro-CorticoGraphy (ECoG) during single pulse electrical stimulation (SPES) in the awake patient for clinical routine (SPES-clinical) and under general propofol-anesthesia (SPES-propofol). For a detailed description see: - The effect of propofol on local effective brain networks (submitted). D. van Blooijs, S. Blok, G.J.M. Huiskamp, P. van Eijsden, H.G.E. Meijer, F.S.S. Leijten The study was approved by the Medical Ethical Committee from the UMC Utrecht, the Netherlands.
**Contact** - Dorien van Blooijs: [D.vanBlooijs@umcutrecht.nl](mailto:D.vanBlooijs@umcutrecht.nl) - Frans Leijten: [F.S.S.leijten@umcutrecht.nl](mailto:F.S.S.leijten@umcutrecht.nl) **Data organization** This data is organized according to the Brain Imaging Data Structure specification. A community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each patient has their own folder (e.g., `sub-PRIOS01` to `sub-PRIOS09`) which contains the iEEG recordings for that patient, as well as the metadata needed to understand the raw data and event timing. Data are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording during the monitoring period. We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient’s state during the recording of this file.
The task label is “SPESclin” when the files contain data collected during clinical single pulse electrical stimulation (SPES) and “SPESprop” when they contain data collected during SPES under propofol anesthesia in the operating room. Electrode positions were estimated by running FreeSurfer on the individual subject MRI scan. All shared electrode positions were converted to MNI305 space using the FreeSurfer surface-based non-linear transformation. We note that this surface-based transformation distorts the dimensions of the grids but maintains the gyral anatomy. **License** This dataset is made available under the CC0 1.0 Public Domain Dedication, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)); in particular, while not legally required, we hope that all users of the data will acknowledge it by citing the following in any publication: The effect of propofol on local effective brain networks (submitted). D. van Blooijs, S. Blok, G.J.M. Huiskamp, P. van Eijsden, H.G.E. Meijer, F.S.S. Leijten **Code** Code to analyze these data is available at: [https://github.com/UMCU-EpiLAB/umcuEpi_PRIOS](https://github.com/UMCU-EpiLAB/umcuEpi_PRIOS) **Acknowledgements** We thank all patients for participating in this study. **Funding** Research reported in this publication was supported by EpilepsieNL under Award Numbers NEF17-07 (DvB) and NEF19-12 (DvB, SB), and by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH122258 (DvB; the content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health).
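The run labels described under Data organization appear to encode the day and start time as `run-<DD><HHMM>` (the README gives run-021315 as day 2 after implantation, recorded at 13:15). That encoding can be decoded with a few lines of Python (a hypothetical helper, not part of EEGDash or the dataset's own code):

```python
def parse_run_label(run: str) -> tuple[int, str]:
    """Decode a run label such as 'run-021315' into
    (day after implantation, recording start time)."""
    digits = run.split("-", 1)[1]           # "021315"
    day = int(digits[:2])                   # day 02 after implantation
    start = f"{digits[2:4]}:{digits[4:6]}"  # recording started at 13:15
    return day, start

parse_run_label("run-021315")  # → (2, "13:15")
```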
## Dataset Information | Dataset ID | `DS004370` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PRIOS | | Author (year) | `Blooijs2022_PRIOS` | | Canonical | `PRIOS` | | Importable as | `DS004370`, `Blooijs2022_PRIOS`, `PRIOS` | | Year | 2022 | | Authors | van Blooijs D, Blok S, Huiskamp GJM, Leijten FSS | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004370.v1.0.2](https://doi.org/10.18112/openneuro.ds004370.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004370) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) | [Source URL](https://openneuro.org/datasets/ds004370) | ### Copy-paste BibTeX ```bibtex @dataset{ds004370, title = {PRIOS}, author = {van Blooijs D and Blok S and Huiskamp GJM and Leijten FSS}, doi = {10.18112/openneuro.ds004370.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004370.v1.0.2}, } ``` ## Technical Details - Subjects: 7 - Recordings: 15 - Tasks: 2 - Channels: 133 (7), 68 (6), 64 (2) - Sampling rate (Hz): 2048.0 - Duration (hours): 10.201727430555556 - Pathology: Surgery - Modality: Anesthesia - Type: Clinical/Intervention - Size on disk: 27.6 GB - File count: 15 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004370.v1.0.2 - Source: openneuro - OpenNeuro: [ds004370](https://openneuro.org/datasets/ds004370) - NeMAR: [ds004370](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) ## API Reference Use the `DS004370` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PRIOS * **Study:** `ds004370` (OpenNeuro) * **Author (year):** `Blooijs2022_PRIOS` * **Canonical:** `PRIOS` Also importable as: `DS004370`, `Blooijs2022_PRIOS`, `PRIOS`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 7; recordings: 15; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
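The dataset's two task labels (“SPESclin” and “SPESprop”) make it straightforward to split the metadata records by condition. A sketch with mock records (the dictionaries here are illustrative; real records come from the database via the `records` attribute and carry more fields):

```python
# Mock metadata records in the list[dict] shape documented above.
records = [
    {"subject": "PRIOS01", "task": "SPESclin"},
    {"subject": "PRIOS01", "task": "SPESprop"},
    {"subject": "PRIOS02", "task": "SPESclin"},
]

# Split recordings into the awake-clinical and propofol conditions.
clinical = [r for r in records if r["task"] == "SPESclin"]
propofol = [r for r in records if r["task"] == "SPESprop"]
print(len(clinical), len(propofol))  # → 2 1
```

The same selection can presumably be pushed server-side with `DS004370(cache_dir="./data", query={"task": "SPESclin"})`, assuming `task` is among `ALLOWED_QUERY_FIELDS`.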
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004370](https://openneuro.org/datasets/ds004370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004370](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) DOI: [https://doi.org/10.18112/openneuro.ds004370.v1.0.2](https://doi.org/10.18112/openneuro.ds004370.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004370 >>> dataset = DS004370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004370) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004381: eeg dataset, 18 subjects *Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates* Access recordings and metadata through EEGDash. **Citation:** Giorgio Selmin, Vasileios Dimakopoulos, Niklaus Krayenbühl, Luca Regli, Johannes Sarnthein (2022). *Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates*. 
[10.18112/openneuro.ds004381.v1.0.2](https://doi.org/10.18112/openneuro.ds004381.v1.0.2) Modality: eeg Subjects: 18 Recordings: 437 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004381 dataset = DS004381(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004381(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004381( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004381, title = {Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates}, author = {Giorgio Selmin and Vasileios Dimakopoulos and Niklaus Krayenbühl and Luca Regli and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004381.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004381.v1.0.2}, } ``` ## About This Dataset **Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates** This dataset was obtained from the publication [1] wherein we varied the stimulus repetition rate and recorded medianus and tibial nerve SEP. We randomly sampled a number of sweeps corresponding to recording durations up to 20 s and calculated the signal-to-noise ratio (SNR). There are 14 adult subjects and 4 child subjects with continuous EEG data split into sessions (tibial left/right, medianus left/right) and runs (1 run for each stimulation rate). We also provide processed data (derivatives) for all the sessions. In total there are 34 medianus SEP and 32 tibial SEP sessions.
**Repository structure** **Main directory (`SEP rate/`)** Contains metadata files in the BIDS standard about the participants and the study. Folders are explained below. **Subfolders** - `SEP rate/sub-**/`: one folder per subject, named with subject and session information. - `SEP rate/sub-**/ses-01/eeg/`: the raw EEG data in `.edf` format for each subject. Each `*_eeg.edf` file contains EEG data from one stimulation rate (see the `scans.tsv` column `stimRate`); details about the channels are given in the corresponding `.tsv` file. - `SEP rate/derivatives/`: one folder per subject, named with subject and session information, containing processed data. - `SEP rate/derivatives/sub-**/ses-01/eeg/`: processed data for each subject. **Note from the paper** “The offline data processing used the continuous EEG that was recorded in parallel to the SEP recordings. Data analysis was performed with custom scripts in Matlab (www.mathworks.com). To detect the SEP stimulation artefact, we first filtered the EEG (high pass cutoff = 200 Hz) and performed local peak detection (minimum peak prominence between peaks = 30 ms, minimum peak width = 4 ms, samples = 0.2 ms).
We used the times of the detected stimulus artifact as triggers to define sweeps with post-stimulus recording sweep length 50 ms for medianus SEP and 100 ms for tibial SEP. We resampled the data to sampling rate 1200 Hz before further processing. We classified sweeps with amplitude > 10 µV as artefact-ridden and excluded them from further analysis.” **BIDS Conversion** The bids-starter-kit and custom Matlab scripts were used to convert the dataset into BIDS format. **References** [1] Dimakopoulos V, Selmin G, Regli L, Sarnthein J, Optimization of signal-to-noise ratio in short-duration SEP recordings by variation of stimulation rate, Clinical Neurophysiology, 2023, ISSN 1388-2457, [https://doi.org/10.1016/j.clinph.2023.03.008](https://doi.org/10.1016/j.clinph.2023.03.008). ## Dataset Information | Dataset ID | `DS004381` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates | | Author (year) | `Selmin2022` | | Canonical | — | | Importable as | `DS004381`, `Selmin2022` | | Year | 2022 | | Authors | Giorgio Selmin, Vasileios Dimakopoulos, Niklaus Krayenbühl, Luca Regli, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004381.v1.0.2](https://doi.org/10.18112/openneuro.ds004381.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004381) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) | [Source URL](https://openneuro.org/datasets/ds004381) | ### Copy-paste BibTeX ```bibtex @dataset{ds004381, title = {Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates}, author = {Giorgio Selmin and Vasileios Dimakopoulos and Niklaus Krayenbühl and Luca Regli and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004381.v1.0.2}, url =
{https://doi.org/10.18112/openneuro.ds004381.v1.0.2}, } ``` ## Technical Details - Subjects: 18 - Recordings: 437 - Tasks: 1 - Channels: 4 (333), 8 (47), 5 (26), 7 (26), 10 (5) - Sampling rate (Hz): 20000.0 - Duration (hours): 11.815105777777775 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.7 GB - File count: 437 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004381.v1.0.2 - Source: openneuro - OpenNeuro: [ds004381](https://openneuro.org/datasets/ds004381) - NeMAR: [ds004381](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) ## API Reference Use the `DS004381` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004381(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates * **Study:** `ds004381` (OpenNeuro) * **Author (year):** `Selmin2022` * **Canonical:** — Also importable as: `DS004381`, `Selmin2022`. Modality: `eeg`. Subjects: 18; recordings: 437; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004381](https://openneuro.org/datasets/ds004381) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004381](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) DOI: [https://doi.org/10.18112/openneuro.ds004381.v1.0.2](https://doi.org/10.18112/openneuro.ds004381.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004381 >>> dataset = DS004381(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004381) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004388: eeg dataset, 40 subjects *Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation* Access recordings and metadata through EEGDash. **Citation:** Birgit Nierula, Tilman Stephani, Merve Kaptan, André Moruaux, Burkhard Maess, Gabriel Curio, Vadim V. Nikulin, Falk Eippert (2023). *Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation*. 
[10.18112/openneuro.ds004388.v1.0.0](https://doi.org/10.18112/openneuro.ds004388.v1.0.0) Modality: eeg Subjects: 40 Recordings: 399 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004388 dataset = DS004388(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004388(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004388( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004388, title = {Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation}, author = {Birgit Nierula and Tilman Stephani and Merve Kaptan and André Moruaux and Burkhard Maess and Gabriel Curio and Vadim V. Nikulin and Falk Eippert}, doi = {10.18112/openneuro.ds004388.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004388.v1.0.0}, } ``` ## About This Dataset **Description** This is a data set consisting of simultaneous electroencephalography (EEG), electrospinography (ESG), electroneurography (ENG), and electromyography (EMG) recordings from 40 participants. There were four different recording conditions: i) resting state with eyes open, ii) mixed median nerve stimulation (arm nerve), iii) mixed tibial nerve stimulation (leg nerve), and iv) alternating mixed median or tibial nerve stimulation. For each participant, there is i) the simultaneous EEG-ESG-ENG-EMG-recording which also includes electrocardiographic and respiratory signals, ii) ESG electrode positions. 
For a detailed description please see the following article: XXX. This study was pre-registered on OSF: [https://osf.io/sgptzt](https://osf.io/sgptzt). **Citing this dataset** Should you make use of this data set in any publication, please cite the following article: XXXX **License** This data set is made available under the Creative Commons CC0 license. For more information, see [https://creativecommons.org/share-your-work/public-domain/cc0/](https://creativecommons.org/share-your-work/public-domain/cc0/) **Data set** This data set is organized according to the Brain Imaging Data Structure specification. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each participant’s data are in one subdirectory (e.g., ‘sub-001’), which contains the raw data in eeglab format. Please note that the EEG channel Fz was referenced to i) the EEG reference (right mastoid, RM, channel name: Fz) and ii) the ESG reference (6th thoracic vertebra, TH6, channel name: Fz-TH6). Should you have any questions about this data set, please contact [nierula@cbs.mpg.de](mailto:nierula@cbs.mpg.de) or [eippert@cbs.mpg.de](mailto:eippert@cbs.mpg.de). ## Dataset Information | Dataset ID | `DS004388` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation | | Author (year) | `Nierula2023_Somatosensory` | | Canonical | — | | Importable as | `DS004388`, `Nierula2023_Somatosensory` | | Year | 2023 | | Authors | Birgit Nierula, Tilman Stephani, Merve Kaptan, André Moruaux, Burkhard Maess, Gabriel Curio, Vadim V. 
Nikulin, Falk Eippert | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004388.v1.0.0](https://doi.org/10.18112/openneuro.ds004388.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004388) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) | [Source URL](https://openneuro.org/datasets/ds004388) | ### Copy-paste BibTeX ```bibtex @dataset{ds004388, title = {Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation}, author = {Birgit Nierula and Tilman Stephani and Merve Kaptan and André Moruaux and Burkhard Maess and Gabriel Curio and Vadim V. Nikulin and Falk Eippert}, doi = {10.18112/openneuro.ds004388.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004388.v1.0.0}, } ``` ## Technical Details - Subjects: 40 - Recordings: 399 - Tasks: 3 - Channels: 115 (319), 114 (80) - Sampling rate (Hz): 10000.0 - Duration (hours): 43.48990325 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 682.5 GB - File count: 399 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004388.v1.0.0 - Source: openneuro - OpenNeuro: [ds004388](https://openneuro.org/datasets/ds004388) - NeMAR: [ds004388](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) ## API Reference Use the `DS004388` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004388(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation * **Study:** `ds004388` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory` * **Canonical:** — Also importable as: `DS004388`, `Nierula2023_Somatosensory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 399; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004388](https://openneuro.org/datasets/ds004388) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004388](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) DOI: [https://doi.org/10.18112/openneuro.ds004388.v1.0.0](https://doi.org/10.18112/openneuro.ds004388.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004388 >>> dataset = DS004388(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004388) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004389: eeg dataset, 26 subjects *Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation* Access recordings and metadata through EEGDash. **Citation:** Birgit Nierula, Tilman Stephani, Merve Kaptan, André Moruaux, Burkhard Maess, Gabriel Curio, Vadim V. Nikulin, Falk Eippert (2023). *Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation*. [10.18112/openneuro.ds004389.v1.0.0](https://doi.org/10.18112/openneuro.ds004389.v1.0.0) Modality: eeg Subjects: 26 Recordings: 260 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004389 dataset = DS004389(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004389(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004389( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004389, title = {Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation}, author = {Birgit Nierula and Tilman Stephani and Merve Kaptan and André Moruaux and Burkhard Maess and Gabriel Curio and Vadim V. Nikulin and Falk Eippert}, doi = {10.18112/openneuro.ds004389.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004389.v1.0.0}, } ``` ## About This Dataset **Description** This is a data set consisting of simultaneous electroencephalography (EEG), electrospinography (ESG), electroneurography (ENG), and electromyography (EMG) recordings from 26 participants. There were nine different recording conditions: i) resting state with eyes open, ii) mixed median nerve stimulation (arm nerve), iii) mixed tibial nerve stimulation (leg nerve), iv) sensory nerve stimulation of the index finger, v) sensory nerve stimulation of the middle finger, vi) simultaneous sensory nerve stimulation of the index and middle finger, vii) sensory nerve stimulation to the first toe, viii) sensory nerve stimulation to the second toe, ix) simultaneous sensory nerve stimulation to the first and second toe. For each participant, there is i) the simultaneous EEG-ESG-ENG-EMG-recording which also includes electrocardiographic and respiratory signals, ii) ESG electrode positions. For a detailed description please see the following article: XXX. This study was pre-registered on OSF: [https://osf.io/mjdha](https://osf.io/mjdha). **Citing this dataset** Should you make use of this data set in any publication, please cite the following article: XXXX **License** This data set is made available under the Creative Commons CC0 license. For more information, see [https://creativecommons.org/share-your-work/public-domain/cc0/](https://creativecommons.org/share-your-work/public-domain/cc0/) **Data set** This data set is organized according to the Brain Imaging Data Structure specification.
For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each participant’s data are in one subdirectory (e.g., ‘sub-001’), which contains the raw data in eeglab format. Please note that the EEG channel Fz was referenced to i) the EEG reference (right mastoid, RM, channel name: Fz) and ii) the ESG reference (6th thoracic vertebra, TH6, channel name: Fz-TH6). Should you have any questions about this data set, please contact [nierula@cbs.mpg.de](mailto:nierula@cbs.mpg.de) or [eippert@cbs.mpg.de](mailto:eippert@cbs.mpg.de). ## Dataset Information | Dataset ID | `DS004389` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation | | Author (year) | `Nierula2023_Somatosensory_evoked` | | Canonical | — | | Importable as | `DS004389`, `Nierula2023_Somatosensory_evoked` | | Year | 2023 | | Authors | Birgit Nierula, Tilman Stephani, Merve Kaptan, André Moruaux, Burkhard Maess, Gabriel Curio, Vadim V. Nikulin, Falk Eippert | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004389.v1.0.0](https://doi.org/10.18112/openneuro.ds004389.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004389) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) | [Source URL](https://openneuro.org/datasets/ds004389) | ### Copy-paste BibTeX ```bibtex @dataset{ds004389, title = {Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation}, author = {Birgit Nierula and Tilman Stephani and Merve Kaptan and André Moruaux and Burkhard Maess and Gabriel Curio and Vadim V. 
Nikulin and Falk Eippert}, doi = {10.18112/openneuro.ds004389.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004389.v1.0.0}, } ``` ## Technical Details - Subjects: 26 - Recordings: 260 - Tasks: 4 - Channels: 90 - Sampling rate (Hz): 10000.0 - Duration (hours): 30.655941527777777 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 376.5 GB - File count: 260 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004389.v1.0.0 - Source: openneuro - OpenNeuro: [ds004389](https://openneuro.org/datasets/ds004389) - NeMAR: [ds004389](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) ## API Reference Use the `DS004389` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004389(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation * **Study:** `ds004389` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory_evoked` * **Canonical:** — Also importable as: `DS004389`, `Nierula2023_Somatosensory_evoked`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 26; recordings: 260; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004389](https://openneuro.org/datasets/ds004389) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004389](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) DOI: [https://doi.org/10.18112/openneuro.ds004389.v1.0.0](https://doi.org/10.18112/openneuro.ds004389.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004389 >>> dataset = DS004389(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004389) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004395: eeg dataset, 364 subjects *Penn Electrophysiology of Encoding and Retrieval Study (PEERS)* Access recordings and metadata through EEGDash. **Citation:** Michael J. Kahana, Joseph H. Rudoler, Lynn J. 
Lohnas, Karl Healey, Ada Aka, Adam Broitman, Elizabeth Crutchley, Patrick Crutchley, Kylie H. Alm, Brandon S. Katerman, Nicole E. Miller, Joel R. Kuhn, Yuxuan Li, Nicole M. Long, Jonathan Miller, Madison D. Paron, Jesse K. Pazdera, Isaac Pedisich, Christoph T. Weidemann (2023). *Penn Electrophysiology of Encoding and Retrieval Study (PEERS)*. [10.18112/openneuro.ds004395.v2.0.0](https://doi.org/10.18112/openneuro.ds004395.v2.0.0) Modality: eeg Subjects: 364 Recordings: 6483 License: CC0 Source: openneuro Citations: 6.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004395 dataset = DS004395(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004395(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004395( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004395, title = {Penn Electrophysiology of Encoding and Retrieval Study (PEERS)}, author = {Michael J. Kahana and Joseph H. Rudoler and Lynn J. Lohnas and Karl Healey and Ada Aka and Adam Broitman and Elizabeth Crutchley and Patrick Crutchley and Kylie H. Alm and Brandon S. Katerman and Nicole E. Miller and Joel R. Kuhn and Yuxuan Li and Nicole M. Long and Jonathan Miller and Madison D. Paron and Jesse K. Pazdera and Isaac Pedisich and Christoph T. 
Weidemann}, doi = {10.18112/openneuro.ds004395.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004395.v2.0.0}, } ``` ## About This Dataset The Penn Electrophysiology of Encoding and Retrieval Study (PEERS) aimed to characterize the behavioral and electrophysiological (EEG) correlates of memory encoding and retrieval in highly practiced individuals. Across five PEERS experiments, 300+ subjects contributed more than 7,000 ninety-minute memory-testing sessions with recorded EEG data. See the Computational Memory Lab’s [wiki page](https://memory.psych.upenn.edu/PEERS) for more detailed information, and [this paper](https://psyarxiv.com/bu5x8/) for a discussion of the main findings and lessons learned from this large-scale study. This dataset contains three experiments: ltpFR (a.k.a. PEERS1-3), ltpFR2 (a.k.a. PEERS4), and VFFR (a.k.a. PEERS5). Electroencephalogram (EEG) data were recorded with either a 129-channel Geodesic Sensor Net (either GSN 200 model or HydroCel GSN model) using the Netstation acquisition environment (Electrical Geodesics, Inc.; EGI) or with a 128-channel BioSemi headcap using the Biosemi ActiveTwo acquisition system. *Note:* subject-specific electrode layouts were NOT recorded. Despite being labeled as “CapTrak” space, the coordinates reflect a generic electrode layout for a given headcap and do NOT represent any individual’s head shape. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004395` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Penn Electrophysiology of Encoding and Retrieval Study (PEERS) | | Author (year) | `Kahana2023` | | Canonical | `PEERS` | | Importable as | `DS004395`, `Kahana2023`, `PEERS` | | Year | 2023 | | Authors | Michael J. Kahana, Joseph H. Rudoler, Lynn J. Lohnas, Karl Healey, Ada Aka, Adam Broitman, Elizabeth Crutchley, Patrick Crutchley, Kylie H. Alm, Brandon S. Katerman, Nicole E. Miller, Joel R. Kuhn, Yuxuan Li, Nicole M. Long, Jonathan Miller, Madison D. Paron, Jesse K. Pazdera, Isaac Pedisich, Christoph T. Weidemann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004395.v2.0.0](https://doi.org/10.18112/openneuro.ds004395.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004395) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) | [Source URL](https://openneuro.org/datasets/ds004395) | ### Copy-paste BibTeX ```bibtex @dataset{ds004395, title = {Penn Electrophysiology of Encoding and Retrieval Study (PEERS)}, author = {Michael J. Kahana and Joseph H. Rudoler and Lynn J. Lohnas and Karl Healey and Ada Aka and Adam Broitman and Elizabeth Crutchley and Patrick Crutchley and Kylie H. Alm and Brandon S. Katerman and Nicole E. Miller and Joel R. Kuhn and Yuxuan Li and Nicole M. Long and Jonathan Miller and Madison D. Paron and Jesse K. Pazdera and Isaac Pedisich and Christoph T. 
Weidemann}, doi = {10.18112/openneuro.ds004395.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004395.v2.0.0}, } ``` ## Technical Details - Subjects: 364 - Recordings: 6483 - Tasks: 3 - Channels: 129 (4980), 137 (1490), 144 (11), 272 (2) - Sampling rate (Hz): 500.0 (4946), 2048.0 (1466), 512.0 (28), 250.0 (17), 1000.0 (15), 1024.0 (11) - Duration (hours): 9115.806919930556 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 8.7 TB - File count: 6483 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004395.v2.0.0 - Source: openneuro - OpenNeuro: [ds004395](https://openneuro.org/datasets/ds004395) - NeMAR: [ds004395](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) ## API Reference Use the `DS004395` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004395(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Penn Electrophysiology of Encoding and Retrieval Study (PEERS) * **Study:** `ds004395` (OpenNeuro) * **Author (year):** `Kahana2023` * **Canonical:** `PEERS` Also importable as: `DS004395`, `Kahana2023`, `PEERS`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 364; recordings: 6483; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004395](https://openneuro.org/datasets/ds004395) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004395](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) DOI: [https://doi.org/10.18112/openneuro.ds004395.v2.0.0](https://doi.org/10.18112/openneuro.ds004395.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004395 >>> dataset = DS004395(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004395) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004398: meg dataset, 1 subject *planmemreplay* Access recordings and metadata through EEGDash. **Citation:** G. Elliott Wimmer, Yunzhe Liu, Daniel C. McNamee, Raymond J. Dolan (2023). 
*planmemreplay*. [10.18112/openneuro.ds004398.v1.0.0](https://doi.org/10.18112/openneuro.ds004398.v1.0.0) Modality: meg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004398 dataset = DS004398(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004398(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004398( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004398, title = {planmemreplay}, author = {G. Elliott Wimmer and Yunzhe Liu and Daniel C. McNamee and Raymond J. Dolan}, doi = {10.18112/openneuro.ds004398.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004398.v1.0.0}, } ``` ## About This Dataset The MEG files contain a channel with triggers necessary for event marking and timing. Separate event files with onsets are provided in the participant directories for completeness only; the MEG triggers should be used for actual onsets in analysis. The delay between a trigger and the actual visual onset of the corresponding on-screen event, introduced by the projector, is approximately 20 ms, as estimated using a photodiode. 
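The timing note above can be applied directly when converting trigger samples to event onsets. The sketch below is a minimal, hypothetical illustration in plain Python (the sample indices and constants are assumptions for demonstration); it uses the 600 Hz sampling rate listed in the technical details and the ~20 ms photodiode-estimated projector delay.

```python
# Sketch: convert trigger sample indices to corrected visual-onset times,
# adding the ~20 ms projector delay described above.
# SFREQ and the example sample indices are illustrative assumptions.
SFREQ = 600.0            # sampling rate (Hz)
PROJECTOR_DELAY = 0.020  # trigger-to-screen delay (s), photodiode estimate

def corrected_onsets(trigger_samples, sfreq=SFREQ, delay=PROJECTOR_DELAY):
    """Return visual-onset times in seconds for trigger sample indices."""
    return [s / sfreq + delay for s in trigger_samples]

# Triggers recorded at samples 600 and 1200 (1.0 s and 2.0 s into the run)
onsets = corrected_onsets([600, 1200])
print(onsets)
```

The same arithmetic applies if events are extracted with MNE-Python; the delay is simply added to each onset before epoching.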
Localizer phase triggers: [Info to be added] Struct and Rew phase triggers: [Info to be added] Post triggers: [Info to be added] ## Dataset Information | Dataset ID | `DS004398` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | planmemreplay | | Author (year) | `Wimmer2023` | | Canonical | `Wimmer2024` | | Importable as | `DS004398`, `Wimmer2023`, `Wimmer2024` | | Year | 2023 | | Authors | G. Elliott Wimmer, Yunzhe Liu, Daniel C. McNamee, Raymond J. Dolan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004398.v1.0.0](https://doi.org/10.18112/openneuro.ds004398.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004398) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) | [Source URL](https://openneuro.org/datasets/ds004398) | ### Copy-paste BibTeX ```bibtex @dataset{ds004398, title = {planmemreplay}, author = {G. Elliott Wimmer and Yunzhe Liu and Daniel C. McNamee and Raymond J. Dolan}, doi = {10.18112/openneuro.ds004398.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004398.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 305 - Sampling rate (Hz): 600.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Visual - Type: — - Size on disk: 1.3 GB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004398.v1.0.0 - Source: openneuro - OpenNeuro: [ds004398](https://openneuro.org/datasets/ds004398) - NeMAR: [ds004398](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) ## API Reference Use the `DS004398` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) planmemreplay * **Study:** `ds004398` (OpenNeuro) * **Author (year):** `Wimmer2023` * **Canonical:** `Wimmer2024` Also importable as: `DS004398`, `Wimmer2023`, `Wimmer2024`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
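As the notes above say, `query` accepts MongoDB-style filters on allowed fields, ANDed with the dataset selection. The snippet below is a local, hypothetical sketch of how such a filter selects records, supporting only plain equality and the `$in` operator; it is an illustration of the query semantics, not EEGDash's internal implementation.

```python
# Hypothetical matcher for MongoDB-style filters such as
# {"subject": {"$in": ["01", "02"]}}. Illustration only; EEGDash
# evaluates these queries server-side against its metadata records.
def matches(record, query):
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):          # operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:                 # plain equality
            return False
    return True

records = [{"subject": "01"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r for r in records if matches(r, query)])
```

Because filters are ANDed with the dataset selection, adding more fields to the dict can only narrow the result set.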
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004398](https://openneuro.org/datasets/ds004398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004398](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) DOI: [https://doi.org/10.18112/openneuro.ds004398.v1.0.0](https://doi.org/10.18112/openneuro.ds004398.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004398 >>> dataset = DS004398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004398) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004408: eeg dataset, 19 subjects *EEG responses to continuous naturalistic speech* Access recordings and metadata through EEGDash. **Citation:** Giovanni M Di Liberto, Michael P Broderick, Ole Bialas, Edmund C Lalor (2023). *EEG responses to continuous naturalistic speech*. 
[10.18112/openneuro.ds004408.v1.0.8](https://doi.org/10.18112/openneuro.ds004408.v1.0.8) Modality: eeg Subjects: 19 Recordings: 380 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004408 dataset = DS004408(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004408(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004408( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004408, title = {EEG responses to continuous naturalistic speech}, author = {Giovanni M Di Liberto and Michael P Broderick and Ole Bialas and Edmund C Lalor}, doi = {10.18112/openneuro.ds004408.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004408.v1.0.8}, } ``` ## About This Dataset The data were collected in one study [^1], later extended by another [^2], and contain EEG responses of healthy, neurotypical adults who listened to naturalistic speech. The subjects listened to segments from an audio book version of “The Old Man and the Sea” and their brain activity was recorded using a 128-channel ActiveTwo EEG system (BioSemi). The stimuli folder contains .wav files of the presented audiobook segments as well as a .TextGrid file for each segment, containing the timing of words and phonemes in that segment. The text grids were generated using the forced-alignment software Prosodylab-Aligner [^3] and inspected by eye. 
Each subject’s folder contains one EEG-recording per audio segment and their starts are aligned (the EEG recordings are longer than the audio to a varying extent). The recordings are unfiltered, unreferenced and sampled at 512 Hz. ## Dataset Information | Dataset ID | `DS004408` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG responses to continuous naturalistic speech | | Author (year) | `Liberto2023` | | Canonical | — | | Importable as | `DS004408`, `Liberto2023` | | Year | 2023 | | Authors | Giovanni M Di Liberto, Michael P Broderick, Ole Bialas, Edmund C Lalor | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004408.v1.0.8](https://doi.org/10.18112/openneuro.ds004408.v1.0.8) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004408) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) | [Source URL](https://openneuro.org/datasets/ds004408) | ### Copy-paste BibTeX ```bibtex @dataset{ds004408, title = {EEG responses to continuous naturalistic speech}, author = {Giovanni M Di Liberto and Michael P Broderick and Ole Bialas and Edmund C Lalor}, doi = {10.18112/openneuro.ds004408.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004408.v1.0.8}, } ``` ## Technical Details - Subjects: 19 - Recordings: 380 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 512.0 - Duration (hours): 20.59452419704861 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 18.7 GB - File count: 380 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004408.v1.0.8 - Source: openneuro - OpenNeuro: [ds004408](https://openneuro.org/datasets/ds004408) - NeMAR: [ds004408](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) ## API Reference Use the `DS004408` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG responses to continuous naturalistic speech * **Study:** `ds004408` (OpenNeuro) * **Author (year):** `Liberto2023` * **Canonical:** — Also importable as: `DS004408`, `Liberto2023`. Modality: `eeg`. Subjects: 19; recordings: 380; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
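The About section notes that these recordings are stored unfiltered and unreferenced, so re-referencing is a typical first preprocessing step. The sketch below shows the arithmetic of average re-referencing on plain nested lists (channels × samples); it is a toy illustration of the operation, which in practice you would apply via MNE-Python's `raw.set_eeg_reference("average")`.

```python
# Average re-referencing, sketched on plain lists (channels x samples):
# subtract the per-sample mean across channels from every channel.
# Toy data; in practice use MNE-Python on the loaded Raw object.
def average_reference(data):
    n_ch = len(data)
    means = [sum(col) / n_ch for col in zip(*data)]   # per-sample mean
    return [[v - m for v, m in zip(ch, means)] for ch in data]

data = [[1.0, 2.0], [3.0, 4.0]]   # 2 channels, 2 samples
print(average_reference(data))
```

After re-referencing, the mean across channels at each sample is zero, which is exactly the property the average reference guarantees.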
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004408](https://openneuro.org/datasets/ds004408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004408](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) DOI: [https://doi.org/10.18112/openneuro.ds004408.v1.0.8](https://doi.org/10.18112/openneuro.ds004408.v1.0.8) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004408 >>> dataset = DS004408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004408) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004444: eeg dataset, 30 subjects *The BMI-HDEEG dataset 1* Access recordings and metadata through EEGDash. **Citation:** Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba (2023). *The BMI-HDEEG dataset 1*. 
[10.18112/openneuro.ds004444.v1.0.1](https://doi.org/10.18112/openneuro.ds004444.v1.0.1) Modality: eeg Subjects: 30 Recordings: 465 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004444 dataset = DS004444(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004444(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004444( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004444, title = {The BMI-HDEEG dataset 1}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004444.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004444.v1.0.1}, } ``` ## About This Dataset Data Descriptor Article Iwama, S., Morishige, M., Kodama, M. et al. High-density scalp electroencephalogram dataset during sensorimotor rhythm-based brain-computer interfacing. Sci Data 10, 385 (2023). 
[https://doi.org/10.1038/s41597-023-02260-6](https://doi.org/10.1038/s41597-023-02260-6) Sample code [https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi](https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi) ## Dataset Information | Dataset ID | `DS004444` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The BMI-HDEEG dataset 1 | | Author (year) | `Iwama2023_D1` | | Canonical | `BMI_HDEEG_D1` | | Importable as | `DS004444`, `Iwama2023_D1`, `BMI_HDEEG_D1` | | Year | 2023 | | Authors | Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004444.v1.0.1](https://doi.org/10.18112/openneuro.ds004444.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004444) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) | [Source URL](https://openneuro.org/datasets/ds004444) | ### Copy-paste BibTeX ```bibtex @dataset{ds004444, title = {The BMI-HDEEG dataset 1}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004444.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004444.v1.0.1}, } ``` ## Technical Details - Subjects: 30 - Recordings: 465 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 55.68745555555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 48.6 GB - File count: 465 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004444.v1.0.1 - Source: openneuro - OpenNeuro: [ds004444](https://openneuro.org/datasets/ds004444) - NeMAR: [ds004444](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) ## API Reference Use the `DS004444` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004444(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 1 * **Study:** `ds004444` (OpenNeuro) * **Author (year):** `Iwama2023_D1` * **Canonical:** `BMI_HDEEG_D1` Also importable as: `DS004444`, `Iwama2023_D1`, `BMI_HDEEG_D1`. Modality: `eeg`. Subjects: 30; recordings: 465; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
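The MongoDB-style query semantics described in the notes can be illustrated with a small pure-Python sketch. The matcher and record dicts below are simplified stand-ins for illustration only, not eegdash internals; field values such as the task name are hypothetical.

```python
# Illustrative sketch of how a MongoDB-style "$in" filter selects
# metadata records, and how the dataset filter is AND-ed with the
# user-supplied query. Simplified stand-in, not eegdash internals.

def matches(record, query):
    """Return True if `record` satisfies every clause in `query`."""
    for field, condition in query.items():
        if isinstance(condition, dict) and "$in" in condition:
            if record.get(field) not in condition["$in"]:
                return False
        elif record.get(field) != condition:
            return False
    return True

records = [
    {"dataset": "ds004444", "subject": "01", "task": "motorimagery"},
    {"dataset": "ds004444", "subject": "02", "task": "motorimagery"},
    {"dataset": "ds004444", "subject": "03", "task": "motorimagery"},
]

# The class adds {"dataset": "ds004444"} itself; the user query is
# merged in (which is why it must not contain the key "dataset").
query = {"dataset": "ds004444", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

This mirrors the behavior of passing `query={"subject": {"$in": ["01", "02"]}}` to the class constructor, where the dataset filter is applied on top of the user query.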
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004444](https://openneuro.org/datasets/ds004444) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004444](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) DOI: [https://doi.org/10.18112/openneuro.ds004444.v1.0.1](https://doi.org/10.18112/openneuro.ds004444.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004444 >>> dataset = DS004444(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004444) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004446: eeg dataset, 30 subjects *The BMI-HDEEG dataset 2* Access recordings and metadata through EEGDash. **Citation:** Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba (2023). *The BMI-HDEEG dataset 2*. 
[10.18112/openneuro.ds004446.v1.0.1](https://doi.org/10.18112/openneuro.ds004446.v1.0.1) Modality: eeg Subjects: 30 Recordings: 237 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004446 dataset = DS004446(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004446(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004446( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004446, title = {The BMI-HDEEG dataset 2}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004446.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004446.v1.0.1}, } ``` ## About This Dataset Data Descriptor Article Iwama, S., Morishige, M., Kodama, M. et al. High-density scalp electroencephalogram dataset during sensorimotor rhythm-based brain-computer interfacing. Sci Data 10, 385 (2023). 
[https://doi.org/10.1038/s41597-023-02260-6](https://doi.org/10.1038/s41597-023-02260-6) Sample code [https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi](https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi) ## Dataset Information | Dataset ID | `DS004446` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The BMI-HDEEG dataset 2 | | Author (year) | `Iwama2023_D2` | | Canonical | `BMI_HDEEG_D2` | | Importable as | `DS004446`, `Iwama2023_D2`, `BMI_HDEEG_D2` | | Year | 2023 | | Authors | Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004446.v1.0.1](https://doi.org/10.18112/openneuro.ds004446.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004446) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) | [Source URL](https://openneuro.org/datasets/ds004446) | ### Copy-paste BibTeX ```bibtex @dataset{ds004446, title = {The BMI-HDEEG dataset 2}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004446.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004446.v1.0.1}, } ``` ## Technical Details - Subjects: 30 - Recordings: 237 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 33.486105555555554 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 29.2 GB - File count: 237 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004446.v1.0.1 - Source: openneuro - OpenNeuro: [ds004446](https://openneuro.org/datasets/ds004446) - NeMAR: [ds004446](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) ## API Reference Use the `DS004446` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 2 * **Study:** `ds004446` (OpenNeuro) * **Author (year):** `Iwama2023_D2` * **Canonical:** `BMI_HDEEG_D2` Also importable as: `DS004446`, `Iwama2023_D2`, `BMI_HDEEG_D2`. Modality: `eeg`. Subjects: 30; recordings: 237; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004446](https://openneuro.org/datasets/ds004446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004446](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) DOI: [https://doi.org/10.18112/openneuro.ds004446.v1.0.1](https://doi.org/10.18112/openneuro.ds004446.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004446 >>> dataset = DS004446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004446) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004447: eeg dataset, 22 subjects *The BMI-HDEEG dataset 3* Access recordings and metadata through EEGDash. **Citation:** Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba (2023). *The BMI-HDEEG dataset 3*. 
[10.18112/openneuro.ds004447.v1.0.1](https://doi.org/10.18112/openneuro.ds004447.v1.0.1) Modality: eeg Subjects: 22 Recordings: 418 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004447 dataset = DS004447(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004447(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004447( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004447, title = {The BMI-HDEEG dataset 3}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004447.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004447.v1.0.1}, } ``` ## About This Dataset Data Descriptor Article Iwama, S., Morishige, M., Kodama, M. et al. High-density scalp electroencephalogram dataset during sensorimotor rhythm-based brain-computer interfacing. Sci Data 10, 385 (2023). 
[https://doi.org/10.1038/s41597-023-02260-6](https://doi.org/10.1038/s41597-023-02260-6) Sample code [https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi](https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi) ## Dataset Information | Dataset ID | `DS004447` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The BMI-HDEEG dataset 3 | | Author (year) | `Iwama2023_D3` | | Canonical | `BMI_HDEEG_D3` | | Importable as | `DS004447`, `Iwama2023_D3`, `BMI_HDEEG_D3` | | Year | 2023 | | Authors | Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004447.v1.0.1](https://doi.org/10.18112/openneuro.ds004447.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004447) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) | [Source URL](https://openneuro.org/datasets/ds004447) | ### Copy-paste BibTeX ```bibtex @dataset{ds004447, title = {The BMI-HDEEG dataset 3}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004447.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004447.v1.0.1}, } ``` ## Technical Details - Subjects: 22 - Recordings: 418 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 23.55436055555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 20.7 GB - File count: 418 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004447.v1.0.1 - Source: openneuro - OpenNeuro: [ds004447](https://openneuro.org/datasets/ds004447) - NeMAR: [ds004447](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) ## API Reference Use the `DS004447` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004447(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 3 * **Study:** `ds004447` (OpenNeuro) * **Author (year):** `Iwama2023_D3` * **Canonical:** `BMI_HDEEG_D3` Also importable as: `DS004447`, `Iwama2023_D3`, `BMI_HDEEG_D3`. Modality: `eeg`. Subjects: 22; recordings: 418; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004447](https://openneuro.org/datasets/ds004447) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004447](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) DOI: [https://doi.org/10.18112/openneuro.ds004447.v1.0.1](https://doi.org/10.18112/openneuro.ds004447.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004447 >>> dataset = DS004447(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004447) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004448: eeg dataset, 56 subjects *The BMI-HDEEG dataset 4* Access recordings and metadata through EEGDash. **Citation:** Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba (2023). *The BMI-HDEEG dataset 4*. 
[10.18112/openneuro.ds004448.v1.0.2](https://doi.org/10.18112/openneuro.ds004448.v1.0.2) Modality: eeg Subjects: 56 Recordings: 280 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004448 dataset = DS004448(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004448(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004448( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004448, title = {The BMI-HDEEG dataset 4}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004448.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004448.v1.0.2}, } ``` ## About This Dataset Data Descriptor Article Iwama, S., Morishige, M., Kodama, M. et al. High-density scalp electroencephalogram dataset during sensorimotor rhythm-based brain-computer interfacing. Sci Data 10, 385 (2023). 
[https://doi.org/10.1038/s41597-023-02260-6](https://doi.org/10.1038/s41597-023-02260-6) Sample code [https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi](https://github.com/Junichi-Ushiba-Laboratory/pj-hd-smrbmi) ## Dataset Information | Dataset ID | `DS004448` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The BMI-HDEEG dataset 4 | | Author (year) | `Iwama2023_D4` | | Canonical | `BMI_HDEEG_D4` | | Importable as | `DS004448`, `Iwama2023_D4`, `BMI_HDEEG_D4` | | Year | 2023 | | Authors | Seitaro Iwama, Masumi Morishige, Yoshikazu Takahashi, Ryotaro Hirose, Midori Kodama, Junichi Ushiba | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004448.v1.0.2](https://doi.org/10.18112/openneuro.ds004448.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004448) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) | [Source URL](https://openneuro.org/datasets/ds004448) | ### Copy-paste BibTeX ```bibtex @dataset{ds004448, title = {The BMI-HDEEG dataset 4}, author = {Seitaro Iwama and Masumi Morishige and Yoshikazu Takahashi and Ryotaro Hirose and Midori Kodama and Junichi Ushiba}, doi = {10.18112/openneuro.ds004448.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004448.v1.0.2}, } ``` ## Technical Details - Subjects: 56 - Recordings: 280 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 43.732013888888886 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 38.2 GB - File count: 280 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004448.v1.0.2 - Source: openneuro - OpenNeuro: [ds004448](https://openneuro.org/datasets/ds004448) - NeMAR: [ds004448](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) ## API Reference Use the `DS004448` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 4 * **Study:** `ds004448` (OpenNeuro) * **Author (year):** `Iwama2023_D4` * **Canonical:** `BMI_HDEEG_D4` Also importable as: `DS004448`, `Iwama2023_D4`, `BMI_HDEEG_D4`. Modality: `eeg`. Subjects: 56; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004448](https://openneuro.org/datasets/ds004448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004448](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) DOI: [https://doi.org/10.18112/openneuro.ds004448.v1.0.2](https://doi.org/10.18112/openneuro.ds004448.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004448 >>> dataset = DS004448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004448) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004457: ieeg dataset, 5 subjects *Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex* Access recordings and metadata through EEGDash. **Citation:** Harvey Huang, Nicholas M Gregg, Gabriela Ojeda Valencia, Benjamin H Brinkmann, Brian N Lundstrom, Gregory A Worrell, Kai J Miller, Dora Hermes (2023). *Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex*. 
[10.18112/openneuro.ds004457.v1.0.1](https://doi.org/10.18112/openneuro.ds004457.v1.0.1) Modality: ieeg Subjects: 5 Recordings: 5 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004457 dataset = DS004457(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004457(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004457( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004457, title = {Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex}, author = {Harvey Huang and Nicholas M Gregg and Gabriela Ojeda Valencia and Benjamin H Brinkmann and Brian N Lundstrom and Gregory A Worrell and Kai J Miller and Dora Hermes}, doi = {10.18112/openneuro.ds004457.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004457.v1.0.1}, } ``` ## About This Dataset **Basis Profile Curve identification in the human ventral temporal cortex** This dataset contains intracranial EEG recordings from five patients during single pulse electrical stimulation as described in: \* H Huang, NM Gregg, G Ojeda Valencia, BH Brinkmann, BN Lundstrom, GA Worrell, KJ Miller, and D Hermes (2022) Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex. (Under Review) Please cite this work when using the data. 
These data were recorded at the Mayo Clinic in Rochester, MN, as part of the NIH Brain Initiative supported project R01 MH122258 “CRCNS: Processing speed in the human connectome across the lifespan”. Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH122258 and by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number T32GM065841. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The data were collected by Harvey Huang, Dora Hermes, Nick Gregg, Brian Lundstrom, Cindy Nelson, Gregg Worrell and Kai J. Miller. The BIDS formatting was performed by Harvey Huang, Dora Hermes and Gabriela Ojeda Valencia. Data can be analyzed using the Matlab code at: [https://github.com/hharveygit/VTCBPC_JNS_Manu](https://github.com/hharveygit/VTCBPC_JNS_Manu) **Format** Data are formatted according to BIDS version 1.9.9 **Single pulse stimulation** The patients were resting in the hospital bed while single pulse stimulation was performed at a frequency of ~0.2 Hz. The stimulation had a duration of 200 microseconds, was biphasic, and had an amplitude of 6 mA. **Contact** Please contact Dora Hermes ([hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu)) for questions.
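As a quick sanity check on the stimulation parameters quoted above, the charge delivered per phase follows directly from amplitude times phase duration. This is a back-of-the-envelope illustration, assuming the 200-microsecond figure refers to the per-phase duration of the biphasic pulse:

```python
# Charge per phase of the single-pulse electrical stimulation described
# above: 6 mA amplitude, 200 microsecond phase duration (assumed to be
# per phase of the biphasic pulse).
amplitude_a = 6e-3    # 6 mA in amperes
phase_s = 200e-6      # 200 microseconds in seconds
charge_c = amplitude_a * phase_s
print(f"{charge_c * 1e6:.1f} microcoulombs per phase")  # 1.2
```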
## Dataset Information | Dataset ID | `DS004457` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex | | Author (year) | `Huang2023` | | Canonical | `Huang2022` | | Importable as | `DS004457`, `Huang2023`, `Huang2022` | | Year | 2023 | | Authors | Harvey Huang, Nicholas M Gregg, Gabriela Ojeda Valencia, Benjamin H Brinkmann, Brian N Lundstrom, Gregory A Worrell, Kai J Miller, Dora Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004457.v1.0.1](https://doi.org/10.18112/openneuro.ds004457.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004457) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) | [Source URL](https://openneuro.org/datasets/ds004457) | ### Copy-paste BibTeX ```bibtex @dataset{ds004457, title = {Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex}, author = {Harvey Huang and Nicholas M Gregg and Gabriela Ojeda Valencia and Benjamin H Brinkmann and Brian N Lundstrom and Gregory A Worrell and Kai J Miller and Dora Hermes}, doi = {10.18112/openneuro.ds004457.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004457.v1.0.1}, } ``` ## Technical Details - Subjects: 5 - Recordings: 5 - Tasks: 1 - Channels: 206, 178, 194, 135, 192 - Sampling rate (Hz): 2048.0 - Duration (hours): 5.6259163411458335 - Pathology: Surgery - Modality: Other - Type: Clinical/Intervention - Size on disk: 10.9 GB - File count: 5 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004457.v1.0.1 - Source: openneuro - OpenNeuro: [ds004457](https://openneuro.org/datasets/ds004457) - NeMAR: [ds004457](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) ## API Reference Use 
the `DS004457` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004457(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex * **Study:** `ds004457` (OpenNeuro) * **Author (year):** `Huang2023` * **Canonical:** `Huang2022` Also importable as: `DS004457`, `Huang2023`, `Huang2022`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004457](https://openneuro.org/datasets/ds004457) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004457](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) DOI: [https://doi.org/10.18112/openneuro.ds004457.v1.0.1](https://doi.org/10.18112/openneuro.ds004457.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004457 >>> dataset = DS004457(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004457) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004460: eeg dataset, 20 subjects *EEG and motion capture data set for a full-body/joystick rotation task* Access recordings and metadata through EEGDash. **Citation:** Gramann, K., Hohlefeld, F.U., Gehrke, L., Klug, M (2023). *EEG and motion capture data set for a full-body/joystick rotation task*. 
[10.18112/openneuro.ds004460.v1.1.0](https://doi.org/10.18112/openneuro.ds004460.v1.1.0) Modality: eeg Subjects: 20 Recordings: 40 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004460 dataset = DS004460(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004460(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004460( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004460, title = {EEG and motion capture data set for a full-body/joystick rotation task}, author = {Gramann, K. and Hohlefeld, F.U. and Gehrke, L. and Klug, M}, doi = {10.18112/openneuro.ds004460.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004460.v1.1.0}, } ``` ## About This Dataset An EEG + motion capture data set, analyzed and published in “Gramann, K., Hohlefeld, F. U., Gehrke, L., & Klug, M. (2021). Human cortical dynamics during full-body heading changes. Scientific Reports, 11(1), 18186”. Used as a BIDS-example data set for EEG + motion : [https://github.com/bids-standard/bids-examples/tree/master/motion_spotrotation](https://github.com/bids-standard/bids-examples/tree/master/motion_spotrotation) **Overview** This is the “Spot rotation” dataset. It contains EEG and motion data collected from 20 subjects collected at the Berlin Mobile Brain-Body Imaging Lab, while they rotated their heading in physical space or on flat screen using a joystick. 
A detailed description of the paradigm can be found in the following reference: ### View full README An EEG + motion capture data set, analyzed and published in “Gramann, K., Hohlefeld, F. U., Gehrke, L., & Klug, M. (2021). Human cortical dynamics during full-body heading changes. Scientific Reports, 11(1), 18186”. Used as a BIDS-example data set for EEG + motion: [https://github.com/bids-standard/bids-examples/tree/master/motion_spotrotation](https://github.com/bids-standard/bids-examples/tree/master/motion_spotrotation) **Overview** This is the “Spot rotation” dataset. It contains EEG and motion data collected from 20 subjects at the Berlin Mobile Brain-Body Imaging Lab while they rotated their heading in physical space or on a flat screen using a joystick. A detailed description of the paradigm can be found in the following reference: Gramann, K., Hohlefeld, F. U., Gehrke, L., and Klug, M. “Human cortical dynamics during full-body heading changes”. Scientific Reports 11, 18186 (2021). [https://doi.org/10.1038/s41598-021-97749-8](https://doi.org/10.1038/s41598-021-97749-8) **Citing this dataset** Please cite as follows: Gramann, K., Hohlefeld, F.U., Gehrke, L. et al. Human cortical dynamics during full-body heading changes. Sci Rep 11, 18186 (2021). [https://doi.org/10.1038/s41598-021-97749-8](https://doi.org/10.1038/s41598-021-97749-8) For more information, see the `dataset_description.json` file. **License** This motion_spotrotation dataset is made available under the Creative Commons CC0 license. Information on CC0 can be found here: [https://creativecommons.org/share-your-work/public-domain/cc0/](https://creativecommons.org/share-your-work/public-domain/cc0/) **Format** The dataset is formatted according to the Brain Imaging Data Structure. See the `dataset_description.json` file for the specific version used. Generally, you can find data in the .tsv files and descriptions in the accompanying .json files. 
An important BIDS definition to consider is the “Inheritance Principle”, which is described in the BIDS specification under the following link: [https://bids-specification.rtfd.io/en/stable/02-common-principles.html#the-inheritance-principle](https://bids-specification.rtfd.io/en/stable/02-common-principles.html#the-inheritance-principle) The section states that: > Any metadata file (such as .json, .bvec or .tsv) may be defined at any directory level, > but no more than one applicable file may be defined at a given level […] > The values from the top level are inherited by all lower levels unless > they are overridden by a file at the lower level. **Details about the experiment** For a detailed description of the task, see Gramann et al. (2021). What follows is a brief summary. Data were collected from 20 healthy adults (11 females) with a mean age of 30.25 years (SD = 7.68, ranging from 20 to 46 years) who received 10€/h or course credit as compensation. All participants reported normal or corrected-to-normal vision and no history of neurological disease. Eighteen participants reported being right-handed (two left-handed). To control for the effects of different reference frame proclivities on neural dynamics, the online version of the spatial reference frame proclivity test (RFPT) was administered prior to the experiment. Participants had to consistently use an ego- or allocentric reference frame in at least 80% of their responses. Of the 20 participants, nine preferentially used an egocentric reference frame, nine used an allocentric reference frame, and two used a mixed strategy. One participant (egocentric reference frame) dropped out of the experiment after the first block due to motion sickness and was removed from further data analyses. The reported results are based on the remaining 19 participants. 
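The Inheritance Principle quoted above amounts to a root-to-leaf merge of sidecar files: each deeper sidecar overrides the keys it redefines and inherits the rest. A minimal sketch, using plain dictionaries to stand in for the JSON sidecars (the field names are standard BIDS EEG sidecar keys; the values are illustrative):

```python
def merge_sidecars(*sidecars: dict) -> dict:
    """Apply the Inheritance Principle: sidecars are applied from the
    dataset root down, so deeper files override inherited keys."""
    merged: dict = {}
    for sidecar in sidecars:  # ordered from top level to lowest level
        merged.update(sidecar)
    return merged

# A top-level eeg.json applies to every recording...
top_level = {"SamplingFrequency": 1000, "PowerLineFrequency": 50, "EEGReference": "FCz"}
# ...unless a lower-level sidecar overrides one of its fields.
subject_level = {"PowerLineFrequency": 60}

print(merge_sidecars(top_level, subject_level))
# {'SamplingFrequency': 1000, 'PowerLineFrequency': 60, 'EEGReference': 'FCz'}
```

BIDS readers such as `mne-bids` perform this resolution automatically; the sketch only makes the override order explicit.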
The experimental procedures were approved by the local ethics committee (Technische Universität Berlin, Germany) and the research was performed in accordance with the ethics guidelines. The study was conducted in accordance with the Declaration of Helsinki and all participants signed written informed consent. Participants performed a spatial orientation task in a sparse virtual environment (WorldViz Vizard, Santa Barbara, USA) consisting of an infinite floor granulated in green and black. The experiment was self-paced, and participants advanced it by starting and ending each trial with a button press using the index finger of the dominant hand. A trial started with the onset of a red pole, which participants had to face and align with. Once the button was pressed, the pole disappeared and was immediately replaced by a red sphere floating at eye level. The sphere automatically started to move around the participant along a circular trajectory at a fixed distance (30 m) with one of two different velocity profiles. Participants were asked to rotate on the spot and to follow the sphere, keeping it in the center of their visual field (outward rotation). The sphere stopped unpredictably at varying eccentricities between 30° and 150° and turned blue, which indicated that participants had to rotate back to the initial heading (backward rotation). When participants had reproduced their estimated initial heading, they confirmed their heading with a button press and the red pole reappeared for reorientation. The participants completed the experimental task twice: (i) using a traditional desktop 2D setup (visual flow controlled through joystick movement; “joyR”), and (ii) equipped with a MoBI setup (visual flow controlled through active physical rotation with the whole body; “physR”). The condition order was balanced across participants. To ensure the comparability of both rotation conditions, participants carried the full motion capture system at all times. 
In the joyR condition, participants stood in the dimly lit experimental hall in front of a standard TV monitor (1.5 m viewing distance, HD resolution, 60 Hz refresh rate, 40″ diagonal size) and were instructed to move as little as possible. They followed the sphere by tilting the joystick and were thus only able to use visual flow information to complete the task. In the physical rotation condition, participants were situated in a 3D virtual reality environment using a head-mounted display (HTC Vive; 2 × 1080 × 1200 resolution, 90 Hz refresh rate, 110° field of view). Participants’ movements were unconstrained, i.e., in order to follow the sphere they physically rotated on the spot, thus enabling them to use motor and kinesthetic information (i.e., vestibular input and proprioception) in addition to the visual flow for completing the task. If participants diverged from the center position as determined through motion capture of the head position, the task automatically halted and participants were asked to regain center position, indicated by a yellow floating sphere, before continuing with the task. Each movement condition was preceded by recording a three-minute baseline, during which the participants were instructed to stand still and to look straight ahead. Data Recordings: EEG. EEG data were recorded from 157 active electrodes with a sampling rate of 1000 Hz and band-pass filtered from 0.016 Hz to 500 Hz (BrainAmp Move System, Brain Products, Gilching, Germany). Using an elastic cap with an equidistant design (EASYCAP, Herrsching, Germany), 129 electrodes were placed on the scalp, and 28 electrodes were placed around the neck using a custom neckband (EASYCAP, Herrsching, Germany) in order to record neck muscle activity. Data were referenced to an electrode located closest to the standard position FCz. Impedances were kept below 10 kΩ for standard locations on the scalp, and below 50 kΩ for the neckband. 
Electrode locations were digitized using an optical tracking system (Polaris Vicra, NDI, Waterloo, ON, Canada). Data Recordings: Motion Capture. Two different motion capture data sources were used: 19 red active light-emitting diodes (LEDs) were captured using 31 cameras of the Impulse X2 System (PhaseSpace Inc., San Leandro, CA, USA) with a sampling rate of 90 Hz. They were placed on the feet (2 × 4 LEDs), around the hips (5 LEDs), on the shoulders (4 LEDs), and on the HTC Vive (2 LEDs; to account for an offset in yaw angle between the PhaseSpace and the HTC Vive tracking). Except for the two LEDs on the HTC Vive, they were subsequently grouped together to form rigid body parts of feet, hip, and shoulders, enabling tracking with six degrees of freedom (x, y, and z position and roll, yaw, and pitch orientation) per body part. Head motion capture data (position and orientation) were acquired using the HTC Lighthouse tracking system with a 90 Hz sampling rate, since it was also used for the positional tracking of the virtual reality view. The original data were recorded in `.xdf` format using labstreaminglayer ([https://github.com/sccn/labstreaminglayer](https://github.com/sccn/labstreaminglayer)). It is stored in the `/sourcedata` directory. To comply with the BIDS format, the .xdf format was converted to BrainVision format (see the `.eeg` file for binary EEG data, the `.vhdr` as a text header file containing metadata, and the `.vmrk` as a text file storing the EEG markers). 
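Because the `.vhdr` header described above is plain text in an INI-like layout, its key fields can be inspected with the standard library alone. A minimal sketch, assuming a BrainVision-style header: the section and key names (`Common Infos`, `NumberOfChannels`, `SamplingInterval`) follow the BrainVision header convention, while the embedded header text and file names are illustrative, and real `.vhdr` files begin with a banner line and carry additional sections:

```python
import configparser

# Minimal BrainVision-style header text (illustrative values).
VHDR = """
[Common Infos]
DataFile=sub-01_task-rotation_eeg.eeg
MarkerFile=sub-01_task-rotation_eeg.vmrk
NumberOfChannels=157
SamplingInterval=1000
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # keep BrainVision keys case-sensitive
parser.read_string(VHDR)

common = parser["Common Infos"]
n_channels = int(common["NumberOfChannels"])
# SamplingInterval is expressed in microseconds per sample.
sfreq = 1e6 / float(common["SamplingInterval"])
print(n_channels, sfreq)  # 157 1000.0
```

In practice you would not parse the header yourself: MNE-Python reads the triplet directly via `mne.io.read_raw_brainvision` on the `.vhdr` path, which is also what EEGDash relies on when loading these recordings.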
## Dataset Information | Dataset ID | `DS004460` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG and motion capture data set for a full-body/joystick rotation task | | Author (year) | `Gramann2023` | | Canonical | — | | Importable as | `DS004460`, `Gramann2023` | | Year | 2023 | | Authors | Gramann, K., Hohlefeld, F.U., Gehrke, L., Klug, M | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004460.v1.1.0](https://doi.org/10.18112/openneuro.ds004460.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004460) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) | [Source URL](https://openneuro.org/datasets/ds004460) | ### Copy-paste BibTeX ```bibtex @dataset{ds004460, title = {EEG and motion capture data set for a full-body/joystick rotation task}, author = {Gramann, K. and Hohlefeld, F.U. and Gehrke, L. and Klug, M}, doi = {10.18112/openneuro.ds004460.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004460.v1.1.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 40 - Tasks: 1 - Channels: 160 - Sampling rate (Hz): 1000.0 - Duration (hours): 27.49372888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 59.1 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004460.v1.1.0 - Source: openneuro - OpenNeuro: [ds004460](https://openneuro.org/datasets/ds004460) - NeMAR: [ds004460](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) ## API Reference Use the `DS004460` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG and motion capture data set for a full-body/joystick rotation task * **Study:** `ds004460` (OpenNeuro) * **Author (year):** `Gramann2023` * **Canonical:** — Also importable as: `DS004460`, `Gramann2023`. Modality: `eeg`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004460](https://openneuro.org/datasets/ds004460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004460](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) DOI: [https://doi.org/10.18112/openneuro.ds004460.v1.1.0](https://doi.org/10.18112/openneuro.ds004460.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004460 >>> dataset = DS004460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004460) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004473: ieeg dataset, 8 subjects *sEEG Forced Two-Choice Task* Access recordings and metadata through EEGDash. **Citation:** Alexander P. Rockhill, Alessandra Mantovani, Brittany Stedelin, Admed M. Raslan, Nicole C. Swann (2023). *sEEG Forced Two-Choice Task*. 
[10.18112/openneuro.ds004473.v1.0.1](https://doi.org/10.18112/openneuro.ds004473.v1.0.1) Modality: ieeg Subjects: 8 Recordings: 8 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004473 dataset = DS004473(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004473(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004473( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004473, title = {sEEG Forced Two-Choice Task}, author = {Alexander P. Rockhill and Alessandra Mantovani and Brittany Stedelin and Admed M. Raslan and Nicole C. Swann}, doi = {10.18112/openneuro.ds004473.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004473.v1.0.1}, } ``` ## About This Dataset Welcome to our dataset! Here we present stereoelectroencephalography data from a forced two-choice response task collected in the epilepsy monitoring unit at Oregon Health & Science University. The data were analyzed in collaboration with the University of Oregon. The accompanying paper is the first reference below. **References** Rockhill, A. P., Mantovani, A., Stedelin, B., Nerison, C. S., Raslan, A. M., & Swann, N. C. (2022). Stereo-EEG recordings extend known distributions of canonical movement-related oscillations. Journal of Neural Engineering. 
[https://doi.org/10.1088/1741-2552/acae0a](https://doi.org/10.1088/1741-2552/acae0a) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS004473` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | sEEG Forced Two-Choice Task | | Author (year) | `Rockhill2023` | | Canonical | `Rockhill2022` | | Importable as | `DS004473`, `Rockhill2023`, `Rockhill2022` | | Year | 2023 | | Authors | Alexander P. Rockhill, Alessandra Mantovani, Brittany Stedelin, Admed M. Raslan, Nicole C. Swann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004473.v1.0.1](https://doi.org/10.18112/openneuro.ds004473.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004473) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) | [Source URL](https://openneuro.org/datasets/ds004473) | ### Copy-paste BibTeX ```bibtex @dataset{ds004473, title = {sEEG Forced Two-Choice Task}, author = {Alexander P. Rockhill and Alessandra Mantovani and Brittany Stedelin and Admed M. Raslan and Nicole C. 
Swann}, doi = {10.18112/openneuro.ds004473.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004473.v1.0.1}, } ``` ## Technical Details - Subjects: 8 - Recordings: 8 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 999.4121105232217 - Duration (hours): 6.985437776470588 - Pathology: Epilepsy - Modality: Visual - Type: Motor - Size on disk: 6.3 GB - File count: 8 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004473.v1.0.1 - Source: openneuro - OpenNeuro: [ds004473](https://openneuro.org/datasets/ds004473) - NeMAR: [ds004473](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) ## API Reference Use the `DS004473` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Forced Two-Choice Task * **Study:** `ds004473` (OpenNeuro) * **Author (year):** `Rockhill2023` * **Canonical:** `Rockhill2022` Also importable as: `DS004473`, `Rockhill2023`, `Rockhill2022`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004473](https://openneuro.org/datasets/ds004473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004473](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) DOI: [https://doi.org/10.18112/openneuro.ds004473.v1.0.1](https://doi.org/10.18112/openneuro.ds004473.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004473 >>> dataset = DS004473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004473) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004475: eeg dataset, 30 subjects *Mobile EEG split-belt walking study* Access recordings and metadata through EEGDash. **Citation:** Noelle A. Jacobsen, Daniel P. Ferris (2023). *Mobile EEG split-belt walking study*. 
[10.18112/openneuro.ds004475.v1.0.3](https://doi.org/10.18112/openneuro.ds004475.v1.0.3) Modality: eeg Subjects: 30 Recordings: 30 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004475 dataset = DS004475(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004475(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004475( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004475, title = {Mobile EEG split-belt walking study}, author = {Noelle A. Jacobsen and Daniel P. Ferris}, doi = {10.18112/openneuro.ds004475.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004475.v1.0.3}, } ``` ## About This Dataset This mobile brain body imaging (MoBI) experiment investigates brain activity correlated to gait adaptation during split-belt treadmill walking. 30 participants completed an abrupt and gradual split-belt walking paradigm (2:1 belt speed ratio). ## Dataset Information | Dataset ID | `DS004475` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mobile EEG split-belt walking study | | Author (year) | `Jacobsen2023` | | Canonical | — | | Importable as | `DS004475`, `Jacobsen2023` | | Year | 2023 | | Authors | Noelle A. Jacobsen, Daniel P. 
Ferris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004475.v1.0.3](https://doi.org/10.18112/openneuro.ds004475.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004475) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) | [Source URL](https://openneuro.org/datasets/ds004475) | ### Copy-paste BibTeX ```bibtex @dataset{ds004475, title = {Mobile EEG split-belt walking study}, author = {Noelle A. Jacobsen and Daniel P. Ferris}, doi = {10.18112/openneuro.ds004475.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004475.v1.0.3}, } ``` ## Technical Details - Subjects: 30 - Recordings: 30 - Tasks: 1 - Channels: 260 (5), 250 (3), 263 (3), 255 (3), 257 (3), 258 (3), 259 (2), 256, 249, 252, 265, 254, 253, 261, 262 - Sampling rate (Hz): 512.0 - Duration (hours): 26.898611111111112 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 48.5 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004475.v1.0.3 - Source: openneuro - OpenNeuro: [ds004475](https://openneuro.org/datasets/ds004475) - NeMAR: [ds004475](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) ## API Reference Use the `DS004475` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004475(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mobile EEG split-belt walking study * **Study:** `ds004475` (OpenNeuro) * **Author (year):** `Jacobsen2023` * **Canonical:** — Also importable as: `DS004475`, `Jacobsen2023`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004475](https://openneuro.org/datasets/ds004475) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004475](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) DOI: [https://doi.org/10.18112/openneuro.ds004475.v1.0.3](https://doi.org/10.18112/openneuro.ds004475.v1.0.3) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004475 >>> dataset = DS004475(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004475) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004477: eeg dataset, 9 subjects *PES - Pandemic Emergency Scenario* Access recordings and metadata through EEGDash. **Citation:** Tasos Papastylianou, Rodrigo Ramele, Luca Citi, Caterina Cinel, Riccardo Poli (2023). *PES - Pandemic Emergency Scenario*. [10.18112/openneuro.ds004477.v1.0.2](https://doi.org/10.18112/openneuro.ds004477.v1.0.2) Modality: eeg Subjects: 9 Recordings: 9 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004477 dataset = DS004477(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004477(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004477( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004477, title = {PES - Pandemic Emergency Scenario}, author = {Tasos Papastylianou and Rodrigo Ramele and Luca Citi and Caterina Cinel and Riccardo Poli}, doi = {10.18112/openneuro.ds004477.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004477.v1.0.2}, } ``` ## About This Dataset Experiment: PES is a complex, strategic decision-making “Pandemic” experiment. In this experiment, users were shown a map describing the spread of a pandemic emergency across various locations. Resources (medicines and personnel) are allocated to a few cities at the beginning. The user must allocate more resources to new cities that are displayed on the map, keeping in mind that resources are limited and that handing over all resources could mean that new cities (if displayed) might not get any. In this experiment, 9 participants are paired with an artificial agent and decide on resource allocation in this scenario, reporting their confidence in each decision. The experiment is divided into 64 sequences. Neurophysiological markers and behavioural information are obtained for each participant as they provide the number of allocated resources and their own subjective perception of the accuracy of each response for each trial. There is a span of 10 seconds during which the participant can press the mouse button (the Hold Response event), drag the mouse upwards while keeping the button pressed, thereby increasing the number of plus symbols that appear around the city icon, or downwards to decrease them, and finally release the mouse button when the decision is made (the Release Response event). Immediately after that, there is an additional span of 5 seconds during which the participant reports the confidence in their decision by moving the mouse wheel. 
After that (the End-of-trial event), a black screen replaces the map, and the responses from the other players are shown for 2 seconds. Each participant sat comfortably at about 1 meter from an LCD monitor and wore an EEG cap connected to a Biosemi ActiveTwo system. Wet electrodes were used, and recordings were performed with 64 electrodes placed according to the International 10-20 System. Eight additional external channels were also included: two measuring the electrocardiogram (ECG) and four measuring the electrooculogram (EOG) signal. The EEG data were sampled at 2048 Hz. Ethical Statement: The study complied at all times with the Declaration of Helsinki ethical guidelines for research involving human subjects; formal ethical approval was granted by the Ministry of Defence Research Ethics Committee (MoDREC) – Application No: 983/MoDREC/19, first approved on 5th September 2019, with revisions (ver. 3) approved on the 3rd of June 2021. Acknowledgment: This research was supported by the Defence Science and Technology Laboratory (Dstl) on behalf of the UK Ministry of Defence (MOD) via funding from the US/UK DoD Bilateral Academic Research Initiative (BARI). 
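The trial timeline described above (up to 10 s for the hold/release response, 5 s for the confidence report, 2 s of feedback) maps directly onto sample counts at the stated 2048 Hz sampling rate, which is what matters when epoching around the Hold Response and Release Response events. A minimal sketch: the phase names and durations come from the description above, but treating each phase as fixed-length is an illustrative simplification, since the response window is self-paced up to 10 s:

```python
SFREQ = 2048  # Hz, the stated EEG sampling rate

# Maximum phase durations in seconds, per the task description.
phases = {
    "response (hold -> release)": 10.0,
    "confidence report": 5.0,
    "feedback": 2.0,
}

samples = {name: int(dur * SFREQ) for name, dur in phases.items()}
total = sum(samples.values())
print(samples)
print(f"max samples per trial: {total}")  # 34816 (17 s at 2048 Hz)
```

The same arithmetic gives the `tmin`/`tmax` bounds you would pass to an MNE epoching call when cutting the continuous recording around each event.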
Code: [https://github.com/BCI-NE/PES](https://github.com/BCI-NE/PES) ## Dataset Information | Dataset ID | `DS004477` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PES - Pandemic Emergency Scenario | | Author (year) | `Papastylianou2023` | | Canonical | — | | Importable as | `DS004477`, `Papastylianou2023` | | Year | 2023 | | Authors | Tasos Papastylianou, Rodrigo Ramele, Luca Citi, Caterina Cinel, Riccardo Poli | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004477.v1.0.2](https://doi.org/10.18112/openneuro.ds004477.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004477) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004477) | [Source URL](https://openneuro.org/datasets/ds004477) | ### Copy-paste BibTeX ```bibtex @dataset{ds004477, title = {PES - Pandemic Emergency Scenario}, author = {Tasos Papastylianou and Rodrigo Ramele and Luca Citi and Caterina Cinel and Riccardo Poli}, doi = {10.18112/openneuro.ds004477.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004477.v1.0.2}, } ``` ## Technical Details - Subjects: 9 - Recordings: 9 - Tasks: 1 - Channels: 80 - Sampling rate (Hz): 2048.0 - Duration (hours): 13.557221001519098 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 22.3 GB - File count: 9 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004477.v1.0.2 - Source: openneuro - OpenNeuro: [ds004477](https://openneuro.org/datasets/ds004477) - NeMAR: [ds004477](https://nemar.org/dataexplorer/detail?dataset_id=ds004477) ## API Reference Use the `DS004477` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PES - Pandemic Emergency Scenario * **Study:** `ds004477` (OpenNeuro) * **Author (year):** `Papastylianou2023` * **Canonical:** — Also importable as: `DS004477`, `Papastylianou2023`. Modality: `eeg`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
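The note above says the user `query` is ANDed with the dataset filter and must not itself contain the key `dataset`. A minimal sketch of that merging behaviour, using a hypothetical helper (the real logic lives inside `EEGDashDataset`, not in this function):

```python
# Hypothetical sketch of how a user query is ANDed with the dataset filter;
# the actual merge is performed internally by EEGDashDataset.
def merge_query(dataset_id, user_query=None):
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

# A MongoDB-style subject filter combined with the ds004477 selection:
print(merge_query("ds004477", {"subject": {"$in": ["01", "02"]}}))
```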
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004477](https://openneuro.org/datasets/ds004477) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004477](https://nemar.org/dataexplorer/detail?dataset_id=ds004477) DOI: [https://doi.org/10.18112/openneuro.ds004477.v1.0.2](https://doi.org/10.18112/openneuro.ds004477.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004477 >>> dataset = DS004477(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004477) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004477) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004483: meg dataset, 19 subjects *ABSeqMEG* Access recordings and metadata through EEGDash. **Citation:** Samuel Planton\*, Fosca Al Roumi\*, Liping Wang, Stanislas Dehaene (2023). *ABSeqMEG*. 
[10.18112/openneuro.ds004483.v1.0.0](https://doi.org/10.18112/openneuro.ds004483.v1.0.0) Modality: meg Subjects: 19 Recordings: 282 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004483 dataset = DS004483(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004483(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004483( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004483, title = {ABSeqMEG}, author = {Samuel Planton* and Fosca Al Roumi* and Liping Wang and Stanislas Dehaene}, doi = {10.18112/openneuro.ds004483.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004483.v1.0.0}, } ``` ## About This Dataset This dataset contains the MEG data from the article entitled Compression of binary sound sequences in human working memory [https://www.biorxiv.org/content/10.1101/2022.10.15.512361v1](https://www.biorxiv.org/content/10.1101/2022.10.15.512361v1) According to the language of thought hypothesis, regular sequences are compressed in human working memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing working memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magneto-encephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. 
Occasional deviant sounds probed the participants’ knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to minimal description length (MDL) in our formal language. Furthermore, activity should increase with MDL for learned sequences, and decrease with MDL for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures. ## Dataset Information | Dataset ID | `DS004483` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ABSeqMEG | | Author (year) | `Planton2023` | | Canonical | `ABSeqMEG` | | Importable as | `DS004483`, `Planton2023`, `ABSeqMEG` | | Year | 2023 | | Authors | Samuel Planton\*, Fosca Al Roumi\*, Liping Wang, Stanislas Dehaene | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004483.v1.0.0](https://doi.org/10.18112/openneuro.ds004483.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004483) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004483) | [Source URL](https://openneuro.org/datasets/ds004483) | ### Copy-paste BibTeX ```bibtex @dataset{ds004483, title = {ABSeqMEG}, author = {Samuel Planton* and Fosca Al Roumi* and Liping Wang and Stanislas Dehaene}, doi = {10.18112/openneuro.ds004483.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004483.v1.0.0}, 
} ``` ## Technical Details - Subjects: 19 - Recordings: 282 - Tasks: 1 - Channels: 396 - Sampling rate (Hz): 250.0 - Duration (hours): 16.683036666666666 - Pathology: Healthy - Modality: Auditory - Type: Memory - Size on disk: 23.4 GB - File count: 282 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004483.v1.0.0 - Source: openneuro - OpenNeuro: [ds004483](https://openneuro.org/datasets/ds004483) - NeMAR: [ds004483](https://nemar.org/dataexplorer/detail?dataset_id=ds004483) ## API Reference Use the `DS004483` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ABSeqMEG * **Study:** `ds004483` (OpenNeuro) * **Author (year):** `Planton2023` * **Canonical:** `ABSeqMEG` Also importable as: `DS004483`, `Planton2023`, `ABSeqMEG`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 19; recordings: 282; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004483](https://openneuro.org/datasets/ds004483) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004483](https://nemar.org/dataexplorer/detail?dataset_id=ds004483) DOI: [https://doi.org/10.18112/openneuro.ds004483.v1.0.0](https://doi.org/10.18112/openneuro.ds004483.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004483 >>> dataset = DS004483(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004483) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004483) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004502: eeg dataset, 48 subjects *Anticipatory differences between Attention and Expectation* Access recordings and metadata through EEGDash. **Citation:** Jose M. G. Penalver, David Lopez-Garcia, Blanca Aguado-Lopez, Carlos Gonzalez-Garcia, Maria Ruz (2023). *Anticipatory differences between Attention and Expectation*. 
[10.18112/openneuro.ds004502.v1.0.1](https://doi.org/10.18112/openneuro.ds004502.v1.0.1) Modality: eeg Subjects: 48 Recordings: 48 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004502 dataset = DS004502(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004502(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004502( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004502, title = {Anticipatory differences between Attention and Expectation}, author = {Jose M. G. Penalver and David Lopez-Garcia and Blanca Aguado-Lopez and Carlos Gonzalez-Garcia and Maria Ruz}, doi = {10.18112/openneuro.ds004502.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004502.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS004502` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Anticipatory differences between Attention and Expectation | | Author (year) | `Penalver2023` | | Canonical | `Penalver2024` | | Importable as | `DS004502`, `Penalver2023`, `Penalver2024` | | Year | 2023 | | Authors | Jose M. G. 
Penalver, David Lopez-Garcia, Blanca Aguado-Lopez, Carlos Gonzalez-Garcia, Maria Ruz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004502.v1.0.1](https://doi.org/10.18112/openneuro.ds004502.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004502) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004502) | [Source URL](https://openneuro.org/datasets/ds004502) | ### Copy-paste BibTeX ```bibtex @dataset{ds004502, title = {Anticipatory differences between Attention and Expectation}, author = {Jose M. G. Penalver and David Lopez-Garcia and Blanca Aguado-Lopez and Carlos Gonzalez-Garcia and Maria Ruz}, doi = {10.18112/openneuro.ds004502.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004502.v1.0.1}, } ``` ## Technical Details - Subjects: 48 - Recordings: 48 - Tasks: 1 - Channels: 63 (44), 65 (4) - Sampling rate (Hz): 1000.0 (44), 500.0 (4) - Duration (hours): 92.62319444444444 - Pathology: Healthy - Modality: — - Type: Attention - Size on disk: 59.4 GB - File count: 48 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004502.v1.0.1 - Source: openneuro - OpenNeuro: [ds004502](https://openneuro.org/datasets/ds004502) - NeMAR: [ds004502](https://nemar.org/dataexplorer/detail?dataset_id=ds004502) ## API Reference Use the `DS004502` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Anticipatory differences between Attention and Expectation * **Study:** `ds004502` (OpenNeuro) * **Author (year):** `Penalver2023` * **Canonical:** `Penalver2024` Also importable as: `DS004502`, `Penalver2023`, `Penalver2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004502](https://openneuro.org/datasets/ds004502) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004502](https://nemar.org/dataexplorer/detail?dataset_id=ds004502) DOI: [https://doi.org/10.18112/openneuro.ds004502.v1.0.1](https://doi.org/10.18112/openneuro.ds004502.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004502 >>> dataset = DS004502(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004502) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004502) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004504: eeg dataset, 88 subjects *A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects* Access recordings and metadata through EEGDash. **Citation:** Andreas Miltiadous, Katerina D. Tzimourta, Theodora Afrantou, Panagiotis Ioannidis, Nikolaos Grigoriadis, Dimitrios G. Tsalikakis, Pantelis Angelidis, Markos G. Tsipouras, Evripidis Glavas, Nikolaos Giannakeas, Alexandros T. Tzallas (2023). *A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects*. [10.18112/openneuro.ds004504.v1.0.8](https://doi.org/10.18112/openneuro.ds004504.v1.0.8) Modality: eeg Subjects: 88 Recordings: 88 License: CC0 Source: openneuro Citations: 55.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004504 dataset = DS004504(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004504(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004504( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004504, title = {A dataset of EEG recordings from: Alzheimer's disease, Frontotemporal dementia and Healthy subjects}, author = {Andreas Miltiadous and Katerina D. Tzimourta and Theodora Afrantou and Panagiotis Ioannidis and Nikolaos Grigoriadis and Dimitrios G. Tsalikakis and Pantelis Angelidis and Markos G. Tsipouras and Evripidis Glavas and Nikolaos Giannakeas and Alexandros T. Tzallas}, doi = {10.18112/openneuro.ds004504.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004504.v1.0.8}, } ``` ## About This Dataset This dataset contains resting-state, eyes-closed EEG recordings from 88 subjects in total. Participants: 36 of them were diagnosed with Alzheimer’s disease (AD group), 23 were diagnosed with Frontotemporal Dementia (FTD group) and 29 were healthy subjects (CN group). Cognitive and neuropsychological state was evaluated by the international Mini-Mental State Examination (MMSE). MMSE score ranges from 0 to 30, with lower MMSE indicating more severe cognitive decline. The duration of the disease was measured in months and the median value was 25 with IQR range (Q1-Q3) being 24 - 28.5 months. Concerning the AD group, no dementia-related comorbidities have been reported. The average MMSE for the AD group was 17.75 (sd=4.5), for the FTD group was 22.17 (sd=8.22) and for the CN group was 30. The mean age of the AD group was 66.4 (sd=7.9), for the FTD group was 63.6 (sd=8.2), and for the CN group was 67.9 (sd=5.4). Recordings: Recordings were acquired from the 2nd Department of Neurology of AHEPA General Hospital of Thessaloniki by an experienced team of neurologists. For recording, a Nihon Kohden EEG 2100 clinical device was used, with 19 scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2) according to the 10-20 international system and 2 reference electrodes (A1 and A2) placed on the mastoids for impedance check, according to the manual of the device. 
Each recording was performed according to the clinical protocol with participants in a sitting position with their eyes closed. Before the initialization of each recording, the skin impedance value was ensured to be below 5 kΩ. The sampling rate was 500 Hz with a 10 µV/mm resolution. The recording montages were anterior-posterior bipolar and a referential montage using Cz as the common reference. The referential montage was included in this dataset. The recordings were acquired with the following amplifier parameters: sensitivity 10 µV/mm, time constant 0.3 s, and high-frequency filter at 70 Hz. Each recording lasted approximately 13.5 minutes for the AD group (min=5.1, max=21.3), 12 minutes for the FTD group (min=7.9, max=16.9) and 13.8 minutes for the CN group (min=12.5, max=16.5). In total, 485.5 minutes of AD, 276.5 minutes of FTD and 402 minutes of CN recordings were collected and are included in the dataset. Preprocessing: The EEG recordings were exported in .eeg format and transformed to the BIDS-accepted .set format for inclusion in the dataset. Automatic annotations of the Nihon Kohden EEG device marking artifacts (muscle activity, blinking, swallowing) have not been included for language compatibility purposes (if this is an issue, please use the preprocessed dataset in the derivatives folder). The unprocessed EEG recordings are included in folders named: sub-0XX. Folders named sub-0XX in the subfolder derivatives contain the preprocessed and denoised EEG recordings. The preprocessing pipeline of the EEG signals is as follows. First, a 0.5-45 Hz Butterworth band-pass filter was applied and the signals were re-referenced to A1-A2. Then, the Artifact Subspace Reconstruction (ASR) routine, an EEG artifact-correction method included in the EEGLAB MATLAB software, was applied to the signals, removing bad data periods that exceeded the maximum acceptable 0.5-second-window standard deviation of 17, a conservative threshold. 
Next, the Independent Component Analysis (ICA) method (RunICA algorithm) was performed, transforming the 19 EEG signals into 19 ICA components. ICA components that were classified as “eye artifacts” or “jaw artifacts” by the automatic classification routine “ICLabel” in the EEGLAB platform were automatically rejected. It should be noted that, even though the recording was performed in a resting-state, eyes-closed condition, eye-movement artifacts were still found in some EEG recordings. A complete analysis of this dataset can be found in the published Data Descriptor paper “A Dataset of Scalp EEG Recordings of Alzheimer’s Disease, Frontotemporal Dementia and Healthy Subjects from Routine EEG”, [https://doi.org/10.3390/data8060095](https://doi.org/10.3390/data8060095) ## Dataset Information | Dataset ID | `DS004504` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects | | Author (year) | `Miltiadous2023` | | Canonical | — | | Importable as | `DS004504`, `Miltiadous2023` | | Year | 2023 | | Authors | Andreas Miltiadous, Katerina D. Tzimourta, Theodora Afrantou, Panagiotis Ioannidis, Nikolaos Grigoriadis, Dimitrios G. Tsalikakis, Pantelis Angelidis, Markos G. Tsipouras, Evripidis Glavas, Nikolaos Giannakeas, Alexandros T. 
Tzallas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004504.v1.0.8](https://doi.org/10.18112/openneuro.ds004504.v1.0.8) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004504) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004504) | [Source URL](https://openneuro.org/datasets/ds004504) | ### Copy-paste BibTeX ```bibtex @dataset{ds004504, title = {A dataset of EEG recordings from: Alzheimer's disease, Frontotemporal dementia and Healthy subjects}, author = {Andreas Miltiadous and Katerina D. Tzimourta and Theodora Afrantou and Panagiotis Ioannidis and Nikolaos Grigoriadis and Dimitrios G. Tsalikakis and Pantelis Angelidis and Markos G. Tsipouras and Evripidis Glavas and Nikolaos Giannakeas and Alexandros T. Tzallas}, doi = {10.18112/openneuro.ds004504.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004504.v1.0.8}, } ``` ## Technical Details - Subjects: 88 - Recordings: 88 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 500.0 - Duration (hours): 19.608416666666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.6 GB - File count: 88 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004504.v1.0.8 - Source: openneuro - OpenNeuro: [ds004504](https://openneuro.org/datasets/ds004504) - NeMAR: [ds004504](https://nemar.org/dataexplorer/detail?dataset_id=ds004504) ## API Reference Use the `DS004504` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004504(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects * **Study:** `ds004504` (OpenNeuro) * **Author (year):** `Miltiadous2023` * **Canonical:** — Also importable as: `DS004504`, `Miltiadous2023`. Modality: `eeg`. Subjects: 88; recordings: 88; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004504](https://openneuro.org/datasets/ds004504) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004504](https://nemar.org/dataexplorer/detail?dataset_id=ds004504) DOI: [https://doi.org/10.18112/openneuro.ds004504.v1.0.8](https://doi.org/10.18112/openneuro.ds004504.v1.0.8) NEMAR citation count: 55 ### Examples ```pycon >>> from eegdash.dataset import DS004504 >>> dataset = DS004504(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004504) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004504) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004505: eeg dataset, 25 subjects *Real World Table Tennis* Access recordings and metadata through EEGDash. **Citation:** Amanda Studnicki, Daniel P. Ferris (2023). *Real World Table Tennis*. [10.18112/openneuro.ds004505.v1.0.4](https://doi.org/10.18112/openneuro.ds004505.v1.0.4) Modality: eeg Subjects: 25 Recordings: 25 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004505 dataset = DS004505(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004505(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004505( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004505, title = {Real World Table Tennis}, author = {Amanda Studnicki and Daniel P. 
Ferris}, doi = {10.18112/openneuro.ds004505.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004505.v1.0.4}, } ``` ## About This Dataset Our dataset contains high-density, dual-layer electroencephalography (EEG), neck electromyography (EMG), inertial measurement unit (IMU) acceleration, T1 structural MR images, and video data from 25 participants playing real-world table tennis. Participants played 60 minutes of table tennis (in total) with a ball machine and a human player, with an additional 10 minutes of standing baseline. For 17 of the participants, we also include video data of all trials. The Adobe Premiere project files (linked to each video) have the timing of hit events marked. Data in the main subject folders have been processed. We include the ICA decomposition and dipole model in EEG.etc. The components retained in our analyses are shown in EEG.etc.KeepComponents. The raw data can be found in the sourcedata folder. Please refer to our publication for more details. ## Dataset Information | Dataset ID | `DS004505` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Real World Table Tennis | | Author (year) | `Studnicki2023` | | Canonical | — | | Importable as | `DS004505`, `Studnicki2023` | | Year | 2023 | | Authors | Amanda Studnicki, Daniel P. Ferris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004505.v1.0.4](https://doi.org/10.18112/openneuro.ds004505.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004505) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004505) | [Source URL](https://openneuro.org/datasets/ds004505) | ### Copy-paste BibTeX ```bibtex @dataset{ds004505, title = {Real World Table Tennis}, author = {Amanda Studnicki and Daniel P. 
Ferris}, doi = {10.18112/openneuro.ds004505.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004505.v1.0.4}, } ``` ## Technical Details - Subjects: 25 - Recordings: 25 - Tasks: 1 - Channels: 313 (13), 270 (4), 299 (2), 312 (2), 303, 327, 326, 340 - Sampling rate (Hz): 250.0 - Duration (hours): 30.398184444444443 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 34.6 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004505.v1.0.4 - Source: openneuro - OpenNeuro: [ds004505](https://openneuro.org/datasets/ds004505) - NeMAR: [ds004505](https://nemar.org/dataexplorer/detail?dataset_id=ds004505) ## API Reference Use the `DS004505` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real World Table Tennis * **Study:** `ds004505` (OpenNeuro) * **Author (year):** `Studnicki2023` * **Canonical:** — Also importable as: `DS004505`, `Studnicki2023`. Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004505](https://openneuro.org/datasets/ds004505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004505](https://nemar.org/dataexplorer/detail?dataset_id=ds004505) DOI: [https://doi.org/10.18112/openneuro.ds004505.v1.0.4](https://doi.org/10.18112/openneuro.ds004505.v1.0.4) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS004505 >>> dataset = DS004505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004505) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004505) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004511: eeg dataset, 45 subjects *Deception_data* Access recordings and metadata through EEGDash. **Citation:** Makowski, Dominique, Pham, Tam, Lau, Zen Juen (2023). *Deception_data*. 
[10.18112/openneuro.ds004511.v1.0.2](https://doi.org/10.18112/openneuro.ds004511.v1.0.2) Modality: eeg Subjects: 45 Recordings: 134 License: CC0 Source: openneuro Citations: 2 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004511 dataset = DS004511(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004511(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004511( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004511, title = {Deception_data}, author = {Makowski, Dominique and Pham, Tam and Lau, Zen Juen}, doi = {10.18112/openneuro.ds004511.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004511.v1.0.2}, } ```
## About This Dataset **Overview** This dataset was collected in 2020 and comprises electroencephalography, physiological and behavioural data. The dataset includes both resting-state (eyes closed) and task-related neurophysiological signals acquired from 44 healthy individuals (ages: 21-40). The tasks administered to subjects include a spontaneous deception task (Gambling Game; GG) as well as a task assessing cognitive control (CC).
**Task Description** **Spontaneous Deception Task (GG)** Participants were informed that the GG task aimed to study a player’s behaviour during a gambling game. They were given SGD 50 at the start of the game. They played 144 rounds, in each round predicting the outcome of a dice roll and placing a bet ranging from 10 to 80 cents; they won the bet if the prediction was true and lost it if it was false. Participants were also informed that they were the only ones who knew the outcome of the dice roll and were responsible for reporting to the system whether their predictions were true; they were debriefed about this cover story at the end.
**Cognitive Control (CC)** Participants performed 60 trials of a simple processing speed task, 80 trials of a simple response selection task, 160 trials of a response inhibition task, and 160 trials of a conflict resolution task. See details of the task at [https://github.com/neuropsychology/CognitiveControl](https://github.com/neuropsychology/CognitiveControl).
**Data acquisition** **EEG data acquisition** EEG signals were recorded using the TruScan 128 Research EEG system and TruScan Acquisition software (DeyMed Diagnostics s.r.o). Electrodes were placed on the EEG cap according to the standard 10-5 system of electrode placement (Oostenveld & Praamstra, 2001) and impedance was kept below 20 kOhm for each subject. The ground electrode was placed on the zygomatic bone and two electrodes were fixed on the mastoids to be used as references. During recording, the sampling rate was 3000 Hz. Note that channels 124 and 125 were placed above and below the eyes, respectively, for vertical EOG signals. **Note** sub-S200203 does not have any EEG acquisition file pertaining to the Gambling Game task due to technical errors during the recording.
**Physiological data acquisition** Participants’ physiological signals, that is their electrocardiogram (*ECG*), respiration signals (*RSP*), electrodermal activity (*EDA*) and electromyography (*EMG*), were obtained at a sampling frequency of 4000Hz. All physiological signals were recorded via the BioPac MP160 system (BioPac Systems Inc., USA) and the AcqKnowledge 5.0 software. ECG was collected using three ECG electrodes placed according to a modified Lead II configuration, and RSP was acquired using a respiration belt tightened over participants’ upper abdomen. EDA, a measure of skin conductance, was acquired using electrodes placed on the middle and index fingers of subjects’ non-dominant hands and EMG was obtained by measuring the electrical activity of the corrugator muscles. **Note** With regards to the Cognitive Control task, physiological data was collected over 2 sessions for sub-S200303 as a result of technical errors during the recording. ## Dataset Information | Dataset ID | `DS004511` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Deception_data | | Author (year) | `Makowski2023_Deception` | | Canonical | — | | Importable as | `DS004511`, `Makowski2023_Deception` | | Year | 2023 | | Authors | Makowski, Dominique, Pham, Tam, Lau, Zen Juen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004511.v1.0.2](https://doi.org/10.18112/openneuro.ds004511.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004511) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004511) | [Source URL](https://openneuro.org/datasets/ds004511) | ### Copy-paste BibTeX ```bibtex @dataset{ds004511, title = {Deception_data}, author = {Makowski, Dominique and Pham, Tam and Lau, Zen Juen}, doi = {10.18112/openneuro.ds004511.v1.0.2}, url = 
{https://doi.org/10.18112/openneuro.ds004511.v1.0.2}, } ``` ## Technical Details - Subjects: 45 - Recordings: 134 - Tasks: 3 - Channels: 139 - Sampling rate (Hz): 3000.0 - Duration (hours): 64.14129787037037 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 202.3 GB - File count: 134 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004511.v1.0.2 - Source: openneuro - OpenNeuro: [ds004511](https://openneuro.org/datasets/ds004511) - NeMAR: [ds004511](https://nemar.org/dataexplorer/detail?dataset_id=ds004511) ## API Reference Use the `DS004511` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004511(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Deception_data * **Study:** `ds004511` (OpenNeuro) * **Author (year):** `Makowski2023_Deception` * **Canonical:** — Also importable as: `DS004511`, `Makowski2023_Deception`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 45; recordings: 134; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004511](https://openneuro.org/datasets/ds004511) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004511](https://nemar.org/dataexplorer/detail?dataset_id=ds004511) DOI: [https://doi.org/10.18112/openneuro.ds004511.v1.0.2](https://doi.org/10.18112/openneuro.ds004511.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004511 >>> dataset = DS004511(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004511) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004511) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004514: eeg, fnirs dataset, 12 subjects *Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools* Access recordings and metadata through EEGDash. **Citation:** Milan Rybář, Riccardo Poli, Ian Daly (2023). *Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools*. 
[10.18112/openneuro.ds004514.v1.1.2](https://doi.org/10.18112/openneuro.ds004514.v1.1.2) Modality: eeg, fnirs Subjects: 12 Recordings: 24 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004514 dataset = DS004514(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004514(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004514( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004514, title = {Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools}, author = {Milan Rybář and Riccardo Poli and Ian Daly}, doi = {10.18112/openneuro.ds004514.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004514.v1.1.2}, } ```
## About This Dataset **Description** This dataset contains simultaneous electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals recorded from 12 participants while performing a silent naming task and three sensory-based imagery tasks using visual, auditory, and tactile perception. Participants were asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object.
**EEG** EEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system, plus one electrode on each earlobe as references (‘EXG1’ channel is the left ear electrode and ‘EXG2’ channel is the right ear electrode). Additionally, two electrodes placed on the left hand measured galvanic skin response (‘GSR1’ channel) and a respiration belt around the waist measured respiration (‘Resp’ channel). The sampling rate was 2048 Hz. The electrode names were saved in a default BioSemi labeling scheme (A1-A32, B1-B32). See the Biosemi documentation for the corresponding international 10-20 naming scheme ([https://www.biosemi.com/pics/cap_64_layout_medium.jpg](https://www.biosemi.com/pics/cap_64_layout_medium.jpg), [https://www.biosemi.com/headcap.htm](https://www.biosemi.com/headcap.htm)).
For convenience, the following ordered channels ```text ['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'A10', 'A11', 'A12', 'A13', 'A14', 'A15', 'A16', 'A17', 'A18', 'A19', 'A20', 'A21', 'A22', 'A23', 'A24', 'A25', 'A26', 'A27', 'A28', 'A29', 'A30', 'A31', 'A32', 'B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B9', 'B10', 'B11', 'B12', 'B13', 'B14', 'B15', 'B16', 'B17', 'B18', 'B19', 'B20', 'B21', 'B22', 'B23', 'B24', 'B25', 'B26', 'B27', 'B28', 'B29', 'B30', 'B31', 'B32'] ``` can thus be renamed to ```text ['Fp1', 'AF7', 'AF3', 'F1', 'F3', 'F5', 'F7', 'FT7', 'FC5', 'FC3', 'FC1', 'C1', 'C3', 'C5', 'T7', 'TP7', 'CP5', 'CP3', 'CP1', 'P1', 'P3', 'P5', 'P7', 'P9', 'PO7', 'PO3', 'O1', 'Iz', 'Oz', 'POz', 'Pz', 'CPz', 'Fpz', 'Fp2', 'AF8', 'AF4', 'AFz', 'Fz', 'F2', 'F4', 'F6', 'F8', 'FT8', 'FC6', 'FC4', 'FC2', 'FCz', 'Cz', 'C2', 'C4', 'C6', 'T8', 'TP8', 'CP6', 'CP4', 'CP2', 'P2', 'P4', 'P6', 'P8', 'P10', 'PO8', 'PO4', 'O2'] ``` **fNIRS** fNIRS data were acquired with a NIRx NIRScoutXP continuous wave imaging system equipped with 4 light detectors, 8 light emitters (sources), and low-profile fNIRS optodes. Both electrodes and optodes were placed in a NIRx NIRScap for integrated fNIRS-EEG layouts. Two different montages were used: frontal and temporal, see references for more information. **Stimulus** Folder ‘stimuli’ contains all images of the semantic categories of animals and tools presented to participants. **Example code** We have prepared example scripts to demonstrate how to load the EEG and fNIRS data into Python using MNE and MNE-BIDS packages. These scripts are located in the ‘code’ directory. **References** This dataset was analyzed in the following publications: [1] Rybář, M., Poli, R. and Daly, I., 2024. Using data from cue presentations results in grossly overestimating semantic BCI performance. Scientific Reports, 14(1), p.28003. [2] Rybář, M., Poli, R. and Daly, I., 2021. 
Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. Journal of Neural Engineering, 18(4), p.046035. [3] Rybář, M., 2023. Towards EEG/fNIRS-based semantic brain-computer interfacing (Doctoral dissertation, University of Essex). ## Dataset Information | Dataset ID | `DS004514` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools | | Author (year) | `Rybar2023_Simultaneous` | | Canonical | — | | Importable as | `DS004514`, `Rybar2023_Simultaneous` | | Year | 2023 | | Authors | Milan Rybář, Riccardo Poli, Ian Daly | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004514.v1.1.2](https://doi.org/10.18112/openneuro.ds004514.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004514) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004514) | [Source URL](https://openneuro.org/datasets/ds004514) | ### Copy-paste BibTeX ```bibtex @dataset{ds004514, title = {Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools}, author = {Milan Rybář and Riccardo Poli and Ian Daly}, doi = {10.18112/openneuro.ds004514.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004514.v1.1.2}, } ``` ## Technical Details - Subjects: 12 - Recordings: 24 - Tasks: 2 - Channels: 80 (12), 22 (6), 28 (6) - Sampling rate (Hz): 2048.0 (12), 7.8125 (6), 8.928571428571429 (6) - Duration (hours): 29.3083050390625 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 24.1 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004514.v1.1.2 - Source: openneuro - OpenNeuro: [ds004514](https://openneuro.org/datasets/ds004514) - NeMAR: [ds004514](https://nemar.org/dataexplorer/detail?dataset_id=ds004514) ## API 
Reference Use the `DS004514` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools * **Study:** `ds004514` (OpenNeuro) * **Author (year):** `Rybar2023_Simultaneous` * **Canonical:** — Also importable as: `DS004514`, `Rybar2023_Simultaneous`. Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 12; recordings: 24; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
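The A1-B32 to international 10-20 renaming listed in the dataset README above amounts to a fixed channel mapping. A minimal sketch of building it (the mapping pairs the two ordered lists from the README; the commented `rename_channels` call assumes `raw` is an MNE-Python `Raw` object loaded from this dataset):

```python
# Build the BioSemi (A1-A32, B1-B32) -> international 10-20 channel mapping
# by pairing the two ordered name lists given in the dataset README.
biosemi_names = [f"A{i}" for i in range(1, 33)] + [f"B{i}" for i in range(1, 33)]
ten_twenty_names = [
    'Fp1', 'AF7', 'AF3', 'F1', 'F3', 'F5', 'F7', 'FT7', 'FC5', 'FC3', 'FC1',
    'C1', 'C3', 'C5', 'T7', 'TP7', 'CP5', 'CP3', 'CP1', 'P1', 'P3', 'P5',
    'P7', 'P9', 'PO7', 'PO3', 'O1', 'Iz', 'Oz', 'POz', 'Pz', 'CPz', 'Fpz',
    'Fp2', 'AF8', 'AF4', 'AFz', 'Fz', 'F2', 'F4', 'F6', 'F8', 'FT8', 'FC6',
    'FC4', 'FC2', 'FCz', 'Cz', 'C2', 'C4', 'C6', 'T8', 'TP8', 'CP6', 'CP4',
    'CP2', 'P2', 'P4', 'P6', 'P8', 'P10', 'PO8', 'PO4', 'O2',
]
mapping = dict(zip(biosemi_names, ten_twenty_names))

# With a loaded recording (assumption: raw is an mne.io.Raw from this dataset):
# raw.rename_channels(mapping)
```

The mapping is built in pure Python so it can also be used outside MNE, e.g. for labeling plots or selecting channel subsets by 10-20 name.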
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004514](https://openneuro.org/datasets/ds004514) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004514](https://nemar.org/dataexplorer/detail?dataset_id=ds004514) DOI: [https://doi.org/10.18112/openneuro.ds004514.v1.1.2](https://doi.org/10.18112/openneuro.ds004514.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS004514 >>> dataset = DS004514(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004514) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004514) * [eegdash.dataset.DS004541](eegdash.dataset.DS004541.md) * [eegdash.dataset.DS007554](eegdash.dataset.DS007554.md) # DS004515: eeg dataset, 54 subjects *EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants* Access recordings and metadata through EEGDash. **Citation:** Garima Singh, James F Cavanagh (2023). *EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants*. 
[10.18112/openneuro.ds004515.v1.0.0](https://doi.org/10.18112/openneuro.ds004515.v1.0.0) Modality: eeg Subjects: 54 Recordings: 54 License: CC0 Source: openneuro Citations: 4 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004515 dataset = DS004515(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004515(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004515( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004515, title = {EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants}, author = {Garima Singh and James F Cavanagh}, doi = {10.18112/openneuro.ds004515.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004515.v1.0.0}, } ``` ## About This Dataset Affective state reinforcement learning task in N=54 community participants (high and low drinkers). Data were collected from 2019-2021 in the CRCL at UNM. The paper (Singh, G., Campbell, E., Hogeveen, J., Witkiewitz, K., Claus, E.D., & Cavanagh, J.F., "Alcohol Imagery Boosts the Reward Positivity in Heavy Drinkers") is under review at the moment. Your best bet for understanding this task would be to read that paper first.
- James F Cavanagh 08/02/2022 ## Dataset Information | Dataset ID | `DS004515` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants | | Author (year) | `Singh2023` | | Canonical | — | | Importable as | `DS004515`, `Singh2023` | | Year | 2023 | | Authors | Garima Singh, James F Cavanagh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004515.v1.0.0](https://doi.org/10.18112/openneuro.ds004515.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004515) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004515) | [Source URL](https://openneuro.org/datasets/ds004515) | ### Copy-paste BibTeX ```bibtex @dataset{ds004515, title = {EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants}, author = {Garima Singh and James F Cavanagh}, doi = {10.18112/openneuro.ds004515.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004515.v1.0.0}, } ``` ## Technical Details - Subjects: 54 - Recordings: 54 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 500.0 - Duration (hours): 20.60968055555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.5 GB - File count: 54 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004515.v1.0.0 - Source: openneuro - OpenNeuro: [ds004515](https://openneuro.org/datasets/ds004515) - NeMAR: [ds004515](https://nemar.org/dataexplorer/detail?dataset_id=ds004515) ## API Reference Use the `DS004515` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants * **Study:** `ds004515` (OpenNeuro) * **Author (year):** `Singh2023` * **Canonical:** — Also importable as: `DS004515`, `Singh2023`. Modality: `eeg`. Subjects: 54; recordings: 54; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
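To see what the MongoDB-style `query` filters described above express, here is a tiny pure-Python sketch of `$in` matching over metadata records. This is illustrative only: EEGDash evaluates the real query against its metadata database, and `matches` is a hypothetical helper, not part of the API.

```python
# Hypothetical illustration of MongoDB-style matching semantics, including the
# {"$in": [...]} operator; not EEGDash's actual implementation.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            # $in: the record's value must be one of the listed alternatives
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # bare value: exact equality
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "17"}]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # ['01', '02']
```

The same shape of filter is what the "Advanced query" snippet above passes via the `query` argument.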
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004515](https://openneuro.org/datasets/ds004515) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004515](https://nemar.org/dataexplorer/detail?dataset_id=ds004515) DOI: [https://doi.org/10.18112/openneuro.ds004515.v1.0.0](https://doi.org/10.18112/openneuro.ds004515.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004515 >>> dataset = DS004515(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004515) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004515) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004517: eeg dataset, 7 subjects *EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task* Access recordings and metadata through EEGDash. **Citation:** Milan Rybář, Riccardo Poli, Ian Daly (2023). *EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task*. 
[10.18112/openneuro.ds004517.v1.0.2](https://doi.org/10.18112/openneuro.ds004517.v1.0.2) Modality: eeg Subjects: 7 Recordings: 7 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004517 dataset = DS004517(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004517(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004517( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004517, title = {EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task}, author = {Milan Rybář and Riccardo Poli and Ian Daly}, doi = {10.18112/openneuro.ds004517.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004517.v1.0.2}, } ``` ## About This Dataset **Description** This dataset contains electroencephalography (EEG) signals recorded from 7 participants while performing an auditory imagery task. Participants were asked to imagine the sounds made by an object for 5 seconds. **EEG** EEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system, plus one electrode on each earlobe as references (‘EXG1’ channel is the left ear electrode and ‘EXG2’ channel is the right ear electrode). Electrooculography (EOG) was also recorded to monitor eye movements. 
Two electrodes were placed above (‘EXG7’ channel) and below (‘EXG8’) the right eye to capture the vertical oculogram, while two more electrodes were placed near the canthus of each eye (‘EXG5’ by the left eye and ‘EXG6’ by the right eye) to record the horizontal oculogram. Additionally, two electrodes were placed on the left (‘EXG3’) and right (‘EXG4’) wrists for additional physiological measurements (e.g., heart rate variability), and respiration was recorded using a belt placed around the waist (‘Resp’ channel). The sampling rate was 2048 Hz. **Stimulus** Folder ‘stimuli’ contains all images of the semantic categories of animals and tools presented to participants. **Example code** We have prepared an example script to demonstrate how to load the EEG data into Python using MNE and MNE-BIDS packages. This script is located in the ‘code’ directory. **References** This dataset was analyzed in the following publications: [1] Rybář, M., Poli, R. and Daly, I., 2024. Using data from cue presentations results in grossly overestimating semantic BCI performance. Scientific Reports, 14(1), p.28003. [2] Rybář, M., 2023. Towards EEG/fNIRS-based semantic brain-computer interfacing (Doctoral dissertation, University of Essex). 
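The auxiliary channels described above can be given explicit types so analysis software treats them appropriately. A minimal sketch, with roles taken from the README (the channel names are assumed to match the recordings exactly, and the commented `set_channel_types` call assumes `raw` is an MNE-Python `Raw` object):

```python
# Auxiliary channel roles as described in the dataset README (assumption:
# these names match the channel names in the recordings).
channel_types = {
    "EXG1": "misc",  # left-ear reference electrode
    "EXG2": "misc",  # right-ear reference electrode
    "EXG3": "misc",  # left wrist (e.g., heart rate variability)
    "EXG4": "misc",  # right wrist
    "EXG5": "eog",   # left canthus, horizontal oculogram
    "EXG6": "eog",   # right canthus, horizontal oculogram
    "EXG7": "eog",   # above the right eye, vertical oculogram
    "EXG8": "eog",   # below the right eye, vertical oculogram
    "Resp": "resp",  # respiration belt
}

# With a loaded recording (assumption: raw is an mne.io.Raw from this dataset):
# raw.set_channel_types(channel_types)
```

Typing the EXG channels this way lets EOG-aware tooling (e.g., artifact rejection) find them without hard-coding names downstream.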
## Dataset Information | Dataset ID | `DS004517` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task | | Author (year) | `Rybar2023_semantic` | | Canonical | — | | Importable as | `DS004517`, `Rybar2023_semantic` | | Year | 2023 | | Authors | Milan Rybář, Riccardo Poli, Ian Daly | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004517.v1.0.2](https://doi.org/10.18112/openneuro.ds004517.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004517) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004517) | [Source URL](https://openneuro.org/datasets/ds004517) | ### Copy-paste BibTeX ```bibtex @dataset{ds004517, title = {EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task}, author = {Milan Rybář and Riccardo Poli and Ian Daly}, doi = {10.18112/openneuro.ds004517.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004517.v1.0.2}, } ``` ## Technical Details - Subjects: 7 - Recordings: 7 - Tasks: 1 - Channels: 80 - Sampling rate (Hz): 2048.0 - Duration (hours): 7.691110161675347 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 12.7 GB - File count: 7 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004517.v1.0.2 - Source: openneuro - OpenNeuro: [ds004517](https://openneuro.org/datasets/ds004517) - NeMAR: [ds004517](https://nemar.org/dataexplorer/detail?dataset_id=ds004517) ## API Reference Use the `DS004517` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task * **Study:** `ds004517` (OpenNeuro) * **Author (year):** `Rybar2023_semantic` * **Canonical:** — Also importable as: `DS004517`, `Rybar2023_semantic`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
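The Notes above state that a user `query` must not contain the key `dataset` and is ANDed with the dataset filter. A minimal stand-alone sketch of that merging rule (the helper name `merge_query` is ours for illustration, not part of the eegdash API):

```python
def merge_query(dataset_id, user_query=None):
    """AND a fixed dataset filter with an optional MongoDB-style user query.

    Illustrative sketch only; mirrors the documented constraint that the
    user query must not set the reserved 'dataset' key.
    """
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds004517", {"subject": {"$in": ["01", "02"]}})
print(merged)  # {'dataset': 'ds004517', 'subject': {'$in': ['01', '02']}}
```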
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004517](https://openneuro.org/datasets/ds004517) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004517](https://nemar.org/dataexplorer/detail?dataset_id=ds004517) DOI: [https://doi.org/10.18112/openneuro.ds004517.v1.0.2](https://doi.org/10.18112/openneuro.ds004517.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS004517 >>> dataset = DS004517(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004517) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004517) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004519: eeg dataset, 40 subjects *Internal selective attention is delayed by competition between endogenous and exogenous factors* Access recordings and metadata through EEGDash. **Citation:** Edward Ester, Asal Nouri (2023). *Internal selective attention is delayed by competition between endogenous and exogenous factors*. 
[10.18112/openneuro.ds004519.v1.0.1](https://doi.org/10.18112/openneuro.ds004519.v1.0.1) Modality: eeg Subjects: 40 Recordings: 40 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004519 dataset = DS004519(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004519(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004519( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004519, title = {Internal selective attention is delayed by competition between endogenous and exogenous factors}, author = {Edward Ester and Asal Nouri}, doi = {10.18112/openneuro.ds004519.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004519.v1.0.1}, } ``` ## About This Dataset Preprocessed data files from “Internal selective attention is delayed by competition between endogenous and exogenous factors”. A preprint describing the work can be found at [https://www.biorxiv.org/content/10.1101/2022.07.05.498906v4.abstract](https://www.biorxiv.org/content/10.1101/2022.07.05.498906v4.abstract), and analysis scripts can be found at [https://osf.io/wat6d/](https://osf.io/wat6d/). This study was conceptualized and analyzed before our lab made the switch to BIDS archival. If you want to use the analysis scripts linked above to analyze the BIDS data, you’ll have to modify them to load the BIDS .set files rather than the .mat files we analyzed in our lab (the .set and .mat files, however, are identical). 
You will also need to modify the analysis scripts to load in the \_behavSummary.mat files for alignment with the EEG data. If you have questions or run into problems, please e-mail the corresponding author of the study ([eester@unr.edu](mailto:eester@unr.edu)). ## Dataset Information | Dataset ID | `DS004519` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Internal selective attention is delayed by competition between endogenous and exogenous factors | | Author (year) | `Ester2023_Internal` | | Canonical | `Ester2022` | | Importable as | `DS004519`, `Ester2023_Internal`, `Ester2022` | | Year | 2023 | | Authors | Edward Ester, Asal Nouri | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004519.v1.0.1](https://doi.org/10.18112/openneuro.ds004519.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004519) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004519) | [Source URL](https://openneuro.org/datasets/ds004519) | ### Copy-paste BibTeX ```bibtex @dataset{ds004519, title = {Internal selective attention is delayed by competition between endogenous and exogenous factors}, author = {Edward Ester and Asal Nouri}, doi = {10.18112/openneuro.ds004519.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004519.v1.0.1}, } ``` ## Technical Details - Subjects: 40 - Recordings: 40 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 250.0 - Duration (hours): 0.0666666666666666 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 12.6 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004519.v1.0.1 - Source: openneuro - OpenNeuro: [ds004519](https://openneuro.org/datasets/ds004519) - NeMAR: [ds004519](https://nemar.org/dataexplorer/detail?dataset_id=ds004519) ## API Reference Use the `DS004519` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS004519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Internal selective attention is delayed by competition between endogenous and exogenous factors * **Study:** `ds004519` (OpenNeuro) * **Author (year):** `Ester2023_Internal` * **Canonical:** `Ester2022` Also importable as: `DS004519`, `Ester2023_Internal`, `Ester2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004519](https://openneuro.org/datasets/ds004519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004519](https://nemar.org/dataexplorer/detail?dataset_id=ds004519) DOI: [https://doi.org/10.18112/openneuro.ds004519.v1.0.1](https://doi.org/10.18112/openneuro.ds004519.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004519 >>> dataset = DS004519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004519) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004519) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004520: eeg dataset, 33 subjects *Changes in behavioral priority influence the accessibility of working memory content - Experiment 2* Access recordings and metadata through EEGDash. **Citation:** Edward Ester, Paige Pytel (2023). *Changes in behavioral priority influence the accessibility of working memory content - Experiment 2*. 
[10.18112/openneuro.ds004520.v1.0.1](https://doi.org/10.18112/openneuro.ds004520.v1.0.1) Modality: eeg Subjects: 33 Recordings: 33 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004520 dataset = DS004520(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004520(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004520( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004520, title = {Changes in behavioral priority influence the accessibility of working memory content - Experiment 2}, author = {Edward Ester and Paige Pytel}, doi = {10.18112/openneuro.ds004520.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004520.v1.0.1}, } ``` ## About This Dataset Preprocessed data from Experiment 2 of Ester & Pytel “Changes in behavioral priority influence the accessibility of working memory content”. Analytic scripts for this project can be found on OSF: [https://osf.io/gtd5f/](https://osf.io/gtd5f/). Note that to analyze the BIDS data, you’ll need to modify the analysis scripts to read in the BIDS .set files rather than the expected .mat files. 
See the OSF wiki for more information ## Dataset Information | Dataset ID | `DS004520` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Changes in behavioral priority influence the accessibility of working memory content - Experiment 2 | | Author (year) | `Ester2023_Changes` | | Canonical | `Ester2024_E2` | | Importable as | `DS004520`, `Ester2023_Changes`, `Ester2024_E2` | | Year | 2023 | | Authors | Edward Ester, Paige Pytel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004520.v1.0.1](https://doi.org/10.18112/openneuro.ds004520.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004520) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004520) | [Source URL](https://openneuro.org/datasets/ds004520) | ### Copy-paste BibTeX ```bibtex @dataset{ds004520, title = {Changes in behavioral priority influence the accessibility of working memory content - Experiment 2}, author = {Edward Ester and Paige Pytel}, doi = {10.18112/openneuro.ds004520.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004520.v1.0.1}, } ``` ## Technical Details - Subjects: 33 - Recordings: 33 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 250.0 - Duration (hours): 0.055 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 10.4 GB - File count: 33 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004520.v1.0.1 - Source: openneuro - OpenNeuro: [ds004520](https://openneuro.org/datasets/ds004520) - NeMAR: [ds004520](https://nemar.org/dataexplorer/detail?dataset_id=ds004520) ## API Reference Use the `DS004520` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Changes in behavioral priority influence the accessibility of working memory content - Experiment 2 * **Study:** `ds004520` (OpenNeuro) * **Author (year):** `Ester2023_Changes` * **Canonical:** `Ester2024_E2` Also importable as: `DS004520`, `Ester2023_Changes`, `Ester2024_E2`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 33; recordings: 33; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004520](https://openneuro.org/datasets/ds004520) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004520](https://nemar.org/dataexplorer/detail?dataset_id=ds004520) DOI: [https://doi.org/10.18112/openneuro.ds004520.v1.0.1](https://doi.org/10.18112/openneuro.ds004520.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004520 >>> dataset = DS004520(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004520) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004520) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004521: eeg dataset, 34 subjects *Changes in behavioral priority influence the accessibility of working memory content - Experiment 1* Access recordings and metadata through EEGDash. **Citation:** Edward Ester, Paige Pytel (2023). *Changes in behavioral priority influence the accessibility of working memory content - Experiment 1*. 
[10.18112/openneuro.ds004521.v1.0.1](https://doi.org/10.18112/openneuro.ds004521.v1.0.1) Modality: eeg Subjects: 34 Recordings: 34 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004521 dataset = DS004521(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004521(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004521( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004521, title = {Changes in behavioral priority influence the accessibility of working memory content - Experiment 1}, author = {Edward Ester and Paige Pytel}, doi = {10.18112/openneuro.ds004521.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004521.v1.0.1}, } ``` ## About This Dataset Preprocessed data from Experiment 1 of Ester & Pytel “Changes in behavioral priority influence the accessibility of working memory content”. Analytic scripts for this project can be found on OSF: [https://osf.io/gtd5f/](https://osf.io/gtd5f/). Note that to analyze the BIDS data, you’ll need to modify the analysis scripts to read in the BIDS .set files rather than the expected .mat files. 
See the OSF wiki for more information ## Dataset Information | Dataset ID | `DS004521` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Changes in behavioral priority influence the accessibility of working memory content - Experiment 1 | | Author (year) | `Ester2023_Changes_behavioral` | | Canonical | `Ester2024_E1` | | Importable as | `DS004521`, `Ester2023_Changes_behavioral`, `Ester2024_E1` | | Year | 2023 | | Authors | Edward Ester, Paige Pytel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004521.v1.0.1](https://doi.org/10.18112/openneuro.ds004521.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004521) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004521) | [Source URL](https://openneuro.org/datasets/ds004521) | ### Copy-paste BibTeX ```bibtex @dataset{ds004521, title = {Changes in behavioral priority influence the accessibility of working memory content - Experiment 1}, author = {Edward Ester and Paige Pytel}, doi = {10.18112/openneuro.ds004521.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004521.v1.0.1}, } ``` ## Technical Details - Subjects: 34 - Recordings: 34 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 250.0 - Duration (hours): 0.0566666666666666 - Pathology: Healthy - Modality: — - Type: Memory - Size on disk: 10.7 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004521.v1.0.1 - Source: openneuro - OpenNeuro: [ds004521](https://openneuro.org/datasets/ds004521) - NeMAR: [ds004521](https://nemar.org/dataexplorer/detail?dataset_id=ds004521) ## API Reference Use the `DS004521` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Changes in behavioral priority influence the accessibility of working memory content - Experiment 1 * **Study:** `ds004521` (OpenNeuro) * **Author (year):** `Ester2023_Changes_behavioral` * **Canonical:** `Ester2024_E1` Also importable as: `DS004521`, `Ester2023_Changes_behavioral`, `Ester2024_E1`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004521](https://openneuro.org/datasets/ds004521) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004521](https://nemar.org/dataexplorer/detail?dataset_id=ds004521) DOI: [https://doi.org/10.18112/openneuro.ds004521.v1.0.1](https://doi.org/10.18112/openneuro.ds004521.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004521 >>> dataset = DS004521(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004521) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004521) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004532: eeg dataset, 110 subjects *EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh, Michael J Frank (2023). *EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge*. 
[10.18112/openneuro.ds004532.v1.2.0](https://doi.org/10.18112/openneuro.ds004532.v1.2.0) Modality: eeg Subjects: 110 Recordings: 137 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004532 dataset = DS004532(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004532(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004532( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004532, title = {EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank}, doi = {10.18112/openneuro.ds004532.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004532.v1.2.0}, } ``` ## About This Dataset Probabilistic Selection Task. Unpublished! Same sample as this published study: 10.1038/ncomms6394. Study 1: 80 healthy participants + 5 placebo session from a pilot of the drug study. Total n=85. But Subj 173 might have bad EEG. Study 2: 30 healthy participants (3 dropout) in a double-blind drug study. Total n=27. Drug was Cabergoline 1.25 mg. If you look in the code folder at the .xls sheet, you’ll see that subjects had different initial IDs. Study 1 subjects had subject IDs 101-180 plus the 5 placebo runs from an early test of ultra-low-dose cabergoline: these pilot runs were subject # 301/401 | 305/405. Study 2 subjects had IDs 306/406 | 335/435. Why the odd ranges for the drug study? Glad you asked. 
The dual numbers were for session: 300s were first session, 400s were second session. The last two digits were subject ID. (here with the benefit of BIDS formatting we have simply put them in as session 1 and session 2 with unique sub-#, which is BETTER). For example, Joe Smith would have been 305 on visit 1, then 405 on visit 2. Jane Henderson would have been 306 on visit 1, then 406 on visit 2. Whatever visit got cab or placebo is indicated in the .xls sheet as well as in the Sess1_Drug and Sess2_Drug columns in the main .tsv file. The task is included, written in MATLAB. Data collected circa 2012-2013 in Laboratory for Neural Computation & Cognition at Brown. Check the .xls sheet under the code folder for more metadata. A few old analysis scripts are included. - James F Cavanagh 02/15/2021 UPDATES: 1) Uploaded a .json sidecar developed by EEGLab for NEMAR indexing: task-PST_events.json 2) Since this was updated, I had to erase each subject’s \*_events.json files. 3) Note that the Reward and Penalty feedback labels (‘FB: 0’ and ‘FB: +1’) are incorrect here. The actual feedback was ‘Correct!’ or ‘Incorrect.’ I’m just going to leave those as-is in the files since it doesn’t change too much. Run the task (under /stimuli) to see what the feedback looks like. 4) There was a bug in the original task description that indicated this as ‘Simon Conflict’. This is not that task. This is a Probabilistic Selection Task. These should have been changed to PST, but if you see SimonConflict just realize that was an original mis-label. 
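That legacy numbering scheme is mechanical enough to decode in a few lines; a small illustrative helper (hypothetical, not shipped with the dataset or eegdash):

```python
def decode_legacy_id(code):
    """Split a legacy drug-study ID into (subject, session).

    Per the description above: 300-series codes were session 1,
    400-series codes were session 2, and the last two digits
    identify the subject. Illustrative sketch only.
    """
    session = 1 if 300 <= code < 400 else 2
    subject = f"{code % 100:02d}"
    return subject, session

# Joe Smith: 305 on visit 1, 405 on visit 2 -> same subject "05"
print(decode_legacy_id(305))  # ('05', 1)
print(decode_legacy_id(405))  # ('05', 2)
```

In the BIDS release this mapping has already been applied: each participant has a single `sub-#` with `ses-1`/`ses-2`, so the helper is only needed when cross-referencing the legacy IDs in the .xls sheet.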
## Dataset Information | Dataset ID | `DS004532` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge | | Author (year) | `Cavanagh2023` | | Canonical | — | | Importable as | `DS004532`, `Cavanagh2023` | | Year | 2023 | | Authors | James F Cavanagh, Michael J Frank | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004532.v1.2.0](https://doi.org/10.18112/openneuro.ds004532.v1.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004532) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004532) | [Source URL](https://openneuro.org/datasets/ds004532) | ### Copy-paste BibTeX ```bibtex @dataset{ds004532, title = {EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge}, author = {James F Cavanagh and Michael J Frank}, doi = {10.18112/openneuro.ds004532.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004532.v1.2.0}, } ``` ## Technical Details - Subjects: 110 - Recordings: 137 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 49.65127888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 21.8 GB - File count: 137 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004532.v1.2.0 - Source: openneuro - OpenNeuro: [ds004532](https://openneuro.org/datasets/ds004532) - NeMAR: [ds004532](https://nemar.org/dataexplorer/detail?dataset_id=ds004532) ## API Reference Use the `DS004532` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004532(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge * **Study:** `ds004532` (OpenNeuro) * **Author (year):** `Cavanagh2023` * **Canonical:** — Also importable as: `DS004532`, `Cavanagh2023`. Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004532](https://openneuro.org/datasets/ds004532) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004532](https://nemar.org/dataexplorer/detail?dataset_id=ds004532) DOI: [https://doi.org/10.18112/openneuro.ds004532.v1.2.0](https://doi.org/10.18112/openneuro.ds004532.v1.2.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004532 >>> dataset = DS004532(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004532) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004532) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004541: eeg, fnirs dataset, 8 subjects *Multimodal EEG-fNIRS data from patients undergoing general anesthesia* Access recordings and metadata through EEGDash. **Citation:** Catalina Saini Ferrón, Gabriela Vargas González, Carlos Valle Araya (2023). *Multimodal EEG-fNIRS data from patients undergoing general anesthesia*. 
[10.18112/openneuro.ds004541.v1.0.0](https://doi.org/10.18112/openneuro.ds004541.v1.0.0) Modality: eeg, fnirs Subjects: 8 Recordings: 18 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004541 dataset = DS004541(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004541(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004541( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004541, title = {Multimodal EEG-fNIRS data from patients undergoing general anesthesia}, author = {Catalina Saini Ferrón and Gabriela Vargas González and Carlos Valle Araya}, doi = {10.18112/openneuro.ds004541.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004541.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) **References** In preparation. ## Dataset Information | Dataset ID | `DS004541` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multimodal EEG-fNIRS data from patients undergoing general anesthesia | | Author (year) | `Ferron2023` | | Canonical | `Ferron2019` | | Importable as | `DS004541`, `Ferron2023`, `Ferron2019` | | Year | 2023 | | Authors | Catalina Saini Ferrón, Gabriela Vargas González, Carlos Valle Araya | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004541.v1.0.0](https://doi.org/10.18112/openneuro.ds004541.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004541) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004541) | [Source URL](https://openneuro.org/datasets/ds004541) | ### Copy-paste BibTeX ```bibtex @dataset{ds004541, title = {Multimodal EEG-fNIRS data from patients undergoing general anesthesia}, author = {Catalina Saini Ferrón and Gabriela Vargas González and Carlos Valle Araya}, doi = {10.18112/openneuro.ds004541.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004541.v1.0.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 18 - Tasks: 1 - Channels: 59 (9), 40 (5), 30 (3), 38 - Sampling rate (Hz): 1000.0 (9), 7.8125 (9) - Duration (hours): 12.130006388888887 - Pathology: Surgery - Modality: Anesthesia - Type: Clinical/Intervention - Size on disk: 2.9 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004541.v1.0.0 - Source: openneuro - OpenNeuro: [ds004541](https://openneuro.org/datasets/ds004541) - NeMAR: [ds004541](https://nemar.org/dataexplorer/detail?dataset_id=ds004541) ## API Reference Use the `DS004541` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004541(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal EEG-fNIRS data from patients undergoing general anesthesia * **Study:** `ds004541` (OpenNeuro) * **Author (year):** `Ferron2023` * **Canonical:** `Ferron2019` Also importable as: `DS004541`, `Ferron2023`, `Ferron2019`. Modality: `eeg, fnirs`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 8; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
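Because this dataset mixes 1000 Hz EEG runs with 7.8125 Hz fNIRS runs (nine of each, per the technical details), downstream code often needs to group recordings by sampling rate before batching. A minimal sketch in plain Python; the `sfreqs` list here is a hypothetical stand-in for values you would read from each recording's `raw.info['sfreq']`:

```python
# Hypothetical per-recording sampling rates, matching the dataset
# summary: 9 EEG runs at 1000 Hz, 9 fNIRS runs at 7.8125 Hz.
sfreqs = [1000.0] * 9 + [7.8125] * 9

# Group recording indices by rate so the two modalities are
# processed separately rather than mixed in one batch.
groups: dict[float, list[int]] = {}
for idx, sfreq in enumerate(sfreqs):
    groups.setdefault(sfreq, []).append(idx)

print({rate: len(idxs) for rate, idxs in groups.items()})
# {1000.0: 9, 7.8125: 9}
```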
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004541](https://openneuro.org/datasets/ds004541) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004541](https://nemar.org/dataexplorer/detail?dataset_id=ds004541) DOI: [https://doi.org/10.18112/openneuro.ds004541.v1.0.0](https://doi.org/10.18112/openneuro.ds004541.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004541 >>> dataset = DS004541(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004541) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004541) * [eegdash.dataset.DS004514](eegdash.dataset.DS004514.md) * [eegdash.dataset.DS007554](eegdash.dataset.DS007554.md) # DS004551: ieeg dataset, 114 subjects *iEEG on children during slow wave sleep* Access recordings and metadata through EEGDash. **Citation:** Kazuki Sakakura, Naoto Kuroda, Masaki Sonoda, Takumi Mitsuhashi, Ethan Firestone, Aimee F. Luat, Neena I. Marupudi, Sandeep Sood, Eishi Asano (2023). *iEEG on children during slow wave sleep*. 
[10.18112/openneuro.ds004551.v1.0.6](https://doi.org/10.18112/openneuro.ds004551.v1.0.6) Modality: ieeg Subjects: 114 Recordings: 125 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004551 dataset = DS004551(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004551(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004551( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004551, title = {iEEG on children during slow wave sleep}, author = {Kazuki Sakakura and Naoto Kuroda and Masaki Sonoda and Takumi Mitsuhashi and Ethan Firestone and Aimee F. Luat and Neena I. Marupudi and Sandeep Sood and Eishi Asano}, doi = {10.18112/openneuro.ds004551.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds004551.v1.0.6}, } ``` ## About This Dataset This dataset was curated for publication as part of the manuscript in Sakakura et al. (in preparation). It contains iEEGs collected from 114 individuals during slow wave sleep. The available Matlab code can be found at [https://github.com/kaz1126/MI_HFO](https://github.com/kaz1126/MI_HFO). The iEEG coordinate system employed in this dataset is MNI305. 
## Dataset Information | Dataset ID | `DS004551` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG on children during slow wave sleep | | Author (year) | `Sakakura2023_children_slow_wave` | | Canonical | `Sakakura2025` | | Importable as | `DS004551`, `Sakakura2023_children_slow_wave`, `Sakakura2025` | | Year | 2023 | | Authors | Kazuki Sakakura, Naoto Kuroda, Masaki Sonoda, Takumi Mitsuhashi, Ethan Firestone, Aimee F. Luat, Neena I. Marupudi, Sandeep Sood, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004551.v1.0.6](https://doi.org/10.18112/openneuro.ds004551.v1.0.6) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004551) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004551) | [Source URL](https://openneuro.org/datasets/ds004551) | ### Copy-paste BibTeX ```bibtex @dataset{ds004551, title = {iEEG on children during slow wave sleep}, author = {Kazuki Sakakura and Naoto Kuroda and Masaki Sonoda and Takumi Mitsuhashi and Ethan Firestone and Aimee F. Luat and Neena I. 
Marupudi and Sandeep Sood and Eishi Asano}, doi = {10.18112/openneuro.ds004551.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds004551.v1.0.6}, } ``` ## Technical Details - Subjects: 114 - Recordings: 125 - Tasks: 1 - Channels: 128 (82), 112 (5), 118 (3), 138 (3), 104 (2), 134 (2), 110 (2), 124 (2), 108 (2), 142 (2), 148 (2), 122 (2), 102 (2), 130 (2), 144 (2), 120, 96, 106, 146, 84, 116, 136, 126, 132, 58 - Sampling rate (Hz): 1000.0 - Duration (hours): 1.0367611111111112 - Pathology: Epilepsy - Modality: Sleep - Type: Sleep - Size on disk: 68.9 GB - File count: 125 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004551.v1.0.6 - Source: openneuro - OpenNeuro: [ds004551](https://openneuro.org/datasets/ds004551) - NeMAR: [ds004551](https://nemar.org/dataexplorer/detail?dataset_id=ds004551) ## API Reference Use the `DS004551` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004551(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during slow wave sleep * **Study:** `ds004551` (OpenNeuro) * **Author (year):** `Sakakura2023_children_slow_wave` * **Canonical:** `Sakakura2025` Also importable as: `DS004551`, `Sakakura2023_children_slow_wave`, `Sakakura2025`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 114; recordings: 125; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004551](https://openneuro.org/datasets/ds004551) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004551](https://nemar.org/dataexplorer/detail?dataset_id=ds004551) DOI: [https://doi.org/10.18112/openneuro.ds004551.v1.0.6](https://doi.org/10.18112/openneuro.ds004551.v1.0.6) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004551 >>> dataset = DS004551(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004551) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004551) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004554: eeg dataset, 16 subjects *Forced Picture Naming Task* Access recordings and metadata through EEGDash. 
**Citation:** V. Volpert, B. Xu, A. Tchechmedjiev, S. Harispe, A. Aksenov, Q. Mesnildrey and A. Beuter (2023). *Forced Picture Naming Task*. [10.18112/openneuro.ds004554.v1.0.4](https://doi.org/10.18112/openneuro.ds004554.v1.0.4) Modality: eeg Subjects: 16 Recordings: 16 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004554 dataset = DS004554(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004554(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004554( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004554, title = {Forced Picture Naming Task}, author = {V. Volpert and B. Xu and A. Tchechmedjiev and S. Harispe and A. Aksenov and Q. Mesnildrey and A. Beuter}, doi = {10.18112/openneuro.ds004554.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004554.v1.0.4}, } ``` ## About This Dataset This is the preprocessed dataset used for study “Characterization of spatiotemporal dynamics in EEG data during picture naming with optical flow patterns”. The Picture Naming Task study included sixteen native French-speaking men, ranging in age from 18 to 70 years old. The participants met the inclusion criteria, which required normal or corrected-to-normal vision and hearing, as well as right-handedness, as determined by a handedness questionnaire [Oldfield1971assessment]. Exclusion criteria were in place to ensure that participants had no history of neurological or psychiatric disorders, drug addiction, or head trauma. 
In total, 20 subjects were included in the study; the first four subjects’ data were excluded due to hardware failure. Participants were required to name the pictures shown on a screen. Each event (a random picture) has three phases: [-2s, 0s] is the baseline (pre-visual-stimulation); at time 0 the picture is shown on screen; [0s, 1.5s] is the post-stimulation phase; and [1.5s, 3s] is the naming phase. Pictures used in the task were selected from the Snodgrass & Vanderwart black-and-white line drawing corpus [Snodgrass1980standardized]. “./code/experiment_schema.pdf” shows the task design, and the data pre-processing pipeline is illustrated in “./code/preprocess_pipeline.pdf”. In total, there are 270 trials for each of the 16 subjects. ## Dataset Information | Dataset ID | `DS004554` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Forced Picture Naming Task | | Author (year) | `Volpert2023` | | Canonical | — | | Importable as | `DS004554`, `Volpert2023` | | Year | 2023 | | Authors | V. Volpert, B. Xu, A. Tchechmedjiev, S. Harispe, A. Aksenov, Q. Mesnildrey and A. Beuter | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004554.v1.0.4](https://doi.org/10.18112/openneuro.ds004554.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004554) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004554) | [Source URL](https://openneuro.org/datasets/ds004554) | ### Copy-paste BibTeX ```bibtex @dataset{ds004554, title = {Forced Picture Naming Task}, author = {V. Volpert and B. Xu and A. Tchechmedjiev and S. Harispe and A. Aksenov and Q. Mesnildrey and A.
Beuter}, doi = {10.18112/openneuro.ds004554.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004554.v1.0.4}, } ``` ## Technical Details - Subjects: 16 - Recordings: 16 - Tasks: 1 - Channels: 99 - Sampling rate (Hz): 1000.0 - Duration (hours): 0.0244488888888888 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.8 GB - File count: 16 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004554.v1.0.4 - Source: openneuro - OpenNeuro: [ds004554](https://openneuro.org/datasets/ds004554) - NeMAR: [ds004554](https://nemar.org/dataexplorer/detail?dataset_id=ds004554) ## API Reference Use the `DS004554` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Forced Picture Naming Task * **Study:** `ds004554` (OpenNeuro) * **Author (year):** `Volpert2023` * **Canonical:** — Also importable as: `DS004554`, `Volpert2023`. Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004554](https://openneuro.org/datasets/ds004554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004554](https://nemar.org/dataexplorer/detail?dataset_id=ds004554) DOI: [https://doi.org/10.18112/openneuro.ds004554.v1.0.4](https://doi.org/10.18112/openneuro.ds004554.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004554 >>> dataset = DS004554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004554) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004554) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004561: eeg dataset, 23 subjects *Illusion of Agency over Electrically-Actuated Movements* Access recordings and metadata through EEGDash. **Citation:** John Veillette, Pedro Lopes, Howard Nusbaum (2023). *Illusion of Agency over Electrically-Actuated Movements*. 
[10.18112/openneuro.ds004561.v1.0.0](https://doi.org/10.18112/openneuro.ds004561.v1.0.0) Modality: eeg Subjects: 23 Recordings: 23 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004561 dataset = DS004561(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004561(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004561( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004561, title = {Illusion of Agency over Electrically-Actuated Movements}, author = {John Veillette and Pedro Lopes and Howard Nusbaum}, doi = {10.18112/openneuro.ds004561.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004561.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004561` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Illusion of Agency over Electrically-Actuated Movements | | Author (year) | `Veillette2023` | | Canonical | — | | Importable as | `DS004561`, `Veillette2023` | | Year | 2023 | | Authors | John Veillette, Pedro Lopes, Howard Nusbaum | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004561.v1.0.0](https://doi.org/10.18112/openneuro.ds004561.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004561) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) | [Source URL](https://openneuro.org/datasets/ds004561) | ### Copy-paste BibTeX ```bibtex @dataset{ds004561, title = {Illusion of Agency over Electrically-Actuated Movements}, author = {John Veillette and Pedro Lopes and Howard Nusbaum}, doi = {10.18112/openneuro.ds004561.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004561.v1.0.0}, } ``` ## Technical Details - Subjects: 23 - Recordings: 23 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 10000.0 - Duration (hours): 11.379221527777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 97.7 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004561.v1.0.0 - Source: openneuro - OpenNeuro: [ds004561](https://openneuro.org/datasets/ds004561) - NeMAR: [ds004561](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) ## API Reference Use the `DS004561` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004561(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Illusion of Agency over Electrically-Actuated Movements * **Study:** `ds004561` (OpenNeuro) * **Author (year):** `Veillette2023` * **Canonical:** — Also importable as: `DS004561`, `Veillette2023`. Modality: `eeg`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
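Recordings in this dataset are sampled at 10 kHz, so downsampling (for example with MNE-Python's `Raw.resample`) is usually the first preprocessing step. A small sketch of choosing an integer decimation factor for a target rate, as plain arithmetic, assuming you low-pass filter below the new Nyquist frequency before decimating:

```python
def decim_factor(sfreq: float, target: float) -> int:
    """Integer decimation factor from `sfreq` down to `target`.

    Requires the target rate to divide the original rate exactly,
    so samples can be kept at a fixed stride.
    """
    factor = sfreq / target
    if abs(factor - round(factor)) > 1e-9:
        raise ValueError("target rate must evenly divide the original rate")
    return round(factor)

# 10 kHz -> 250 Hz: keep every 40th sample; the new Nyquist is 125 Hz.
print(decim_factor(10000.0, 250.0))
# 40
```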
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004561](https://openneuro.org/datasets/ds004561) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004561](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) DOI: [https://doi.org/10.18112/openneuro.ds004561.v1.0.0](https://doi.org/10.18112/openneuro.ds004561.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004561 >>> dataset = DS004561(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004561) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004563: eeg dataset, 40 subjects *Vicarious touch: overlapping neural patterns between seeing and feeling touch* Access recordings and metadata through EEGDash. **Citation:** Sophie Smit, Denise Moerel, Regine Zopf, Anina N Rich (2023). *Vicarious touch: overlapping neural patterns between seeing and feeling touch*. 
[10.18112/openneuro.ds004563.v1.0.1](https://doi.org/10.18112/openneuro.ds004563.v1.0.1) Modality: eeg Subjects: 40 Recordings: 119 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004563 dataset = DS004563(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004563(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004563( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004563, title = {Vicarious touch: overlapping neural patterns between seeing and feeling touch}, author = {Sophie Smit and Denise Moerel and Regine Zopf and Anina N Rich}, doi = {10.18112/openneuro.ds004563.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004563.v1.0.1}, } ``` ## About This Dataset Data collection took place at Macquarie University in Sydney Australia. The study was approved by the Macquarie University Ethics Committee. We used time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person’s fingers (visual trials). There were 12 runs in total, divided into four blocks of 36 trials (with alternating sets of nine tactile and nine visual trials) resulting in a total of 1728 trials (864 tactile and 864 visual). 
There were an additional 240 target trials (20 per run), which were excluded from analysis. Between trials there was an inter-trial interval of 800 ms. Each run lasted approximately 7-8 minutes, with short breaks between blocks and runs. Whole-brain 64-channel EEG data were recorded using a BioSemi ActiveTwo system (BioSemi, Inc.) at 2048 Hz with standard 10-20 caps. Stimuli were presented using MATLAB (MathWorks) and Psychtoolbox (Brainard, 1997). The experiment presentation script, all analysis code, and stimuli are made available (see the code and stimuli folders). The data are made available both in raw form (see each participant’s file) and after processing (see derivatives). ## Dataset Information | Dataset ID | `DS004563` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Vicarious touch: overlapping neural patterns between seeing and feeling touch | | Author (year) | `Smit2023` | | Canonical | — | | Importable as | `DS004563`, `Smit2023` | | Year | 2023 | | Authors | Sophie Smit, Denise Moerel, Regine Zopf, Anina N Rich | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004563.v1.0.1](https://doi.org/10.18112/openneuro.ds004563.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004563) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) | [Source URL](https://openneuro.org/datasets/ds004563) | ### Copy-paste BibTeX ```bibtex @dataset{ds004563, title = {Vicarious touch: overlapping neural patterns between seeing and feeling touch}, author = {Sophie Smit and Denise Moerel and Regine Zopf and Anina N Rich}, doi = {10.18112/openneuro.ds004563.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004563.v1.0.1}, } ``` ## Technical Details - Subjects: 40 - Recordings: 119 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 2048.0 - Duration (hours):
64.68555555555555 - Pathology: Other - Modality: Multisensory - Type: Perception - Size on disk: 100.9 GB - File count: 119 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004563.v1.0.1 - Source: openneuro - OpenNeuro: [ds004563](https://openneuro.org/datasets/ds004563) - NeMAR: [ds004563](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) ## API Reference Use the `DS004563` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Vicarious touch: overlapping neural patterns between seeing and feeling touch * **Study:** `ds004563` (OpenNeuro) * **Author (year):** `Smit2023` * **Canonical:** — Also importable as: `DS004563`, `Smit2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 40; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
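To make the query-merging behavior described in the Notes concrete, here is an illustrative pure-Python sketch of how a MongoDB-style user query is conceptually ANDed with the fixed `dataset` filter. This is not EEGDash's actual implementation (which queries a metadata database); the record fields and values are examples only:

```python
# Illustrative only: a tiny matcher showing how a MongoDB-style user
# query is conceptually ANDed with the class's fixed dataset filter.

def matches(record, query):
    """Check a record against a flat MongoDB-style query dict."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:  # plain equality
            return False
    return True

records = [
    {"dataset": "ds004563", "subject": "01"},
    {"dataset": "ds004563", "subject": "03"},
    {"dataset": "ds999999", "subject": "01"},
]

# The class pins dataset="ds004563"; the user query is merged on top.
merged = {"dataset": "ds004563", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, merged)]
print(selected)  # [{'dataset': 'ds004563', 'subject': '01'}]
```

Because the `dataset` key is always pinned by the class, a user query containing `dataset` would conflict with it, which is why it is disallowed.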
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004563](https://openneuro.org/datasets/ds004563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004563](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) DOI: [https://doi.org/10.18112/openneuro.ds004563.v1.0.1](https://doi.org/10.18112/openneuro.ds004563.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004563 >>> dataset = DS004563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004563) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004572: eeg dataset, 52 subjects *The effects of sham hypnosis techniques* Access recordings and metadata through EEGDash. **Citation:** Zoltan Kekecs, Kyra Girán, Vanda Vizkievicz, Anna Lutoskin, Yeganeh Farahzadi (2023). *The effects of sham hypnosis techniques*. 
[10.18112/openneuro.ds004572.v1.3.2](https://doi.org/10.18112/openneuro.ds004572.v1.3.2) Modality: eeg Subjects: 52 Recordings: 516 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004572 dataset = DS004572(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004572(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004572( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004572, title = {The effects of sham hypnosis techniques}, author = {Zoltan Kekecs and Kyra Girán and Vanda Vizkievicz and Anna Lutoskin and Yeganeh Farahzadi}, doi = {10.18112/openneuro.ds004572.v1.3.2}, url = {https://doi.org/10.18112/openneuro.ds004572.v1.3.2}, } ``` ## About This Dataset 52 participants (39 females) took part in this study, and their brain electrophysiological activity was recorded using a 64-channel EasyCap from Brain Products. After mounting the EEG electrode cap, the study protocol started with 5 minutes of closed-eyes rest (Pre-hypnosis Baseline), followed by four experimental conditions (Experimental Blocks), and ended with another 5 minutes of closed-eyes rest (Post-hypnosis Baseline). Throughout the four Experimental Blocks, participants were exposed to either conventional or unconventional (placebo) hypnotic inductions, described either as hypnosis or as a simple relaxation technique, in a 2 x 2 balanced placebo design. 
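The four cells of the 2 x 2 balanced placebo design can be enumerated directly; a sketch in which the condition labels are illustrative paraphrases of the description:

```python
from itertools import product

# 2 x 2 balanced placebo design: induction type x presented label
inductions = ["conventional", "unconventional (placebo)"]
labels = ["hypnosis", "control"]

conditions = list(product(inductions, labels))
for induction, label in conditions:
    print(f"{induction} induction presented as '{label}'")

# Four conditions in total, one trial of each per participant.
print(len(conditions))  # 4
```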
In other words, each participant underwent four trials, in which they were exposed to a conventional hypnosis induction presented as “hypnosis”; a conventional hypnosis induction presented as “control”; an unconventional hypnosis induction presented as “hypnosis”; and an unconventional hypnosis induction presented as “control” in a randomized order. For detailed information on our data collection methods, refer to the public trial registry on the Open Science Framework: [https://doi.org/10.17605/OSF.IO/WVHDA](https://doi.org/10.17605/OSF.IO/WVHDA). Publications based on this dataset: - [https://onlinelibrary.wiley.com/doi/full/10.1111/psyp.70183](https://onlinelibrary.wiley.com/doi/full/10.1111/psyp.70183) - [https://www.nature.com/articles/s41598-024-56633-x](https://www.nature.com/articles/s41598-024-56633-x) ## Dataset Information | Dataset ID | `DS004572` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The effects of sham hypnosis techniques | | Author (year) | `Kekecs2023` | | Canonical | `Kekecs2024` | | Importable as | `DS004572`, `Kekecs2023`, `Kekecs2024` | | Year | 2023 | | Authors | Zoltan Kekecs, Kyra Girán, Vanda Vizkievicz, Anna Lutoskin, Yeganeh Farahzadi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004572.v1.3.2](https://doi.org/10.18112/openneuro.ds004572.v1.3.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004572) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) | [Source URL](https://openneuro.org/datasets/ds004572) | ### Copy-paste BibTeX ```bibtex @dataset{ds004572, title = {The effects of sham hypnosis techniques}, author = {Zoltan Kekecs and Kyra Girán and Vanda Vizkievicz and Anna Lutoskin and Yeganeh Farahzadi}, doi = {10.18112/openneuro.ds004572.v1.3.2}, url = {https://doi.org/10.18112/openneuro.ds004572.v1.3.2}, } ``` ## 
Technical Details - Subjects: 52 - Recordings: 516 - Tasks: 10 - Channels: 61 - Sampling rate (Hz): 1000.0 - Duration (hours): 53.24708222222222 - Pathology: Healthy - Modality: Auditory - Type: Other - Size on disk: 43.6 GB - File count: 516 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004572.v1.3.2 - Source: openneuro - OpenNeuro: [ds004572](https://openneuro.org/datasets/ds004572) - NeMAR: [ds004572](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) ## API Reference Use the `DS004572` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004572(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effects of sham hypnosis techniques * **Study:** `ds004572` (OpenNeuro) * **Author (year):** `Kekecs2023` * **Canonical:** `Kekecs2024` Also importable as: `DS004572`, `Kekecs2023`, `Kekecs2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 52; recordings: 516; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004572](https://openneuro.org/datasets/ds004572) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004572](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) DOI: [https://doi.org/10.18112/openneuro.ds004572.v1.3.2](https://doi.org/10.18112/openneuro.ds004572.v1.3.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004572 >>> dataset = DS004572(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004572) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004574: eeg dataset, 146 subjects *Cross-modal Oddball Task.* Access recordings and metadata through EEGDash. 
**Citation:** Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) (2023). *Cross-modal Oddball Task*. [10.18112/openneuro.ds004574.v1.0.0](https://doi.org/10.18112/openneuro.ds004574.v1.0.0) Modality: eeg Subjects: 146 Recordings: 146 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004574 dataset = DS004574(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004574(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004574( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004574, title = {Cross-modal Oddball Task.}, author = {Arun Singh and Rachel Cole and Arturo Espinoza and Jan R Wessel and Jim Cavanagh and Nandakumar Narayanan}, doi = {10.18112/openneuro.ds004574.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004574.v1.0.0}, } ``` ## About This Dataset This experiment includes 146 subjects: 98 individuals with Parkinson's disease and 48 controls. The data were collected from 2017 to 2021. 
Subjects completed this oddball task (along with multiple other cognitive tasks) while EEG was recorded with a 64-channel BrainVision cap. This task includes a primary GO cue (a white arrow) that required a directional response. That response could be correct or incorrect. The primary cue was preceded by a visual pre-cue and an auditory pre-cue, which occurred at the same time (500 ms before the arrow cue). Each trial had either standard versions of both pre-cues, an oddball visual pre-cue, or an oddball auditory pre-cue. Our analysis focused only on trials with both pre-cues standard or with an oddball auditory pre-cue. ## Dataset Information | Dataset ID | `DS004574` | |----------------|-----------------| | Title | Cross-modal Oddball Task. 
| | Author (year) | `Singh2023_Cross_modal` | | Canonical | — | | Importable as | `DS004574`, `Singh2023_Cross_modal` | | Year | 2023 | | Authors | Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004574.v1.0.0](https://doi.org/10.18112/openneuro.ds004574.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004574) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) | [Source URL](https://openneuro.org/datasets/ds004574) | ### Copy-paste BibTeX ```bibtex @dataset{ds004574, title = {Cross-modal Oddball Task.}, author = {Arun Singh and Rachel Cole and Arturo Espinoza and Jan R Wessel and Jim Cavanagh and Nandakumar Narayanan}, doi = {10.18112/openneuro.ds004574.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004574.v1.0.0}, } ``` ## Technical Details - Subjects: 146 - Recordings: 146 - Tasks: 1 - Channels: 63 (116), 64 (29), 66 - Sampling rate (Hz): 500.0 - Duration (hours): 31.04288888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 13.5 GB - File count: 146 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004574.v1.0.0 - Source: openneuro - OpenNeuro: [ds004574](https://openneuro.org/datasets/ds004574) - NeMAR: [ds004574](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) ## API Reference Use the `DS004574` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cross-modal Oddball Task. * **Study:** `ds004574` (OpenNeuro) * **Author (year):** `Singh2023_Cross_modal` * **Canonical:** — Also importable as: `DS004574`, `Singh2023_Cross_modal`. Modality: `eeg`. Subjects: 146; recordings: 146; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
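Recording-level metadata of this kind can be used to split a dataset into groups, e.g. the Parkinson's vs. control subjects in this dataset. A minimal sketch with plain dicts; the field names (`subject`, `group`) and values are hypothetical stand-ins for whatever `dataset.description` actually exposes:

```python
# Illustrative sketch: grouping recordings by a metadata field.
# The "group" field here is a hypothetical stand-in, not a
# documented EEGDash field.
from collections import defaultdict

recordings = [
    {"subject": "01", "group": "PD"},
    {"subject": "02", "group": "control"},
    {"subject": "03", "group": "PD"},
]

by_group = defaultdict(list)
for rec in recordings:
    by_group[rec["group"]].append(rec["subject"])

print(dict(by_group))  # {'PD': ['01', '03'], 'control': ['02']}
```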
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004574](https://openneuro.org/datasets/ds004574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004574](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) DOI: [https://doi.org/10.18112/openneuro.ds004574.v1.0.0](https://doi.org/10.18112/openneuro.ds004574.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004574 >>> dataset = DS004574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004574) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004577: eeg dataset, 103 subjects *Dataset containing resting EEG for a sample of 103 normal infants in the first year of life* Access recordings and metadata through EEGDash. 
**Citation:** Thalía Harmony (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México), Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México), Eduardo Aubert (Centro de Neurociencias de Cuba), Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México), Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) (2023). *Dataset containing resting EEG for a sample of 103 normal infants in the first year of life*. [10.18112/openneuro.ds004577.v1.0.1](https://doi.org/10.18112/openneuro.ds004577.v1.0.1) Modality: eeg Subjects: 103 Recordings: 130 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004577 dataset = DS004577(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004577(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004577( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004577, title = {Dataset containing resting EEG for a sample of 103 normal infants in the first year of life}, author = {Thalía Harmony (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) and Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México) and Eduardo Aubert (Centro de Neurociencias de Cuba) and Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) and Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México)}, doi = {10.18112/openneuro.ds004577.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004577.v1.0.1}, } ``` ## About This Dataset May 25th 2023 Neurodevelopment Research Unit, Instituto de Neurobiología, Universidad Nacional Autónoma de México This is a dataset containing resting EEG for a sample of 103 normal infants (41 female and 62 male) in the first year of life. 
81 subjects with 1 EEG recording, 18 subjects with 2 EEG recordings, 3 subjects with 3 EEG recordings, and 1 subject with 4 EEG recordings: 130 EEG recordings in total, distributed across 4 sessions. ## Dataset Information | Dataset ID | `DS004577` | |----------------|-----------------| | Title | Dataset containing resting EEG for a sample of 103 normal infants in the first year of life | | Author (year) | `Unit2023` | | Canonical | — | | Importable as | `DS004577`, `Unit2023` | | Year | 2023 | | Authors | Thalía Harmony (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México), Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México), Eduardo Aubert (Centro de Neurociencias de Cuba), Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México), Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004577.v1.0.1](https://doi.org/10.18112/openneuro.ds004577.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004577) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) | [Source URL](https://openneuro.org/datasets/ds004577) | ### Copy-paste BibTeX ```bibtex @dataset{ds004577, title = {Dataset containing resting EEG for a sample of 103 normal infants in the first year of life}, author = {Thalía Harmony 
(Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) and Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México) and Eduardo Aubert (Centro de Neurociencias de Cuba) and Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) and Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México)}, doi = {10.18112/openneuro.ds004577.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004577.v1.0.1}, } ``` ## Technical Details - Subjects: 103 - Recordings: 130 - Tasks: 1 - Channels: 19 (106), 24 (23), 21 - Sampling rate (Hz): 200.0 - Duration (hours): 22.973859722222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 652.7 MB - File count: 130 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004577.v1.0.1 - Source: openneuro - OpenNeuro: [ds004577](https://openneuro.org/datasets/ds004577) - NeMAR: [ds004577](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) ## API Reference Use the `DS004577` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004577(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset containing resting EEG for a sample of 103 normal infants in the first year of life * **Study:** `ds004577` (OpenNeuro) * **Author (year):** `Unit2023` * **Canonical:** — Also importable as: `DS004577`, `Unit2023`. Modality: `eeg`. Subjects: 103; recordings: 130; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004577](https://openneuro.org/datasets/ds004577) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004577](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) DOI: [https://doi.org/10.18112/openneuro.ds004577.v1.0.1](https://doi.org/10.18112/openneuro.ds004577.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004577 >>> dataset = DS004577(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004577) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004579: eeg dataset, 139 subjects *Interval Timing Task* Access recordings and metadata through EEGDash. **Citation:** Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) (2023). *Interval Timing Task*. 
[10.18112/openneuro.ds004579.v1.0.0](https://doi.org/10.18112/openneuro.ds004579.v1.0.0) Modality: eeg Subjects: 139 Recordings: 139 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004579 dataset = DS004579(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004579(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004579( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004579, title = {Interval Timing Task}, author = {Arun Singh and Rachel Cole and Arturo Espinoza and Jan R Wessel and Jim Cavanagh and Nandakumar Narayanan}, doi = {10.18112/openneuro.ds004579.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004579.v1.0.0}, } ``` ## About This Dataset This experiment includes 139 subjects: 94 individuals with Parkinson's disease and 45 controls. Subjects completed this IntervalTiming task (along with multiple other cognitive tasks) while EEG was recorded with a 64-channel BrainVision cap. This task presented black instructional text on the center of a white screen that read “Short interval” on 3-second interval trials and “Long interval” on 7-second interval trials. The researchers never communicated the actual interval durations to the participants. 
The instructions were displayed for 1 second, and the appearance of an image of a solid box in the center of the computer screen indicated the start of the interval. The cue was displayed on the screen for the entire trial, which lasted 6 s for 3-s intervals and 14 s for 7-s intervals. The researchers instructed participants to press the keyboard spacebar when they judged the target interval to have elapsed. Participants were directed not to count, and a distractor vowel appeared at random intervals in the screen center. ## Dataset Information | Dataset ID | `DS004579` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Interval Timing Task | | Author (year) | `Singh2023_Interval_Timing` | | Canonical | — | | Importable as | `DS004579`, `Singh2023_Interval_Timing` | | Year | 2023 | | Authors | Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004579.v1.0.0](https://doi.org/10.18112/openneuro.ds004579.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004579) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) | [Source URL](https://openneuro.org/datasets/ds004579) | ### Copy-paste BibTeX ```bibtex 
@dataset{ds004579, title = {Interval Timing Task}, author = {Arun Singh arun.singh@usd.edu and Rachel Cole rachel-cole@uiowa.edu and Arturo Espinoza arturo-espinoza@uiowa.edu and Jan R Wessel jan-wessel@uiowa.edu and Jim Cavanagh jcavanagh@unm.edu and Nandakumar Narayanan nandakumar-narayanan@uiowa.edu}, doi = {10.18112/openneuro.ds004579.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004579.v1.0.0}, } ``` ## Technical Details - Subjects: 139 - Recordings: 139 - Tasks: 1 - Channels: 63 (110), 64 (28), 66 - Sampling rate (Hz): 500.0 - Duration (hours): 55.70307777777777 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 24.1 GB - File count: 139 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004579.v1.0.0 - Source: openneuro - OpenNeuro: [ds004579](https://openneuro.org/datasets/ds004579) - NeMAR: [ds004579](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) ## API Reference Use the `DS004579` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004579(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Interval Timing Task * **Study:** `ds004579` (OpenNeuro) * **Author (year):** `Singh2023_Interval_Timing` * **Canonical:** — Also importable as: `DS004579`, `Singh2023_Interval_Timing`. Modality: `eeg`. Subjects: 139; recordings: 139; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004579](https://openneuro.org/datasets/ds004579) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004579](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) DOI: [https://doi.org/10.18112/openneuro.ds004579.v1.0.0](https://doi.org/10.18112/openneuro.ds004579.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004579 >>> dataset = DS004579(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004579) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004580: eeg dataset, 147 subjects *Simon-conflict Task.* Access recordings and metadata through EEGDash. 
**Citation:** Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) (2023). *Simon-conflict Task.*. [10.18112/openneuro.ds004580.v1.0.0](https://doi.org/10.18112/openneuro.ds004580.v1.0.0) Modality: eeg Subjects: 147 Recordings: 147 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004580 dataset = DS004580(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004580(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004580( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004580, title = {Simon-conflict Task.}, author = {Arun Singh arun.singh@usd.edu and Rachel Cole rachel-cole@uiowa.edu and Arturo Espinoza arturo-espinoza@uiowa.edu and Jan R Wessel jan-wessel@uiowa.edu and Jim Cavanagh jcavanagh@unm.edu and Nandakumar Narayanan nandakumar-narayanan@uiowa.edu}, doi = {10.18112/openneuro.ds004580.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004580.v1.0.0}, } ``` ## About This Dataset This experiment includes 146 subjects: 98 individuals with Parkinson's disease, and 48 controls.
Subjects completed this Simon task (along with multiple other cognitive tasks) while EEG was recorded with a 64-channel BrainVision cap. This task included a stimulus presented to the left or right side of the screen. The researchers instructed participants to press a left key when the stimulus was yellow or red and a right key when it was cyan or blue. The stimulus was either spatially congruent with the screen side matching the response hand or incongruent with the screen side contralateral to the response hand. The researchers analyzed data from congruent and incongruent trials separately. ## Dataset Information | Dataset ID | `DS004580` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Simon-conflict Task.
| | Author (year) | `Singh2023_Simon_conflict` | | Canonical | — | | Importable as | `DS004580`, `Singh2023_Simon_conflict` | | Year | 2023 | | Authors | Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jan R Wessel [jan-wessel@uiowa.edu](mailto:jan-wessel@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004580.v1.0.0](https://doi.org/10.18112/openneuro.ds004580.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004580) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) | [Source URL](https://openneuro.org/datasets/ds004580) | ### Copy-paste BibTeX ```bibtex @dataset{ds004580, title = {Simon-conflict Task.}, author = {Arun Singh arun.singh@usd.edu and Rachel Cole rachel-cole@uiowa.edu and Arturo Espinoza arturo-espinoza@uiowa.edu and Jan R Wessel jan-wessel@uiowa.edu and Jim Cavanagh jcavanagh@unm.edu and Nandakumar Narayanan nandakumar-narayanan@uiowa.edu}, doi = {10.18112/openneuro.ds004580.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004580.v1.0.0}, } ``` ## Technical Details - Subjects: 147 - Recordings: 147 - Tasks: 1 - Channels: 63 (118), 64 (28), 66 - Sampling rate (Hz): 500.0 - Duration (hours): 36.51436111111112 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.8 GB - File count: 147 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004580.v1.0.0 - Source: openneuro - OpenNeuro: [ds004580](https://openneuro.org/datasets/ds004580) - NeMAR: [ds004580](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) ## API Reference Use the `DS004580` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004580(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simon-conflict Task. * **Study:** `ds004580` (OpenNeuro) * **Author (year):** `Singh2023_Simon_conflict` * **Canonical:** — Also importable as: `DS004580`, `Singh2023_Simon_conflict`. Modality: `eeg`. Subjects: 147; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004580](https://openneuro.org/datasets/ds004580) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004580](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) DOI: [https://doi.org/10.18112/openneuro.ds004580.v1.0.0](https://doi.org/10.18112/openneuro.ds004580.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004580 >>> dataset = DS004580(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004580) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004582: eeg dataset, 73 subjects *FakeFaceEmo_data* Access recordings and metadata through EEGDash. **Citation:** Makowski, Dominique, Te, An-Shu, Kirk, Stephanie, Ngoi, Zi Liang (2023). *FakeFaceEmo_data*. 
[10.18112/openneuro.ds004582.v1.0.0](https://doi.org/10.18112/openneuro.ds004582.v1.0.0) Modality: eeg Subjects: 73 Recordings: 73 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004582 dataset = DS004582(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004582(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004582( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004582, title = {FakeFaceEmo_data}, author = {Makowski, Dominique and Te, An-Shu and Kirk, Stephanie and Ngoi, Zi Liang}, doi = {10.18112/openneuro.ds004582.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004582.v1.0.0}, } ``` ## About This Dataset **Overview** This dataset was collected in 2023 and comprises electroencephalography, physiological and behavioural data acquired from 73 healthy individuals (ages: 21-45). The task was administered as part of a larger study. **Task Description** **Fake Face (FF)** The objective of the study was to investigate if emotional arousal would affect people’s perceived realness of others’ faces, given ambiguous information. To manipulate participants’ emotional arousal, images of angry (high emotionality) and neutral (low emotionality) faces (selected based on their rated intensity from the NimStim Set of Facial Expressions (Tottenham et al., 2009)) were used as subliminal primes and facial images from the Multi-Racial Mega-Resolution database (Strohminger et al., 2016) were used as target stimuli.
Blank screens were flashed prior to the target presentation in control trials. Forward and backward masks, generated by scrambling the primes, were implemented to prevent the primes from breaking awareness. Each participant underwent a total of 222 trials, each comprising a forward mask, followed by the prime and backward mask, before the presentation of the target stimuli. The primes and targets were presented in a randomized order and trials were administered over the course of 3 blocks, between which participants were given a break to rest before proceeding to the next block of trials. During the presentation of the target stimulus, participants were instructed to indicate whether they thought the target was real or fake in a limited span of time (750 ms), after which participants rated their confidence in their response using a sliding scale (0-100). **Data acquisition** **EEG data acquisition** EEG signals were recorded using the EasyCap 64-channel and BrainVision Recording system. Electrodes were placed on the EEG cap according to the standard 10-5 system of electrode placement (Oostenveld & Praamstra, 2001) and impedance was kept below 12 kOhm for each subject. The ground electrode was placed on the forehead, and Cz was used as the reference channel. During recording, the sampling rate was 10,000 Hz. Note that channels Tp9 and Tp10 were placed near the outer canthi of each eye, and POz as well as Oz were fixed above and below one of the eyes to measure the EOG. **Physiological data acquisition** Participants’ physiological signals, that is their electrocardiogram (*ECG*), photoplethysmograph (PPG) and respiration signals (*RSP*), were obtained at a sampling frequency of 1000 Hz. All physiological signals were recorded via the PLUX OpenSignals software and BITalino Toolkit. ECG was collected using three ECG electrodes placed according to a modified Lead II configuration, and RSP was acquired using a respiration belt tightened over participants’ upper abdomen.
PPG sensors, which record changes in blood volume, were clipped on the tip of the index finger of participants’ non-dominant hand to measure heart rate and oxygen saturation. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004582` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FakeFaceEmo_data | | Author (year) | `Makowski2023_FakeFaceEmo` | | Canonical | — | | Importable as | `DS004582`, `Makowski2023_FakeFaceEmo` | | Year | 2023 | | Authors | Makowski, Dominique, Te, An-Shu, Kirk, Stephanie, Ngoi, Zi Liang | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004582.v1.0.0](https://doi.org/10.18112/openneuro.ds004582.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004582) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) | [Source URL](https://openneuro.org/datasets/ds004582) | ### Copy-paste BibTeX ```bibtex @dataset{ds004582, title = {FakeFaceEmo_data}, author = {Makowski, Dominique and Te, An-Shu and Kirk, Stephanie and Ngoi, Zi Liang}, doi = {10.18112/openneuro.ds004582.v1.0.0}, url =
{https://doi.org/10.18112/openneuro.ds004582.v1.0.0}, } ``` ## Technical Details - Subjects: 73 - Recordings: 73 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 10000.0 - Duration (hours): 34.243775722222225 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 294.2 GB - File count: 73 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004582.v1.0.0 - Source: openneuro - OpenNeuro: [ds004582](https://openneuro.org/datasets/ds004582) - NeMAR: [ds004582](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) ## API Reference Use the `DS004582` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004582(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FakeFaceEmo_data * **Study:** `ds004582` (OpenNeuro) * **Author (year):** `Makowski2023_FakeFaceEmo` * **Canonical:** — Also importable as: `DS004582`, `Makowski2023_FakeFaceEmo`. Modality: `eeg`. Subjects: 73; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004582](https://openneuro.org/datasets/ds004582) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004582](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) DOI: [https://doi.org/10.18112/openneuro.ds004582.v1.0.0](https://doi.org/10.18112/openneuro.ds004582.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004582 >>> dataset = DS004582(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004582) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004584: eeg dataset, 149 subjects *Rest eyes open* Access recordings and metadata through EEGDash. **Citation:** Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) (2023). *Rest eyes open*. 
[10.18112/openneuro.ds004584.v1.0.0](https://doi.org/10.18112/openneuro.ds004584.v1.0.0) Modality: eeg Subjects: 149 Recordings: 149 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004584 dataset = DS004584(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004584(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004584( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004584, title = {Rest eyes open}, author = {Arun Singh arun.singh@usd.edu and Rachel Cole rachel-cole@uiowa.edu and Arturo Espinoza arturo-espinoza@uiowa.edu and Jim Cavanagh jcavanagh@unm.edu and Nandakumar Narayanan nandakumar-narayanan@uiowa.edu}, doi = {10.18112/openneuro.ds004584.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004584.v1.0.0}, } ``` ## About This Dataset This experiment includes 149 subjects: 100 individuals with Parkinson's disease, and 49 controls. EEG was recorded with a 64-channel BrainVision cap. Resting-state EEG was collected from patients sitting in a quiet room with their eyes open for two minutes.
## Dataset Information | Dataset ID | `DS004584` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Rest eyes open | | Author (year) | `Singh2023_Rest_eyes` | | Canonical | — | | Importable as | `DS004584`, `Singh2023_Rest_eyes` | | Year | 2023 | | Authors | Arun Singh [arun.singh@usd.edu](mailto:arun.singh@usd.edu), Rachel Cole [rachel-cole@uiowa.edu](mailto:rachel-cole@uiowa.edu), Arturo Espinoza [arturo-espinoza@uiowa.edu](mailto:arturo-espinoza@uiowa.edu), Jim Cavanagh [jcavanagh@unm.edu](mailto:jcavanagh@unm.edu), Nandakumar Narayanan [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004584.v1.0.0](https://doi.org/10.18112/openneuro.ds004584.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004584) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) | [Source URL](https://openneuro.org/datasets/ds004584) | ### Copy-paste BibTeX ```bibtex @dataset{ds004584, title = {Rest eyes open}, author = {Arun Singh arun.singh@usd.edu and Rachel Cole rachel-cole@uiowa.edu and Arturo Espinoza arturo-espinoza@uiowa.edu and Jim Cavanagh jcavanagh@unm.edu and Nandakumar Narayanan nandakumar-narayanan@uiowa.edu}, doi = {10.18112/openneuro.ds004584.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004584.v1.0.0}, } ``` ## Technical Details - Subjects: 149 - Recordings: 149 - Tasks: 1 - Channels: 63 (119), 64 (29), 66 - Sampling rate (Hz): 500.0 - Duration (hours): 6.641037222222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.9 GB - File count: 149 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds004584.v1.0.0 - Source: openneuro - OpenNeuro: [ds004584](https://openneuro.org/datasets/ds004584) - NeMAR: [ds004584](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) ## API Reference Use the `DS004584` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004584(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rest eyes open * **Study:** `ds004584` (OpenNeuro) * **Author (year):** `Singh2023_Rest_eyes` * **Canonical:** — Also importable as: `DS004584`, `Singh2023_Rest_eyes`. Modality: `eeg`. Subjects: 149; recordings: 149; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004584](https://openneuro.org/datasets/ds004584) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004584](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) DOI: [https://doi.org/10.18112/openneuro.ds004584.v1.0.0](https://doi.org/10.18112/openneuro.ds004584.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004584 >>> dataset = DS004584(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004584) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004587: eeg dataset, 103 subjects *IllusionGameEEG_data* Access recordings and metadata through EEGDash. **Citation:** Makowski, Dominique, Te, An-Shu, Jiayi, Zhang, Kirk, Stephanie, Ngoi, Zi Liang (2023). *IllusionGameEEG_data*. 
[10.18112/openneuro.ds004587.v1.0.0](https://doi.org/10.18112/openneuro.ds004587.v1.0.0) Modality: eeg Subjects: 103 Recordings: 114 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004587 dataset = DS004587(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004587(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004587( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004587, title = {IllusionGameEEG_data}, author = {Makowski, Dominique and Te, An-Shu and Jiayi, Zhang and Kirk, Stephanie and Ngoi, Zi Liang}, doi = {10.18112/openneuro.ds004587.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004587.v1.0.0}, } ``` ## About This Dataset **Overview** This dataset was collected in 2022-2023 and comprises electroencephalography, physiological and behavioural data acquired from 103 healthy individuals (ages: 21-45). The task was administered as part of a larger study. **Task Description** **Illusion Game (IG)** The aim of this task is to investigate people’s sensitivity to visual illusions as a general, common factor.
Using Pyllusion, which enabled us to manipulate the objective parameters of visual illusions, we generated stimuli of varying task difficulty and illusion strength for 3 different classic illusions (Ebbinghaus, Müller-Lyer and Vertical-Horizontal). We then created an experimental task in which participants were instructed to make perceptual judgements about targets in the illusion as quickly as possible, ignoring its context, which biases their perception of the illusion. For instance, in the Müller-Lyer illusion, the same-length line segments (*targets*) appear to have different lengths if they end with inwards vs. outwards pointing arrows (*context*). The first series of the 3 illusion blocks (each comprising 64 trials) was presented to participants in a randomized order, followed by a short break, after which participants performed the second series of blocks displayed in a newly randomized order. In total, each participant performed 384 illusion trials (6\*64). **Resting State** Before the start of the illusion task, participants were instructed to keep their eyes closed for 8 minutes. At the end of the resting period, a ‘beep’ soundclip was played to cue participants to open their eyes. An adapted version of the Amsterdam Resting State Questionnaire (Diaz et al., 2014) was then administered to examine participants’ subjective resting state experience. **NOTES** Due to a technical error, sub-FFE111 and sub-FFE116 do not have any physiological data, and sub-FFE117, sub-FFE139 and sub-FFE146 do not have behavioural data for the illusion game task. EEG data collection was split into 6 runs corresponding to each block of illusion trials for sub-FFE111 and sub-FFE121 during pilot testing. EEG data were collected twice for sub-FFE007 due to a technical glitch that occurred in the middle of illusion task trials. **Data acquisition** **EEG data acquisition** EEG signals were recorded using the EasyCap 64-channel and BrainVision Recording system.
Electrodes were placed on the EEG cap according to the standard 10-5 system of electrode placement (Oostenveld & Praamstra, 2001) and impedance was kept below 12 kOhm for each subject. The ground electrode was placed on the forehead, and Cz was used as the reference channel. During recording, the sampling rate was 10000 Hz. Note that channels Tp9 and Tp10 were placed near the outer canthi of each eye, and POz as well as Oz were fixed above and below one of the eyes to measure the EOG. **Physiological data acquisition** Participants’ physiological signals, that is, their electrocardiogram (*ECG*), photoplethysmograph (*PPG*) and respiration signals (*RSP*), were obtained at a sampling frequency of 1000 Hz. All physiological signals were recorded via the PLUX OpenSignals software and BITalino Toolkit. ECG was collected using three ECG electrodes placed according to a modified Lead II configuration, and RSP was acquired using a respiration belt tightened over participants’ upper abdomen. PPG sensors, which record changes in blood volume, were clipped on the tip of the index finger of participants’ non-dominant hand to measure heart rate and oxygen saturation. **References** Diaz, B. A., Van Der Sluis, S., Benjamins, J. S., Stoffers, D., Hardstone, R., Mansvelder, H. D., … & Linkenkaer-Hansen, K. (2014). The ARSQ 2.0 reveals age and personality effects on mind-wandering experiences. Frontiers in psychology, 5, 271. 
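The NOTES and acquisition details above suggest two dataset-specific preprocessing steps worth scripting: dropping the subjects flagged as missing data, and relabelling the channels that were repurposed to record EOG. A minimal sketch; the subject labels and channel names are assumptions based on the README above, and the commented calls require the downloaded data:

```python
# Subjects flagged in the NOTES as missing physiological or behavioural
# data (labels assumed to match the BIDS participant IDs).
excluded = ["FFE111", "FFE116", "FFE117", "FFE139", "FFE146"]
query = {"subject": {"$nin": excluded}}

# Channels repurposed as EOG in this recording setup (names assumed to
# match the labels in the BIDS channel files).
eog_remap = {"TP9": "eog", "TP10": "eog", "POz": "eog", "Oz": "eog"}

# With the data downloaded, the query and remap would be applied as:
# from eegdash.dataset import DS004587
# dataset = DS004587(cache_dir="./data", query=query)
# raw = dataset.datasets[0].raw
# raw.set_channel_types(eog_remap)  # MNE-Python channel relabelling
```

The `$nin` operator follows the same MongoDB-style query syntax shown in the Quickstart's advanced-query example.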
## Dataset Information | Dataset ID | `DS004587` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | IllusionGameEEG_data | | Author (year) | `Makowski2023_IllusionGameEEG` | | Canonical | — | | Importable as | `DS004587`, `Makowski2023_IllusionGameEEG` | | Year | 2023 | | Authors | Makowski, Dominique, Te, An-Shu, Jiayi, Zhang, Kirk, Stephanie, Ngoi, Zi Liang | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004587.v1.0.0](https://doi.org/10.18112/openneuro.ds004587.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004587) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) | [Source URL](https://openneuro.org/datasets/ds004587) | ### Copy-paste BibTeX ```bibtex @dataset{ds004587, title = {IllusionGameEEG_data}, author = {Makowski, Dominique and Te, An-Shu and Jiayi, Zhang and Kirk, Stephanie and Ngoi, Zi Liang}, doi = {10.18112/openneuro.ds004587.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004587.v1.0.0}, } ``` ## Technical Details - Subjects: 103 - Recordings: 114 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 10000.0 - Duration (hours): 25.51370786111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 219.3 GB - File count: 114 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004587.v1.0.0 - Source: openneuro - OpenNeuro: [ds004587](https://openneuro.org/datasets/ds004587) - NeMAR: [ds004587](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) ## API Reference Use the `DS004587` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004587(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IllusionGameEEG_data * **Study:** `ds004587` (OpenNeuro) * **Author (year):** `Makowski2023_IllusionGameEEG` * **Canonical:** — Also importable as: `DS004587`, `Makowski2023_IllusionGameEEG`. Modality: `eeg`. Subjects: 103; recordings: 114; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
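The Notes above describe the merge contract: the user query is ANDed with the dataset filter and must not itself contain the key `dataset`. A minimal illustrative sketch of that contract (the real merge lives inside `EEGDashDataset`; `merge_query` here is a hypothetical helper):

```python
def merge_query(dataset_id, user_query):
    """AND a user query with the dataset filter, per the documented contract."""
    if user_query and "dataset" in user_query:
        # The class rejects queries that try to override the dataset selection.
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

print(merge_query("ds004587", {"subject": {"$in": ["01", "02"]}}))
# → {'dataset': 'ds004587', 'subject': {'$in': ['01', '02']}}
```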
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004587](https://openneuro.org/datasets/ds004587) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004587](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) DOI: [https://doi.org/10.18112/openneuro.ds004587.v1.0.0](https://doi.org/10.18112/openneuro.ds004587.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004587 >>> dataset = DS004587(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004587) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004588: eeg dataset, 42 subjects *Neuma* Access recordings and metadata through EEGDash. **Citation:** Kostas Georgiadis, Fotis P. Kalaganis, Kyriakos Riskos, Eleytheria Matta, Vangelis P. Oikonomou, Yfantidou Ioanna, Dimitris Chantziaras, Kyriakos Pantouvakis, Spiros Nikolopoulos, Nikos A. Laskaris, Ioannis Kompatsiaris (2023). *Neuma*. 
[10.18112/openneuro.ds004588.v1.2.0](https://doi.org/10.18112/openneuro.ds004588.v1.2.0) Modality: eeg Subjects: 42 Recordings: 42 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004588 dataset = DS004588(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004588(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004588( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004588, title = {Neuma}, author = {Kostas Georgiadis and Fotis P. Kalaganis and Kyriakos Riskos and Eleytheria Matta and Vangelis P. Oikonomou and Yfantidou Ioanna and Dimitris Chantziaras and Kyriakos Pantouvakis and Spiros Nikolopoulos and Nikos A. Laskaris and Ioannis Kompatsiaris}, doi = {10.18112/openneuro.ds004588.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004588.v1.2.0}, } ``` ## About This Dataset A novel multimodal Neuromarketing dataset that encompasses the data from 42 individuals who participated in an advertising brochure-browsing scenario is introduced here. In more detail, participants were exposed to a series of supermarket brochures (containing various products) and instructed to select the products they intended to buy. The data collected for each individual executing this protocol included: (i) encephalographic (EEG) recordings, (ii) eye tracking (ET) recordings, (iii) questionnaire responses (demographic, profiling and product related questions), and (iv) computer mouse data. 
The preprocessed version of this dataset can be found here: [https://figshare.com/articles/dataset/NeuMa_PreProcessed_A_multimodal_Neuromarketing_dataset/22117124](https://figshare.com/articles/dataset/NeuMa_PreProcessed_A_multimodal_Neuromarketing_dataset/22117124) ## Dataset Information | Dataset ID | `DS004588` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neuma | | Author (year) | `Georgiadis2023` | | Canonical | `Neuma` | | Importable as | `DS004588`, `Georgiadis2023`, `Neuma` | | Year | 2023 | | Authors | Kostas Georgiadis, Fotis P. Kalaganis, Kyriakos Riskos, Eleytheria Matta, Vangelis P. Oikonomou, Yfantidou Ioanna, Dimitris Chantziaras, Kyriakos Pantouvakis, Spiros Nikolopoulos, Nikos A. Laskaris, Ioannis Kompatsiaris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004588.v1.2.0](https://doi.org/10.18112/openneuro.ds004588.v1.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004588) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) | [Source URL](https://openneuro.org/datasets/ds004588) | ### Copy-paste BibTeX ```bibtex @dataset{ds004588, title = {Neuma}, author = {Kostas Georgiadis and Fotis P. Kalaganis and Kyriakos Riskos and Eleytheria Matta and Vangelis P. Oikonomou and Yfantidou Ioanna and Dimitris Chantziaras and Kyriakos Pantouvakis and Spiros Nikolopoulos and Nikos A. 
Laskaris and Ioannis Kompatsiaris}, doi = {10.18112/openneuro.ds004588.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004588.v1.2.0}, } ``` ## Technical Details - Subjects: 42 - Recordings: 42 - Tasks: 1 - Channels: 24 - Sampling rate (Hz): 300.0 - Duration (hours): 4.957289814814814 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 534.1 MB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004588.v1.2.0 - Source: openneuro - OpenNeuro: [ds004588](https://openneuro.org/datasets/ds004588) - NeMAR: [ds004588](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) ## API Reference Use the `DS004588` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004588(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuma * **Study:** `ds004588` (OpenNeuro) * **Author (year):** `Georgiadis2023` * **Canonical:** `Neuma` Also importable as: `DS004588`, `Georgiadis2023`, `Neuma`. Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004588](https://openneuro.org/datasets/ds004588) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004588](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) DOI: [https://doi.org/10.18112/openneuro.ds004588.v1.2.0](https://doi.org/10.18112/openneuro.ds004588.v1.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004588 >>> dataset = DS004588(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004588) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004595: eeg dataset, 53 subjects *EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls* Access recordings and metadata through EEGDash. **Citation:** Ethan Campbell, James F Cavanagh (2023). *EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls*. 
[10.18112/openneuro.ds004595.v1.0.0](https://doi.org/10.18112/openneuro.ds004595.v1.0.0) Modality: eeg Subjects: 53 Recordings: 53 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004595 dataset = DS004595(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004595(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004595( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004595, title = {EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls}, author = {Ethan Campbell and James F Cavanagh}, doi = {10.18112/openneuro.ds004595.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004595.v1.0.0}, } ``` ## About This Dataset RL task (3-armed bandit) with alcohol vs. beverage cues in N=53 community participants. Data collected from 2019-2021 in the CRCL at UNM. The paper [Campbell, E., Singh, G., Claus, E.D., Witkiewitz, K., Costa, V.D., Hogeveen, J., & Cavanagh, J.F. Electrophysiological markers of aberrant cue-specific exploration in hazardous drinkers] should be coming out in print soonish. Your best bet for understanding this task would be to read that paper first. For more info on triggers and outputs, see the BEH_EXPLAIN.m file in the code folder. 
- James F Cavanagh 03/06/2023 ## Dataset Information | Dataset ID | `DS004595` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls | | Author (year) | `Campbell2023` | | Canonical | — | | Importable as | `DS004595`, `Campbell2023` | | Year | 2023 | | Authors | Ethan Campbell, James F Cavanagh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004595.v1.0.0](https://doi.org/10.18112/openneuro.ds004595.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004595) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) | [Source URL](https://openneuro.org/datasets/ds004595) | ### Copy-paste BibTeX ```bibtex @dataset{ds004595, title = {EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls}, author = {Ethan Campbell and James F Cavanagh}, doi = {10.18112/openneuro.ds004595.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004595.v1.0.0}, } ``` ## Technical Details - Subjects: 53 - Recordings: 53 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 500.0 - Duration (hours): 17.077527777777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.8 GB - File count: 53 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004595.v1.0.0 - Source: openneuro - OpenNeuro: [ds004595](https://openneuro.org/datasets/ds004595) - NeMAR: [ds004595](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) ## API Reference Use the `DS004595` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004595(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds004595` (OpenNeuro) * **Author (year):** `Campbell2023` * **Canonical:** — Also importable as: `DS004595`, `Campbell2023`. Modality: `eeg`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004595](https://openneuro.org/datasets/ds004595) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004595](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) DOI: [https://doi.org/10.18112/openneuro.ds004595.v1.0.0](https://doi.org/10.18112/openneuro.ds004595.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004595 >>> dataset = DS004595(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004595) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004598: eeg dataset, 9 subjects *LFP during linear track in 6-month old TgF344-AD rats* Access recordings and metadata through EEGDash. **Citation:** Moradi Faraz, van den Berg Monica, Mirjebreili Morteza, Kosten Lauren, Verhoye Marleen, Amiri Mahmood, Keliris A. Georgios (2023). *LFP during linear track in 6-month old TgF344-AD rats*. 
[10.18112/openneuro.ds004598.v1.0.0](https://doi.org/10.18112/openneuro.ds004598.v1.0.0) Modality: eeg Subjects: 9 Recordings: 20 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004598 dataset = DS004598(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004598(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004598( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004598, title = {LFP during linear track in 6-month old TgF344-AD rats}, author = {Moradi Faraz and van den Berg Monica and Mirjebreili Morteza and Kosten Lauren and Verhoye Marleen and Amiri Mahmood and Keliris A. Georgios}, doi = {10.18112/openneuro.ds004598.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004598.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS004598` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | LFP during linear track in 6-month old TgF344-AD rats | | Author (year) | `Faraz2023` | | Canonical | `Moradi2024` | | Importable as | `DS004598`, `Faraz2023`, `Moradi2024` | | Year | 2023 | | Authors | Moradi Faraz, van den Berg Monica, Mirjebreili Morteza, Kosten Lauren, Verhoye Marleen, Amiri Mahmood, Keliris A. 
Georgios | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004598.v1.0.0](https://doi.org/10.18112/openneuro.ds004598.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004598) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) | [Source URL](https://openneuro.org/datasets/ds004598) | ### Copy-paste BibTeX ```bibtex @dataset{ds004598, title = {LFP during linear track in 6-month old TgF344-AD rats}, author = {Moradi Faraz and van den Berg Monica and Mirjebreili Morteza and Kosten Lauren and Verhoye Marleen and Amiri Mahmood and Keliris A. Georgios}, doi = {10.18112/openneuro.ds004598.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004598.v1.0.0}, } ``` ## Technical Details - Subjects: 9 - Recordings: 20 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 10000.0 - Duration (hours): 8.2455 - Pathology: Dementia - Modality: Motor - Type: Memory - Size on disk: 9.9 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004598.v1.0.0 - Source: openneuro - OpenNeuro: [ds004598](https://openneuro.org/datasets/ds004598) - NeMAR: [ds004598](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) ## API Reference Use the `DS004598` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004598(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LFP during linear track in 6-month old TgF344-AD rats * **Study:** `ds004598` (OpenNeuro) * **Author (year):** `Faraz2023` * **Canonical:** `Moradi2024` Also importable as: `DS004598`, `Faraz2023`, `Moradi2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Dementia`. Subjects: 9; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004598](https://openneuro.org/datasets/ds004598) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004598](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) DOI: [https://doi.org/10.18112/openneuro.ds004598.v1.0.0](https://doi.org/10.18112/openneuro.ds004598.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004598 >>> dataset = DS004598(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004598) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004602: eeg dataset, 182 subjects *Registered Replication Report of ERN/Pe Psychometrics* Access recordings and metadata through EEGDash. **Citation:** Peter E Clayson, Michael J Larson (2023). *Registered Replication Report of ERN/Pe Psychometrics*. [10.18112/openneuro.ds004602.v1.0.1](https://doi.org/10.18112/openneuro.ds004602.v1.0.1) Modality: eeg Subjects: 182 Recordings: 546 License: CC0 Source: openneuro Citations: 5.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004602 dataset = DS004602(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004602(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004602( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004602, title = {Registered Replication Report of ERN/Pe Psychometrics}, author = {Peter E Clayson and Michael J Larson}, doi = {10.18112/openneuro.ds004602.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004602.v1.0.1}, } ``` ## About This Dataset This dataset supports a registered replication report that is described at [https://osf.io/8cbua/](https://osf.io/8cbua/). Scripts used for data processing are posted there. Abstract Intact cognitive control is critical for goal-directed behavior and is widely studied in healthy and clinical populations using the error-related negativity (ERN). A common assumption in such studies is that ERNs recorded during different experimental paradigms reflect the same construct or functionally equivalent processes and that ERN is functionally distinct from other error-monitoring event-related potentials (ERPs; error positivity [Pe]), other neurophysiological indices of cognitive control (N2), and even other indices unrelated to cognitive control (visual N1). The present registered report represents a replication-plus-extension study of the psychometric validity of cognitive control ERPs (Riesel et al., 2013, Biological Psychology) and evaluated the convergent and divergent validity of ERN, Pe, N2, and visual N1 recorded during three paradigms (flanker, Stroop, Go/no-go). Data from 182 participants were collected from two study sites, and ERP psychometric reliability and validity were evaluated. Findings supported convergent and divergent validity of ERN, Pe, and delta-Pe (error minus correct)—these ERPs correlated more with themselves across tasks than with other ERPs measured during the same task. Convergent validity of delta-ERN was not replicated, despite high internal consistency. 
ERN was strongly correlated with N2 at levels similar or higher than those in support of convergent validity for other ERPs, and the present study failed to provide evidence of divergent validity for ERN and Pe from N2 or the theoretically unrelated N1. Present findings underscore the importance of considering the psychometric validity of ERPs as it provides a foundation for interpreting and comparing ERPs across different tasks and studies. ## Dataset Information | Dataset ID | `DS004602` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Registered Replication Report of ERN/Pe Psychometrics | | Author (year) | `Clayson2023_Registered` | | Canonical | — | | Importable as | `DS004602`, `Clayson2023_Registered` | | Year | 2023 | | Authors | Peter E Clayson, Michael J Larson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004602.v1.0.1](https://doi.org/10.18112/openneuro.ds004602.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004602) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) | [Source URL](https://openneuro.org/datasets/ds004602) | ### Copy-paste BibTeX ```bibtex @dataset{ds004602, title = {Registered Replication Report of ERN/Pe Psychometrics}, author = {Peter E Clayson and Michael J Larson}, doi = {10.18112/openneuro.ds004602.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004602.v1.0.1}, } ``` ## Technical Details - Subjects: 182 - Recordings: 546 - Tasks: 3 - Channels: 129 - Sampling rate (Hz): 500.0 (501), 250.0 (45) - Duration (hours): 87.17373944444445 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 73.9 GB - File count: 546 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004602.v1.0.1 - Source: openneuro - OpenNeuro: [ds004602](https://openneuro.org/datasets/ds004602) - NeMAR: 
[ds004602](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) ## API Reference Use the `DS004602` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registered Replication Report of ERN/Pe Psychometrics * **Study:** `ds004602` (OpenNeuro) * **Author (year):** `Clayson2023_Registered` * **Canonical:** — Also importable as: `DS004602`, `Clayson2023_Registered`. Modality: `eeg`. Subjects: 182; recordings: 546; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
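The Technical Details above list two sampling rates for this dataset (500 Hz for 501 recordings, 250 Hz for 45), so recordings may need to be grouped or resampled before pooling. A minimal sketch of grouping by rate; the `demo` pairs are synthetic stand-ins for the `(rec.subject, rec.raw.info['sfreq'])` values produced by the Quickstart iteration:

```python
from collections import defaultdict

def group_by_sfreq(pairs):
    """Group (subject, sfreq) pairs by sampling rate."""
    groups = defaultdict(list)
    for subject, sfreq in pairs:
        groups[sfreq].append(subject)
    return dict(groups)

# Synthetic example; in practice, iterate the dataset as in the Quickstart
# and collect (rec.subject, rec.raw.info['sfreq']) for each recording.
demo = [("01", 500.0), ("02", 250.0), ("03", 500.0)]
print(group_by_sfreq(demo))  # → {500.0: ['01', '03'], 250.0: ['02']}
```

Mixed-rate recordings can then be brought to a common rate (e.g. with MNE-Python's `raw.resample`) before being concatenated for training.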
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004602](https://openneuro.org/datasets/ds004602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004602](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) DOI: [https://doi.org/10.18112/openneuro.ds004602.v1.0.1](https://doi.org/10.18112/openneuro.ds004602.v1.0.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS004602 >>> dataset = DS004602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004602) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004603: eeg dataset, 37 subjects *Visual Attribute-Specific Contextual Trajectory Paradigm* Access recordings and metadata through EEGDash. **Citation:** Benjamin Lowe ([ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au)), Jonathan Robinson ([jonathan.robinson@monash.edu](mailto:jonathan.robinson@monash.edu)), Naohide Yamamoto ([naohide.yamamoto@qut.edu.au](mailto:naohide.yamamoto@qut.edu.au)), Hinze Hogendoorn ([hinze.hogendoorn@qut.edu.au](mailto:hinze.hogendoorn@qut.edu.au)), Patrick Johnston ([dr.pat.johnston@icloud.com](mailto:dr.pat.johnston@icloud.com)) (2023). 
*Visual Attribute-Specific Contextual Trajectory Paradigm*. [10.18112/openneuro.ds004603.v1.1.0](https://doi.org/10.18112/openneuro.ds004603.v1.1.0) Modality: eeg Subjects: 37 Recordings: 37 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004603 dataset = DS004603(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004603(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004603( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004603, title = {Visual Attribute-Specific Contextual Trajectory Paradigm}, author = {Benjamin Lowe (ben.lowe@mq.edu.au) and Jonathan Robinson (jonathan.robinson@monash.edu) and Naohide Yamamoto (naohide.yamamoto@qut.edu.au) and Hinze Hogendoorn (hinze.hogendoorn@qut.edu.au) and Patrick Johnston (dr.pat.johnston@icloud.com)}, doi = {10.18112/openneuro.ds004603.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004603.v1.1.0}, } ``` ## About This Dataset These data were recorded from 37 subjects using the following exclusion criteria: Normal, or corrected-to-normal, vision; no history of neurological disorder; and less than 35 years of age. Subjects completed a novel, visual contextual trajectory paradigm (CTP) wherein the onset of a bound stimulus violated an established trajectory in terms of its brightness, size, or orientation. No attribute was violated during control trials. 
Full method details can be read within the following published paper: [https://doi.org/10.1016/j.cortex.2023.08.004](https://doi.org/10.1016/j.cortex.2023.08.004) Analysis code is available at: [https://github.com/benjaminglowe/attribute-specific-prediction-error-analysis-code](https://github.com/benjaminglowe/attribute-specific-prediction-error-analysis-code) Please email [ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au) if you have any further questions. ## Dataset Information | Dataset ID | `DS004603` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual Attribute-Specific Contextual Trajectory Paradigm | | Author (year) | `Lowe2023` | | Canonical | `VisualContextTrajectory` | | Importable as | `DS004603`, `Lowe2023`, `VisualContextTrajectory` | | Year | 2023 | | Authors | Benjamin Lowe ([ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au)), Jonathan Robinson ([jonathan.robinson@monash.edu](mailto:jonathan.robinson@monash.edu)), Naohide Yamamoto ([naohide.yamamoto@qut.edu.au](mailto:naohide.yamamoto@qut.edu.au)), Hinze Hogendoorn ([hinze.hogendoorn@qut.edu.au](mailto:hinze.hogendoorn@qut.edu.au)), Patrick Johnston ([dr.pat.johnston@icloud.com](mailto:dr.pat.johnston@icloud.com)) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004603.v1.1.0](https://doi.org/10.18112/openneuro.ds004603.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004603) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) | [Source URL](https://openneuro.org/datasets/ds004603) | ### Copy-paste BibTeX ```bibtex @dataset{ds004603, title = {Visual 
Attribute-Specific Contextual Trajectory Paradigm}, author = {Benjamin Lowe (ben.lowe@mq.edu.au) and Jonathan Robinson (jonathan.robinson@monash.edu) and Naohide Yamamoto (naohide.yamamoto@qut.edu.au) and Hinze Hogendoorn (hinze.hogendoorn@qut.edu.au) and Patrick Johnston (dr.pat.johnston@icloud.com)}, doi = {10.18112/openneuro.ds004603.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004603.v1.1.0}, } ``` ## Technical Details - Subjects: 37 - Recordings: 37 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1024.0 - Duration (hours): 30.653045518663195 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 27.4 GB - File count: 37 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004603.v1.1.0 - Source: openneuro - OpenNeuro: [ds004603](https://openneuro.org/datasets/ds004603) - NeMAR: [ds004603](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) ## API Reference Use the `DS004603` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004603(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm * **Study:** `ds004603` (OpenNeuro) * **Author (year):** `Lowe2023` * **Canonical:** `VisualContextTrajectory` Also importable as: `DS004603`, `Lowe2023`, `VisualContextTrajectory`. Modality: `eeg`. Subjects: 37; recordings: 37; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004603](https://openneuro.org/datasets/ds004603) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004603](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) DOI: [https://doi.org/10.18112/openneuro.ds004603.v1.1.0](https://doi.org/10.18112/openneuro.ds004603.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004603 >>> dataset = DS004603(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004603) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004621: eeg dataset, 42 subjects *The Nencki-Symfonia EEG/ERP dataset* Access recordings and metadata through EEGDash. 
**Citation:** Dzianok Patrycja, Antonova Ingrida, Wojciechowski Jakub, Dreszer Joanna, Kublik Ewa (2023). *The Nencki-Symfonia EEG/ERP dataset*. [10.18112/openneuro.ds004621.v1.0.4](https://doi.org/10.18112/openneuro.ds004621.v1.0.4) Modality: eeg Subjects: 42 Recordings: 167 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004621 dataset = DS004621(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004621(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004621( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004621, title = {The Nencki-Symfonia EEG/ERP dataset}, author = {Dzianok Patrycja and Antonova Ingrida and Wojciechowski Jakub and Dreszer Joanna and Kublik Ewa}, doi = {10.18112/openneuro.ds004621.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004621.v1.0.4}, } ``` ## About This Dataset The Nencki-Symfonia EEG/ERP dataset (dataset DOI: doi.org/10.5524/100990) IMPORTANT NOTE: The dataset contains no errors (BIDS-1). The numerous warnings currently displayed are a result of OpenNeuro updating its validator to BIDS-2. The OpenNeuro team is actively working on refining the validator to display only meaningful warnings (more information on OpenNeuro GitHub page). At this time, as dataset owners, we are unable to take any action to resolve these warnings. 
Description: mixed cognitive tasks [(i) an extended multi-source interference task, MSIT+; (ii) a 3-stimulus oddball task; (iii) a control, simple reaction task, SRT; and (iv) a resting-state protocol] Please cite the following references if you use these data: 1. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. The Nencki-Symfonia electroencephalography/event-related potential dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults. Gigascience. 2022 Mar 7;11:giac015. doi: 10.1093/gigascience/giac015. 2. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. Supporting data for “The Nencki-Symfonia EEG/ERP dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults” GigaScience Database, 2022. [http://doi.org/10.5524/100990](http://doi.org/10.5524/100990) Release history: 26/01/2022: Initial release (GigaDB) 15/06/2023: Added to OpenNeuro; updated README and dataset_description.json; minor updates to .json files related to BIDS errors/warnings. Updated events files (ms changed to s). 
12/10/2023: public release on OpenNeuro after deleting some additional, not needed system information from raw logfiles 10/2024: minor correction of logfiles in the /sourcedata directory (MSIT and SRT) for sub-01 to sub-03 02/2025 (v1.0.3): corrections to REST files for subjects sub-20 and sub-23 (EEG and .tsv files) – corrected marker names and removed redundant markers ## Dataset Information | Dataset ID | `DS004621` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The Nencki-Symfonia EEG/ERP dataset | | Author (year) | `Patrycja2023_Nencki` | | Canonical | `NenckiSymfonia` | | Importable as | `DS004621`, `Patrycja2023_Nencki`, `NenckiSymfonia` | | Year | 2023 | | Authors | Dzianok Patrycja, Antonova Ingrida, Wojciechowski Jakub, Dreszer Joanna, Kublik Ewa | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004621.v1.0.4](https://doi.org/10.18112/openneuro.ds004621.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004621) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) | [Source URL](https://openneuro.org/datasets/ds004621) | ### Copy-paste BibTeX ```bibtex @dataset{ds004621, title = {The Nencki-Symfonia EEG/ERP dataset}, author = {Dzianok Patrycja and Antonova Ingrida and Wojciechowski Jakub and Dreszer Joanna and Kublik Ewa}, doi = {10.18112/openneuro.ds004621.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004621.v1.0.4}, } ``` ## Technical Details - Subjects: 42 - Recordings: 167 - Tasks: 4 - Channels: 127 - Sampling rate (Hz): 1000.0 - Duration (hours): 45.42930277777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 77.4 GB - File count: 167 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004621.v1.0.4 - Source: openneuro - OpenNeuro: [ds004621](https://openneuro.org/datasets/ds004621) - NeMAR: 
[ds004621](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) ## API Reference Use the `DS004621` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004621(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Nencki-Symfonia EEG/ERP dataset * **Study:** `ds004621` (OpenNeuro) * **Author (year):** `Patrycja2023_Nencki` * **Canonical:** `NenckiSymfonia` Also importable as: `DS004621`, `Patrycja2023_Nencki`, `NenckiSymfonia`. Modality: `eeg`. Subjects: 42; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
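The Nencki-Symfonia release history above notes that event onset times were changed from milliseconds to seconds, matching the BIDS convention that `events.tsv` onsets are given in seconds. The correction is a simple rescaling; a minimal sketch (the helper name is hypothetical, not part of eegdash):

```python
# Hypothetical helper illustrating the ms-to-s correction mentioned in the
# release history: BIDS events.tsv expects the onset column in seconds.
def onsets_ms_to_s(onsets_ms: list[float]) -> list[float]:
    return [t / 1000.0 for t in onsets_ms]

print(onsets_ms_to_s([1500, 2250, 10000]))  # [1.5, 2.25, 10.0]
```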
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004621](https://openneuro.org/datasets/ds004621) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004621](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) DOI: [https://doi.org/10.18112/openneuro.ds004621.v1.0.4](https://doi.org/10.18112/openneuro.ds004621.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004621 >>> dataset = DS004621(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004621) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004624: ieeg dataset, 3 subjects *Intracranial recordings using BCI2000 and the CorTec BrainInterchange* Access recordings and metadata through EEGDash. **Citation:** F. Mivalt, F. Lampert, M.A. van den Boom, P. Brunner, J. Kim, Andrea Duque-lopez, M. Krakorova, V. Kremen, D. Hermes, G.A. Worrell, K. J. Miller (2023). *Intracranial recordings using BCI2000 and the CorTec BrainInterchange*. 
[10.18112/openneuro.ds004624.v2.0.0](https://doi.org/10.18112/openneuro.ds004624.v2.0.0) Modality: ieeg Subjects: 3 Recordings: 614 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004624 dataset = DS004624(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004624(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004624( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004624, title = {Intracranial recordings using BCI2000 and the CorTec BrainInterchange}, author = {F. Mivalt and F. Lampert and M.A. van den Boom and P. Brunner and J. Kim and Andrea Duque-lopez and M. Krakorova and V. Kremen and D. Hermes and G.A. Worrell and K. J. Miller}, doi = {10.18112/openneuro.ds004624.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004624.v2.0.0}, } ``` ## About This Dataset An Ecosystem of Technology and Protocols for Adaptive Neuromodulation Research in Humans This study aims to develop an ecosystem for neuromodulation research using the CorTec BCI device and BCI2000 software. 
Contact: For questions regarding this dataset, please contact [mivalt.filip@mayo.edu](mailto:mivalt.filip@mayo.edu) or [Miller.Kai@mayo.edu](mailto:Miller.Kai@mayo.edu) Funding: NIH U01NS128612 ## Dataset Information | Dataset ID | `DS004624` | |----------------|------------| | Title | Intracranial recordings using BCI2000 and the CorTec BrainInterchange | | Author (year) | `Mivalt2025` | | Canonical | `Mivalt2024`, `BCI2000_Intracranial` | | Importable as | `DS004624`, `Mivalt2025`, `Mivalt2024`, `BCI2000_Intracranial` | | Year | 2023 | | Authors | F. Mivalt, F. Lampert, M.A. van den Boom, P. Brunner, J. Kim, Andrea Duque-lopez, M. Krakorova, V. Kremen, D. Hermes, G.A. Worrell, K. J. Miller | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004624.v2.0.0](https://doi.org/10.18112/openneuro.ds004624.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004624) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) | [Source URL](https://openneuro.org/datasets/ds004624) | ### Copy-paste BibTeX ```bibtex @dataset{ds004624, title = {Intracranial recordings using BCI2000 and the CorTec BrainInterchange}, author = {F. Mivalt and F. Lampert and M.A. van den Boom and P. Brunner and J. Kim and Andrea Duque-lopez and M. Krakorova and V. Kremen and D. Hermes and G.A. Worrell and K. J. 
Miller}, doi = {10.18112/openneuro.ds004624.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004624.v2.0.0}, } ``` ## Technical Details - Subjects: 3 - Recordings: 614 - Tasks: 28 - Channels: 36 (363), 34 (234), 39 (17) - Sampling rate (Hz): 1000.0 - Duration (hours): Not calculated - Pathology: Surgery - Modality: Multisensory - Type: Clinical/Intervention - Size on disk: 19.3 GB - File count: 614 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004624.v2.0.0 - Source: openneuro - OpenNeuro: [ds004624](https://openneuro.org/datasets/ds004624) - NeMAR: [ds004624](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) ## API Reference Use the `DS004624` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intracranial recordings using BCI2000 and the CorTec BrainInterchange * **Study:** `ds004624` (OpenNeuro) * **Author (year):** `Mivalt2025` * **Canonical:** `Mivalt2024`, `BCI2000_Intracranial` Also importable as: `DS004624`, `Mivalt2025`, `Mivalt2024`, `BCI2000_Intracranial`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 3; recordings: 614; tasks: 28. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004624](https://openneuro.org/datasets/ds004624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004624](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) DOI: [https://doi.org/10.18112/openneuro.ds004624.v2.0.0](https://doi.org/10.18112/openneuro.ds004624.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004624 >>> dataset = DS004624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004624) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004625: eeg dataset, 32 subjects *Mind in Motion Young Adults Walking Over Uneven Terrain* Access recordings and metadata through EEGDash. **Citation:** Chang Liu, Ryan J. Downey, Jacob S. 
Salminen, Sofia Arvelo Rojas, Erika M. Pliner, Natalie Richer, Jungyun Hwang, Yenisel Cruz-Almeida, Todd M. Manini, Chris J. Hass, Rachael D. Seidler, David J. Clark, Daniel P. Ferris (2023). *Mind in Motion Young Adults Walking Over Uneven Terrain*. [10.18112/openneuro.ds004625.v1.0.2](https://doi.org/10.18112/openneuro.ds004625.v1.0.2) Modality: eeg Subjects: 32 Recordings: 543 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004625 dataset = DS004625(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004625(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004625( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004625, title = {Mind in Motion Young Adults Walking Over Uneven Terrain}, author = {Chang Liu and Ryan J. Downey and Jacob S. Salminen and Sofia Arvelo Rojas and Erika M. Pliner and Natalie Richer and Jungyun Hwang and Yenisel Cruz-Almeida and Todd M. Manini and Chris J. Hass and Rachael D. Seidler and David J. Clark and Daniel P. Ferris}, doi = {10.18112/openneuro.ds004625.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004625.v1.0.2}, } ``` ## About This Dataset Our dataset contains high-density, dual-layer electroencephalography (EEG), neck electromyography (EMG), inertial measurement unit (IMU) acceleration, ground reaction forces, head model constructed from T1 structural MR images from 32 participants walking over uneven terrain and at different speeds. 
Participants completed two three-minute trials for each condition and a three-minute seated rest trial. Digitized electrode locations (txt) are included in each subject folder. Please refer to our publication for more detail. This study was supported by the National Institutes of Health (U01AG061389). ## Dataset Information | Dataset ID | `DS004625` | |----------------|------------| | Title | Mind in Motion Young Adults Walking Over Uneven Terrain | | Author (year) | `Liu2023` | | Canonical | — | | Importable as | `DS004625`, `Liu2023` | | Year | 2023 | | Authors | Chang Liu, Ryan J. Downey, Jacob S. Salminen, Sofia Arvelo Rojas, Erika M. Pliner, Natalie Richer, Jungyun Hwang, Yenisel Cruz-Almeida, Todd M. Manini, Chris J. Hass, Rachael D. Seidler, David J. Clark, Daniel P. Ferris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004625.v1.0.2](https://doi.org/10.18112/openneuro.ds004625.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004625) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) | [Source URL](https://openneuro.org/datasets/ds004625) | ### Copy-paste BibTeX ```bibtex @dataset{ds004625, title = {Mind in Motion Young Adults Walking Over Uneven Terrain}, author = {Chang Liu and Ryan J. Downey and Jacob S. Salminen and Sofia Arvelo Rojas and Erika M. Pliner and Natalie Richer and Jungyun Hwang and Yenisel Cruz-Almeida and Todd M. Manini and Chris J. Hass and Rachael D. Seidler and David J. Clark and Daniel P. 
Ferris}, doi = {10.18112/openneuro.ds004625.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004625.v1.0.2}, } ``` ## Technical Details - Subjects: 32 - Recordings: 543 - Tasks: 9 - Channels: 284 (323), 310 (187), 375 (33) - Sampling rate (Hz): 500.0 - Duration (hours): 28.581393888888886 - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 62.5 GB - File count: 543 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004625.v1.0.2 - Source: openneuro - OpenNeuro: [ds004625](https://openneuro.org/datasets/ds004625) - NeMAR: [ds004625](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) ## API Reference Use the `DS004625` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004625(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Young Adults Walking Over Uneven Terrain * **Study:** `ds004625` (OpenNeuro) * **Author (year):** `Liu2023` * **Canonical:** — Also importable as: `DS004625`, `Liu2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 543; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004625](https://openneuro.org/datasets/ds004625) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004625](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) DOI: [https://doi.org/10.18112/openneuro.ds004625.v1.0.2](https://doi.org/10.18112/openneuro.ds004625.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004625 >>> dataset = DS004625(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004625) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004626: eeg dataset, 52 subjects *Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials.* Access recordings and metadata through EEGDash. 
**Citation:** Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek (2023). *Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials.* [10.18112/openneuro.ds004626.v1.0.2](https://doi.org/10.18112/openneuro.ds004626.v1.0.2) Modality: eeg Subjects: 52 Recordings: 52 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004626 dataset = DS004626(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004626(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004626( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004626, title = {Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials.}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds004626.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004626.v1.0.2}, } ``` ## About This Dataset This dataset is related to the publication: Mąka, S., Chrustowicz, M., & Okruszek, Ł. (2023). Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modeling and event-related potentials. Psychophysiology, e14406. [https://doi.org/10.1111/psyp.14406](https://doi.org/10.1111/psyp.14406)
## Dataset Information | Dataset ID | `DS004626` | |----------------|----------------| | Title | Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials. | | Author (year) | `Maka2023` | | Canonical | — | | Importable as | `DS004626`, `Maka2023` | | Year | 2023 | | Authors | Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004626.v1.0.2](https://doi.org/10.18112/openneuro.ds004626.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004626) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) | [Source URL](https://openneuro.org/datasets/ds004626) | ### Copy-paste BibTeX ```bibtex @dataset{ds004626, title = {Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals?
An exploration with Drift Diffusion Modelling and event-related potentials.}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds004626.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004626.v1.0.2}, } ``` ## Technical Details - Subjects: 52 - Recordings: 52 - Tasks: 1 - Channels: 68 - Sampling rate (Hz): 1000.0 - Duration (hours): 21.358625555555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 19.9 GB - File count: 52 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004626.v1.0.2 - Source: openneuro - OpenNeuro: [ds004626](https://openneuro.org/datasets/ds004626) - NeMAR: [ds004626](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) ## API Reference Use the `DS004626` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials. * **Study:** `ds004626` (OpenNeuro) * **Author (year):** `Maka2023` * **Canonical:** — Also importable as: `DS004626`, `Maka2023`. Modality: `eeg`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004626](https://openneuro.org/datasets/ds004626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004626](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) DOI: [https://doi.org/10.18112/openneuro.ds004626.v1.0.2](https://doi.org/10.18112/openneuro.ds004626.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004626 >>> dataset = DS004626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004626) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004635: eeg dataset, 48 subjects *Gaffrey Lab Infant Microstates Reliability* Access recordings and metadata through EEGDash. **Citation:** Armen Bagdasarov, Michael S. Gaffrey (2023). 
*Gaffrey Lab Infant Microstates Reliability*. [10.18112/openneuro.ds004635.v3.1.0](https://doi.org/10.18112/openneuro.ds004635.v3.1.0) Modality: eeg Subjects: 48 Recordings: 48 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004635 dataset = DS004635(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004635(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004635( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004635, title = {Gaffrey Lab Infant Microstates Reliability}, author = {Armen Bagdasarov and Michael S. Gaffrey}, doi = {10.18112/openneuro.ds004635.v3.1.0}, url = {https://doi.org/10.18112/openneuro.ds004635.v3.1.0}, } ``` ## About This Dataset Participants were 48 infants aged 5-10 months (27 male). All research was approved by the Duke University Health System Institutional Review Board and carried out in accordance with the Declaration of Helsinki. Caregivers provided informed consent, and compensation was provided for their participation. Infants sat on their caregiver’s lap and watched up to 15 minutes of relaxing videos with sound (i.e., ten 90-second videos separated by breaks during which caregivers could play with their infant). Before each video started, an attention grabber (i.e., a three-second video of a noisy rattle) directed the infant’s attention to the screen. Videos were presented with E-Prime software (Psychological Software Tools, Pittsburgh, PA).
Caregivers were instructed to silently sit still during videos. If infants shifted their attention away from the screen, caregivers were permitted to re-direct their attention only by pointing to the screen. EEG was recorded at 1000 Hertz (Hz) and referenced to the vertex (channel Cz) using a 128-channel HydroCel Geodesic Sensor Net (Electrical Geodesics, Eugene, OR). Impedances were maintained below 50 kilohms throughout the EEG session. For more information, visit: [https://github.com/gaffreylab/EEG-Microstate-Analysis-Tutorial/wiki](https://github.com/gaffreylab/EEG-Microstate-Analysis-Tutorial/wiki) ## Dataset Information | Dataset ID | `DS004635` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Gaffrey Lab Infant Microstates Reliability | | Author (year) | `Bagdasarov2023` | | Canonical | — | | Importable as | `DS004635`, `Bagdasarov2023` | | Year | 2023 | | Authors | Armen Bagdasarov, Michael S. Gaffrey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004635.v3.1.0](https://doi.org/10.18112/openneuro.ds004635.v3.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004635) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) | [Source URL](https://openneuro.org/datasets/ds004635) | ### Copy-paste BibTeX ```bibtex @dataset{ds004635, title = {Gaffrey Lab Infant Microstates Reliability}, author = {Armen Bagdasarov and Michael S. 
Gaffrey}, doi = {10.18112/openneuro.ds004635.v3.1.0}, url = {https://doi.org/10.18112/openneuro.ds004635.v3.1.0}, } ``` ## Technical Details - Subjects: 48 - Recordings: 48 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.849317222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 26.1 GB - File count: 48 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004635.v3.1.0 - Source: openneuro - OpenNeuro: [ds004635](https://openneuro.org/datasets/ds004635) - NeMAR: [ds004635](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) ## API Reference Use the `DS004635` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004635(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates Reliability * **Study:** `ds004635` (OpenNeuro) * **Author (year):** `Bagdasarov2023` * **Canonical:** — Also importable as: `DS004635`, `Bagdasarov2023`. Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004635](https://openneuro.org/datasets/ds004635) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004635](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) DOI: [https://doi.org/10.18112/openneuro.ds004635.v3.1.0](https://doi.org/10.18112/openneuro.ds004635.v3.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004635 >>> dataset = DS004635(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004635) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004642: ieeg dataset, 10 subjects *Intraoperative recordings of medianus stimulation with low and high impedance ECoG* Access recordings and metadata through EEGDash. **Citation:** Vasileios Dimakopoulos, Marian Neidert, Johannes Sarnthein (2023). *Intraoperative recordings of medianus stimulation with low and high impedance ECoG*. 
[10.18112/openneuro.ds004642.v1.0.1](https://doi.org/10.18112/openneuro.ds004642.v1.0.1) Modality: ieeg Subjects: 10 Recordings: 10 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004642 dataset = DS004642(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004642(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004642( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004642, title = {Intraoperative recordings of medianus stimulation with low and high impedance ECoG}, author = {Vasileios Dimakopoulos and Marian Neidert and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004642.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004642.v1.0.1}, } ``` ## About This Dataset **Intraoperative recordings of medianus stimulation with low and high impedance ECoG** This dataset of medianus SEP was first analyzed in publication [1]. There we investigated whether the low impedance ECoG electrode (LoZ) improves fast ripple detection over a standard electrode with high impedance contacts (HiZ). There are 10 patients (median age 40 y, range 19-56 y, 6 female) who underwent brain tumor resections in the perirolandic region at our institution. The data includes the continuous raw data from the ECoG contacts of both electrodes. 
We recorded medianus SEP intraoperatively (stimulation rate = 4.7 Hz) from two 4-contact ECoG strips simultaneously (LoZ: a contacts, HiZ: b contacts) that had different median impedance (LoZ: 3.4 kΩ, HiZ: 6.9 kΩ). **Repository structure** **Main directory (LoZ HFO)** Contains metadata files in the BIDS standard about the participants and the study. Folders are explained below. **Subfolders** - LoZ HFO/sub-/ Contains folders for each subject, named sub- and session information. - LoZ HFO/sub-/ses-01/ieeg/ Contains the raw ieeg data in .edf format for each subject. Each \*ieeg.edf file contains continuous iEEG data from one stimulation rate, recorded at the hand area from both electrodes simultaneously. Details about the channels are given in the corresponding .tsv file. **Note from the paper** “The offline data processing used the continuous ECoG that was recorded in parallel to the SEP recordings. Data analysis was performed with custom scripts in Matlab.
To detect the SEP stimulation artefact, we first filtered the ECoG (high pass cutoff = 200 Hz) and performed local peak detection (minimum peak prominence between peaks = 30 ms, minimum peak width = 4 ms, samples = 0.2 ms). We used the times of the detected stimulus artifact as triggers to define sweeps with post-stimulus recording sweep length 50 ms. We classified sweeps with amplitude ±100 µV as artefact-ridden and excluded them from further analysis. We averaged 100 sweeps and filtered the averaged trace (bandpass [30 300] Hz, IIR filter, response roll-off -12 dB per octave, forward and reverse filtering to avoid phase distortion). We visually inspected the data and selected one optimal channel with high N20 amplitude (positive or negative) for further analysis. From the averaged N20 trace, we determined the N20 peak latency. To obtain the N20 peak amplitude and the SNR, we inspected the latency of the N20 peak. If the N20 latency was >20 ms, we selected a signal window [20 25] ms. If the N20 latency was ≤ 20 ms, we selected a signal window [17 22] ms. In the same way, we filtered the averaged trace in the [250 500] Hz band to obtain the evoked FR and in the [500 1000] Hz band to obtain the evoked HFO. We doubled the largest deflection in the signal window of the N20 frequency band to define the N20 signal amplitude. In the FR and HFO bands we used the peak-to-peak amplitude.” **BIDS Conversion** bids-starter-kit and custom Matlab scripts were used to convert the dataset into BIDS format. **References** [1] Vasileios Dimakopoulos, Marian C.
Neidert, Johannes Sarnthein, Low impedance electrodes improve detection of high frequency oscillations in the intracranial EEG, Clinical Neurophysiology, 2023, ISSN 1388-2457, [https://doi.org/10.1016/j.clinph.2023.07.002](https://doi.org/10.1016/j.clinph.2023.07.002) If you have any inquiries or questions, contact: \* Vasileios Dimakopoulos ([vasileios.dimakopoulos@usz.ch](mailto:vasileios.dimakopoulos@usz.ch)) \* Johannes Sarnthein ([johannes.sarnthein@usz.ch](mailto:johannes.sarnthein@usz.ch)) ## Dataset Information | Dataset ID | `DS004642` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Intraoperative recordings of medianus stimulation with low and high impedance ECoG | | Author (year) | `Dimakopoulos2023_Intraoperative` | | Canonical | — | | Importable as | `DS004642`, `Dimakopoulos2023_Intraoperative` | | Year | 2023 | | Authors | Vasileios Dimakopoulos, Marian Neidert, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004642.v1.0.1](https://doi.org/10.18112/openneuro.ds004642.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004642) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) | [Source URL](https://openneuro.org/datasets/ds004642) | ### Copy-paste BibTeX ```bibtex @dataset{ds004642, title = {Intraoperative recordings of medianus stimulation with low and high impedance ECoG}, author = {Vasileios Dimakopoulos and Marian Neidert and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004642.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004642.v1.0.1}, } ``` ## Technical Details - Subjects: 10 - Recordings: 10 - Tasks: 1 - Channels: 8 (7), 9 (2), 10 - Sampling rate (Hz): 20000.0 - Duration (hours): 1.069361111111111 - Pathology: Surgery - Modality: Other - Type: Other - Size on disk: 1.2 GB - File count: 10 - Format: 
BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004642.v1.0.1 - Source: openneuro - OpenNeuro: [ds004642](https://openneuro.org/datasets/ds004642) - NeMAR: [ds004642](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) ## API Reference Use the `DS004642` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative recordings of medianus stimulation with low and high impedance ECoG * **Study:** `ds004642` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_Intraoperative` * **Canonical:** — Also importable as: `DS004642`, `Dimakopoulos2023_Intraoperative`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
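The Notes above say that `query` accepts MongoDB-style filters which are ANDed with the dataset selection. As a rough sketch of how a `$in` clause selects records (a simplified illustration of the matching semantics only, not EEGDash's actual implementation, which delegates to a MongoDB backend):

```python
def matches(record: dict, query: dict) -> bool:
    """Toy matcher for flat MongoDB-style queries.

    Handles plain equality and the ``$in`` operator only; the real
    backend supports the full MongoDB operator set on the fields in
    ``ALLOWED_QUERY_FIELDS``.
    """
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:
            return False
    return True


# Hypothetical metadata records for illustration.
records = [
    {"dataset": "ds004642", "subject": "01"},
    {"dataset": "ds004642", "subject": "03"},
]
query = {"dataset": "ds004642", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
```

With this query, only the record for subject `01` is selected, mirroring the "Advanced query" example in the Quickstart above.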
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004642](https://openneuro.org/datasets/ds004642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004642](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) DOI: [https://doi.org/10.18112/openneuro.ds004642.v1.0.1](https://doi.org/10.18112/openneuro.ds004642.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004642 >>> dataset = DS004642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004642) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004657: eeg dataset, 24 subjects *Driving with Autonomous Aids* Access recordings and metadata through EEGDash. **Citation:** Jason Metcalfe, Amar Marathe, Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *Driving with Autonomous Aids*. 
[10.18112/openneuro.ds004657.v1.0.3](https://doi.org/10.18112/openneuro.ds004657.v1.0.3) Modality: eeg Subjects: 24 Recordings: 119 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004657 dataset = DS004657(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004657(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004657( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004657, title = {Driving with Autonomous Aids}, author = {Jason Metcalfe and Amar Marathe and Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004657.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004657.v1.0.3}, } ``` ## About This Dataset TX20 dataset Vehicle survivability is critically important in today’s military. Survivability is critically impacted by the performance of human operators – especially as it degrades with various factors. Significant DoD investments have focused on developing and integrating autonomous technologies to mitigate the effects of human error. However, simply implementing autonomy without having a clear plan for integrating with human operators can lead to relatively poor performance and thus low user acceptance. Human trust in automation (TiA) is a well-documented determinant of acceptance and use, but more important than achieving a certain level of trust is to find an appropriate match between the capabilities of the technology and the operator’s trust. 
Finding means to calibrate TiA to elicit the desired use of the autonomy is an important goal, but requires reliable quantitative indicators that can be continuously monitored. Considerable research on interpersonal trust has revealed measurable patterns of physiological change that correlate significantly with changing levels of subjective trust and trust-based decision making. This research was aimed at facilitating the eventual real-time management of TiA by developing initial psychophysiology-based metrics for monitoring and predicting continuous changes in trust and/or trust-related behaviors. Participants completed a semi-automated driving task involving lane maintenance, following distance from a lead vehicle, and collision avoidance (with oncoming traffic and frequently appearing pedestrians). Under certain conditions, an automated driving assistant was available and could be engaged and disengaged at the discretion of the driver. The automated assistant was capable of managing limited aspects of the driving task (maintenance of following distance alone, or maintaining following distance and lane position), but was not capable of collision avoidance. Separate driver responses (button presses) were required to successfully avoid collisions with pedestrians. This research was conducted to develop and validate methods for monitoring and predicting varying degrees of trust in automation (TiA) using both physiological and behavioral metrics characterizing real-time human-automation interactions. The overarching goal of this research was to develop and validate methods for measuring and drawing inferences about TiA, either directly or indirectly through correlated constructs. In particular, we examined operator trust in vehicle automation as it is reflected in changes observed in subjective reports as well as behavioral and physiological state variables during the execution of a shared human-autonomy driving task.
The stated aims underlying this goal included: Aim #1: To develop and experimentally validate metrics (dependent variables) that index changes in TiA. Rather than focusing on single-modality metrics, we will record and explore the patterns of correlation and co-variance among a variety of psychophysiological and behavioral variables and focus particularly on metrics that predict decisions around sharing vehicle control with the autonomy in each condition. State measures will be derived from EEG, EOG (electrooculography), ECG, EDA, and gaze position tracking as well as the subject vehicle control behaviors. Aim #2: To develop an understanding of factors (independent variables and covariates) that influence the subject’s TiA. Whereas Aim #1 targets the identification of metrics, or groups of metrics, that reliably predict trust-based decision-making, here we seek to gain insight as to which factors influence the likelihood and directionality of those same trust-based decisions. Such factors will include real-time tracking of variables such as task load, collision risk, and recent performance history or trending changes in success rate. Sessions/Conditions: SCPB: PractB; SCMM: Manual driving; SCFB: Full Bad autonomy; SCFG: Full Good autonomy; SCSB: Speed Bad autonomy; SCSG: Speed Good autonomy.
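When labeling recordings by condition, the session codes above can be decoded with a small lookup table. The dictionary below simply restates the list from the dataset description; the helper function is a hypothetical convenience, not part of EEGDash:

```python
# Session codes for ds004657, as listed in the dataset description.
CONDITIONS = {
    "SCPB": "PractB",
    "SCMM": "Manual driving",
    "SCFB": "Full Bad autonomy",
    "SCFG": "Full Good autonomy",
    "SCSB": "Speed Bad autonomy",
    "SCSG": "Speed Good autonomy",
}


def condition_label(session_code: str) -> str:
    """Return a human-readable label, falling back to the raw code."""
    return CONDITIONS.get(session_code, session_code)
```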
## Dataset Information | Dataset ID | `DS004657` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Driving with Autonomous Aids | | Author (year) | `Metcalfe2023_Driving` | | Canonical | `TX20` | | Importable as | `DS004657`, `Metcalfe2023_Driving`, `TX20` | | Year | 2023 | | Authors | Jason Metcalfe, Amar Marathe, Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004657.v1.0.3](https://doi.org/10.18112/openneuro.ds004657.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004657) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) | [Source URL](https://openneuro.org/datasets/ds004657) | ### Copy-paste BibTeX ```bibtex @dataset{ds004657, title = {Driving with Autonomous Aids}, author = {Jason Metcalfe and Amar Marathe and Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004657.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds004657.v1.0.3}, } ``` ## Technical Details - Subjects: 24 - Recordings: 119 - Tasks: 1 - Channels: 74 - Sampling rate (Hz): 1024.0 (111), 8192.0 (8) - Duration (hours): 27.205277777777777 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 43.1 GB - File count: 119 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004657.v1.0.3 - Source: openneuro - OpenNeuro: [ds004657](https://openneuro.org/datasets/ds004657) - NeMAR: [ds004657](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) ## API Reference Use the `DS004657` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004657(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Driving with Autonomous Aids * **Study:** `ds004657` (OpenNeuro) * **Author (year):** `Metcalfe2023_Driving` * **Canonical:** `TX20` Also importable as: `DS004657`, `Metcalfe2023_Driving`, `TX20`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 24; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
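The MongoDB-style filter semantics described in the Notes can be illustrated with a self-contained sketch. This is a hypothetical matcher written for intuition only, not EEGDash internals; the record dicts and field names are made up:

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: plain values mean equality,
    and {"$in": [...]} means set membership."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

# The dataset filter is ANDed with the user query, as the class docs state.
merged = {"dataset": "ds004657", "subject": {"$in": ["01", "02"]}}
records = [
    {"dataset": "ds004657", "subject": "01"},
    {"dataset": "ds004657", "subject": "05"},
]
hits = [r for r in records if matches(r, merged)]
# hits keeps only the subject "01" record
```

This mirrors why `query` must not contain the key `dataset`: that field is already fixed by the class itself before the merge.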
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004657](https://openneuro.org/datasets/ds004657) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004657](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) DOI: [https://doi.org/10.18112/openneuro.ds004657.v1.0.3](https://doi.org/10.18112/openneuro.ds004657.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004657 >>> dataset = DS004657(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004657) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004660: eeg dataset, 21 subjects *TNO* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *TNO*. 
[10.18112/openneuro.ds004660.v1.0.2](https://doi.org/10.18112/openneuro.ds004660.v1.0.2) Modality: eeg Subjects: 21 Recordings: 42 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004660 dataset = DS004660(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004660(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004660( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004660, title = {TNO}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004660.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004660.v1.0.2}, } ``` ## About This Dataset TNO dataset ## Dataset Information | Dataset ID | `DS004660` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TNO | | Author (year) | `Johnson2023_TNO` | | Canonical | `TNO` | | Importable as | `DS004660`, `Johnson2023_TNO`, `TNO` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004660.v1.0.2](https://doi.org/10.18112/openneuro.ds004660.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004660) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) | [Source URL](https://openneuro.org/datasets/ds004660) | ### Copy-paste BibTeX 
```bibtex @dataset{ds004660, title = {TNO}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004660.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004660.v1.0.2}, } ``` ## Technical Details - Subjects: 21 - Recordings: 42 - Tasks: 1 - Channels: 38 - Sampling rate (Hz): 512.0 (41), 2048.0 - Duration (hours): 23.79277777777778 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.2 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004660.v1.0.2 - Source: openneuro - OpenNeuro: [ds004660](https://openneuro.org/datasets/ds004660) - NeMAR: [ds004660](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) ## API Reference Use the `DS004660` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004660(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TNO * **Study:** `ds004660` (OpenNeuro) * **Author (year):** `Johnson2023_TNO` * **Canonical:** `TNO` Also importable as: `DS004660`, `Johnson2023_TNO`, `TNO`. Modality: `eeg`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004660](https://openneuro.org/datasets/ds004660) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004660](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) DOI: [https://doi.org/10.18112/openneuro.ds004660.v1.0.2](https://doi.org/10.18112/openneuro.ds004660.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004660 >>> dataset = DS004660(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004660) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004661: eeg dataset, 17 subjects *ANDI* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *ANDI*. 
[10.18112/openneuro.ds004661.v1.1.0](https://doi.org/10.18112/openneuro.ds004661.v1.1.0) Modality: eeg Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004661 dataset = DS004661(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004661(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004661( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004661, title = {ANDI}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004661.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004661.v1.1.0}, } ``` ## About This Dataset Participants (N=17, all males) with an average age of 32.8 years performed a guided visual search task in parallel with a second binaurally presented auditory task (Ries, et al., 2016). EEG data from each participant were recorded using a 64-channel BioSemi ActiveTwo system digitized at 512 Hz. Four external electrodes were used to record bipolar horizontal and vertical EOG signals, and a single external electrode was placed on each of the left and right mastoids to provide the reference signals. Fourteen participants were included in the original study, with three additional participants later added, resulting in 17 participants. The visual search task for this experiment required participants to follow a red annulus around the screen and press a button if the annulus stopped at a prespecified target. 
The auditory task for this experiment was an N-back matching task in which participants listened to a string of numbers presented at approximately 2-second intervals and were required to indicate whether the current number matched a previously presented number. For N=0, this would be the number immediately prior. For N=1, this would be the number one level before that, and so on. In the example string “1”, “1”, “2”, “1”, “3”, “2”, the second “1” should generate a match in the N=0 condition, the third “1” should generate a match in the N=1 condition, and the second “2” should generate a match in the N=2 condition. The task was composed of a baseline condition in which participants were presented with both visual and auditory stimuli but were instructed to ignore the auditory component. Next came three dual-task conditions with N-back levels of N=0, N=1, and N=2. ## Dataset Information | Dataset ID | `DS004661` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ANDI | | Author (year) | `Johnson2023_ANDI` | | Canonical | `ANDI` | | Importable as | `DS004661`, `Johnson2023_ANDI`, `ANDI` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004661.v1.1.0](https://doi.org/10.18112/openneuro.ds004661.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004661) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) | [Source URL](https://openneuro.org/datasets/ds004661) | ### Copy-paste BibTeX ```bibtex @dataset{ds004661, title = {ANDI}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004661.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004661.v1.1.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - 
Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 10.13709201388889 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 1.4 GB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004661.v1.1.0 - Source: openneuro - OpenNeuro: [ds004661](https://openneuro.org/datasets/ds004661) - NeMAR: [ds004661](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) ## API Reference Use the `DS004661` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004661(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ANDI * **Study:** `ds004661` (OpenNeuro) * **Author (year):** `Johnson2023_ANDI` * **Canonical:** `ANDI` Also importable as: `DS004661`, `Johnson2023_ANDI`, `ANDI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
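The N-back matching rule described in this dataset's auditory task (where N=0 compares the current number against the immediately prior one) can be sketched as follows; the function name and stimulus sequence are illustrative only:

```python
def nback_matches(stimuli, n):
    """Return indices where the current stimulus equals the one
    presented n+1 positions earlier, following the dataset's
    convention that N=0 compares against the immediately prior item."""
    lag = n + 1
    return [i for i in range(lag, len(stimuli))
            if stimuli[i] == stimuli[i - lag]]

seq = ["1", "1", "2", "1", "3", "2"]
print(nback_matches(seq, 0))  # → [1]: the second "1" matches its predecessor
print(nback_matches(seq, 1))  # → [3]: the third "1" matches one level further back
```

Higher N levels shift the comparison further into the past, which is what makes the dual-task conditions progressively harder.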
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004661](https://openneuro.org/datasets/ds004661) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004661](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) DOI: [https://doi.org/10.18112/openneuro.ds004661.v1.1.0](https://doi.org/10.18112/openneuro.ds004661.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004661 >>> dataset = DS004661(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004661) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004696: ieeg dataset, 8 subjects *HAPwave_bids* Access recordings and metadata through EEGDash. **Citation:** Ojeda Valencia, G., Gregg, N., Huang, H., Lundstrom, B., Brinkmann, B., Pal Attia1, T., Van Gompel, J., Bernstein,M., In, M., Huston, J., Worrell1, G., Miller, K., Hermes, D. (2023). *HAPwave_bids*. 
[10.18112/openneuro.ds004696.v1.0.1](https://doi.org/10.18112/openneuro.ds004696.v1.0.1) Modality: ieeg Subjects: 8 Recordings: 8 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004696 dataset = DS004696(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004696(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004696( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004696, title = {HAPwave_bids}, author = {Ojeda Valencia, G. and Gregg, N. and Huang, H. and Lundstrom, B. and Brinkmann, B. and Pal Attia1, T. and Van Gompel, J. and Bernstein,M. and In, M. and Huston, J. and Worrell1, G. and Miller, K. and Hermes, D.}, doi = {10.18112/openneuro.ds004696.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004696.v1.0.1}, } ``` ## About This Dataset **Information** This dataset contains intracranial EEG (iEEG) recordings from 8 patients during single pulse electrical stimulation used in the publication of: Ojeda Valencia G, Gregg N, Huang H, Lundstrom B, Brinkmann B, Pal Attia T, Van Gompel J, Bernstein M, In MH, Huston J, Worrell G, Miller KJ, and Hermes D. 2023. Signatures of electrical stimulation driven network interactions in the human limbic system. Journal of Neuroscience (in press). 
**License** This dataset is made available under the CC0 v1.0 Public Domain Dedication, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)); in particular, while not legally required, we hope that all users of the data will acknowledge it by citing the following in any publication: Ojeda Valencia G, Gregg N, Huang H, Lundstrom B, Brinkmann B, Pal Attia T, Van Gompel J, Bernstein M, In MH, Huston J, Worrell G, Miller KJ, and Hermes D. 2023. Signatures of electrical stimulation driven network interactions in the human limbic system. Journal of Neuroscience. DOI: [https://doi.org/10.1523/JNEUROSCI.2201-22.2023](https://doi.org/10.1523/JNEUROSCI.2201-22.2023) **Task Description** Patients were resting in the hospital bed while single pulse stimulation was performed. The stimulation had a duration of 200 microseconds, was biphasic, and had an amplitude of 6 mA. For subject 7, the stimulation amplitude was sometimes reduced to 4 mA to minimize interictal responses. 
**Code** Code to analyze these data is available at: [https://github.com/MultimodalNeuroimagingLab/HAPwave](https://github.com/MultimodalNeuroimagingLab/HAPwave) **Dataset** This data is organized according to the Brain Imaging Data Structure specification (BIDS version 1.12.0), a community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each subject has their own folder (e.g., ‘sub-01’) containing that subject’s intracranial EEG (iEEG) recordings during single pulse electrical stimulation, as well as the metadata needed to understand the raw data and event timing. **Acknowledgements** This project was funded by the National Institute of Mental Health of the National Institutes of Health Brain Initiative under Award Number R01 MH122258, “CRCNS: Processing speed in the human connectome across the lifespan”. The overall goal of this project is to develop a large database of single pulse stimulation data and develop tools to advance our understanding of the human connectome across the lifespan. The data was collected by Dora Hermes, Nick Gregg, Brian Lundstrom, Cindy Nelson, Gabriela Ojeda Valencia, Gregg Worrell and Kai J. Miller. The BIDS formatting was performed by Dora Hermes and Gabriela Ojeda Valencia. **Contact** Please contact Dora Hermes ([hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu)) or Gabriela Ojeda Valencia ([OjedaValencia.Alma@mayo.edu](mailto:OjedaValencia.Alma@mayo.edu)) for questions. 
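The per-subject BIDS layout described above (one `sub-XX` folder per patient, with iEEG data inside) can be sketched with hypothetical paths; the folder names below are illustrative, and the actual tree should be taken from the dataset itself:

```python
from pathlib import Path

# Hypothetical sketch of the BIDS layout described in the README:
# one folder per subject, with an ieeg/ subdirectory for recordings.
root = Path("ds004696")
subject_dirs = [root / f"sub-{i:02d}" / "ieeg" for i in range(1, 9)]
for d in subject_dirs:
    print(d.as_posix())  # e.g. ds004696/sub-01/ieeg
```

In practice EEGDash resolves these paths for you under `cache_dir / dataset_id`, so manual path construction is rarely needed.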
## Dataset Information | Dataset ID | `DS004696` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HAPwave_bids | | Author (year) | `Valencia2023` | | Canonical | — | | Importable as | `DS004696`, `Valencia2023` | | Year | 2023 | | Authors | Ojeda Valencia, G., Gregg, N., Huang, H., Lundstrom, B., Brinkmann, B., Pal Attia1, T., Van Gompel, J., Bernstein,M., In, M., Huston, J., Worrell1, G., Miller, K., Hermes, D. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004696.v1.0.1](https://doi.org/10.18112/openneuro.ds004696.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004696) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) | [Source URL](https://openneuro.org/datasets/ds004696) | ### Copy-paste BibTeX ```bibtex @dataset{ds004696, title = {HAPwave_bids}, author = {Ojeda Valencia, G. and Gregg, N. and Huang, H. and Lundstrom, B. and Brinkmann, B. and Pal Attia1, T. and Van Gompel, J. and Bernstein,M. and In, M. and Huston, J. and Worrell1, G. and Miller, K. and Hermes, D.}, doi = {10.18112/openneuro.ds004696.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004696.v1.0.1}, } ``` ## Technical Details - Subjects: 8 - Recordings: 8 - Tasks: 1 - Channels: 226, 201, 178, 192, 194, 244, 207, 256 - Sampling rate (Hz): 2048.0 - Duration (hours): 9.122338324652778 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 14.2 GB - File count: 8 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004696.v1.0.1 - Source: openneuro - OpenNeuro: [ds004696](https://openneuro.org/datasets/ds004696) - NeMAR: [ds004696](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) ## API Reference Use the `DS004696` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004696(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAPwave_bids * **Study:** `ds004696` (OpenNeuro) * **Author (year):** `Valencia2023` * **Canonical:** — Also importable as: `DS004696`, `Valencia2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004696](https://openneuro.org/datasets/ds004696) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004696](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) DOI: [https://doi.org/10.18112/openneuro.ds004696.v1.0.1](https://doi.org/10.18112/openneuro.ds004696.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004696 >>> dataset = DS004696(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004696) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004703: ieeg dataset, 10 subjects *sEEG Passive listening to natural speech* Access recordings and metadata through EEGDash. **Citation:** Anna Mai, Stephanie Ries, Sharona Ben-Haim, Jerry Shih, Timothy Gentner (2023). *sEEG Passive listening to natural speech*. 
[10.18112/openneuro.ds004703.v1.1.0](https://doi.org/10.18112/openneuro.ds004703.v1.1.0) Modality: ieeg Subjects: 10 Recordings: 11 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004703 dataset = DS004703(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004703(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004703( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004703, title = {sEEG Passive listening to natural speech}, author = {Anna Mai and Stephanie Ries and Sharona Ben-Haim and Jerry Shih and Timothy Gentner}, doi = {10.18112/openneuro.ds004703.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004703.v1.1.0}, } ``` ## About This Dataset CONTACT For questions about this dataset, please contact Anna Mai ([anna.mai@mpi.nl](mailto:anna.mai@mpi.nl); ORCiD 0000-0002-8343-9216). PERMISSIONS These data may not be used for commercial purposes, including but not limited to use in any kind of training set for commercial machine learning applications. These data may not be used in any way that either in part or in whole disambiguates participant identity, including but not limited to attempts at 3D facial reconstruction. RECORDING SETUP These data were collected from June 2018 to August 2019. For all patients, a scalp electrode was used for referencing and ground. These were 13mm, 2.5M single lead subdermal electrodes made by Rochester Electro-Medical with serial number S81025-A-24RM. 
Depth electrodes were manufactured by Ad-Tech and are Spencer Probe depth electrodes. Each electrode has 10 leads evenly spaced 3-7mm apart. With the exception of patients SD012 and SD022, all implants are depth electrodes. Patients SD012 and SD022 had grid and strip electrodes implanted in addition to several depth electrodes. Any channel names beginning with “C” were not used and should be dropped from analyses. TASK Participants passively listened to 30-45s passages of conversational speech and verbally answered a 2AC content question after each passage. There were 6 blocks with 7 passages per block. MISSING DATA Anatomical scans for participant SD012 are not available due to excessive movement artifacts. ## Dataset Information | Dataset ID | `DS004703` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | sEEG Passive listening to natural speech | | Author (year) | `Mai2023` | | Canonical | — | | Importable as | `DS004703`, `Mai2023` | | Year | 2023 | | Authors | Anna Mai, Stephanie Ries, Sharona Ben-Haim, Jerry Shih, Timothy Gentner | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004703.v1.1.0](https://doi.org/10.18112/openneuro.ds004703.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004703) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) | [Source URL](https://openneuro.org/datasets/ds004703) | ### Copy-paste BibTeX ```bibtex @dataset{ds004703, title = {sEEG Passive listening to natural speech}, author = {Anna Mai and Stephanie Ries and Sharona Ben-Haim and Jerry Shih and Timothy Gentner}, doi = {10.18112/openneuro.ds004703.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004703.v1.1.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 11 - Tasks: 1 - Channels: 148 (4), 279 (2), 276 (2), 277 (2), 280 - Sampling rate (Hz): 1024.0 (7), 
512.0 (4) - Duration (hours): 9.091059027777778 - Pathology: Surgery - Modality: Auditory - Type: Memory - Size on disk: 12.4 GB - File count: 11 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004703.v1.1.0 - Source: openneuro - OpenNeuro: [ds004703](https://openneuro.org/datasets/ds004703) - NeMAR: [ds004703](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) ## API Reference Use the `DS004703` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Passive listening to natural speech * **Study:** `ds004703` (OpenNeuro) * **Author (year):** `Mai2023` * **Canonical:** — Also importable as: `DS004703`, `Mai2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 10; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
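The README for this dataset instructs that channel names beginning with “C” were unused and should be dropped from analyses. A minimal sketch of that filter, with a hypothetical channel list (real names come from `raw.ch_names` after loading):

```python
# Hypothetical channel names; in practice read them from raw.ch_names.
ch_names = ["LA1", "LA2", "C01", "LH3", "C12", "RA2"]

# Keep only channels that do NOT start with "C", per the README.
keep = [ch for ch in ch_names if not ch.startswith("C")]
print(keep)  # → ['LA1', 'LA2', 'LH3', 'RA2']
```

With an MNE `Raw` object, the complementary list could be passed to `raw.drop_channels(...)` to remove the unused channels in place.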
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004703](https://openneuro.org/datasets/ds004703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004703](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) DOI: [https://doi.org/10.18112/openneuro.ds004703.v1.1.0](https://doi.org/10.18112/openneuro.ds004703.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004703 >>> dataset = DS004703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004703) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004706: eeg dataset, 34 subjects *Spatial memory and non-invasive closed-loop stimulus timing* Access recordings and metadata through EEGDash. **Citation:** Joseph H. Rudoler, Matthew R. Dougherty, Brandon S. Katerman, James P. Bruska, Woohyeuk Chang, David J. Halpern, Nicholas B. Diamond, Michael J. Kahana (2023). *Spatial memory and non-invasive closed-loop stimulus timing*. 
[10.18112/openneuro.ds004706.v1.0.0](https://doi.org/10.18112/openneuro.ds004706.v1.0.0) Modality: eeg Subjects: 34 Recordings: 298 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004706 dataset = DS004706(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004706(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004706( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004706, title = {Spatial memory and non-invasive closed-loop stimulus timing}, author = {Joseph H. Rudoler and Matthew R. Dougherty and Brandon S. Katerman and James P. Bruska and Woohyeuk Chang and David J. Halpern and Nicholas B. Diamond and Michael J. Kahana}, doi = {10.18112/openneuro.ds004706.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004706.v1.0.0}, } ``` ## About This Dataset This dataset contains behavioral events and electrophysiological recordings from an experiment run in the Computational Memory Lab at the University of Pennsylvania from 2021-2022 with funding from U.S. Army Medical Research and Development Command (USAMRDC) through the Medical Technology Enterprise Consortium (MTEC) project MTEC-20-06-MOM-013, “Restoring memory with task-independent semi-chronic closed-loop direct brain stimulation and non-invasive closed-loop stimulus timing optimization”. This experiment constitutes the non-invasive portion of the project, which targeted memory improvement through classifier-based stimulus presentation. 
The experiment is a hybrid spatial-navigation and free recall paradigm in which subjects play the role of a courier delivering items to stores across a virtual town, and are subsequently asked to recall their deliveries. There are two phases - “read-only” and “closed-loop”. In read-only sessions, there is no classifier-based timing manipulation and participants simply perform the task in order to generate training data for the models used in subsequent closed-loop sessions. After collecting sufficient training data, classifier models predict recall in closed-loop sessions and the stimulus presentation is timed to coincide with predicted good or bad memory encoding. Two publications are based on this experiment: [“Neural correlates of memory in an immersive spatiotemporal context”](https://www.biorxiv.org/content/10.1101/2022.11.30.518606) studies the navigation and memory dynamics in read-only sessions, and “Optimizing learning via real-time neural decoding” (link pending) explores the results of the closed-loop manipulation. Note: memory dynamics in closed-loop sessions are potentially influenced by the closed-loop timing manipulation, and so may be biased in a way that precludes them from analyses of general mnemonic function. The read-only sessions, however, were not subject to this manipulation and therefore can be used for studying spatial and episodic memory (as in the first paper mentioned above). ## Dataset Information | Dataset ID | `DS004706` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Spatial memory and non-invasive closed-loop stimulus timing | | Author (year) | `Rudoler2023` | | Canonical | — | | Importable as | `DS004706`, `Rudoler2023` | | Year | 2023 | | Authors | Joseph H. Rudoler, Matthew R. Dougherty, Brandon S. Katerman, James P. Bruska, Woohyeuk Chang, David J. 
Halpern, Nicholas B. Diamond, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004706.v1.0.0](https://doi.org/10.18112/openneuro.ds004706.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004706) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) | [Source URL](https://openneuro.org/datasets/ds004706) | ### Copy-paste BibTeX ```bibtex @dataset{ds004706, title = {Spatial memory and non-invasive closed-loop stimulus timing}, author = {Joseph H. Rudoler and Matthew R. Dougherty and Brandon S. Katerman and James P. Bruska and Woohyeuk Chang and David J. Halpern and Nicholas B. Diamond and Michael J. Kahana}, doi = {10.18112/openneuro.ds004706.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004706.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 298 - Tasks: 2 - Channels: 137 - Sampling rate (Hz): 2048.0 - Duration (hours): 470.60607069227433 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 1.3 TB - File count: 298 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004706.v1.0.0 - Source: openneuro - OpenNeuro: [ds004706](https://openneuro.org/datasets/ds004706) - NeMAR: [ds004706](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) ## API Reference Use the `DS004706` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004706(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial memory and non-invasive closed-loop stimulus timing * **Study:** `ds004706` (OpenNeuro) * **Author (year):** `Rudoler2023` * **Canonical:** — Also importable as: `DS004706`, `Rudoler2023`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 298; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004706](https://openneuro.org/datasets/ds004706) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004706](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) DOI: [https://doi.org/10.18112/openneuro.ds004706.v1.0.0](https://doi.org/10.18112/openneuro.ds004706.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004706 >>> dataset = DS004706(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004706) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004718: eeg dataset, 51 subjects *Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers* Access recordings and metadata through EEGDash. **Citation:** Mohammad Momenian, Zhengwu Ma, Shuyi Wu, Chengcheng Wang, Jixing Li (2023). *Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers*. [10.18112/openneuro.ds004718.v1.1.2](https://doi.org/10.18112/openneuro.ds004718.v1.1.2) Modality: eeg Subjects: 51 Recordings: 51 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004718 dataset = DS004718(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004718(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004718( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004718, title = {Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers}, author = {Mohammad Momenian and Zhengwu Ma and Shuyi Wu and Chengcheng Wang and Jixing Li}, doi = {10.18112/openneuro.ds004718.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004718.v1.1.2}, } ``` ## About This Dataset **Update note** Since the auditory stimuli were presented sentence by sentence, we decided to include the original audio files instead of a continuous file. We presented the story in 4 different sections. After each section, there was a time for 5 comprehension check questions. The file “lppHK_timing_word_information.xlsx” includes all the timing information for each section of the story. Information about each column of the file is included in the same file. We also included another file called “EEG_trigger_and_sentence_number.xlsx”. This included information about how to match sentence ID and trigger number in the EEG data. These two files are useful for EEG data analysis. For timing issues in fMRI data analysis, we included the Eprime output files which have all the necessary information for aligning in fMRI analysis. Since Eprime usually shows some delay in the presentation of audio files, the delay could be considered in the analysis which can help with better alignment. Per OpenNeuro’s new formatting requirements, all annotation, quiz, and stimuli files are located in the “sourcedata” folder. **Overview** In the field of neurobiology of language, existing research predominantly focuses on data from a limited number of Indo-European languages and primarily involves younger adults, overlooking other age groups. This experiment aims to address these gaps by creating a comprehensive multimodal database. The primary goal is to advance our understanding of language processing in older adults and the impact of healthy aging on brain-behavior relationships. 
The experiment involves collecting task-based and resting-state fMRI, structural MRI, and EEG data from 52 healthy right-handed older Cantonese participants over 65 years old as they listen to excerpts from “The Little Prince” in Cantonese. Additionally, the database includes detailed information on participants’ language history, lifetime experiences, linguistic and cognitive skills, as well as extensive audio and text annotations, such as time-aligned speech segmentation and prosodic features, along with word-by-word predictors from natural language processing (NLP) tools. Quality diagnostics of the MRI and EEG data confirm their robustness, positioning this database as a valuable resource for studying the spatiotemporal dynamics of language comprehension in older adults. ### View full README 
**Methods** **Participants** We recruited 52 healthy, right-handed older Cantonese participants (40 females, mean age=69.12, SD=3.52) from Hong Kong for the experiment, which consists of an fMRI and an EEG session. In both sessions, participants listened to the same sections of The Little Prince in Cantonese for approximately 20 minutes. We made sure each participant was right-handed and a native Cantonese speaker using the Language History Questionnaire (LHQ3). Additionally, participants reported normal or corrected-to-normal hearing, and confirmed they had no cognitive decline. 
Two participants did not take part in the fMRI session and an additional 4 participants’ fMRI data were removed due to excessive head movement, resulting in a total of 46 participants (39 females, mean age=69.08yrs, SD=3.58) for the fMRI session and 52 participants (40 females, mean age=69.12yrs, SD=3.52) for the EEG session. Prior to the experiment, all participants provided written informed consent. All participants received monetary compensation after each session. Ethical approval was obtained from the Human Subjects Ethics Application Committee at the Hong Kong Polytechnic University (application number HSEARS20210302001). This study was performed in accordance with the Declaration of Helsinki and all other regulations set by the Ethics Committee. **Experiment Procedures** The study consisted of an fMRI session and an EEG session. The order of the EEG and fMRI sessions was counterbalanced across all participants, and a minimum two-week interval was maintained between sessions. **fMRI experiment** Before the scanning day, an MRI safety screening form was sent to the participants to make sure MRI scanning was safe for them. We also sent them simple readings and videos about MRI scanning so that they could have an idea of what it would be like to be in a scanner. On the day of scanning, participants were introduced to the MRI facility and comfortably positioned inside the scanner, with their heads securely supported with padding. MRI-safe headphones (Sinorad package) were provided for participants to wear inside the head coil. The audio volume for the listening task was adjusted to ensure audibility for each participant. A mirror attached to the head coil allowed participants to view the stimuli presented on a screen. Participants were instructed to stay focused on the visual fixation sign while listening to the audiobook. The scanning session commenced with the acquisition of structural (T1-weighted) scans. 
Subsequently, participants engaged in the listening task concurrently with fMRI scanning. The task-based fMRI experiment was divided into four runs, each corresponding to a section of the audiobook. Comprehension was assessed by a series of 5 yes/no questions (20 questions in total) on the content they had listened to. These questions were presented on the screen, with participants indicating their answers by pressing a button. The session concluded with the collection of resting-state fMRI data. **Cognitive tasks** Four cognitive tasks were selected to assess participants’ cognitive abilities in various domains, including the forward digit span task, picture naming task, verbal fluency task, and Flanker task. These tasks were delivered after the fMRI session in a separate soundproof booth. **EEG experiment** During the EEG experiment, participants were seated comfortably in a quiet room and standard procedures were followed for electrode placement and EEG cap preparation. Participants were instructed to focus on a fixation sign displayed on a monitor. The EEG recording was then initiated, with participants listening to the audiobook. The audio volume was adapted to each participant’s hearing ability before the recording using a different set of stimuli. We used Foam Ear Inserts (Medium 14mm). Similar to the fMRI experiment, participants listened to four sections of the audiobook, each lasting approximately 5 minutes. After each run, participants were asked to answer a total of 20 yes/no questions, with 5 questions assigned to each run. They indicated their answers by pressing a button. The EEG recording was conducted continuously throughout all four runs until their completion. **Questionnaires.** We administered LHQ3 and the Lifetime of Experiences Questionnaire (LEQ) during EEG cap preparation. 
The participants did not need to move or fill in these questionnaires themselves; a research assistant asked the questions one by one in Cantonese and input the responses in an online Google form. LHQ is designed to document language history by producing aggregate scores for language proficiency, exposure, and dominance in all the languages spoken by the participants. LEQ is a tool to document what sorts of activities (e.g. sports, music, education, profession, etc) participants engage in over their lifetime. It measures lifetime experiences in three periods of life: from 13 to 30 (young adulthood), from 30 to 65 (midlife), and after 65 (late life). LEQ produces a total score (see participants.tsv) which is an indication of cognitive activity. Collecting data using these two questionnaires allowed us to have a thicker description of our participants’ linguistic, social, and cognitive experiences. **Acquisition** The MRI data were collected at the University Research Facility in Behavioral and Systems Neuroscience (UBSN) at The Hong Kong Polytechnic University. EEG data was collected at the Speech and Language Sciences Laboratory within the Department of Chinese and Bilingual Studies at the same university. Data acquisition for this project started in July 2021 and ended in December 2022. **fMRI data.** MRI imaging data were acquired using a 3T Siemens MAGNETOM Prisma system MRI scanner with a 20-channel coil. Structural MRI was acquired for each participant using a T1-weighted sequence with the following parameters: repetition time (TR) = 2,500 ms, echo time (TE) = 2.22 ms, inversion time (TI) = 1,120 ms, flip angle α (FA) = 8°, field of view (FOV) = 240 × 256 × 167 mm, resolution = 0.8 mm isotropic, acquisition time = 4 min and 32s. The acquisition parameters for echo planar T2-weighted imaging (EPI) were as follows: 60 oblique axial slices, TR = 2000 ms, TE = 22 ms, FA= 80°, FOV = 204 × 204 × 165 mm, 2.5 mm isotropic, and acceleration factor 3. 
E-Prime 2.0 (Psychology Software Tools) was used to present the stimuli. **EEG data.** A gel-based 64-channel Neuroscan system on a 10-20 electrode template was used for data acquisition, sampling at a rate of 1000 Hz. To mark the onset of each sentence, triggers were set at the beginning of each sentence. STIM2 software (Compumedics Neuroscan) was used for stimulus presentation. **Stimuli** The experimental stimuli utilized in both the EEG and fMRI consisted of approximately 20 minutes of the story The Little Prince in Cantonese audiobook. It was translated and narrated in Cantonese by a native male speaker. The stimuli consist of a total of 4,473 words and 535 sentences. To facilitate data analysis and participant engagement, the stimuli were further segmented into four distinct sections, each spanning nearly five minutes. To assess listening comprehension, participants were presented with five yes/no questions after completing each section, resulting in a total of 20 questions throughout the experiment. To make sure the speed of story narration was normal for the participants, we asked a few people who were different from the participants in this study to judge the speed and comprehensibility. They all reported the speed was normal, neither so slow nor so fast. **Annotation** We present audio and text annotations, including time-aligned speech segmentation and prosodic information, as well as word-by-word predictors derived from natural language processing (NLP) tools. These predictors include aspects of lexical semantic information, such as part-of-speech (POS) tagging and word frequency. **Prosodic information.** We extracted the root mean square intensity and the fundamental frequency (f0) from every 10 ms interval of the audio segments by utilizing the Voicebox toolbox ([http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html](http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html)). 
Peak RMS intensity and peak f0 for each word in the naturalistic stimuli were used to represent the intensity and pitch information for each word. **Word frequency.** Word segmentation was performed manually by two native Cantonese speakers. The log-transformed frequency of each word was also estimated using PyCantonese, Version 3.4.0 ([https://pycantonese.org/](https://pycantonese.org/)). The built-in corpus in PyCantonese is the Hong Kong Cantonese Corpus (HKCancor), collected from transcribed conversations between March 1997 and August 1998. **Part-of-speech tagging.** Part-of-speech (POS) tagging for each word in the stimuli was extracted using PyCantonese, Version 3.4.0 ([https://pycantonese.org/](https://pycantonese.org/)). Following the manual segmentation of words, we input these segments into the Cantonese-exclusive NLP tool PyCantonese, which then provided POS tags for each word according to the Universal Dependencies v2 tagset (UDv2). **Preprocessing** All MRI data were preprocessed using the NeuroScholar cloud platform ([http://www.humanbrain.cn](http://www.humanbrain.cn), Beijing Intelligent Brain Cloud, Inc.), provided by The Hong Kong Polytechnic University. This platform uses an enhanced pipeline based on fMRIPrep 20.2.6 (RRID: SCR_016216) and supported by Nipype 1.7.0 (RRID: SCR_002502). Then we used the pydeface ([https://github.com/poldracklab/pydeface](https://github.com/poldracklab/pydeface)) package to remove the voxels corresponding to the face from both anatomical and preprocessed data to anonymize participants’ facial information. **Anatomical MRI.** The structural MRI data underwent intensity non-uniformity correction, skull-stripping, and brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) based on the reference T1w image. The resulting anatomical images were nonlinearly aligned to the ICBM 152 Nonlinear Asymmetrical template version 2009c (MNI152NLin2009cAsym) template brain. 
Radiological reviews were performed on MRI images by a medical specialist in the lab. Incidental findings were noticed for participants sub-HK031 and sub-HK049. There was a sub-centimeter (0.7cm) blooming artefact in the right putamen, likely a cavernoma for participant sub-HK031. For participant sub-HK049, there was a left thalamic (0.7cm) oval-shaped susceptibility artefact and a 2.6 cm cystic collection in the right posterior fossa. **Functional MRI.** The preprocessing of both resting and functional MRI data included the following steps: (1) skull-stripping, (2) slice-timing correction with the temporal realignment of slices according to the reference slice, (3) BOLD time-series co-registration to the T1w reference image, (4) head-motion estimations and spatial realignment to adjust for linear head motion, (5) applying parameters from structural images to spatially normalize functional images into Montreal Neurological Institute (MNI) template, and (6) smoothing by a 6mm FWHM (full-width half-maximum) Gaussian kernel. **EEG.** The pre-processing was carried out using EEGLAB and in-house MATLAB functions. The preprocessing of EEG data included the following steps: (1) a cutoff frequency filter with 1 Hz high pass and 40.0 Hz low pass cut-off was applied followed by a notch filter at 50 Hz to reduce electrical line noise, (2) use of kurtosis measure to identify and remove bad channels, (3) application of the RUNICA algorithm (from EEGLab toolbox, 2023 version), a machine learning algorithm that evaluates ICA-derived components, for automated rejection of artifacts, including signal noise from eye and muscle, high-amplitude artifacts (e.g., blinks), and signal discontinuities (e.g., electrodes losing contact with the scalp), (4) interpolating data for bad channels using spherical splines for each segment. (5) re-referencing the data by using both electrodes M1 and M2 as the reference for all channels and (6) down-sampling all the data to 250 Hz. 
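Step (2) of the EEG pipeline above flags bad channels with a kurtosis measure. A minimal NumPy sketch of the idea follows; this is an illustrative reimplementation, not the authors' EEGLAB code, and the function names and z-score threshold are assumptions:

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis per row (channel) of a channels-by-samples array."""
    x = x - x.mean(axis=1, keepdims=True)
    s2 = (x ** 2).mean(axis=1)
    return (x ** 4).mean(axis=1) / s2 ** 2 - 3.0

def flag_bad_channels(data, z_thresh=2.5):
    """Flag channels whose kurtosis is an outlier relative to the montage.

    Note: with n channels, a single outlier's z-score cannot exceed
    sqrt(n - 1), so the threshold must stay modest for small montages.
    """
    k = excess_kurtosis(data)
    z = (k - k.mean()) / k.std()
    return np.where(np.abs(z) > z_thresh)[0]

rng = np.random.default_rng(0)
data = rng.standard_normal((8, 5000))  # 8 clean channels of Gaussian noise
data[3, ::100] += 50.0                 # inject spikes -> heavy-tailed channel
print(flag_bad_channels(data))         # -> [3]
```

Gaussian background activity has excess kurtosis near zero, so a spiky or discontinuous channel stands out sharply; EEGLAB's `pop_rejchan` implements the same idea with normalized kurtosis.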
**Dataset Structure** **Participant responses** 1. Location: participants.json, participants.tsv 2. File format: tab-separated value 3. Participants’ sex, age, and accuracy of quiz questions for each fMRI and EEG experiment, scan number and LEQ scores in tab-separated value (tsv) files. Data are structured as one line per participant. **Audio files** 1. Location: sourcedata/stimuli/task-lppHK_run-1[2-4].wav 2. File format: wav 3. The 4-section audiobook from The Little Prince in Cantonese **Anatomical data files** 1. Location: sub-HK/anat/sub-HK_T1w.nii.gz 2. File format: NIfTI, gzip-compressed 3. The raw high-resolution anatomical image after defacing **Functional data files** 1. Location: sub-HK/func/sub-HK_task-lppHK_run-1[2-4]_bold.nii.gz 2. File format: NIfTI, gzip-compressed 3. Sequence protocol: sub-HK/func/sub-HK_task-lppHK_run-1[2-4]_bold.json 4. The preprocessed data are also available as: derivatives/sub-HK/func/sub-HK_task-lppHK_run-1[2-4]_desc-preprocessed_bold.nii.gz **Resting-state MRI data files** 1. Location: sub-HK/func/sub-HK_task-rest_bold.nii.gz 2. File format: NIfTI, gzip-compressed 3. Sequence protocol: sub-HK/func/sub-HK_task-rest_bold.json 4. The preprocessed data are also available as: derivatives/sub-HK/func/sub-HK_rest_bold.nii.gz **EEG data files** 1. Location: sub-HK/eeg/sub-HK_task-lppHK_eeg.set 2. File format: set (a MATLAB-format file, accompanied by a .fdt file containing the raw data) 3. The preprocessed data are also available as: derivatives/sub-HK/eeg/sub-HK_task-lppHK_eeg.set (together with a .fdt file containing the raw data) **Annotations** 1. Location: annotation/snts.txt, annotation/lppHK_word_information.txt, annotation/wav_acoustic.csv 2. File format: comma-separated value 3. Annotation of speech and linguistic features for the audio and text of the stimuli **Quiz questions** 1. Location: quiz/lppHK_quiz_questions.csv 2. File format: comma-separated value 3. 
The 20 yes/no quiz questions were employed in both the fMRI and EEG experiments **Usage Note** If you want to know more about the dataset, please refer to our paper “Le Petit Prince Hong Kong (LPPHK): Naturalistic fMRI and EEG Data from Older Cantonese Speakers”, [https://doi.org/10.1101/2024.04.24.590842](https://doi.org/10.1101/2024.04.24.590842) This dataset is still under maintenance. **Contact** For any question regarding this data, please contact: 1. Dr. Mohammad Momenian, [momenian@hku.hk](mailto:momenian@hku.hk) 2. Ms. Zhengwu Ma, [zhengwu.ma@my.cityu.edu.hk](mailto:zhengwu.ma@my.cityu.edu.hk) 3. Ms. Shuyi Wu, [shuyiwu2017@gmail.com](mailto:shuyiwu2017@gmail.com) 4. Ms. Chengcheng Wang, [cwang495-c@my.cityu.edu.hk](mailto:cwang495-c@my.cityu.edu.hk) 5. Dr. Jixing Li, [jixingli@cityu.edu.hk](mailto:jixingli@cityu.edu.hk) ## Dataset Information | Dataset ID | `DS004718` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers | | Author (year) | `Momenian2023` | | Canonical | — | | Importable as | `DS004718`, `Momenian2023` | | Year | 2023 | | Authors | Mohammad Momenian, Zhengwu Ma, Shuyi Wu, Chengcheng Wang, Jixing Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004718.v1.1.2](https://doi.org/10.18112/openneuro.ds004718.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004718) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) | [Source URL](https://openneuro.org/datasets/ds004718) | ### Copy-paste BibTeX ```bibtex @dataset{ds004718, title = {Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers}, author = {Mohammad Momenian and Zhengwu Ma and Shuyi Wu and Chengcheng Wang and Jixing Li}, doi = 
{10.18112/openneuro.ds004718.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004718.v1.1.2}, } ``` ## Technical Details - Subjects: 51 - Recordings: 51 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 21.836041388888887 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 37.0 GB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004718.v1.1.2 - Source: openneuro - OpenNeuro: [ds004718](https://openneuro.org/datasets/ds004718) - NeMAR: [ds004718](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) ## API Reference Use the `DS004718` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers * **Study:** `ds004718` (OpenNeuro) * **Author (year):** `Momenian2023` * **Canonical:** — Also importable as: `DS004718`, `Momenian2023`. Modality: `eeg`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004718](https://openneuro.org/datasets/ds004718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004718](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) DOI: [https://doi.org/10.18112/openneuro.ds004718.v1.1.2](https://doi.org/10.18112/openneuro.ds004718.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004718 >>> dataset = DS004718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004718) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004738: meg dataset, 4 subjects *sfb_meg_phantom (B04/C01)* Access recordings and metadata through EEGDash. **Citation:** Bahne H. Bahners, Roxanne Lofredi, Tilmann Sander, Alfons Schnitzler, Andrea A. Kuhn, Esther Florin (2023). *sfb_meg_phantom (B04/C01)*. 
[10.18112/openneuro.ds004738.v1.0.1](https://doi.org/10.18112/openneuro.ds004738.v1.0.1) Modality: meg Subjects: 4 Recordings: 25 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004738 dataset = DS004738(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004738(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004738( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004738, title = {sfb_meg_phantom (B04/C01)}, author = {Bahne H. Bahners and Roxanne Lofredi and Tilmann Sander and Alfons Schnitzler and Andrea A. Kuhn and Esther Florin}, doi = {10.18112/openneuro.ds004738.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004738.v1.0.1}, } ``` ## About This Dataset This dataset is part of the data used for the study: ‘Bahners B.H., Lofredi R., Sander T., Schnitzler A., Kuhn A.A., Florin E. Deep brain stimulation device-specific artefacts in MEG recordings. 2023, submitted. doi: tba’ Please use the latest version of the dataset. For detailed information about the measurement protocol, please refer to doi: tba. Additional information about the Neuromag phantom measurement is provided below. 
**Neuromag Phantom Measurement**: Movement onset is recorded with an accelerometer and captured on channels MISC006-MISC008. ## Dataset Information | Dataset ID | `DS004738` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | sfb_meg_phantom (B04/C01) | | Author (year) | `Bahners2023` | | Canonical | — | | Importable as | `DS004738`, `Bahners2023` | | Year | 2023 | | Authors | Bahne H. Bahners, Roxanne Lofredi, Tilmann Sander, Alfons Schnitzler, Andrea A. Kuhn, Esther Florin | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004738.v1.0.1](https://doi.org/10.18112/openneuro.ds004738.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004738) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) | [Source URL](https://openneuro.org/datasets/ds004738) | ### Copy-paste BibTeX ```bibtex @dataset{ds004738, title = {sfb_meg_phantom (B04/C01)}, author = {Bahne H. Bahners and Roxanne Lofredi and Tilmann Sander and Alfons Schnitzler and Andrea A. Kuhn and Esther Florin}, doi = {10.18112/openneuro.ds004738.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004738.v1.0.1}, } ``` ## Technical Details - Subjects: 4 - Recordings: 25 - Tasks: 2 - Channels: 323 (13), 160 (12) - Sampling rate (Hz): 5000.0 - Duration (hours): 0.4349166666666667 - Pathology: Other - Modality: Other - Type: Other - Size on disk: 6.1 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004738.v1.0.1 - Source: openneuro - OpenNeuro: [ds004738](https://openneuro.org/datasets/ds004738) - NeMAR: [ds004738](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) ## API Reference Use the `DS004738` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004738(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sfb_meg_phantom (B04/C01) * **Study:** `ds004738` (OpenNeuro) * **Author (year):** `Bahners2023` * **Canonical:** — Also importable as: `DS004738`, `Bahners2023`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 4; recordings: 25; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
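The Notes above state that a user-supplied `query` is AND-ed with the fixed dataset filter. The snippet below is an illustrative, self-contained sketch of those matching semantics in plain Python — it is not EEGDash's actual implementation, and the record fields shown are examples only:

```python
# Illustrative only: a tiny matcher mimicking how a user query is
# AND-ed with the dataset filter. Supports plain equality and the
# MongoDB "$in" operator used in the Quickstart examples above.

def matches(record: dict, query: dict) -> bool:
    """Return True if the record satisfies every condition in the query."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:  # plain equality
            return False
    return True

records = [
    {"dataset": "ds004738", "subject": "01", "task": "phantom"},
    {"dataset": "ds004738", "subject": "03", "task": "rest"},
    {"dataset": "ds004999", "subject": "01", "task": "phantom"},
]

# The dataset filter is always applied; the user query is AND-ed on top.
merged = {"dataset": "ds004738", "subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, merged)]
print([r["subject"] for r in selected])  # prints ['01']
```

Because the conditions are AND-ed, a query can only narrow the dataset selection, never widen it beyond `ds004738`.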
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004738](https://openneuro.org/datasets/ds004738) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004738](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) DOI: [https://doi.org/10.18112/openneuro.ds004738.v1.0.1](https://doi.org/10.18112/openneuro.ds004738.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004738 >>> dataset = DS004738(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004738) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004745: eeg dataset, 6 subjects *8-Channel SSVEP EEG Dataset with Artifact Trials* Access recordings and metadata through EEGDash. **Citation:** Velu Prabhakar Kumaravel, Victor Kartsch, Simone Benatti, Giorgio Vallortigara, Elisabetta Farella, Marco Buiatti (2023). *8-Channel SSVEP EEG Dataset with Artifact Trials*. 
[10.18112/openneuro.ds004745.v1.0.1](https://doi.org/10.18112/openneuro.ds004745.v1.0.1) Modality: eeg Subjects: 6 Recordings: 6 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004745 dataset = DS004745(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004745(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004745( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004745, title = {8-Channel SSVEP EEG Dataset with Artifact Trials}, author = {Velu Prabhakar Kumaravel and Victor Kartsch and Simone Benatti and Giorgio Vallortigara and Elisabetta Farella and Marco Buiatti}, doi = {10.18112/openneuro.ds004745.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004745.v1.0.1}, } ``` ## About This Dataset The dataset consists of 6 participants who performed SSVEP tasks. Stimulation was designed at 3 different frequencies (2 Hz, 4 Hz, 8 Hz). Each participant completed 3 trials per frequency while remaining as still as possible to avoid artifacts, and another 3 trials per frequency while making voluntary head/neck and eye movements. Please refer to Kumaravel et al. (IEEE EMBC 2021) for further details. 
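The stimulation frequencies above appear as narrow spectral peaks in the recorded EEG. As a self-contained illustration (simulated data only, not loaded from this dataset), here is a NumPy sketch that recovers an 8 Hz SSVEP from a noisy signal sampled at the dataset's 1000 Hz rate:

```python
import numpy as np

fs = 1000.0                     # sampling rate, matching this dataset
t = np.arange(0, 4.0, 1 / fs)   # 4 s simulated trial
rng = np.random.default_rng(0)

# Simulated SSVEP trial: an 8 Hz oscillation buried in broadband noise.
signal = np.sin(2 * np.pi * 8.0 * t) + 0.5 * rng.standard_normal(t.size)

# Amplitude spectrum; restrict the search to a plausible SSVEP band.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 1.0) & (freqs <= 40.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"dominant frequency: {peak_hz:.2f} Hz")  # expect ~8.00 Hz
```

The same frequency-domain peak picking is the usual starting point for classifying which stimulus (2, 4, or 8 Hz) a participant attended to.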
## Dataset Information | Dataset ID | `DS004745` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 8-Channel SSVEP EEG Dataset with Artifact Trials | | Author (year) | `Kumaravel2023` | | Canonical | — | | Importable as | `DS004745`, `Kumaravel2023` | | Year | 2023 | | Authors | Velu Prabhakar Kumaravel, Victor Kartsch, Simone Benatti, Giorgio Vallortigara, Elisabetta Farella, Marco Buiatti | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004745.v1.0.1](https://doi.org/10.18112/openneuro.ds004745.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004745) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) | [Source URL](https://openneuro.org/datasets/ds004745) | ### Copy-paste BibTeX ```bibtex @dataset{ds004745, title = {8-Channel SSVEP EEG Dataset with Artifact Trials}, author = {Velu Prabhakar Kumaravel and Victor Kartsch and Simone Benatti and Giorgio Vallortigara and Elisabetta Farella and Marco Buiatti}, doi = {10.18112/openneuro.ds004745.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004745.v1.0.1}, } ``` ## Technical Details - Subjects: 6 - Recordings: 6 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 1000.0 - Duration (hours): 1.7556605555555556 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 242.1 MB - File count: 6 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004745.v1.0.1 - Source: openneuro - OpenNeuro: [ds004745](https://openneuro.org/datasets/ds004745) - NeMAR: [ds004745](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) ## API Reference Use the `DS004745` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004745(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 8-Channel SSVEP EEG Dataset with Artifact Trials * **Study:** `ds004745` (OpenNeuro) * **Author (year):** `Kumaravel2023` * **Canonical:** — Also importable as: `DS004745`, `Kumaravel2023`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 6; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004745](https://openneuro.org/datasets/ds004745) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004745](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) DOI: [https://doi.org/10.18112/openneuro.ds004745.v1.0.1](https://doi.org/10.18112/openneuro.ds004745.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004745 >>> dataset = DS004745(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004745) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004752: eeg, ieeg dataset, 15 subjects *Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task* Access recordings and metadata through EEGDash. **Citation:** Vasileios Dimakopoulos, Lennart Stieglitz, Lukas Imbach, Johannes Sarnthein (2023). *Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task*. 
[10.18112/openneuro.ds004752.v1.0.1](https://doi.org/10.18112/openneuro.ds004752.v1.0.1) Modality: eeg, ieeg Subjects: 15 Recordings: 136 License: CC0 Source: openneuro Citations: 4.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004752 dataset = DS004752(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004752(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004752( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004752, title = {Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task}, author = {Vasileios Dimakopoulos and Lennart Stieglitz and Lukas Imbach and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004752.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004752.v1.0.1}, } ``` ## About This Dataset **Dataset of intracranial EEG, scalp EEG and beamforming sources from human epilepsy patients performing a verbal working memory task** **Description** We present an electrophysiological dataset recorded from fifteen subjects during a verbal working memory task. Subjects were epilepsy patients undergoing intracranial monitoring for localization of epileptic seizures. Subjects performed a modified Sternberg task in which the encoding of memory items, maintenance, and recall were temporally separated. 
The dataset includes simultaneously recorded scalp EEG with the 10-20 system, intracranial EEG (iEEG) recorded with depth electrodes, waveforms, and the MNI coordinates and anatomical labels of all intracranial electrodes. The dataset also includes reconstructed virtual sensor data created by performing LCMV beamforming on the EEG at specific brain regions, including the superior temporal lobe, lateral prefrontal cortex, occipital cortex, posterior parietal cortex, and Broca's area. Subject characteristics and information on sessions (set size, match/mismatch, correct/incorrect, response, response time for each trial) are also provided. This dataset enables the investigation of working memory by providing simultaneous scalp EEG and iEEG recordings, which can be used for connectivity analysis, alongside reconstructed beamforming EEG sources that can enable further cognitive analysis such as replay of memory items. 
**Repository structure** **Main directory (verbal WM)** Contains metadata files in the BIDS standard about the participants and the study. Folders are explained below. **Subfolders** - verbalWM/sub-/: Contains folders for each subject, named sub-, and session information. - verbalWM/sub-/ses-/ieeg/: Contains the raw iEEG data in .edf format for each subject. Each subject performed more than 1 working memory session (ses-0x), each of which includes ~50 trials. Each \*ieeg.edf file contains continuous iEEG data during the working memory task. Details about the channels are given in the corresponding .tsv file. We also provide the trial start and end in the events.tsv files by specifying the start and end sample of each trial. - verbalWM/sub-/ses-/eeg/: Contains the raw scalp EEG data in .edf format for each subject. Each subject performed more than 1 working memory session (ses-0x), each of which includes ~50 trials. Each \*eeg.edf file contains continuous EEG data during the working memory task. Details about the channels are given in the corresponding .tsv file. We also provide the trial start and end in the events.tsv files by specifying the start and end sample of each trial. - verbalWM/derivatives/sub-/: Contains the LCMV beamforming sources during encoding and maintenance. The beamforming sources are in the form of virtual EEG sensors, each of which corresponds to a specific brain region. 
The naming convention used for the virtual sensors is the following: DLPFC, dorsolateral prefrontal cortex; OFC, orbitofrontal cortex; PPC, posterior parietal cortex; AC, auditory cortex; V1, primary visual cortex. **BIDS Conversion** The bids-starter-kit and custom MATLAB scripts were used to convert the dataset into BIDS format. **References** [1] Dimakopoulos V, Megevand P, Stieglitz LH, Imbach L, Sarnthein J. Information flows from hippocampus to auditory cortex during replay of verbal working memory items. Elife 2022;11. 10.7554/eLife.78677 [2] Boran E, Fedele T, Klaver P, Hilfiker P, Stieglitz L, Grunwald T, et al. Persistent hippocampal neural firing and hippocampal-cortical coupling predict verbal working memory load. Science Advances 2019;5(3):eaav3687. 10.1126/sciadv.aav3687 [3] Boran E, Fedele T, Steiner A, Hilfiker P, Stieglitz L, Grunwald T, et al. Dataset of human medial temporal lobe neurons, scalp and intracranial EEG during a verbal working memory task. Scientific Data 2020;7(1):30. 
10.1038/s41597-020-0364-3 ## Dataset Information | Dataset ID | `DS004752` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task | | Author (year) | `Dimakopoulos2023_intracranial` | | Canonical | — | | Importable as | `DS004752`, `Dimakopoulos2023_intracranial` | | Year | 2023 | | Authors | Vasileios Dimakopoulos, Lennart Stieglitz, Lukas Imbach, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004752.v1.0.1](https://doi.org/10.18112/openneuro.ds004752.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004752) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) | [Source URL](https://openneuro.org/datasets/ds004752) | ### Copy-paste BibTeX ```bibtex @dataset{ds004752, title = {Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task}, author = {Vasileios Dimakopoulos and Lennart Stieglitz and Lukas Imbach and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004752.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004752.v1.0.1}, } ``` ## Technical Details - Subjects: 15 - Recordings: 136 - Tasks: 1 - Channels: 64 (34), 8 (16), 20 (15), 21 (14), 19 (10), 10 (7), 46 (6), 68 (6), 36 (6), 23 (6), 62 (4), 48 (4), 32 (3), 40 (3), 80 (2) - Sampling rate (Hz): 2000.0 (40), 4000.0 (40), 200.0 (36), 4096.0 (20) - Duration (hours): 0.3022222222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.2 GB - File count: 136 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004752.v1.0.1 - Source: openneuro - OpenNeuro: [ds004752](https://openneuro.org/datasets/ds004752) - NeMAR: 
[ds004752](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) ## API Reference Use the `DS004752` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task * **Study:** `ds004752` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_intracranial` * **Canonical:** — Also importable as: `DS004752`, `Dimakopoulos2023_intracranial`. Modality: `eeg, ieeg`. Subjects: 15; recordings: 136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
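The README above notes that this dataset's events.tsv files give the start and end sample of each trial. Once a continuous recording has been loaded, trials can be cut out with plain NumPy; the sketch below is a hypothetical illustration on simulated data (the sample indices and array sizes are made up, not taken from the dataset):

```python
import numpy as np

fs = 2000                           # one of the sampling rates in this dataset
data = np.random.randn(8, 60 * fs)  # channels x samples, simulated continuous data

# Hypothetical (start, end) sample pairs, as an events.tsv would provide.
trials = [(4000, 12000), (20000, 28000), (36000, 44000)]
epochs = [data[:, start:end] for start, end in trials]

print(len(epochs), epochs[0].shape)  # 3 trials of 8 channels x 8000 samples
```

In practice the start/end columns would be read from the per-session events.tsv file rather than hard-coded.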
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004752](https://openneuro.org/datasets/ds004752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004752](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) DOI: [https://doi.org/10.18112/openneuro.ds004752.v1.0.1](https://doi.org/10.18112/openneuro.ds004752.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004752 >>> dataset = DS004752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004752) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) # DS004770: ieeg dataset, 10 subjects *iEEG on children during gameplay* Access recordings and metadata through EEGDash. **Citation:** Riyo Ueda, Kazuki Sakakura, Takumi Mitsuhashi, Masaki Sonoda, Ethan Firestone, Naoto Kuroda, Yu Kitazawa, Hiroshi Uda, Aimee F. Luat, Elizabeth L. Johnson, Noa Ofen, Eishi Asano (2023). *iEEG on children during gameplay*. 
[10.18112/openneuro.ds004770.v1.0.0](https://doi.org/10.18112/openneuro.ds004770.v1.0.0) Modality: ieeg Subjects: 10 Recordings: 22 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004770 dataset = DS004770(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004770(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004770( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004770, title = {iEEG on children during gameplay}, author = {Riyo Ueda and Kazuki Sakakura and Takumi Mitsuhashi and Masaki Sonoda and Ethan Firestone and Naoto Kuroda and Yu Kitazawa and Hiroshi Uda and Aimee F. Luat and Elizabeth L. Johnson and Noa Ofen and Eishi Asano}, doi = {10.18112/openneuro.ds004770.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004770.v1.0.0}, } ``` ## About This Dataset Dataset of intracranial EEG from human epilepsy patients performing a visuospatial working memory task Description: We present an electrophysiological dataset recorded from ten subjects during a visuospatial working memory task. Subjects were epilepsy patients undergoing intracranial monitoring for localization of epileptic seizures. Subjects completed 60 trials (five sessions) of Memory Matrix - a visuospatial working memory game on the Lumosity platform ([https://www.lumosity.com/](https://www.lumosity.com/); Lumos Labs, Inc, San Francisco, CA) - during interictal iEEG recording. 
Repository structure: Main directory (iEEG from children during gameplay) Contains iEEG files of each participant in the study. Folders are explained below. Subfolders: 1. sub-/: Contains folders for each subject, named sub-, and session information. 2. sub-/ses-: Contains folders for base and task. 3. sub-/ses-/ieeg/: Contains the raw iEEG data in .edf format for each subject. Each subject performed 60 working memory trials (ses-task). Each \*ieeg.edf file contains continuous iEEG data during the working memory task. Details about the channels are given in the corresponding .tsv file. We also provide the timing of stimulus onset and finger tapping in the ieeg.edf files by specifying the start and end sample of each trial (event code 101 marks task display, 401 a finger tap on the successful grid, and 501 a finger tap on the failed grid). Each subject also had baseline periods (ses-base). To establish a baseline, we selected 60 non-overlapping 2,000-ms time windows during periods of spontaneous, resting, eyes-open wakefulness immediately preceding the game sessions. ## Dataset Information | Dataset ID | `DS004770` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG on children during gameplay | | Author (year) | `Ueda2023` | | Canonical | — | | Importable as | `DS004770`, `Ueda2023` | | Year | 2023 | | Authors | Riyo Ueda, Kazuki Sakakura, Takumi Mitsuhashi, Masaki Sonoda, Ethan Firestone, Naoto Kuroda, Yu Kitazawa, Hiroshi Uda, Aimee F. Luat, Elizabeth L. 
Johnson, Noa Ofen, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004770.v1.0.0](https://doi.org/10.18112/openneuro.ds004770.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004770) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) | [Source URL](https://openneuro.org/datasets/ds004770) | ### Copy-paste BibTeX ```bibtex @dataset{ds004770, title = {iEEG on children during gameplay}, author = {Riyo Ueda and Kazuki Sakakura and Takumi Mitsuhashi and Masaki Sonoda and Ethan Firestone and Naoto Kuroda and Yu Kitazawa and Hiroshi Uda and Aimee F. Luat and Elizabeth L. Johnson and Noa Ofen and Eishi Asano}, doi = {10.18112/openneuro.ds004770.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004770.v1.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 22 - Tasks: 1 - Channels: 128 (14), 105 (2), 112 (2), 110 (2), 113 (2) - Sampling rate (Hz): 1000.0 - Duration (hours): 2.0000166666666668 - Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 8.7 GB - File count: 22 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004770.v1.0.0 - Source: openneuro - OpenNeuro: [ds004770](https://openneuro.org/datasets/ds004770) - NeMAR: [ds004770](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) ## API Reference Use the `DS004770` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004770(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during gameplay * **Study:** `ds004770` (OpenNeuro) * **Author (year):** `Ueda2023` * **Canonical:** — Also importable as: `DS004770`, `Ueda2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 10; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004770](https://openneuro.org/datasets/ds004770) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004770](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) DOI: [https://doi.org/10.18112/openneuro.ds004770.v1.0.0](https://doi.org/10.18112/openneuro.ds004770.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004770 >>> dataset = DS004770(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004770) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004771: eeg dataset, 61 subjects *EEG/ERP data from a Python Reading Task* Access recordings and metadata through EEGDash. **Citation:** Chu-Hsuan Kuo, Chantel S. Prat (2023). *EEG/ERP data from a Python Reading Task*. [10.18112/openneuro.ds004771.v1.0.0](https://doi.org/10.18112/openneuro.ds004771.v1.0.0) Modality: eeg Subjects: 61 Recordings: 61 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004771 dataset = DS004771(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004771(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004771( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004771, title = {EEG/ERP data from a Python Reading Task}, author = {Chu-Hsuan Kuo and Chantel S. 
Prat}, doi = {10.18112/openneuro.ds004771.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004771.v1.0.0}, } ``` ## About This Dataset EEG data for the Python reading task (acceptability judgments) described in [Kuo, C-H. and Prat, C.S. Programmers show distinct, language-like brain responses to violations in form and meaning when reading code], pending submission to Nature Communications. This study recruited 62 total subjects. One subject did not complete the EEG session, was removed from all analyses, and is not included in this dataset. The remaining 61 individuals’ EEG data are included. The participants info file indicates which individuals were included in the final analyses (per the artifact rejection criteria detailed in the article). The stimuli for this study were administered in Presentation; as such, the files are in formats compatible with this program. The provided code was used for processing the EEG data. All statistics were run in Jamovi, an R-based open-source statistics package; feel free to reach out for the original files if you are interested. ## Dataset Information | Dataset ID | `DS004771` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG/ERP data from a Python Reading Task | | Author (year) | `Kuo2023` | | Canonical | — | | Importable as | `DS004771`, `Kuo2023` | | Year | 2023 | | Authors | Chu-Hsuan Kuo, Chantel S.
Prat | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004771.v1.0.0](https://doi.org/10.18112/openneuro.ds004771.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004771) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) | [Source URL](https://openneuro.org/datasets/ds004771) | ### Copy-paste BibTeX ```bibtex @dataset{ds004771, title = {EEG/ERP data from a Python Reading Task}, author = {Chu-Hsuan Kuo and Chantel S. Prat}, doi = {10.18112/openneuro.ds004771.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004771.v1.0.0}, } ``` ## Technical Details - Subjects: 61 - Recordings: 61 - Tasks: 1 - Channels: 34 - Sampling rate (Hz): 256.0 - Duration (hours): 0.0221072048611111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.4 GB - File count: 61 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004771.v1.0.0 - Source: openneuro - OpenNeuro: [ds004771](https://openneuro.org/datasets/ds004771) - NeMAR: [ds004771](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) ## API Reference Use the `DS004771` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004771(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG/ERP data from a Python Reading Task * **Study:** `ds004771` (OpenNeuro) * **Author (year):** `Kuo2023` * **Canonical:** — Also importable as: `DS004771`, `Kuo2023`. Modality: `eeg`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004771](https://openneuro.org/datasets/ds004771) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004771](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) DOI: [https://doi.org/10.18112/openneuro.ds004771.v1.0.0](https://doi.org/10.18112/openneuro.ds004771.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004771 >>> dataset = DS004771(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004771) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004774: ieeg dataset, 14 subjects *Automatic Evoked Response Detection (ER-Detect) dataset* Access recordings and metadata through EEGDash. **Citation:** M.A. van den Boom, N.M. Gregg, G.O. Valencia, B.N. Lundstrom, K.J. Miller, D. van Blooijs, G.J.M. Huiskamp, F.S.S. Leijten, G.A. Worrell, D. Hermes (2023). *Automatic Evoked Response Detection (ER-Detect) dataset*. [10.18112/openneuro.ds004774.v1.0.0](https://doi.org/10.18112/openneuro.ds004774.v1.0.0) Modality: ieeg Subjects: 14 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004774 dataset = DS004774(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004774(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004774( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004774, title = {Automatic Evoked Response Detection (ER-Detect) dataset}, author = {M.A. van den Boom and N.M. Gregg and G.O. Valencia and B.N. Lundstrom and K.J. Miller and D. 
van Blooijs and G.J.M. Huiskamp and F.S.S. Leijten and G.A. Worrell and D. Hermes}, doi = {10.18112/openneuro.ds004774.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004774.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS004774` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Automatic Evoked Response Detection (ER-Detect) dataset | | Author (year) | `Boom2023` | | Canonical | `ERDetect`, `ER_Detect` | | Importable as | `DS004774`, `Boom2023`, `ERDetect`, `ER_Detect` | | Year | 2023 | | Authors | M.A. van den Boom, N.M. Gregg, G.O. Valencia, B.N. Lundstrom, K.J. Miller, D. van Blooijs, G.J.M. Huiskamp, F.S.S. Leijten, G.A. Worrell, D. Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004774.v1.0.0](https://doi.org/10.18112/openneuro.ds004774.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004774) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) | [Source URL](https://openneuro.org/datasets/ds004774) | ### Copy-paste BibTeX ```bibtex @dataset{ds004774, title = {Automatic Evoked Response Detection (ER-Detect) dataset}, author = {M.A. van den Boom and N.M. Gregg and G.O. Valencia and B.N. Lundstrom and K.J. Miller and D. van Blooijs and G.J.M. Huiskamp and F.S.S. Leijten and G.A. Worrell and D. 
Hermes}, doi = {10.18112/openneuro.ds004774.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004774.v1.0.0}, } ``` ## Technical Details - Subjects: 14 - Recordings: 14 - Tasks: 2 - Channels: 133 (6), 68 (2), 39 (2), 130, 97, 89, 65 - Sampling rate (Hz): 2048.0 - Duration (hours): 11.947781982421876 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 24.8 GB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004774.v1.0.0 - Source: openneuro - OpenNeuro: [ds004774](https://openneuro.org/datasets/ds004774) - NeMAR: [ds004774](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) ## API Reference Use the `DS004774` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Automatic Evoked Response Detection (ER-Detect) dataset * **Study:** `ds004774` (OpenNeuro) * **Author (year):** `Boom2023` * **Canonical:** `ERDetect`, `ER_Detect` Also importable as: `DS004774`, `Boom2023`, `ERDetect`, `ER_Detect`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 14; recordings: 14; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004774](https://openneuro.org/datasets/ds004774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004774](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) DOI: [https://doi.org/10.18112/openneuro.ds004774.v1.0.0](https://doi.org/10.18112/openneuro.ds004774.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004774 >>> dataset = DS004774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004774) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004784: eeg dataset, 1 subject *Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts* Access recordings and metadata through EEGDash. **Citation:** Ryan J. Downey, Daniel P. Ferris (2023). *Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts*.
[10.18112/openneuro.ds004784.v1.0.4](https://doi.org/10.18112/openneuro.ds004784.v1.0.4) Modality: eeg Subjects: 1 Recordings: 6 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004784 dataset = DS004784(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004784(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004784( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004784, title = {Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts}, author = {Ryan J. Downey and Daniel P. Ferris}, doi = {10.18112/openneuro.ds004784.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004784.v1.0.4}, } ``` ## About This Dataset This phantom experiment contains data collected from an electrically conductive head phantom. Six conditions were tested: brain only (no artifacts); brain with eye, jaw muscle, neck muscle, or motion artifacts present; and brain with all artifacts simultaneously present. Also included is a copy of the iCanClean plugin for EEGLAB and a set of other helpful scripts that enable parameter-sweep testing and validation with ground-truth knowledge of the brain signals of interest. Please see the derivatives folder and read the How To document within.
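Because the phantom provides ground-truth knowledge of the brain signals of interest, any cleaning pipeline can be scored directly against that reference. The dataset's own How To document and iCanClean scripts are EEGLAB/MATLAB-based; purely as a hypothetical illustration of the idea (the signals and names below are invented, not taken from ds004784), a cleaned channel can be compared with the ground truth via Pearson correlation:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy stand-ins for a ground-truth channel and a cleaned channel
# (illustrative numbers only).
ground_truth = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
cleaned = [0.1, 0.9, -0.1, -1.1, 0.0, 1.0, 0.1, -0.9]
print(round(pearson(ground_truth, cleaned), 3))  # ~0.993
```

A correlation near 1 means the cleaned signal still tracks the known brain signal; a parameter sweep would repeat this score across cleaning settings and conditions.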
A copy of iCanClean plugin is in derivatives->Scripts->plugins Please see reference for methodological details [https://doi.org/10.3390/s23198214](https://doi.org/10.3390/s23198214) - Ryan Downey (December 20, 2023) ## Dataset Information | Dataset ID | `DS004784` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts | | Author (year) | `Downey2023` | | Canonical | — | | Importable as | `DS004784`, `Downey2023` | | Year | 2023 | | Authors | Ryan J. Downey, Daniel P. Ferris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004784.v1.0.4](https://doi.org/10.18112/openneuro.ds004784.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004784) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) | [Source URL](https://openneuro.org/datasets/ds004784) | ### Copy-paste BibTeX ```bibtex @dataset{ds004784, title = {Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts}, author = {Ryan J. Downey and Daniel P. Ferris}, doi = {10.18112/openneuro.ds004784.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds004784.v1.0.4}, } ``` ## Technical Details - Subjects: 1 - Recordings: 6 - Tasks: 6 - Channels: 264 - Sampling rate (Hz): 512.0 - Duration (hours): 0.5408333333333334 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.0 GB - File count: 6 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004784.v1.0.4 - Source: openneuro - OpenNeuro: [ds004784](https://openneuro.org/datasets/ds004784) - NeMAR: [ds004784](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) ## API Reference Use the `DS004784` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004784(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts * **Study:** `ds004784` (OpenNeuro) * **Author (year):** `Downey2023` * **Canonical:** — Also importable as: `DS004784`, `Downey2023`. Modality: `eeg`. Subjects: 1; recordings: 6; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
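As the notes above say, `query` accepts MongoDB-style filters on allowed fields. Purely for intuition, here is a simplified pure-Python sketch of how equality and `$in` clauses match a metadata record — this is not EEGDash's actual implementation, and the `task` values below are invented for illustration:

```python
# Simplified sketch of MongoDB-style matching as used by the `query`
# parameter (e.g. {"subject": {"$in": [...]}}). Not EEGDash's code.
def matches(record, query):
    """True if `record` satisfies every clause in `query`.

    Supports plain equality and the `$in` operator only.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical metadata records (field values are made up).
records = [
    {"dataset": "ds004784", "subject": "01", "task": "brainOnly"},
    {"dataset": "ds004784", "subject": "01", "task": "allArtifacts"},
]
query = {"subject": {"$in": ["01", "02"]}, "task": "brainOnly"}
print([r["task"] for r in records if matches(r, query)])  # ['brainOnly']
```

All clauses are ANDed together, which mirrors how a user-supplied `query` is combined with the dataset filter.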
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004784](https://openneuro.org/datasets/ds004784) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004784](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) DOI: [https://doi.org/10.18112/openneuro.ds004784.v1.0.4](https://doi.org/10.18112/openneuro.ds004784.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004784 >>> dataset = DS004784(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004784) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004785: eeg dataset, 17 subjects *EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance* Access recordings and metadata through EEGDash. **Citation:** Scott Boebinger, Aiden Payne, Giovanni Martino, Kennedy Kerr, Jasmine Mirdamadi, J. Lucas McKay, Michael Borich, Lena Ting (2023). *EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance*. 
[10.18112/openneuro.ds004785.v1.0.1](https://doi.org/10.18112/openneuro.ds004785.v1.0.1) Modality: eeg Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004785 dataset = DS004785(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004785(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004785( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004785, title = {EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance}, author = {Scott Boebinger and Aiden Payne and Giovanni Martino and Kennedy Kerr and Jasmine Mirdamadi and J. 
Lucas McKay and Michael Borich and Lena Ting}, doi = {10.18112/openneuro.ds004785.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004785.v1.0.1}, } ``` ## About This Dataset Electroencephalography data for paper titled “Precise cortical contributions to feedback sensorimotor control during reactive balance” ## Dataset Information | Dataset ID | `DS004785` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance | | Author (year) | `Boebinger2023` | | Canonical | — | | Importable as | `DS004785`, `Boebinger2023` | | Year | 2023 | | Authors | Scott Boebinger, Aiden Payne, Giovanni Martino, Kennedy Kerr, Jasmine Mirdamadi, J. Lucas McKay, Michael Borich, Lena Ting | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004785.v1.0.1](https://doi.org/10.18112/openneuro.ds004785.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004785) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) | [Source URL](https://openneuro.org/datasets/ds004785) | ### Copy-paste BibTeX ```bibtex @dataset{ds004785, title = {EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance}, author = {Scott Boebinger and Aiden Payne and Giovanni Martino and Kennedy Kerr and Jasmine Mirdamadi and J. 
Lucas McKay and Michael Borich and Lena Ting}, doi = {10.18112/openneuro.ds004785.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004785.v1.0.1}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.0188888888888888 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 351.2 MB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004785.v1.0.1 - Source: openneuro - OpenNeuro: [ds004785](https://openneuro.org/datasets/ds004785) - NeMAR: [ds004785](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) ## API Reference Use the `DS004785` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance * **Study:** `ds004785` (OpenNeuro) * **Author (year):** `Boebinger2023` * **Canonical:** — Also importable as: `DS004785`, `Boebinger2023`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004785](https://openneuro.org/datasets/ds004785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004785](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) DOI: [https://doi.org/10.18112/openneuro.ds004785.v1.0.1](https://doi.org/10.18112/openneuro.ds004785.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004785 >>> dataset = DS004785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004785) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004789: ieeg dataset, 273 subjects *Delayed Free Recall of Word Lists* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2023). *Delayed Free Recall of Word Lists*. 
[10.18112/openneuro.ds004789.v3.1.0](https://doi.org/10.18112/openneuro.ds004789.v3.1.0) Modality: ieeg Subjects: 273 Recordings: 983 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004789 dataset = DS004789(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004789(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004789( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004789, title = {Delayed Free Recall of Word Lists}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds004789.v3.1.0}, url = {https://doi.org/10.18112/openneuro.ds004789.v3.1.0}, } ``` ## About This Dataset **Delayed Free Recall of Word Lists** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a delayed free recall task. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. 
The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available, along with brain region annotations. \* Recordings were made on multiple different systems, so the data have been scaled so that all voltage values are in volts (V). **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS004789` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Delayed Free Recall of Word Lists | | Author (year) | `Herrema2023_Delayed_Free_Recall` | | Canonical | — | | Importable as | `DS004789`, `Herrema2023_Delayed_Free_Recall` | | Year | 2023 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004789.v3.1.0](https://doi.org/10.18112/openneuro.ds004789.v3.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004789) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) | [Source URL](https://openneuro.org/datasets/ds004789) | ### Copy-paste BibTeX ```bibtex @dataset{ds004789, title = {Delayed Free Recall of Word Lists}, author = {Haydn G. Herrema and Michael J.
Kahana}, doi = {10.18112/openneuro.ds004789.v3.1.0}, url = {https://doi.org/10.18112/openneuro.ds004789.v3.1.0}, } ``` ## Technical Details - Subjects: 273 - Recordings: 983 - Tasks: 1 - Channels: 126 (87), 108 (32), 112 (32), 110 (31), 88 (29), 120 (28), 128 (28), 127 (24), 116 (22), 124 (21), 196 (20), 109 (19), 111 (18), 106 (17), 100 (17), 113 (16), 125 (15), 86 (14), 107 (13), 64 (13), 60 (13), 158 (12), 118 (12), 68 (11), 104 (11), 178 (10), 76 (10), 180 (10), 122 (10), 121 (10), 102 (9), 80 (9), 142 (9), 56 (9), 153 (8), 97 (8), 140 (8), 75 (8), 188 (7), 114 (7), 62 (7), 85 (7), 146 (7), 172 (7), 130 (7), 148 (7), 90 (7), 83 (7), 92 (6), 72 (6), 162 (6), 168 (6), 139 (6), 173 (6), 70 (6), 134 (6), 78 (6), 96 (5), 74 (5), 206 (5), 93 (5), 165 (5), 141 (5), 160 (5), 84 (4), 161 (4), 203 (4), 119 (4), 136 (4), 177 (4), 224 (4), 54 (4), 200 (4), 46 (4), 123 (4), 208 (3), 186 (3), 50 (3), 176 (3), 37 (3), 212 (3), 138 (3), 59 (3), 94 (3), 99 (3), 154 (3), 103 (3), 152 (3), 166 (3), 133 (3), 69 (3), 151 (2), 170 (2), 95 (2), 58 (2), 55 (2), 184 (2), 218 (2), 213 (2), 36 (2), 156 (2), 52 (2), 67 (2), 179 (2), 87 (2), 182 (2), 105 (2), 149 (2), 43 (2), 26 (2), 77, 53, 101, 190, 16, 129, 98, 202, 14, 209, 216, 48, 195, 175, 229, 73, 65, 215, 131, 38, 63 - Sampling rate (Hz): 1000.0 (785), 500.0 (119), 1600.0 (32), 999.0 (19), 499.7071 (16), 2000.0 (6), 1024.0 (4), 512.0 (2) - Duration (hours): 776.4940177977268 - Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 576.3 GB - File count: 983 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004789.v3.1.0 - Source: openneuro - OpenNeuro: [ds004789](https://openneuro.org/datasets/ds004789) - NeMAR: [ds004789](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) ## API Reference Use the `DS004789` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004789(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Free Recall of Word Lists * **Study:** `ds004789` (OpenNeuro) * **Author (year):** `Herrema2023_Delayed_Free_Recall` * **Canonical:** — Also importable as: `DS004789`, `Herrema2023_Delayed_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 273; recordings: 983; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
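The dataset notes above say the monopolar recordings should always be re-referenced before analysis, and that bipolar channels are defined by a paired scheme in the accompanying bipolar channels tables. A minimal numpy sketch of both operations on toy data (the array shape and channel pairs here are made up; on a loaded MNE `Raw` object you would typically reach for `raw.set_eeg_reference` and `mne.set_bipolar_reference` instead):

```python
import numpy as np

def common_average_reference(data: np.ndarray) -> np.ndarray:
    """Subtract the instantaneous mean across channels (rows) from every channel."""
    return data - data.mean(axis=0, keepdims=True)

def bipolar_derivations(data: np.ndarray, pairs: list[tuple[int, int]]) -> np.ndarray:
    """Build bipolar channels as anode-minus-cathode differences of monopolar channels."""
    return np.stack([data[anode] - data[cathode] for anode, cathode in pairs])

rng = np.random.default_rng(0)
monopolar = rng.normal(size=(4, 1000))  # 4 toy channels x 1000 samples

car = common_average_reference(monopolar)
bipolar = bipolar_derivations(monopolar, pairs=[(0, 1), (2, 3)])

print(np.allclose(car.mean(axis=0), 0.0))  # True: channel mean removed at every sample
print(bipolar.shape)                       # (2, 1000): one row per electrode pair
```

For the real recordings, the electrode pairs would come from the per-subject bipolar channels tables rather than being hard-coded.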
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004789](https://openneuro.org/datasets/ds004789) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004789](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) DOI: [https://doi.org/10.18112/openneuro.ds004789.v3.1.0](https://doi.org/10.18112/openneuro.ds004789.v3.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004789 >>> dataset = DS004789(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004789) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004796: eeg dataset, 79 subjects *A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database* Access recordings and metadata through EEGDash. **Citation:** Dzianok Patrycja, Kublik Ewa (2023). *A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database*. 
[10.18112/openneuro.ds004796.v1.1.0](https://doi.org/10.18112/openneuro.ds004796.v1.1.0) Modality: eeg Subjects: 79 Recordings: 235 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004796 dataset = DS004796(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004796(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004796( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004796, title = {A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database}, author = {Dzianok Patrycja and Kublik Ewa}, doi = {10.18112/openneuro.ds004796.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004796.v1.1.0}, } ``` ## About This Dataset **A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database** IMPORTANT NOTE: The dataset contains no errors (BIDS-1). The numerous warnings currently displayed are a result of OpenNeuro updating its validator to BIDS-2. The OpenNeuro team is actively working on refining the validator to display only meaningful warnings (more information on OpenNeuro GitHub page). At this time, as dataset owners, we are unable to take any action to resolve these warnings. 
**Data Descriptor:** \* doi.org/10.1038/s41597-024-03106-5 ([https://www.nature.com/articles/s41597-024-03106-5](https://www.nature.com/articles/s41597-024-03106-5)) **Please cite the following reference if you use these data:** \* Dzianok P, Kublik E. PEARL-Neuro Database: EEG, fMRI, health and lifestyle data of middle-aged people at risk of dementia. Sci Data 11, 276 (2024). DOI: [https://doi.org/10.1038/s41597-024-03106-5](https://doi.org/10.1038/s41597-024-03106-5) **Publications related to this dataset, reporting & additional data** \* [https://github.com/PTDZ/PEARL-Neuro](https://github.com/PTDZ/PEARL-Neuro) — updates, additional study details, and list of research outputs related to this dataset. IMPORTANT: Please inform us of any research outputs related to the shared data, including publications, preprints, posters, abstracts, talks, and any commercial usage. This is crucial for ensuring transparency and informing users about the analyses already performed on this dataset. Additionally, such information can foster collaboration. **Description of the database:** Full cohort: 192 healthy middle-aged (50-63) individuals, balanced female and male ratio.
\* Genetic data (N = 192): > * Apolipoprotein E (APOE) > * Phosphatidylinositol binding clathrin assembly protein (PICALM) \* Basic demographic and health data \* Psychometric data (memory, intelligence, mood, personality, stress coping strategies) Cohort subgroup: 79 healthy middle-aged (50-63) individuals, balanced female and male ratio. \* Neuroimaging data: > * Functional data — electroencephalography (EEG) and functional magnetic resonance imaging (fMRI): > \* Resting-state protocol (with two conditions: eyes open and eyes closed) > \* Cognitive tasks: multi-source interference task (MSIT) and Sternberg’s memory task \* Blood tests data (blood count, lipid profile, HSV virus) **Release history:** \* 10/2023: Initial release \* 02/2024: Public release \* 06/2025, version: 1.1.0 — marker corrections in .tsv and .vmrk EEG resting-state files During EEG data acquisition, technical issues led to missing starting markers for the eyes-open and/or eyes-closed conditions in the resting-state protocol for some participants. As described in the Data Note, the S1 marker indicates the end of the instruction phase—when the participant presses “Enter” to begin a condition (either eyes-open or eyes-closed). Consequently, the first S1 marker coincides with the S2 marker (start of the eyes-open condition), and the second S1 marker aligns with the S4 marker (start of the eyes-closed condition). To ensure consistency with Table 5 in the released Data Note, the missing markers were added to the relevant files (.tsv and .vmrk) for the following participants: 08, 09, 11, 12, 14, 15, 21, 22, 25, 35, 42, 54, 62, 64, 65, 67, 70, 71, 73, 75, and 79. For participants 19 and 30, the S11 marker (indicating the end of the task and accompanying sound effect) was not saved, resulting in a slightly shorter eyes-closed recording duration (by approximately 30–60 seconds).
For participant 34, the S11 marker was also not recorded because he/she forgot to press “Enter” to mark the start of the eyes-closed condition, pressing it only after the condition had ended. However, he/she followed the instructions and kept his/her eyes closed during the condition. Therefore, the relevant markers (S1/S4) were manually adjusted to reflect the correct start time. ## Dataset Information | Dataset ID | `DS004796` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database | | Author (year) | `Patrycja2023_Polish` | | Canonical | `PEARLNeuro` | | Importable as | `DS004796`, `Patrycja2023_Polish`, `PEARLNeuro` | | Year | 2023 | | Authors | Dzianok Patrycja, Kublik Ewa | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004796.v1.1.0](https://doi.org/10.18112/openneuro.ds004796.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004796) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) | [Source URL](https://openneuro.org/datasets/ds004796) | ### Copy-paste BibTeX ```bibtex @dataset{ds004796, title = {A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database}, author = {Dzianok Patrycja and Kublik Ewa}, doi = {10.18112/openneuro.ds004796.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004796.v1.1.0}, } ``` ## Technical Details - Subjects: 79 - Recordings: 235 - Tasks: 3 - Channels: 127 - Sampling rate (Hz): 1000.0 - Duration (hours): 43.99488333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 240.2 GB - File count: 235 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004796.v1.1.0 - Source: openneuro - OpenNeuro: 
[ds004796](https://openneuro.org/datasets/ds004796) - NeMAR: [ds004796](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) ## API Reference Use the `DS004796` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004796(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database * **Study:** `ds004796` (OpenNeuro) * **Author (year):** `Patrycja2023_Polish` * **Canonical:** `PEARLNeuro` Also importable as: `DS004796`, `Patrycja2023_Polish`, `PEARLNeuro`. Modality: `eeg`. Subjects: 79; recordings: 235; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
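The release notes above describe the resting-state marker scheme: S2 starts the eyes-open condition, S4 starts the eyes-closed condition, and S11 marks the end of a condition. Turning those markers into condition time windows is simple bookkeeping; a minimal sketch on a hypothetical event list (real recordings ship as BrainVision files whose markers you would read via MNE annotations, not as tuples, and the times below are invented):

```python
# Toy events shaped like the README's marker scheme: (marker_name, time_in_seconds).
events = [("S1", 10.0), ("S2", 10.0), ("S11", 310.0),
          ("S1", 320.0), ("S4", 320.0), ("S11", 620.0)]

def condition_windows(events):
    """Pair each condition-start marker (S2/S4) with the next S11 end marker."""
    windows = {}
    start = None
    for marker, t in events:
        if marker in ("S2", "S4"):
            start = ("eyes_open" if marker == "S2" else "eyes_closed", t)
        elif marker == "S11" and start is not None:
            name, t0 = start
            windows[name] = (t0, t)
            start = None
    return windows

print(condition_windows(events))
# {'eyes_open': (10.0, 310.0), 'eyes_closed': (320.0, 620.0)}
```

Note the caveat in the release notes: for a few participants S11 was not saved, so a robust version would also handle a missing end marker (e.g. by falling back to the end of the recording).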
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004796](https://openneuro.org/datasets/ds004796) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004796](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) DOI: [https://doi.org/10.18112/openneuro.ds004796.v1.1.0](https://doi.org/10.18112/openneuro.ds004796.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004796 >>> dataset = DS004796(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004796) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004802: eeg dataset, 39 subjects *Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness* Access recordings and metadata through EEGDash. **Citation:** Joe Bathelt, Marte Otten (2023). *Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness*. 
[10.18112/openneuro.ds004802.v1.0.0](https://doi.org/10.18112/openneuro.ds004802.v1.0.0) Modality: eeg Subjects: 39 Recordings: 79 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004802 dataset = DS004802(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004802(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004802( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004802, title = {Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness}, author = {Joe Bathelt and Marte Otten}, doi = {10.18112/openneuro.ds004802.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004802.v1.0.0}, } ``` ## About This Dataset Pilot EEG data for the registered report ‘Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness’. 
For further details, please see [https://osf.io/c2svz/?view_only=4ee744ac88c74f41a4d955824a69284b](https://osf.io/c2svz/?view_only=4ee744ac88c74f41a4d955824a69284b) ## Dataset Information | Dataset ID | `DS004802` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness | | Author (year) | `Bathelt2023` | | Canonical | — | | Importable as | `DS004802`, `Bathelt2023` | | Year | 2023 | | Authors | Joe Bathelt, Marte Otten | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004802.v1.0.0](https://doi.org/10.18112/openneuro.ds004802.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004802) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) | [Source URL](https://openneuro.org/datasets/ds004802) | ### Copy-paste BibTeX ```bibtex @dataset{ds004802, title = {Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness}, author = {Joe Bathelt and Marte Otten}, doi = {10.18112/openneuro.ds004802.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004802.v1.0.0}, } ``` ## Technical Details - Subjects: 39 - Recordings: 79 - Tasks: 1 - Channels: 69 - Sampling rate (Hz): 512.0 (27), 2048.0 (11) - Duration (hours): 40.855 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.1 GB - File count: 79 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004802.v1.0.0 - Source: openneuro - OpenNeuro: [ds004802](https://openneuro.org/datasets/ds004802) - NeMAR: [ds004802](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) ## API Reference Use the `DS004802` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness * **Study:** `ds004802` (OpenNeuro) * **Author (year):** `Bathelt2023` * **Canonical:** — Also importable as: `DS004802`, `Bathelt2023`. Modality: `eeg`. Subjects: 39; recordings: 79; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
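The Technical Details above list two sampling rates for this dataset (512.0 Hz and 2048.0 Hz), so pooled analyses need resampling to a common rate. A minimal sketch with `scipy.signal.resample_poly` on a made-up one-second signal (when working with loaded recordings, MNE's `Raw.resample` is the usual route):

```python
import numpy as np
from scipy.signal import resample_poly

fs_high, fs_low = 2048, 512            # the two rates listed for this dataset
t = np.arange(fs_high) / fs_high       # one second of toy signal at 2048 Hz
x = np.sin(2 * np.pi * 10 * t)         # 10 Hz sine

# Polyphase resampling with anti-alias filtering: 2048 -> 512 Hz is a clean 1:4 ratio.
y = resample_poly(x, up=1, down=4)

print(len(x), len(y))  # 2048 512
```

Polyphase resampling low-pass filters before decimating, which naive slicing (`x[::4]`) would not, so high-frequency content cannot alias into the downsampled signal.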
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004802](https://openneuro.org/datasets/ds004802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004802](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) DOI: [https://doi.org/10.18112/openneuro.ds004802.v1.0.0](https://doi.org/10.18112/openneuro.ds004802.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004802 >>> dataset = DS004802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004802) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004809: ieeg dataset, 252 subjects *Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2023). *Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories*. 
[10.18112/openneuro.ds004809.v2.2.0](https://doi.org/10.18112/openneuro.ds004809.v2.2.0) Modality: ieeg Subjects: 252 Recordings: 889 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004809 dataset = DS004809(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004809(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004809( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004809, title = {Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds004809.v2.2.0}, url = {https://doi.org/10.18112/openneuro.ds004809.v2.2.0}, } ``` ## About This Dataset **Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a categorized free recall task. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. Unique to this paradigm is the semantic construction of the word lists. 
Each word comes from one of 25 semantic categories, and each list of 12 items contains 6 pairs of same-category words from 3 different categories. This means that each list has 4 words from 3 semantic categories, and in each half of the list there will be 1 pair of words from each category. For example, if a list contains words from categories A, B, and C, a possible list construction would be: **A1 - A2 - B1 - B2 - C1 - C2 - A3 - A4 - C3 - C4 - B3 - B4** **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available, along with brain region annotations. \* Recordings were made on multiple different systems, so the data have been scaled so that all voltage values are in volts (V). **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS004809` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories | | Author (year) | `Herrema2023_Categorized_Free_Recall` | | Canonical | `catFR_Categorized_Free_Recall`, `CatFR` | | Importable as | `DS004809`, `Herrema2023_Categorized_Free_Recall`, `catFR_Categorized_Free_Recall`, `CatFR` | | Year | 2023 | | Authors | Haydn G. Herrema, Michael J.
Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004809.v2.2.0](https://doi.org/10.18112/openneuro.ds004809.v2.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004809) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) | [Source URL](https://openneuro.org/datasets/ds004809) | ### Copy-paste BibTeX ```bibtex @dataset{ds004809, title = {Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds004809.v2.2.0}, url = {https://doi.org/10.18112/openneuro.ds004809.v2.2.0}, } ``` ## Technical Details - Subjects: 252 - Recordings: 889 - Tasks: 1 - Channels: 126 (70), 124 (30), 108 (26), 125 (20), 128 (19), 139 (19), 88 (17), 127 (16), 120 (16), 131 (15), 145 (15), 148 (15), 116 (15), 64 (14), 196 (14), 112 (14), 110 (13), 142 (13), 179 (13), 118 (12), 155 (12), 133 (11), 114 (11), 121 (11), 251 (11), 159 (11), 90 (11), 178 (10), 113 (10), 186 (10), 94 (10), 92 (10), 158 (9), 115 (9), 105 (9), 152 (9), 198 (9), 200 (8), 183 (8), 156 (8), 247 (8), 104 (8), 106 (7), 166 (7), 122 (7), 98 (7), 68 (7), 212 (7), 240 (6), 241 (6), 100 (6), 109 (6), 76 (6), 78 (6), 184 (6), 150 (6), 154 (5), 56 (5), 208 (5), 165 (5), 168 (5), 250 (5), 224 (4), 141 (4), 189 (4), 164 (4), 192 (4), 180 (4), 97 (4), 72 (4), 70 (4), 89 (4), 238 (4), 185 (4), 173 (4), 219 (4), 175 (4), 134 (4), 188 (4), 83 (3), 160 (3), 167 (3), 140 (3), 209 (3), 95 (3), 220 (3), 130 (3), 162 (3), 46 (3), 60 (3), 229 (3), 207 (3), 123 (2), 119 (2), 169 (2), 203 (2), 161 (2), 84 (2), 177 (2), 151 (2), 172 (2), 93 (2), 53 (2), 96 (2), 132 (2), 67 (2), 176 (2), 193 (2), 187 (2), 80, 146, 14, 136, 52, 16, 86, 239, 75, 182, 102, 85, 63, 206, 50, 213, 111, 99, 62, 37, 163, 243, 36, 107, 153, 143, 26, 202, 218 - Sampling rate (Hz): 1000.0 (766), 500.0 (93), 1600.0 (10), 999.0 (8), 1023.999 (6), 1024.0 (4), 499.7071 (2) - Duration (hours): 575.3024328526149 
- Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 477.2 GB - File count: 889 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004809.v2.2.0 - Source: openneuro - OpenNeuro: [ds004809](https://openneuro.org/datasets/ds004809) - NeMAR: [ds004809](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) ## API Reference Use the `DS004809` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004809(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories * **Study:** `ds004809` (OpenNeuro) * **Author (year):** `Herrema2023_Categorized_Free_Recall` * **Canonical:** `catFR_Categorized_Free_Recall`, `CatFR` Also importable as: `DS004809`, `Herrema2023_Categorized_Free_Recall`, `catFR_Categorized_Free_Recall`, `CatFR`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 252; recordings: 889; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004809](https://openneuro.org/datasets/ds004809) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004809](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) DOI: [https://doi.org/10.18112/openneuro.ds004809.v2.2.0](https://doi.org/10.18112/openneuro.ds004809.v2.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004809 >>> dataset = DS004809(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004809) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004816: eeg dataset, 20 subjects *EEG-attention-rsvp-exp1* Access recordings and metadata through EEGDash. **Citation:** Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas (2023). *EEG-attention-rsvp-exp1*. 
[10.18112/openneuro.ds004816.v1.0.0](https://doi.org/10.18112/openneuro.ds004816.v1.0.0) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004816 dataset = DS004816(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004816(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004816( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004816, title = {EEG-attention-rsvp-exp1}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004816.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004816.v1.0.0}, } ``` ## About This Dataset EEG data for Grootswagers et al. (2021), experiment 1 (small letters on big objects) Grootswagers T., Robinson A.K., Shatek S.M., Carlson T.A. (2021). The neural dynamics underlying prioritisation of task-relevant information.
Neurons, Behaviour, Data Analysis, and Theory, 5(1) [https://doi.org/10.51628/001c.19129](https://doi.org/10.51628/001c.19129) See also [https://osf.io/7zhwp/](https://osf.io/7zhwp/) and [https://openneuro.org/datasets/ds004817](https://openneuro.org/datasets/ds004817) ## Dataset Information | Dataset ID | `DS004816` | |----------------|------------------------------| | Title | EEG-attention-rsvp-exp1 | | Author (year) | `Grootswagers2023_E1` | | Canonical | — | | Importable as | `DS004816`, `Grootswagers2023_E1` | | Year | 2023 | | Authors | Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004816.v1.0.0](https://doi.org/10.18112/openneuro.ds004816.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004816) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) | [Source URL](https://openneuro.org/datasets/ds004816) | ### Copy-paste BibTeX ```bibtex @dataset{ds004816, title = {EEG-attention-rsvp-exp1}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004816.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004816.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 10.71 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.1 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004816.v1.0.0 - Source: openneuro - OpenNeuro: [ds004816](https://openneuro.org/datasets/ds004816) - NeMAR: [ds004816](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) ## API Reference Use the `DS004816` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp1 * **Study:** `ds004816` (OpenNeuro) * **Author (year):** `Grootswagers2023_E1` * **Canonical:** — Also importable as: `DS004816`, `Grootswagers2023_E1`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
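The Notes above mention MongoDB-style filters that are ANDed with the dataset selection. As a rough local sketch of how equality and `$in` clauses combine — the real filtering happens in EEGDash's metadata backend, and `matches` here is a hypothetical helper written only to illustrate the semantics:

```python
# Local illustration of MongoDB-style matching semantics (equality and "$in").
# EEGDash evaluates such filters server-side; this sketch only mimics them.

def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every clause in `query`."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds004816", "subject": "01", "task": "rsvp"},
    {"dataset": "ds004816", "subject": "03", "task": "rsvp"},
]
# The dataset filter is ANDed with the user query, as the class does internally.
query = {"dataset": "ds004816", "subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```

Note that because the dataset filter is merged in automatically, user queries must not contain the key `dataset` themselves.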
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004816](https://openneuro.org/datasets/ds004816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004816](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) DOI: [https://doi.org/10.18112/openneuro.ds004816.v1.0.0](https://doi.org/10.18112/openneuro.ds004816.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004816 >>> dataset = DS004816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004816) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004817: eeg dataset, 20 subjects *EEG-attention-rsvp-exp2* Access recordings and metadata through EEGDash. **Citation:** Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas (2023). *EEG-attention-rsvp-exp2*. 
[10.18112/openneuro.ds004817.v1.0.0](https://doi.org/10.18112/openneuro.ds004817.v1.0.0) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004817 dataset = DS004817(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004817(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004817( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004817, title = {EEG-attention-rsvp-exp2}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004817.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004817.v1.0.0}, } ``` ## About This Dataset EEG data for Grootswagers et al. (2021), experiment 2 (small objects on big letters) Grootswagers T., Robinson A.K., Shatek S.M., Carlson T.A. (2021). The neural dynamics underlying prioritisation of task-relevant information.
Neurons, Behaviour, Data Analysis, and Theory, 5(1) [https://doi.org/10.51628/001c.19129](https://doi.org/10.51628/001c.19129) See also [https://osf.io/7zhwp/](https://osf.io/7zhwp/) and [https://openneuro.org/datasets/ds004816](https://openneuro.org/datasets/ds004816) ## Dataset Information | Dataset ID | `DS004817` | |----------------|------------------------------| | Title | EEG-attention-rsvp-exp2 | | Author (year) | `Grootswagers2023_E2` | | Canonical | — | | Importable as | `DS004817`, `Grootswagers2023_E2` | | Year | 2023 | | Authors | Grootswagers, Tijl, Robinson, Amanda, Shatek, Sofia, Carlson, Thomas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004817.v1.0.0](https://doi.org/10.18112/openneuro.ds004817.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004817) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) | [Source URL](https://openneuro.org/datasets/ds004817) | ### Copy-paste BibTeX ```bibtex @dataset{ds004817, title = {EEG-attention-rsvp-exp2}, author = {Grootswagers, Tijl and Robinson, Amanda and Shatek, Sofia and Carlson, Thomas}, doi = {10.18112/openneuro.ds004817.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004817.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.90 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 10.1 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004817.v1.0.0 - Source: openneuro - OpenNeuro: [ds004817](https://openneuro.org/datasets/ds004817) - NeMAR: [ds004817](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) ## API Reference Use the `DS004817` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp2 * **Study:** `ds004817` (OpenNeuro) * **Author (year):** `Grootswagers2023_E2` * **Canonical:** — Also importable as: `DS004817`, `Grootswagers2023_E2`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
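Since each item in the dataset is a recording tied to one of the 20 subjects, evaluation is typically done with subject-wise splits so that recordings from one subject never appear in both train and test sets. A generic, self-contained sketch — `split_subjects` is a hypothetical helper, not part of the EEGDash API:

```python
# Generic subject-wise train/test split over subject labels, so recordings
# from one subject never leak across the split. Not an EEGDash function.
import random

def split_subjects(subjects, test_fraction=0.2, seed=0):
    """Shuffle subject labels deterministically and split them."""
    subjects = sorted(subjects)
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    return subjects[n_test:], subjects[:n_test]

subjects = [f"{i:02d}" for i in range(1, 21)]  # "01" .. "20"
train, test = split_subjects(subjects)
print(len(train), len(test))  # 16 4
```

The resulting subject lists could then be passed to separate `query={"subject": {"$in": ...}}` selections to build disjoint train and test datasets.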
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004817](https://openneuro.org/datasets/ds004817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004817](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) DOI: [https://doi.org/10.18112/openneuro.ds004817.v1.0.0](https://doi.org/10.18112/openneuro.ds004817.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004817 >>> dataset = DS004817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004817) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004819: ieeg dataset, 1 subject *Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain* Access recordings and metadata through EEGDash. **Citation:** Keundong Lee, Angelique C. Paulk, Yun Goo Ro, Daniel R. Cleary, Karen J. Tonsfeldt, Yoav Kfir, John Pezaris, Youngbin Tchoe, Jihwan Lee, Andrew M. Bourhis, Ritwik Vatsyayan, Joel R. Martin, Samantha M. Russman, Jimmy C. Yang, Amy Baohan, R. Mark Richardson, Ziv M. Williams, Shelley I. Fried, Hoi Sang U, Ahmed M. Raslan, Sharona Ben-Haim, Eric Halgren, Sydney S. Cash, Shadi. A. Dayeh (2023).
*Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain*. [10.18112/openneuro.ds004819.v1.0.0](https://doi.org/10.18112/openneuro.ds004819.v1.0.0) Modality: ieeg Subjects: 1 Recordings: 8 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004819 dataset = DS004819(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004819(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004819( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004819, title = {Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain}, author = {Keundong Lee and Angelique C. Paulk and Yun Goo Ro and Daniel R. Cleary and Karen J. Tonsfeldt and Yoav Kfir and John Pezaris and Youngbin Tchoe and Jihwan Lee and Andrew M. Bourhis and Ritwik Vatsyayan and Joel R. Martin and Samantha M. Russman and Jimmy C. Yang and Amy Baohan and R. Mark Richardson and Ziv M. Williams and Shelley I. Fried and Hoi Sang U and Ahmed M. Raslan and Sharona Ben-Haim and Eric Halgren and Sydney S. Cash and Shadi. A. Dayeh}, doi = {10.18112/openneuro.ds004819.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004819.v1.0.0}, } ``` ## About This Dataset This project contains the data for the publication Lee et al., “Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain”.
It contains the raw and preprocessed (epoched) intracranial EEG (iEEG) data files for multiple species to test novel high resolution micro-stereo-electrodes for recording neural activity in the brain. The data set involves the use of direct electrical stimulation to examine effects of stimulation in the brain. Data are in the iEEG-BIDS format with binary files and channel maps included in the related derivatives folder. ## Dataset Information | Dataset ID | `DS004819` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain | | Author (year) | `Lee2023` | | Canonical | — | | Importable as | `DS004819`, `Lee2023` | | Year | 2023 | | Authors | Keundong Lee, Angelique C. Paulk, Yun Goo Ro, Daniel R. Cleary, Karen J. Tonsfeldt, Yoav Kfir, John Pezaris, Youngbin Tchoe, Jihwan Lee, Andrew M. Bourhis, Ritwik Vatsyayan, Joel R. Martin, Samantha M. Russman, Jimmy C. Yang, Amy Baohan, R. Mark Richardson, Ziv M. Williams, Shelley I. Fried, Hoi Sang U, Ahmed M. Raslan, Sharona Ben-Haim, Eric Halgren, Sydney S. Cash, Shadi. A. 
Dayeh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004819.v1.0.0](https://doi.org/10.18112/openneuro.ds004819.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004819) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) | [Source URL](https://openneuro.org/datasets/ds004819) | ### Copy-paste BibTeX ```bibtex @dataset{ds004819, title = {Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain}, author = {Keundong Lee and Angelique C. Paulk and Yun Goo Ro and Daniel R. Cleary and Karen J. Tonsfeldt and Yoav Kfir and John Pezaris and Youngbin Tchoe and Jihwan Lee and Andrew M. Bourhis and Ritwik Vatsyayan and Joel R. Martin and Samantha M. Russman and Jimmy C. Yang and Amy Baohan and R. Mark Richardson and Ziv M. Williams and Shelley I. Fried and Hoi Sang U and Ahmed M. Raslan and Sharona Ben-Haim and Eric Halgren and Sydney S. Cash and Shadi. A. Dayeh}, doi = {10.18112/openneuro.ds004819.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004819.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 8 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 30000.0 - Duration (hours): 0.05 - Pathology: Surgery - Modality: Other - Type: Clinical/Intervention - Size on disk: 688.7 MB - File count: 8 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004819.v1.0.0 - Source: openneuro - OpenNeuro: [ds004819](https://openneuro.org/datasets/ds004819) - NeMAR: [ds004819](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) ## API Reference Use the `DS004819` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004819(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain * **Study:** `ds004819` (OpenNeuro) * **Author (year):** `Lee2023` * **Canonical:** — Also importable as: `DS004819`, `Lee2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 1; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
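The technical summary for this dataset lists a 30 kHz sampling rate, which is far higher than most EEG analyses need. As a naive NumPy sketch of reducing the rate by slicing on fabricated data — a real pipeline should low-pass filter first to avoid aliasing, for example via MNE's `raw.resample` or `scipy.signal.decimate`:

```python
# Naive downsampling sketch on fake data (no anti-aliasing filter applied;
# use raw.resample or scipy.signal.decimate on real recordings).
import numpy as np

fs = 30_000        # original sampling rate (Hz), as listed for this dataset
target_fs = 1_000  # desired rate (Hz)
factor = fs // target_fs

rng = np.random.default_rng(0)
data = rng.standard_normal((64, fs * 2))  # 64 channels, 2 s of fake signal
downsampled = data[:, ::factor]
print(downsampled.shape)  # (64, 2000)
```

At 30 kHz, even this short dataset (~3 minutes total) produces large arrays, which is why downsampling early usually pays off.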
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004819](https://openneuro.org/datasets/ds004819) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004819](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) DOI: [https://doi.org/10.18112/openneuro.ds004819.v1.0.0](https://doi.org/10.18112/openneuro.ds004819.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004819 >>> dataset = DS004819(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004819) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004830: fnirs dataset, 12 subjects *Spatial Attention Decoding using fNIRS During Complex Scene Analysis* Access recordings and metadata through EEGDash. **Citation:** Matthew Ning, Sudan Duwadi, Meryem A. Yucel, Alexander Von Luhmann, David A. Boas, Kamal Sen (2023). *Spatial Attention Decoding using fNIRS During Complex Scene Analysis*. 
[10.18112/openneuro.ds004830.v2.0.0](https://doi.org/10.18112/openneuro.ds004830.v2.0.0) Modality: fnirs Subjects: 12 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004830 dataset = DS004830(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004830(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004830( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004830, title = {Spatial Attention Decoding using fNIRS During Complex Scene Analysis}, author = {Matthew Ning and Sudan Duwadi and Meryem A. Yucel and Alexander Von Luhmann and David A. Boas and Kamal Sen}, doi = {10.18112/openneuro.ds004830.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004830.v2.0.0}, } ``` ## About This Dataset This dataset accompanies a published paper, available at [https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1329086/full](https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1329086/full). Please cite the paper if you use this dataset in your publications.
## Dataset Information | Dataset ID | `DS004830` | |----------------|------------------------------| | Title | Spatial Attention Decoding using fNIRS During Complex Scene Analysis | | Author (year) | `Ning2023` | | Canonical | `Ning2024` | | Importable as | `DS004830`, `Ning2023`, `Ning2024` | | Year | 2023 | | Authors | Matthew Ning, Sudan Duwadi, Meryem A. Yucel, Alexander Von Luhmann, David A. Boas, Kamal Sen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004830.v2.0.0](https://doi.org/10.18112/openneuro.ds004830.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004830) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) | [Source URL](https://openneuro.org/datasets/ds004830) | ### Copy-paste BibTeX ```bibtex @dataset{ds004830, title = {Spatial Attention Decoding using fNIRS During Complex Scene Analysis}, author = {Matthew Ning and Sudan Duwadi and Meryem A. Yucel and Alexander Von Luhmann and David A. Boas and Kamal Sen}, doi = {10.18112/openneuro.ds004830.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds004830.v2.0.0}, } ``` ## Technical Details - Subjects: 12 - Recordings: 14 - Tasks: 1 - Channels: 72 (27), 84 (6) - Sampling rate (Hz): 50.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.2 GB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004830.v2.0.0 - Source: openneuro - OpenNeuro: [ds004830](https://openneuro.org/datasets/ds004830) - NeMAR: [ds004830](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) ## API Reference Use the `DS004830` class to access this dataset programmatically.
### *class* eegdash.dataset.DS004830(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Attention Decoding using fNIRS During Complex Scene Analysis * **Study:** `ds004830` (OpenNeuro) * **Author (year):** `Ning2023` * **Canonical:** `Ning2024` Also importable as: `DS004830`, `Ning2023`, `Ning2024`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004830](https://openneuro.org/datasets/ds004830) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004830](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) DOI: [https://doi.org/10.18112/openneuro.ds004830.v2.0.0](https://doi.org/10.18112/openneuro.ds004830.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004830 >>> dataset = DS004830(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004830) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) * [eegdash.dataset.DS005929](eegdash.dataset.DS005929.md) # DS004837: meg dataset, 60 subjects *Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis* Access recordings and metadata through EEGDash. **Citation:** Fran López-Caballero, Mark Curtis, Brian Coffman, Dean Salisbury (2023). *Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis*. 
[10.18112/openneuro.ds004837.v1.0.2](https://doi.org/10.18112/openneuro.ds004837.v1.0.2) Modality: meg Subjects: 60 Recordings: 106 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004837 dataset = DS004837(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004837(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004837( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004837, title = {Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis}, author = {Fran López-Caballero and Mark Curtis and Brian Coffman and Dean Salisbury}, doi = {10.18112/openneuro.ds004837.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004837.v1.0.2}, } ``` ## About This Dataset Project Title: Pittsburgh Early Psychosis Program (PEPP): Mismatch Negativity (MMN) in First-Episode Psychosis Expected experimentation period: Start date: 12/01/2017 End date: 09/15/2021 Project Description: Oddball paradigm with standards, pitch deviants and duration deviants.
Extended description at: [https://doi.org/10.1111/ejn.16107](https://doi.org/10.1111/ejn.16107) Participant categories: Healthy controls (HC), First-episode Schizophrenia (FESZ), First-episode Affective Psychosis (FEAFF) Further information about clinical, neuropsychological, demographic and medication data can be found in derivatives/participants.csv Events: 1: standard 2: pitch deviant 3: duration deviant Funding: National Institutes of Health (R01 MH108568 and MH113533) ## Dataset Information | Dataset ID | `DS004837` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis | | Author (year) | `LopezCaballero2023` | | Canonical | — | | Importable as | `DS004837`, `LopezCaballero2023` | | Year | 2023 | | Authors | Fran López-Caballero, Mark Curtis, Brian Coffman, Dean Salisbury | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004837.v1.0.2](https://doi.org/10.18112/openneuro.ds004837.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004837) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) | [Source URL](https://openneuro.org/datasets/ds004837) | ### Copy-paste BibTeX ```bibtex @dataset{ds004837, title = {Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis}, author = {Fran López-Caballero and Mark Curtis and Brian Coffman and Dean Salisbury}, doi = {10.18112/openneuro.ds004837.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004837.v1.0.2}, } ``` ## Technical Details - Subjects: 60 - Recordings: 106 - Tasks: 1 - Channels: Varies - Sampling rate (Hz): 3000 (98), 1000 (8) - Duration (hours): Not calculated - Pathology: Schizophrenia/Psychosis - Modality: Auditory - Type: Perception - Size on disk: 119.9 
GB - File count: 106 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004837.v1.0.2 - Source: openneuro - OpenNeuro: [ds004837](https://openneuro.org/datasets/ds004837) - NeMAR: [ds004837](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) ## API Reference Use the `DS004837` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004837(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis * **Study:** `ds004837` (OpenNeuro) * **Author (year):** `LopezCaballero2023` * **Canonical:** — Also importable as: `DS004837`, `LopezCaballero2023`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Schizophrenia/Psychosis`. Subjects: 60; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
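The dataset description above lists event codes 1 (standard), 2 (pitch deviant), and 3 (duration deviant). As a small sketch of separating deviant from standard trials in an MNE-style events array (rows of `[sample, 0, code]`) — the array here is fabricated purely for illustration:

```python
# Separating deviant from standard trials using the event codes listed in
# the dataset description. The events array below is fabricated.
import numpy as np

event_id = {"standard": 1, "pitch_deviant": 2, "duration_deviant": 3}
events = np.array([[100, 0, 1], [400, 0, 2], [700, 0, 1], [1000, 0, 3]])

deviant_codes = [event_id["pitch_deviant"], event_id["duration_deviant"]]
deviants = events[np.isin(events[:, 2], deviant_codes)]
print(len(deviants))  # 2
```

With real recordings, such an events array would typically come from the BIDS `events.tsv` files or from MNE's annotation utilities rather than being hand-built.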
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004837](https://openneuro.org/datasets/ds004837) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004837](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) DOI: [https://doi.org/10.18112/openneuro.ds004837.v1.0.2](https://doi.org/10.18112/openneuro.ds004837.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004837 >>> dataset = DS004837(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004837) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS004840: eeg dataset, 9 subjects *Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit.* Access recordings and metadata through EEGDash. **Citation:** Jose Cordoba-Silva, Rafael Maya, Mario Valderrama, Luis Felipe Giraldo, William Betancourt-Zapata, Andrés Salgado-Vascob, Juliana Marín-Sánchez, Viviana Gómez-Ortega, Mark Ettenberger (2023). *Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit.*. 
[10.18112/openneuro.ds004840.v1.0.1](https://doi.org/10.18112/openneuro.ds004840.v1.0.1) Modality: eeg Subjects: 9 Recordings: 51 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004840 dataset = DS004840(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004840(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004840( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004840, title = {Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit.}, author = {Jose Cordoba-Silva and Rafael Maya and Mario Valderrama and Luis Felipe Giraldo and William Betancourt-Zapata and Andrés Salgado-Vascob and Juliana Marín-Sánchez and Viviana Gómez-Ortega and Mark Ettenberger}, doi = {10.18112/openneuro.ds004840.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004840.v1.0.1}, } ``` ## About This Dataset **Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit - README** **Table of Contents** - [1. Experimental Design](#1-experimental-design) - [1.1 Study Overview](#11-study-overview) - [1.2 Electrophysiological Measurements](#12-electrophysiological-measurements) ### View full README **Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit - README** **Table of Contents** - [1. 
Experimental Design](#1-experimental-design) - [1.1 Study Overview](#11-study-overview) - [1.2 Electrophysiological Measurements](#12-electrophysiological-measurements) - [2. Pain Perception and Anxiety-Depression Levels](#2-pain-perception-and-anxiety-depression-levels) - [3. Code GIT repository](#3-code-GIT-repository) **1. Experimental Design** **1.1 Study Overview** This dataset forms part of an ongoing single-site Randomized Clinical Trial (RCT) involving adult burn patients admitted to the Intensive Care Unit (ICU). The key details of the study include: - **Participants**: The study encompasses 82 adult burn patients admitted to the ICU. - **Randomization**: Participants were randomly assigned to either an intervention group or a control group in a 1:1 ratio. The intervention group received standard care in addition to a maximum of six Music therapy sessions provided by a certified music therapist over a 2-week period. - **Electrophysiological Measures**: Electrophysiological measures were taken from a subset of 9 participants in the intervention group (11%). - **Ethics and Registration**: The study was approved by the ethics committee of the Fundación Santa Fe de Bogotá (FSFB) with approval IDs CCEI-11234-2019 and CCEI-11971-2020. It is registered on Clinicaltrials.gov under the identifier NCT04571255. - **Informed Consent**: All participants provided informed consent to participate in the study. **1.2 Electrophysiological Measurements** - **Participants**: The electrophysiological measurements were conducted with nine adult burn patients hospitalized in the ICU of the University Hospital Fundación Santa Fe de Bogotá (FSFB). - **Inclusion Criteria**: Inclusion criteria involved individuals of legal adult age with an expected hospitalization period of more than 7 days. Patients with known psychiatric disorders, cognitive disabilities, sedation, or mechanical ventilation were excluded. Patients with burns in regions above the neck were also excluded. 
- **Measurement Sessions**: Electrophysiological measurements were performed with each patient during two Music-Assisted Relaxation (MAR) sessions on two different days. - **Recording Phases**: Each recording session included three phases: - Pre-Intervention (PRE): The resting state was measured as a baseline with the patient’s eyes closed or fixed at a point. - MAR MTI: The specialist performed the MAR MTI. - Post-Intervention (POST): Measurements were taken during the patient’s reincorporation after MAR. - **Equipment**: Recordings were made with the Micromed LTM64 equipment with a sampling frequency of at least 256 Hz. The Micromed LTM64 is of clinical quality and has approval from the Colombian National Institute for Drug and Food Safety (approval ID: 20090486-2015). - **Electrode Setup**: - EEG: The electrode montage followed the international 10-20 system. Due to time limitations, the number of electrodes was reduced to eight: FP1, FP2, T3, T4, C3, C4, O1, and O2. The reference electrode was set to Cz, and the ground electrode was placed on the mastoids. - ECG: ECG was acquired by a bipolar assembly of lead II with two electrodes located bilaterally in the upper part of the thorax or both arms, depending on each patient’s possibilities or limitations. - EMG: For EMG, a bipolar electrode configuration was positioned on the left eyebrow to assess the motor activity of the corrugator supercilii muscle. The electrodes were placed with a 20 mm distance between them, following the natural alignment of the muscle fibers. **2. Pain Perception and Anxiety-Depression Levels** - **Measures**: To correlate pain, anxiety, and depression levels with electrophysiological signals, two complementary measures were obtained. - **Pain Assessment**: A Visual Analog Scale (VAS) was administered before the PRE and after the POST. The VAS ranged from 0 (indicating no pain) to 10 (representing the maximum pain possible). 
- **Anxiety and Depression**: The Colombian version of the Hospital Anxiety and Depression Scale (HADS) was used. HADS consists of two sub-scales for anxiety and depression, each containing seven items. Items are rated on a four-point Likert scale (0 to 3), with higher scores indicating increased anxiety or depression levels (maximum score of 21 for each subscale). HADS was administered after obtaining informed consent, both as a baseline and after the last music therapy sessions. **3. Code GIT repository** - All the analysis and graphics in this project were conducted using Python 3.7. We utilized custom scripts in combination with various libraries, such as NumPy, Pandas, SciPy, MNE, Biopsy, neurokit2, and Visbrain. After generating the graphics, we enhanced them in PowerPoint by adding titles, symbols, and any necessary supplementary information. - Code is Open by MIT license at: [https://github.com/jgcordoba/BurnICU_MusicTherapy_Signals.git](https://github.com/jgcordoba/BurnICU_MusicTherapy_Signals.git) For more detailed information about the study, please refer to the associated article titled “Article not submitted”, DOI: ‘-Insert Link-’ **Last Update**: 23/10/2023 **Author**: Jose Gabriel Cordoba Silva ## Dataset Information | Dataset ID | `DS004840` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit. 
| | Author (year) | `CordobaSilva2023` | | Canonical | — | | Importable as | `DS004840`, `CordobaSilva2023` | | Year | 2023 | | Authors | Jose Cordoba-Silva, Rafael Maya, Mario Valderrama, Luis Felipe Giraldo, William Betancourt-Zapata, Andrés Salgado-Vascob, Juliana Marín-Sánchez, Viviana Gómez-Ortega, Mark Ettenberger | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004840.v1.0.1](https://doi.org/10.18112/openneuro.ds004840.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004840) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) | [Source URL](https://openneuro.org/datasets/ds004840) | ### Copy-paste BibTeX ```bibtex @dataset{ds004840, title = {Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit.}, author = {Jose Cordoba-Silva and Rafael Maya and Mario Valderrama and Luis Felipe Giraldo and William Betancourt-Zapata and Andrés Salgado-Vascob and Juliana Marín-Sánchez and Viviana Gómez-Ortega and Mark Ettenberger}, doi = {10.18112/openneuro.ds004840.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004840.v1.0.1}, } ``` ## Technical Details - Subjects: 9 - Recordings: 51 - Tasks: 3 - Channels: 10 (45), 9 (3), 8 (3) - Sampling rate (Hz): 1024.0 (33), 512.0 (15), 256.0 (3) - Duration (hours): 10.935833333333331 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 599.5 MB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004840.v1.0.1 - Source: openneuro - OpenNeuro: [ds004840](https://openneuro.org/datasets/ds004840) - NeMAR: [ds004840](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) ## API Reference Use the `DS004840` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit. * **Study:** `ds004840` (OpenNeuro) * **Author (year):** `CordobaSilva2023` * **Canonical:** — Also importable as: `DS004840`, `CordobaSilva2023`. Modality: `eeg`. Subjects: 9; recordings: 51; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
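Since recordings in this dataset mix EEG with ECG/EMG auxiliaries (the README lists eight scalp electrodes: FP1, FP2, T3, T4, C3, C4, O1, O2), it can help to group channels by name before analysis. The sketch below is illustrative only: `split_channels` is a hypothetical helper, the channel labels are assumed from the README, and actual labels in the BIDS files may differ in case or suffix — in practice, inspect `raw.ch_names` on a loaded recording.

```python
# Scalp electrodes named in this dataset's README (assumed labels).
EEG_ELECTRODES = {"FP1", "FP2", "T3", "T4", "C3", "C4", "O1", "O2"}

def split_channels(ch_names: list[str]) -> tuple[list[str], list[str]]:
    """Partition a channel list into EEG electrodes and auxiliary channels."""
    eeg = [ch for ch in ch_names if ch.upper() in EEG_ELECTRODES]
    aux = [ch for ch in ch_names if ch.upper() not in EEG_ELECTRODES]
    return eeg, aux

# Hypothetical 10-channel recording (45 of the 51 recordings report 10 channels):
eeg, aux = split_channels(
    ["FP1", "FP2", "T3", "T4", "C3", "C4", "O1", "O2", "ECG", "EMG"]
)
# eeg -> the eight scalp electrodes; aux -> ["ECG", "EMG"]
```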
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004840](https://openneuro.org/datasets/ds004840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004840](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) DOI: [https://doi.org/10.18112/openneuro.ds004840.v1.0.1](https://doi.org/10.18112/openneuro.ds004840.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004840 >>> dataset = DS004840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004840) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004841: eeg dataset, 20 subjects *TX14* Access recordings and metadata through EEGDash. **Citation:** Gabriella Larkin, James A. Davis, Victor Paul, Marcel Cannon, Chris Manteuffel, Ben Brewster, Tony Johnson, Mike Dunkel, Stephen Gordon, Kevin King (2023). *TX14*. 
[10.18112/openneuro.ds004841.v1.0.1](https://doi.org/10.18112/openneuro.ds004841.v1.0.1) Modality: eeg Subjects: 20 Recordings: 147 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004841 dataset = DS004841(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004841(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004841( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004841, title = {TX14}, author = {Gabriella Larkin and James A. Davis and Victor Paul and Marcel Cannon and Chris Manteuffel and Ben Brewster and Tony Johnson and Mike Dunkel and Stephen Gordon and Kevin King}, doi = {10.18112/openneuro.ds004841.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004841.v1.0.1}, } ``` ## About This Dataset TX14 dataset: Perform a local situational awareness task while maintaining supervisory control of a semi-autonomous vehicle. The Army’s transition to a leaner, more agile, and rapidly deployable force requires autonomous technologies and systems, and greater reliance on computers and machines. This move from traditional warfare to FCS represents a shift in the human role, as well. Technological advancement has transformed the user’s role from active controller to system monitor and manager, intervening only in the case of a problem. 
As such, the soldier’s dependency on robotics technologies, tele-operations, indirect driving and autonomy is expected to increase significantly. Additionally, although semi-autonomous driving technologies have proven beneficial in aggregate measures of local area awareness (i.e., target/threat detection) and vehicle control, it is important to understand the situational trade-offs between local area awareness and vehicle control, as situational trade-offs provide the basis for developing dynamic task allocation within Crewstations. ## Dataset Information | Dataset ID | `DS004841` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TX14 | | Author (year) | `Larkin2023_TX14` | | Canonical | `TX14` | | Importable as | `DS004841`, `Larkin2023_TX14`, `TX14` | | Year | 2023 | | Authors | Gabriella Larkin, James A. Davis, Victor Paul, Marcel Cannon, Chris Manteuffel, Ben Brewster, Tony Johnson, Mike Dunkel, Stephen Gordon, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004841.v1.0.1](https://doi.org/10.18112/openneuro.ds004841.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004841) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) | [Source URL](https://openneuro.org/datasets/ds004841) | ### Copy-paste BibTeX ```bibtex @dataset{ds004841, title = {TX14}, author = {Gabriella Larkin and James A. 
Davis and Victor Paul and Marcel Cannon and Chris Manteuffel and Ben Brewster and Tony Johnson and Mike Dunkel and Stephen Gordon and Kevin King}, doi = {10.18112/openneuro.ds004841.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004841.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 147 - Tasks: 1 - Channels: 70 - Sampling rate (Hz): 256.0 - Duration (hours): 28.445555555555558 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 7.3 GB - File count: 147 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004841.v1.0.1 - Source: openneuro - OpenNeuro: [ds004841](https://openneuro.org/datasets/ds004841) - NeMAR: [ds004841](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) ## API Reference Use the `DS004841` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX14 * **Study:** `ds004841` (OpenNeuro) * **Author (year):** `Larkin2023_TX14` * **Canonical:** `TX14` Also importable as: `DS004841`, `Larkin2023_TX14`, `TX14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004841](https://openneuro.org/datasets/ds004841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004841](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) DOI: [https://doi.org/10.18112/openneuro.ds004841.v1.0.1](https://doi.org/10.18112/openneuro.ds004841.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004841 >>> dataset = DS004841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004841) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004842: eeg dataset, 14 subjects *TX15* Access recordings and metadata through EEGDash. **Citation:** Gabriella Larkin, James A. Davis, Victor Paul, Marcel Cannon, Chris Manteuffel, Ben Brewster, Tony Johnson, Mike Dunkel, Stephen Gordon, Kevin King (2023). *TX15*. 
[10.18112/openneuro.ds004842.v1.0.0](https://doi.org/10.18112/openneuro.ds004842.v1.0.0) Modality: eeg Subjects: 14 Recordings: 102 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004842 dataset = DS004842(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004842(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004842( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004842, title = {TX15}, author = {Gabriella Larkin and James A. Davis and Victor Paul and Marcel Cannon and Chris Manteuffel and Ben Brewster and Tony Johnson and Mike Dunkel and Stephen Gordon and Kevin King}, doi = {10.18112/openneuro.ds004842.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004842.v1.0.0}, } ``` ## About This Dataset TX15 dataset ## Dataset Information | Dataset ID | `DS004842` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TX15 | | Author (year) | `Larkin2023_TX15` | | Canonical | `TX15` | | Importable as | `DS004842`, `Larkin2023_TX15`, `TX15` | | Year | 2023 | | Authors | Gabriella Larkin, James A. 
Davis, Victor Paul, Marcel Cannon, Chris Manteuffel, Ben Brewster, Tony Johnson, Mike Dunkel, Stephen Gordon, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004842.v1.0.0](https://doi.org/10.18112/openneuro.ds004842.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004842) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) | [Source URL](https://openneuro.org/datasets/ds004842) | ### Copy-paste BibTeX ```bibtex @dataset{ds004842, title = {TX15}, author = {Gabriella Larkin and James A. Davis and Victor Paul and Marcel Cannon and Chris Manteuffel and Ben Brewster and Tony Johnson and Mike Dunkel and Stephen Gordon and Kevin King}, doi = {10.18112/openneuro.ds004842.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004842.v1.0.0}, } ``` ## Technical Details - Subjects: 14 - Recordings: 102 - Tasks: 1 - Channels: 70 (94), 72 (8) - Sampling rate (Hz): 256.0 - Duration (hours): 20.165277777777774 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.2 GB - File count: 102 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004842.v1.0.0 - Source: openneuro - OpenNeuro: [ds004842](https://openneuro.org/datasets/ds004842) - NeMAR: [ds004842](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) ## API Reference Use the `DS004842` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004842(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX15 * **Study:** `ds004842` (OpenNeuro) * **Author (year):** `Larkin2023_TX15` * **Canonical:** `TX15` Also importable as: `DS004842`, `Larkin2023_TX15`, `TX15`. Modality: `eeg`. Subjects: 14; recordings: 102; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004842](https://openneuro.org/datasets/ds004842) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004842](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) DOI: [https://doi.org/10.18112/openneuro.ds004842.v1.0.0](https://doi.org/10.18112/openneuro.ds004842.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004842 >>> dataset = DS004842(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004842) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004843: eeg dataset, 14 subjects *T16* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *T16*. [10.18112/openneuro.ds004843.v1.0.0](https://doi.org/10.18112/openneuro.ds004843.v1.0.0) Modality: eeg Subjects: 14 Recordings: 92 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004843 dataset = DS004843(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004843(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004843( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004843, title = {T16}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004843.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004843.v1.0.0}, } ``` ## About This Dataset TX16 dataset ## Dataset Information | Dataset ID | `DS004843` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | T16 | | Author (year) | `Johnson2023_T16` | | Canonical | — | | Importable as | `DS004843`, `Johnson2023_T16` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004843.v1.0.0](https://doi.org/10.18112/openneuro.ds004843.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004843) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) | [Source URL](https://openneuro.org/datasets/ds004843) | ### Copy-paste BibTeX ```bibtex @dataset{ds004843, title = {T16}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004843.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004843.v1.0.0}, } ``` ## Technical Details - Subjects: 14 - Recordings: 92 - Tasks: 1 - Channels: 70 - Sampling rate (Hz): 256.0 - Duration (hours): 28.9975 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 7.7 GB - File count: 92 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004843.v1.0.0 - Source: openneuro - OpenNeuro: [ds004843](https://openneuro.org/datasets/ds004843) - NeMAR: [ds004843](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) ## API Reference Use the `DS004843` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004843(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T16 * **Study:** `ds004843` (OpenNeuro) * **Author (year):** `Johnson2023_T16` * **Canonical:** — Also importable as: `DS004843`, `Johnson2023_T16`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 92; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
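The `save(path, overwrite=False)` method documented above refuses to replace an existing file unless `overwrite=True`. A minimal stand-in for that contract is sketched below; it assumes standard guard semantics, and `payload` is a placeholder for the serialized dataset — the real eegdash serialization logic is not shown in this summary.

```python
from pathlib import Path

# Minimal stand-in for the documented save(path, overwrite=False) contract.
# `payload` is a placeholder for the serialized dataset, not eegdash internals.
def save(path, payload: bytes, overwrite: bool = False) -> None:
    path = Path(path)
    if path.exists() and not overwrite:
        # Refuse to clobber an existing file unless explicitly allowed.
        raise FileExistsError(f"{path} already exists; pass overwrite=True")
    path.write_bytes(payload)
```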
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004843](https://openneuro.org/datasets/ds004843) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004843](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) DOI: [https://doi.org/10.18112/openneuro.ds004843.v1.0.0](https://doi.org/10.18112/openneuro.ds004843.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004843 >>> dataset = DS004843(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004843) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004844: eeg dataset, 17 subjects *T22* Access recordings and metadata through EEGDash. **Citation:** Jason S. Metcalfe, Victor Paul, Benamin Haynes, Corey Atwater, Amar Marathe, Gregory Gremillion, Kim Drnec, William Nothwang, Justin R. Estepp, Margaret Bowers, Jamie Lukos, Tony Johnson, Mike Dunkel, Stephen Gordon, Jon Touryan, Kevin King (2023). *T22*. 
[10.18112/openneuro.ds004844.v1.0.0](https://doi.org/10.18112/openneuro.ds004844.v1.0.0) Modality: eeg Subjects: 17 Recordings: 68 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004844 dataset = DS004844(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004844(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004844( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004844, title = {T22}, author = {Jason S. Metcalfe and Victor Paul and Benamin Haynes and Corey Atwater and Amar Marathe and Gregory Gremillion and Kim Drnec and William Nothwang and Justin R. Estepp and Margaret Bowers and Jamie Lukos and Tony Johnson and Mike Dunkel and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004844.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004844.v1.0.0}, } ``` ## About This Dataset TX22 dataset: Predicting and influencing trust-based decisions about control authority hand-off and take-over during simulated, semi-automated driving in a leader-follower paradigm. Vehicle survivability is critically important in today's military. Significant DoD investments have focused on developing and integrating autonomous vehicle technologies to mitigate the effects of human error and thus enhance survivability and mission effectiveness. 
In a previous experiment (SANDR designation: ARL_TX20), we explored how a human operator's acceptance and use of advanced technology are influenced by their trust and related factors, like subjective workload and automation reliability. Nevertheless, more critical than measuring and achieving a certain level of trust is the need for a capability to resolve observed (or predicted) discrepancies between trust and trustworthiness that will undermine effective joint system performance. Using the same paradigm as we developed for our previous experiment (ARL_TX20), here we explore our ability to (a) make accurate real-time predictions of instances where intervention is necessary and (b) use those predictions to provide feedback to the driver that is intended to support active “trust management” by influencing the trust-based decisions of the driver. ## Dataset Information | Dataset ID | `DS004844` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | T22 | | Author (year) | `Metcalfe2023_T22` | | Canonical | — | | Importable as | `DS004844`, `Metcalfe2023_T22` | | Year | 2023 | | Authors | Jason S. Metcalfe, Victor Paul, Benamin Haynes, Corey Atwater, Amar Marathe, Gregory Gremillion, Kim Drnec, William Nothwang, Justin R. Estepp, Margaret Bowers, Jamie Lukos, Tony Johnson, Mike Dunkel, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004844.v1.0.0](https://doi.org/10.18112/openneuro.ds004844.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004844) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) | [Source URL](https://openneuro.org/datasets/ds004844) | ### Copy-paste BibTeX ```bibtex @dataset{ds004844, title = {T22}, author = {Jason S. 
Metcalfe and Victor Paul and Benamin Haynes and Corey Atwater and Amar Marathe and Gregory Gremillion and Kim Drnec and William Nothwang and Justin R. Estepp and Margaret Bowers and Jamie Lukos and Tony Johnson and Mike Dunkel and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004844.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004844.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 68 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 1024.0 - Duration (hours): 21.252222222222223 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 22.3 GB - File count: 68 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004844.v1.0.0 - Source: openneuro - OpenNeuro: [ds004844](https://openneuro.org/datasets/ds004844) - NeMAR: [ds004844](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) ## API Reference Use the `DS004844` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T22 * **Study:** `ds004844` (OpenNeuro) * **Author (year):** `Metcalfe2023_T22` * **Canonical:** — Also importable as: `DS004844`, `Metcalfe2023_T22`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 17; recordings: 68; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004844](https://openneuro.org/datasets/ds004844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004844](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) DOI: [https://doi.org/10.18112/openneuro.ds004844.v1.0.0](https://doi.org/10.18112/openneuro.ds004844.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004844 >>> dataset = DS004844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004844) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004849: eeg dataset, 1 subject *STRONG* Access recordings and metadata through EEGDash. 
**Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *STRONG*. [10.18112/openneuro.ds004849.v1.0.0](https://doi.org/10.18112/openneuro.ds004849.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004849 dataset = DS004849(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004849(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004849( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004849, title = {STRONG}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004849.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004849.v1.0.0}, } ``` ## About This Dataset STRONG dataset. This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004849` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | STRONG | | Author (year) | `Johnson2023_STRONG` | | Canonical | `STRONG` | | Importable as | `DS004849`, `Johnson2023_STRONG`, `STRONG` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004849.v1.0.0](https://doi.org/10.18112/openneuro.ds004849.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004849) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) | [Source URL](https://openneuro.org/datasets/ds004849) | ### Copy-paste BibTeX ```bibtex @dataset{ds004849, title = {STRONG}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004849.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004849.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004849.v1.0.0 - Source: openneuro - OpenNeuro: [ds004849](https://openneuro.org/datasets/ds004849) - NeMAR: [ds004849](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) ## API Reference Use the `DS004849` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STRONG * **Study:** `ds004849` (OpenNeuro) * **Author (year):** `Johnson2023_STRONG` * **Canonical:** `STRONG` Also importable as: `DS004849`, `Johnson2023_STRONG`, `STRONG`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
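The `$in` operator used in the quickstart queries selects records whose field value appears in the given list. A toy matcher illustrates the semantics against in-memory metadata records (illustrative only; the real matching is performed by the EEGDash metadata backend):

```python
# Toy evaluator for flat MongoDB-style filters like {"subject": {"$in": [...]}}.
def matches(record, query):
    """Return True if a metadata record satisfies every clause in the query."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):          # operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:                 # plain equality
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])
# ['01', '02']
```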
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004849](https://openneuro.org/datasets/ds004849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004849](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) DOI: [https://doi.org/10.18112/openneuro.ds004849.v1.0.0](https://doi.org/10.18112/openneuro.ds004849.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004849 >>> dataset = DS004849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004849) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004850: eeg dataset, 1 subject *ODE* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *ODE*. 
[10.18112/openneuro.ds004850.v1.0.0](https://doi.org/10.18112/openneuro.ds004850.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004850 dataset = DS004850(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004850(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004850( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004850, title = {ODE}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004850.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004850.v1.0.0}, } ``` ## About This Dataset ODE dataset. This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004850` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ODE | | Author (year) | `Johnson2023_ODE` | | Canonical | `Johnson2024` | | Importable as | `DS004850`, `Johnson2023_ODE`, `Johnson2024` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004850.v1.0.0](https://doi.org/10.18112/openneuro.ds004850.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004850) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) | [Source URL](https://openneuro.org/datasets/ds004850) | ### Copy-paste BibTeX ```bibtex @dataset{ds004850, title = {ODE}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004850.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004850.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004850.v1.0.0 - Source: openneuro - OpenNeuro: [ds004850](https://openneuro.org/datasets/ds004850) - NeMAR: [ds004850](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) ## API Reference Use the `DS004850` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ODE * **Study:** `ds004850` (OpenNeuro) * **Author (year):** `Johnson2023_ODE` * **Canonical:** `Johnson2024` Also importable as: `DS004850`, `Johnson2023_ODE`, `Johnson2024`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004850](https://openneuro.org/datasets/ds004850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004850](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) DOI: [https://doi.org/10.18112/openneuro.ds004850.v1.0.0](https://doi.org/10.18112/openneuro.ds004850.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004850 >>> dataset = DS004850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004850) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004851: eeg dataset, 66 subjects *HID* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *HID*. [10.18112/openneuro.ds004851.v2.1.0](https://doi.org/10.18112/openneuro.ds004851.v2.1.0) Modality: eeg Subjects: 66 Recordings: 66 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004851 dataset = DS004851(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004851(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004851( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004851, title = {HID}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004851.v2.1.0}, url = {https://doi.org/10.18112/openneuro.ds004851.v2.1.0}, } ``` ## About This Dataset HID dataset ## Dataset Information | Dataset ID | `DS004851` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HID | | Author (year) | `Johnson2023_HID` | | Canonical | `HID` | | Importable as | `DS004851`, `Johnson2023_HID`, `HID` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004851.v2.1.0](https://doi.org/10.18112/openneuro.ds004851.v2.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004851) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) | [Source URL](https://openneuro.org/datasets/ds004851) | ### Copy-paste BibTeX ```bibtex @dataset{ds004851, title = {HID}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004851.v2.1.0}, url = {https://doi.org/10.18112/openneuro.ds004851.v2.1.0}, } ``` ## Technical Details - Subjects: 66 - Recordings: 66 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 2048.0 - Duration (hours): 18.58285617404514 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 55.9 GB - File count: 66 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004851.v2.1.0 - Source: openneuro - OpenNeuro: [ds004851](https://openneuro.org/datasets/ds004851) - NeMAR: [ds004851](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) ## API Reference Use the `DS004851` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004851(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HID * **Study:** `ds004851` (OpenNeuro) * **Author (year):** `Johnson2023_HID` * **Canonical:** `HID` Also importable as: `DS004851`, `Johnson2023_HID`, `HID`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 66; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
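As the `data_dir` attribute above states, each dataset caches its files under `cache_dir / dataset_id`. A minimal sketch of that path resolution with `pathlib`, assuming the documented layout:

```python
from pathlib import Path

def local_data_dir(cache_dir, dataset_id):
    """Resolve where recordings for one dataset are cached locally."""
    return Path(cache_dir) / dataset_id

print(local_data_dir("./data", "ds004851"))
# data/ds004851 (on POSIX)
```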
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004851](https://openneuro.org/datasets/ds004851) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004851](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) DOI: [https://doi.org/10.18112/openneuro.ds004851.v2.1.0](https://doi.org/10.18112/openneuro.ds004851.v2.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004851 >>> dataset = DS004851(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004851) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004852: eeg dataset, 1 subject *InsurgentCivilian* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *InsurgentCivilian*. 
[10.18112/openneuro.ds004852.v1.0.0](https://doi.org/10.18112/openneuro.ds004852.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004852 dataset = DS004852(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004852(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004852( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004852, title = {InsurgentCivilian}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004852.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004852.v1.0.0}, } ``` ## About This Dataset InsurgentCivilian dataset. This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004852` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | InsurgentCivilian | | Author (year) | `Johnson2023_InsurgentCivilian` | | Canonical | `Johnson2025` | | Importable as | `DS004852`, `Johnson2023_InsurgentCivilian`, `Johnson2025` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004852.v1.0.0](https://doi.org/10.18112/openneuro.ds004852.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004852) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) | [Source URL](https://openneuro.org/datasets/ds004852) | ### Copy-paste BibTeX ```bibtex @dataset{ds004852, title = {InsurgentCivilian}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004852.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004852.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004852.v1.0.0 - Source: openneuro - OpenNeuro: [ds004852](https://openneuro.org/datasets/ds004852) - NeMAR: [ds004852](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) ## API Reference Use the `DS004852` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004852(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InsurgentCivilian * **Study:** `ds004852` (OpenNeuro) * **Author (year):** `Johnson2023_InsurgentCivilian` * **Canonical:** `Johnson2025` Also importable as: `DS004852`, `Johnson2023_InsurgentCivilian`, `Johnson2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
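The `save(path, overwrite=False)` method documented on these classes refuses to clobber an existing file unless asked. A hypothetical sketch of that guard, with a stand-in serializer (assumed semantics; not the library's actual code):

```python
from pathlib import Path

def save(obj, path, overwrite=False):
    """Write obj to path, refusing to overwrite unless overwrite=True."""
    path = Path(path)
    if path.exists() and not overwrite:
        raise FileExistsError(f"{path} exists; pass overwrite=True to replace it")
    path.write_text(repr(obj))  # stand-in serializer for illustration
```

Failing loudly on an existing file, rather than silently replacing it, is the conventional default for cache-backed datasets.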
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004852](https://openneuro.org/datasets/ds004852) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004852](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) DOI: [https://doi.org/10.18112/openneuro.ds004852.v1.0.0](https://doi.org/10.18112/openneuro.ds004852.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004852 >>> dataset = DS004852(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004852) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004853: eeg dataset, 1 subject *TX17* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *TX17*. 
[10.18112/openneuro.ds004853.v1.0.0](https://doi.org/10.18112/openneuro.ds004853.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004853 dataset = DS004853(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004853(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004853( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004853, title = {TX17}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004853.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004853.v1.0.0}, } ``` ## About This Dataset TX17 dataset This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004853` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TX17 | | Author (year) | `Johnson2023_TX17` | | Canonical | — | | Importable as | `DS004853`, `Johnson2023_TX17` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004853.v1.0.0](https://doi.org/10.18112/openneuro.ds004853.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004853) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) | [Source URL](https://openneuro.org/datasets/ds004853) | ### Copy-paste BibTeX ```bibtex @dataset{ds004853, title = {TX17}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004853.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004853.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004853.v1.0.0 - Source: openneuro - OpenNeuro: [ds004853](https://openneuro.org/datasets/ds004853) - NeMAR: [ds004853](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) ## API Reference Use the `DS004853` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004853(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX17 * **Study:** `ds004853` (OpenNeuro) * **Author (year):** `Johnson2023_TX17` * **Canonical:** — Also importable as: `DS004853`, `Johnson2023_TX17`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004853](https://openneuro.org/datasets/ds004853) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004853](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) DOI: [https://doi.org/10.18112/openneuro.ds004853.v1.0.0](https://doi.org/10.18112/openneuro.ds004853.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004853 >>> dataset = DS004853(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004853) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004854: eeg dataset, 1 subject *TX18* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *TX18*. [10.18112/openneuro.ds004854.v1.0.0](https://doi.org/10.18112/openneuro.ds004854.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004854 dataset = DS004854(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004854(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004854( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004854, title = {TX18}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004854.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004854.v1.0.0}, } ``` ## About This Dataset TX18 dataset This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004854` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TX18 | | Author (year) | `Johnson2023_TX18` | | Canonical | `TX18` | | Importable as | `DS004854`, `Johnson2023_TX18`, `TX18` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004854.v1.0.0](https://doi.org/10.18112/openneuro.ds004854.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004854) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) | [Source URL](https://openneuro.org/datasets/ds004854) | ### Copy-paste BibTeX ```bibtex @dataset{ds004854, title = {TX18}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004854.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004854.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004854.v1.0.0 - Source: openneuro - OpenNeuro: [ds004854](https://openneuro.org/datasets/ds004854) - NeMAR: [ds004854](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) ## API Reference Use the `DS004854` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004854(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX18 * **Study:** `ds004854` (OpenNeuro) * **Author (year):** `Johnson2023_TX18` * **Canonical:** `TX18` Also importable as: `DS004854`, `Johnson2023_TX18`, `TX18`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004854](https://openneuro.org/datasets/ds004854) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004854](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) DOI: [https://doi.org/10.18112/openneuro.ds004854.v1.0.0](https://doi.org/10.18112/openneuro.ds004854.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004854 >>> dataset = DS004854(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004854) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004855: eeg dataset, 1 subject *FT* Access recordings and metadata through EEGDash. **Citation:** Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King (2023). *FT*. [10.18112/openneuro.ds004855.v1.0.0](https://doi.org/10.18112/openneuro.ds004855.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004855 dataset = DS004855(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004855(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004855( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004855, title = {FT}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004855.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004855.v1.0.0}, } ``` ## About This Dataset FT dataset This is a placeholder dataset. 
## Dataset Information | Dataset ID | `DS004855` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FT | | Author (year) | `Johnson2023_FT` | | Canonical | — | | Importable as | `DS004855`, `Johnson2023_FT` | | Year | 2023 | | Authors | Tony Johnson, Stephen Gordon, Jon Touryan, Kevin King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004855.v1.0.0](https://doi.org/10.18112/openneuro.ds004855.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004855) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) | [Source URL](https://openneuro.org/datasets/ds004855) | ### Copy-paste BibTeX ```bibtex @dataset{ds004855, title = {FT}, author = {Tony Johnson and Stephen Gordon and Jon Touryan and Kevin King}, doi = {10.18112/openneuro.ds004855.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004855.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 128.0 - Duration (hours): 0.5350499131944444 - Pathology: Not specified - Modality: — - Type: Memory - Size on disk: 79.2 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004855.v1.0.0 - Source: openneuro - OpenNeuro: [ds004855](https://openneuro.org/datasets/ds004855) - NeMAR: [ds004855](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) ## API Reference Use the `DS004855` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004855(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FT * **Study:** `ds004855` (OpenNeuro) * **Author (year):** `Johnson2023_FT` * **Canonical:** — Also importable as: `DS004855`, `Johnson2023_FT`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004855](https://openneuro.org/datasets/ds004855) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004855](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) DOI: [https://doi.org/10.18112/openneuro.ds004855.v1.0.0](https://doi.org/10.18112/openneuro.ds004855.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004855 >>> dataset = DS004855(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004855) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004859: ieeg dataset, 7 subjects *iEEG on children during Stroop task* Access recordings and metadata through EEGDash. **Citation:** Kazuki Sakakura, Eishi Asano (2023). *iEEG on children during Stroop task*. [10.18112/openneuro.ds004859.v1.0.0](https://doi.org/10.18112/openneuro.ds004859.v1.0.0) Modality: ieeg Subjects: 7 Recordings: 9 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004859 dataset = DS004859(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004859(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004859( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004859, title = {iEEG on children during Stroop task}, author = {Kazuki Sakakura and Eishi Asano}, doi = {10.18112/openneuro.ds004859.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004859.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS004859` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG on children during Stroop task | | Author (year) | `Sakakura2023_children_Stroop` | | Canonical | `Sakakura2024` | | Importable as | `DS004859`, `Sakakura2023_children_Stroop`, `Sakakura2024` | | Year | 2023 | | Authors | Kazuki Sakakura, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004859.v1.0.0](https://doi.org/10.18112/openneuro.ds004859.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004859) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) | [Source URL](https://openneuro.org/datasets/ds004859) | ### Copy-paste BibTeX ```bibtex @dataset{ds004859, title = {iEEG on children during Stroop task}, author = {Kazuki Sakakura and Eishi Asano}, doi = {10.18112/openneuro.ds004859.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004859.v1.0.0}, } ``` ## Technical Details - Subjects: 7 - Recordings: 9 - Tasks: 1 - Channels: 128 (8), 108 - Sampling rate (Hz): 1000.0 - Duration (hours): 2.669722222222222 - Pathology: Not specified - Modality: Visual - Type: Attention - Size on disk: 2.3 GB - File count: 9 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004859.v1.0.0 - Source: openneuro - OpenNeuro: [ds004859](https://openneuro.org/datasets/ds004859) - NeMAR: [ds004859](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) ## API Reference Use the `DS004859` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004859(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during Stroop task * **Study:** `ds004859` (OpenNeuro) * **Author (year):** `Sakakura2023_children_Stroop` * **Canonical:** `Sakakura2024` Also importable as: `DS004859`, `Sakakura2023_children_Stroop`, `Sakakura2024`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 7; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
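The Technical Details earlier on this page summarize multi-valued fields with a `value (count)` shorthand — e.g. `Channels: 128 (8), 108` means eight of the nine recordings have 128 channels and one has 108. A small sketch of how such a summary can be derived from per-recording metadata (illustrative only, not EEGDash code):

```python
from collections import Counter

def summarize(values):
    """Render per-recording values as 'value (count)' pairs, most common
    first; a count of 1 is omitted, matching the docs' shorthand."""
    parts = []
    for value, count in Counter(values).most_common():
        parts.append(f"{value} ({count})" if count > 1 else f"{value}")
    return ", ".join(parts)

# Nine recordings of ds004859: eight with 128 channels, one with 108.
print(summarize([128] * 8 + [108]))  # -> 128 (8), 108
```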
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004859](https://openneuro.org/datasets/ds004859) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004859](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) DOI: [https://doi.org/10.18112/openneuro.ds004859.v1.0.0](https://doi.org/10.18112/openneuro.ds004859.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004859 >>> dataset = DS004859(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004859) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004860: eeg dataset, 31 subjects *Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study* Access recordings and metadata through EEGDash. **Citation:** Flora Schwartz, Radouane El-Yagoubi, Julie Cayron, Pierre-Vincent Paubel, Bastien Tremoliere (2023). *Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study*. 
[10.18112/openneuro.ds004860.v1.0.0](https://doi.org/10.18112/openneuro.ds004860.v1.0.0) Modality: eeg Subjects: 31 Recordings: 31 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004860 dataset = DS004860(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004860(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004860( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004860, title = {Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study}, author = {Flora Schwartz and Radouane El-Yagoubi and Julie Cayron and Pierre-Vincent Paubel and Bastien Tremoliere}, doi = {10.18112/openneuro.ds004860.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004860.v1.0.0}, } ``` ## About This Dataset EEG data collected from 31 participants as part of a research program on moral judgment. The experiment consists of a third-party moral judgment task integrated into a semantic judgment task (N400). Participants listened to moral scenarios featuring either intentional or accidental harm transgressions. The last word of the scenario appeared as text (target) and participants had to respond whether the target was congruent with the scenario they just heard by pressing a response button. The target was congruent half of the time. The agent’s intention and semantic congruency were manipulated orthogonally, leading to 4 within-subject conditions. 
For 20% of the moral scenarios, a moral judgment question (punishment) was presented immediately after the congruency judgment and participants indicated how much punishment the agent responsible for the moral transgression deserved using a joystick. ## Dataset Information | Dataset ID | `DS004860` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study | | Author (year) | `Schwartz2023` | | Canonical | — | | Importable as | `DS004860`, `Schwartz2023` | | Year | 2023 | | Authors | Flora Schwartz, Radouane El-Yagoubi, Julie Cayron, Pierre-Vincent Paubel, Bastien Tremoliere | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004860.v1.0.0](https://doi.org/10.18112/openneuro.ds004860.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004860) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) | [Source URL](https://openneuro.org/datasets/ds004860) | ### Copy-paste BibTeX ```bibtex @dataset{ds004860, title = {Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study}, author = {Flora Schwartz and Radouane El-Yagoubi and Julie Cayron and Pierre-Vincent Paubel and Bastien Tremoliere}, doi = {10.18112/openneuro.ds004860.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004860.v1.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 31 - Tasks: 1 - Channels: 36 - Sampling rate (Hz): 512.0 (30), 2048.0 - Duration (hours): 16.364166666666666 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 3.8 GB - File count: 31 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004860.v1.0.0 - Source: openneuro - OpenNeuro: 
[ds004860](https://openneuro.org/datasets/ds004860) - NeMAR: [ds004860](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) ## API Reference Use the `DS004860` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004860(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Investigating the cognitive conflict triggered by moral judgment of accidental harm : an event-related potentials study * **Study:** `ds004860` (OpenNeuro) * **Author (year):** `Schwartz2023` * **Canonical:** — Also importable as: `DS004860`, `Schwartz2023`. Modality: `eeg`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
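The Technical Details above list two sampling rates for this dataset (512 Hz for 30 recordings, one at 2048 Hz), so recordings generally need to be brought to a common rate before pooling; in practice `mne.io.Raw.resample` does this properly. The core idea, decimation by an integer factor (2048 → 512 is a factor of 4) with a boxcar average as a crude anti-alias filter, can be sketched in plain Python:

```python
def decimate(x, factor):
    """Average each non-overlapping block of `factor` samples.
    A crude stand-in for a proper polyphase resampler such as
    mne.io.Raw.resample; for illustration only."""
    n = (len(x) // factor) * factor          # drop any ragged tail
    return [sum(x[i:i + factor]) / factor for i in range(0, n, factor)]

sfreq_in, sfreq_out = 2048, 512
two_seconds = list(range(sfreq_in * 2))      # stand-in for 2 s of signal
downsampled = decimate(two_seconds, sfreq_in // sfreq_out)
print(len(two_seconds), "->", len(downsampled))  # 4096 -> 1024
```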
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004860](https://openneuro.org/datasets/ds004860) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004860](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) DOI: [https://doi.org/10.18112/openneuro.ds004860.v1.0.0](https://doi.org/10.18112/openneuro.ds004860.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004860 >>> dataset = DS004860(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004860) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004865: ieeg dataset, 42 subjects *pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2023). *pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study*. 
[10.18112/openneuro.ds004865.v2.0.1](https://doi.org/10.18112/openneuro.ds004865.v2.0.1) Modality: ieeg Subjects: 42 Recordings: 172 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004865 dataset = DS004865(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004865(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004865( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004865, title = {pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds004865.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds004865.v2.0.1}, } ``` ## About This Dataset **pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a delayed free recall task. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. 
This study was a preliminary cognitive electrophysiology study undertaken by the Computational Memory Lab, and is a predecessor to the following datasets: [FR1](https://openneuro.org/datasets/ds004789) & [CatFR1](https://openneuro.org/datasets/ds004809) **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available, along with brain region annotations. \* Recordings were made on multiple different systems, so the data have been rescaled so that all voltage values are in volts (V). **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS004865` | |----------------|----------------| | Title | pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study | | Author (year) | `Herrema2023_pyFR_Delayed_Free` | | Canonical | `pyFR` | | Importable as | `DS004865`, `Herrema2023_pyFR_Delayed_Free`, `pyFR` | | Year | 2023 | | Authors | Haydn G. Herrema, Michael J. 
Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004865.v2.0.1](https://doi.org/10.18112/openneuro.ds004865.v2.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004865) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) | [Source URL](https://openneuro.org/datasets/ds004865) | ### Copy-paste BibTeX ```bibtex @dataset{ds004865, title = {pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds004865.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds004865.v2.0.1}, } ``` ## Technical Details - Subjects: 42 - Recordings: 172 - Tasks: 1 - Channels: 100 (7), 80 (5), 74 (5), 131 (5), 46 (4), 108 (4), 62 (4), 110 (4), 54 (4), 85 (4), 86 (4), 53 (4), 32 (3), 116 (3), 47 (3), 150 (3), 121 (3), 42 (3), 55 (3), 75 (3), 78 (3), 84 (3), 109 (3), 27 (3), 82 (3), 91 (3), 72 (3), 88 (3), 105 (3), 168 (3), 48 (3), 123 (3), 96 (3), 70 (3), 104 (3), 130 (2), 63 (2), 126 (2), 68 (2), 57 (2), 52 (2), 36 (2), 102 (2), 124 (2), 76 (2), 111 (2), 58 (2), 149 (2), 144 (2), 87 (2), 119 (2), 153 (2), 142 (2), 187, 95, 81, 90, 56, 94, 98, 160, 203, 120, 101, 97, 64 - Sampling rate (Hz): 1000.0 (102), 512.0 (40), 2000.0 (16), 400.0 (8), 499.7071 (6) - Duration (hours): 180.6291955494362 - Pathology: Surgery - Modality: Visual - Type: Memory - Size on disk: 97.8 GB - File count: 172 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004865.v2.0.1 - Source: openneuro - OpenNeuro: [ds004865](https://openneuro.org/datasets/ds004865) - NeMAR: [ds004865](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) ## API Reference Use the `DS004865` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004865(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study * **Study:** `ds004865` (OpenNeuro) * **Author (year):** `Herrema2023_pyFR_Delayed_Free` * **Canonical:** `pyFR` Also importable as: `DS004865`, `Herrema2023_pyFR_Delayed_Free`, `pyFR`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 42; recordings: 172; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
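The merged-query behaviour described in the Notes can be illustrated with a small, self-contained sketch. The `merge_query` and `matches` helpers below are toy stand-ins written for this example only, not EEGDash internals; real queries are evaluated by the library against its metadata database.

```python
# Toy illustration (NOT EEGDash internals) of how a user query is ANDed
# with the fixed dataset filter, and how the $in operator selects records.

def merge_query(dataset_id, user_query):
    """AND a user query with the dataset selection. Mirrors the documented
    restriction that the user query must not contain the key 'dataset'."""
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

def matches(record, query):
    """Toy evaluator supporting plain equality and the $in operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds004865", "subject": "01"},
    {"dataset": "ds004865", "subject": "03"},
    {"dataset": "ds002718", "subject": "01"},
]
q = merge_query("ds004865", {"subject": {"$in": ["01", "02"]}})
selected = [r for r in records if matches(r, q)]  # only ds004865 / subject 01
```

Passing `dataset` inside `query` raises an error in this sketch for the same reason the real constructor forbids it: the dataset filter is already applied for you.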
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004865](https://openneuro.org/datasets/ds004865) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004865](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) DOI: [https://doi.org/10.18112/openneuro.ds004865.v2.0.1](https://doi.org/10.18112/openneuro.ds004865.v2.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004865 >>> dataset = DS004865(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004865) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004883: eeg dataset, 172 subjects *Registerd Report of ERN During Three Versions of a Flanker Task* Access recordings and metadata through EEGDash. **Citation:** Peter E. Clayson, Michael J. Larson (2023). *Registerd Report of ERN During Three Versions of a Flanker Task*. 
[10.18112/openneuro.ds004883.v1.0.0](https://doi.org/10.18112/openneuro.ds004883.v1.0.0) Modality: eeg Subjects: 172 Recordings: 516 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004883 dataset = DS004883(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004883(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004883( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004883, title = {Registerd Report of ERN During Three Versions of a Flanker Task}, author = {Peter E. Clayson and Michael J. Larson}, doi = {10.18112/openneuro.ds004883.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004883.v1.0.0}, } ``` ## About This Dataset This study is described at [https://osf.io/qt2zh/](https://osf.io/qt2zh/). Scripts used for data processing are posted there. Here is the abstract from the manuscript that describes these data. Error-related negativity is a widely used measure of error monitoring, and many projects are independently moving ERN recorded during a flanker task towards standardization, optimization, and eventual clinical application. However, each project uses a different version of the flanker task and tacitly assumes ERN is functionally equivalent across each version. The routine neglect of a rigorous test of this assumption undermines efforts to integrate ERN findings across tasks, optimize and standardize ERN assessment, and widely apply ERN in clinical trials. 
The purpose of this registered report was to determine whether ERN shows similar experimental effects (correct vs. error trials) and data quality (intraindividual variability) during three commonly-used versions of a flanker task. ERN was recorded from 172 participants during three versions of a flanker task across two study sites. ERN scores showed numerical differences between tasks, raising questions about the comparability of ERN findings across studies and tasks. Although ERN scores from all three versions of the flanker task yielded high data quality and internal consistency, one version did outperform the other two in terms of the size of experimental effects and the data quality. Exploratory analyses of the error positivity (Pe) provided tentative support for the other two versions of the task over the paradigm that appeared optimal for ERN. The present study provides a roadmap for how to statistically compare psychometric characteristics of ERP scores across paradigms and gives preliminary recommendations for flanker tasks to use for ERN- and Pe-focused studies. ## Dataset Information | Dataset ID | `DS004883` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Registerd Report of ERN During Three Versions of a Flanker Task | | Author (year) | `Clayson2023_Registerd` | | Canonical | — | | Importable as | `DS004883`, `Clayson2023_Registerd` | | Year | 2023 | | Authors | Peter E. Clayson, Michael J. 
Larson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004883.v1.0.0](https://doi.org/10.18112/openneuro.ds004883.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004883) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) | [Source URL](https://openneuro.org/datasets/ds004883) | ### Copy-paste BibTeX ```bibtex @dataset{ds004883, title = {Registerd Report of ERN During Three Versions of a Flanker Task}, author = {Peter E. Clayson and Michael J. Larson}, doi = {10.18112/openneuro.ds004883.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004883.v1.0.0}, } ``` ## Technical Details - Subjects: 172 - Recordings: 516 - Tasks: 3 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 139.97186555555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 122.8 GB - File count: 516 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004883.v1.0.0 - Source: openneuro - OpenNeuro: [ds004883](https://openneuro.org/datasets/ds004883) - NeMAR: [ds004883](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) ## API Reference Use the `DS004883` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004883(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registerd Report of ERN During Three Versions of a Flanker Task * **Study:** `ds004883` (OpenNeuro) * **Author (year):** `Clayson2023_Registerd` * **Canonical:** — Also importable as: `DS004883`, `Clayson2023_Registerd`. Modality: `eeg`. Subjects: 172; recordings: 516; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004883](https://openneuro.org/datasets/ds004883) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004883](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) DOI: [https://doi.org/10.18112/openneuro.ds004883.v1.0.0](https://doi.org/10.18112/openneuro.ds004883.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004883 >>> dataset = DS004883(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004883) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004902: eeg dataset, 71 subjects *A Resting-state EEG Dataset for Sleep Deprivation* Access recordings and metadata through EEGDash. **Citation:** Chuqin Xiang, Xinrui Fan, Duo Bai, Ke Lv, Xu Lei (2023). *A Resting-state EEG Dataset for Sleep Deprivation*. [10.18112/openneuro.ds004902.v1.0.8](https://doi.org/10.18112/openneuro.ds004902.v1.0.8) Modality: eeg Subjects: 71 Recordings: 218 License: CC0 Source: openneuro Citations: 3.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004902 dataset = DS004902(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004902(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004902( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004902, title = {A Resting-state EEG Dataset for Sleep Deprivation}, author = {Chuqin Xiang and Xinrui Fan and Duo Bai and Ke Lv and Xu Lei}, doi = {10.18112/openneuro.ds004902.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004902.v1.0.8}, } ``` ## About This Dataset **General information** The dataset provides resting-state EEG data (eyes open, partially eyes closed) from 71 participants who underwent two experiments involving normal sleep (NS—session 1) and sleep deprivation (SD—session 2). The dataset also provides information on participants’ sleepiness and mood states. (Please note that Session 1 (NS) and Session 2 (SD) do not reflect the temporal order; the order was counterbalanced across participants and is listed in the metadata.) **Dataset presentation** Data collection was initiated in March 2019 and terminated in December 2020. A detailed description of the dataset is currently being prepared by Chuqin Xiang, Xinrui Fan, Duo Bai, Ke Lv, and Xu Lei, and will be submitted to Scientific Data for publication. **EEG acquisition** \* EEG system: Brain Products GmbH (Steingrabenstr., Germany), 61 electrodes \* Sampling frequency: 500 Hz \* Impedances were kept below 5 kΩ **Contact** If you have any questions or comments, please contact Xu Lei: [xlei@swu.edu.cn](mailto:xlei@swu.edu.cn) **Article** Xiang, C., Fan, X., Bai, D. et al. A resting-state EEG dataset for sleep deprivation. Sci Data 11, 427 (2024). 
[https://doi.org/10.1038/s41597-024-03268-2](https://doi.org/10.1038/s41597-024-03268-2) ## Dataset Information | Dataset ID | `DS004902` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A Resting-state EEG Dataset for Sleep Deprivation | | Author (year) | `Xiang2023` | | Canonical | — | | Importable as | `DS004902`, `Xiang2023` | | Year | 2023 | | Authors | Chuqin Xiang, Xinrui Fan, Duo Bai, Ke Lv, Xu Lei | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004902.v1.0.8](https://doi.org/10.18112/openneuro.ds004902.v1.0.8) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004902) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) | [Source URL](https://openneuro.org/datasets/ds004902) | ### Copy-paste BibTeX ```bibtex @dataset{ds004902, title = {A Resting-state EEG Dataset for Sleep Deprivation}, author = {Chuqin Xiang and Xinrui Fan and Duo Bai and Ke Lv and Xu Lei}, doi = {10.18112/openneuro.ds004902.v1.0.8}, url = {https://doi.org/10.18112/openneuro.ds004902.v1.0.8}, } ``` ## Technical Details - Subjects: 71 - Recordings: 218 - Tasks: 2 - Channels: 61 - Sampling rate (Hz): 500.0 (215), 5000.0 (3) - Duration (hours): 18.11842777777777 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.3 GB - File count: 218 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004902.v1.0.8 - Source: openneuro - OpenNeuro: [ds004902](https://openneuro.org/datasets/ds004902) - NeMAR: [ds004902](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) ## API Reference Use the `DS004902` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Resting-state EEG Dataset for Sleep Deprivation * **Study:** `ds004902` (OpenNeuro) * **Author (year):** `Xiang2023` * **Canonical:** — Also importable as: `DS004902`, `Xiang2023`. Modality: `eeg`. Subjects: 71; recordings: 218; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
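Because this dataset mixes sampling rates (500 Hz for 215 recordings, 5000 Hz for 3, per the Technical Details), it can be worth restricting an analysis to the majority rate before pooling recordings. A minimal sketch, using hypothetical metadata rows in place of the real per-recording metadata (the actual column names exposed via `dataset.description` may differ):

```python
from collections import Counter

# Hypothetical rows standing in for per-recording metadata; the "sfreq"
# field name here is illustrative, not a guaranteed EEGDash column.
recordings = [{"sfreq": 500.0}] * 215 + [{"sfreq": 5000.0}] * 3

# Count recordings per sampling rate and keep only the majority rate,
# so all pooled recordings share a common time base.
counts = Counter(r["sfreq"] for r in recordings)
majority_rate, n_at_rate = counts.most_common(1)[0]
subset = [r for r in recordings if r["sfreq"] == majority_rate]
```

Alternatively, the minority recordings could be resampled to the majority rate (e.g. with MNE's resampling tools) rather than dropped; which is appropriate depends on the analysis.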
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004902](https://openneuro.org/datasets/ds004902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004902](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) DOI: [https://doi.org/10.18112/openneuro.ds004902.v1.0.8](https://doi.org/10.18112/openneuro.ds004902.v1.0.8) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004902 >>> dataset = DS004902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004902) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004917: eeg dataset, 24 subjects *Probability Decision-making Task with ambiguity* Access recordings and metadata through EEGDash. **Citation:** Alejandra Figueroa-Vargas, Gabriela Valdebenito-Oyarzo, María Paz Martínez-Molina, Francisco Zamorano, Pablo Billeke (2024). *Probability Decision-making Task with ambiguity*. 
[10.18112/openneuro.ds004917.v1.0.1](https://doi.org/10.18112/openneuro.ds004917.v1.0.1) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004917 dataset = DS004917(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004917(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004917( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004917, title = {Probability Decision-making Task with ambiguity}, author = {Alejandra Figueroa-Vargas and Gabriela Valdebenito-Oyarzo and María Paz Martínez-Molina and Francisco Zamorano and Pablo Billeke}, doi = {10.18112/openneuro.ds004917.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004917.v1.0.1}, } ``` ## About This Dataset **Summary** This dataset forms part of a study supported by the Social Neuroscience and Neuromodulation Laboratory of Universidad del Desarrollo, Chile. The full dataset is described in a submission to Scientific Data. **Abstract** In our daily lives, we frequently encounter decisions where the potential outcomes are unclear, leading to a state of heightened uncertainty. The complete or partial lack of knowledge regarding the probability of outcomes is called ambiguity and presents significant challenges for individuals. 
While recent studies have associated the level of ambiguity in decision-making with neural activity in the parietal cortex, the precise role of this brain region and its interactions with other brain regions during decision-making processes are not well known. Here, we present a comprehensive dataset detailing human decision-making under conditions of risk and ambiguity. This dataset includes data from 53 healthy volunteers aged between 18 and 31 years, consisting of structural magnetic resonance imaging (MRI: T1w, T2w, and DWI) and functional MRI (fMRI) acquired during task performance, as well as concurrent electrophysiological (EEG) recordings during inhibitory transcranial magnetic stimulation (TMS) applied over two parietal regions and the vertex. This dataset offers an opportunity to delve into the neurobiological mechanisms of decision-making in detail, highlighting the role of the parietal cortex. **Additional Usage Notes** All code related to this dataset can be found on GitHub ([https://github.com/neurocics/LAN_current](https://github.com/neurocics/LAN_current)), and the additional dataset for this study is available in the free and open OSF repository ([https://osf.io/zd3g7/](https://osf.io/zd3g7/)) (DOI: 10.17605/OSF.IO/ZD3G7). This includes sourcedata for the scanner tasks and also the stimulus presentation scripts. 
## Dataset Information | Dataset ID | `DS004917` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Probability Decision-making Task with ambiguity | | Author (year) | `FigueroaVargas2024` | | Canonical | — | | Importable as | `DS004917`, `FigueroaVargas2024` | | Year | 2024 | | Authors | Alejandra Figueroa-Vargas, Gabriela Valdebenito-Oyarzo, María Paz Martínez-Molina, Francisco Zamorano, Pablo Billeke | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004917.v1.0.1](https://doi.org/10.18112/openneuro.ds004917.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004917) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) | [Source URL](https://openneuro.org/datasets/ds004917) | ### Copy-paste BibTeX ```bibtex @dataset{ds004917, title = {Probability Decision-making Task with ambiguity}, author = {Alejandra Figueroa-Vargas and Gabriela Valdebenito-Oyarzo and María Paz Martínez-Molina and Francisco Zamorano and Pablo Billeke}, doi = {10.18112/openneuro.ds004917.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004917.v1.0.1}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 5000.0 - Duration (hours): 14.579594444444444 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 37.5 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004917.v1.0.1 - Source: openneuro - OpenNeuro: [ds004917](https://openneuro.org/datasets/ds004917) - NeMAR: [ds004917](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) ## API Reference Use the `DS004917` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004917(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Probability Decision-making Task with ambiguity * **Study:** `ds004917` (OpenNeuro) * **Author (year):** `FigueroaVargas2024` * **Canonical:** — Also importable as: `DS004917`, `FigueroaVargas2024`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
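As a quick sanity check, the summary figures above (24 recordings, ~14.58 hours total, 5000 Hz sampling) imply roughly half-hour recordings with on the order of ten million samples each. Illustrative arithmetic only, using the numbers from the Technical Details:

```python
# Back-of-envelope check of the ds004917 summary figures.
n_recordings = 24
total_hours = 14.579594444444444  # from Technical Details
sfreq = 5000.0                    # Hz, from Technical Details

mean_minutes = total_hours * 60 / n_recordings  # average recording length
mean_samples = mean_minutes * 60 * sfreq        # average samples per recording
```

At 5000 Hz, even these ~36-minute recordings are large, which is consistent with the 37.5 GB size on disk for only 24 files.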
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004917](https://openneuro.org/datasets/ds004917) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004917](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) DOI: [https://doi.org/10.18112/openneuro.ds004917.v1.0.1](https://doi.org/10.18112/openneuro.ds004917.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004917 >>> dataset = DS004917(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004917) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004929: fnirs dataset, 12 subjects *BallSqueezingHD* Access recordings and metadata through EEGDash. **Citation:** Yuanyuan Gao, De’Ja Rogers, Alexander von Lühmann, Antonio Ortega-Martinez, David A. Boas, Meryem A. Yücel (2024). *BallSqueezingHD*. 
[10.18112/openneuro.ds004929.v1.0.0](https://doi.org/10.18112/openneuro.ds004929.v1.0.0) Modality: fnirs Subjects: 12 Recordings: 36 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004929 dataset = DS004929(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004929(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004929( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004929, title = {BallSqueezingHD}, author = {Yuanyuan Gao and De’Ja Rogers and Alexander von Lühmann and Antonio Ortega-Martinez and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds004929.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004929.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS004929` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BallSqueezingHD | | Author (year) | `Gao2024` | | Canonical | — | | Importable as | `DS004929`, `Gao2024` | | Year | 2024 | | Authors | Yuanyuan Gao, De’Ja Rogers, Alexander von Lühmann, Antonio Ortega-Martinez, David A. Boas, Meryem A. 
Yücel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004929.v1.0.0](https://doi.org/10.18112/openneuro.ds004929.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004929) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) | [Source URL](https://openneuro.org/datasets/ds004929) | ### Copy-paste BibTeX ```bibtex @dataset{ds004929, title = {BallSqueezingHD}, author = {Yuanyuan Gao and De’Ja Rogers and Alexander von Lühmann and Antonio Ortega-Martinez and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds004929.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004929.v1.0.0}, } ``` ## Technical Details - Subjects: 12 - Recordings: 36 - Tasks: 1 - Channels: 200 - Sampling rate (Hz): 8.719308035714286 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Motor - Type: Motor - Size on disk: 302.4 MB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004929.v1.0.0 - Source: openneuro - OpenNeuro: [ds004929](https://openneuro.org/datasets/ds004929) - NeMAR: [ds004929](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) ## API Reference Use the `DS004929` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD * **Study:** `ds004929` (OpenNeuro) * **Author (year):** `Gao2024` * **Canonical:** — Also importable as: `DS004929`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004929](https://openneuro.org/datasets/ds004929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004929](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) DOI: [https://doi.org/10.18112/openneuro.ds004929.v1.0.0](https://doi.org/10.18112/openneuro.ds004929.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004929 >>> dataset = DS004929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004929) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) * [eegdash.dataset.DS005929](eegdash.dataset.DS005929.md) # DS004940: eeg dataset, 22 subjects *Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study.* Access recordings and metadata through EEGDash. **Citation:** Kathryn Toffolo, Edward G. Freedman, John J. Foxe (2024). *Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study.*. [10.18112/openneuro.ds004940.v1.0.1](https://doi.org/10.18112/openneuro.ds004940.v1.0.1) Modality: eeg Subjects: 22 Recordings: 48 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004940 dataset = DS004940(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004940(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004940( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004940, title = {Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study.}, author = {Kathryn Toffolo and Edward G. Freedman and John J. Foxe}, doi = {10.18112/openneuro.ds004940.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004940.v1.0.1}, } ``` ## About This Dataset Citation Toffolo, K.K., Freedman, E.G., Foxe, J.J. Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. Neuroscience. 560:238-253 (2024). PMID PMC39369943 DOI: 10.1016/j.neuroscience.2024.10.008 Project name and executive summary > Title: Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. > Abstract: Language comprehension requires semantic processing of individual words and their context within a sentence. Well-characterized event-related potential (ERP) components (the N400 and late positivity component (LPC/P600)) provide neuromarkers of semantic processing, and are robustly evoked when semantic errors are introduced into sentences. These measures are useful for evaluating semantic processing in clinical populations, but it is not known whether they can be evoked in more severe neurodevelopmental disorders where explicit attention to the sentence inputs cannot be objectively assessed (i.e., when sentences are passively listened to). We evaluated whether N400 and LPC/P600 could be detected in adolescents who were explicitly ignoring sentence inputs. Specifically, it was asked whether explicit attention to spoken inputs was required for semantic processing, or if a degree of automatic processing occurs when the focus of attention is directed elsewhere?
High-density ERPs were acquired from twenty-two adolescents (12–17 years), under two experimental conditions: 1. individuals actively determined whether the final word in a sentence was congruent or incongruent with sentence context, or 2. passively listened to background sentences while watching a video. When sentences were ignored, N400 and LPC/P600 were robustly evoked to semantic errors, albeit with reduced amplitudes and protracted/delayed latencies. Statistically distinct topographic distributions during passive versus active paradigms pointed to distinct generator configurations for semantic processing as a function of attention. Covert semantic processing continues in neurotypical adolescents when explicit attention is withdrawn from sentence inputs. As such, this approach could be used to objectively investigate semantic processing in populations with communication deficits. > Task Descriptions: > > Active Paradigm: Individuals were asked to focus on a fixation cross throughout the task. Before the two practice trials (the same for all participants), all instructions were explained both on the screen and through headphones (Sennheiser electronic GmbH & Co. KG, USA). Corrective feedback was only given during practice trials and not experimental trials. During experimental trials, an auditory sentence was played through headphones while a fixation cross was on the screen. This was followed by a two second pause, which was in turn followed by a prompt asking if the sentence ended as expected (the prompt was presented both visually and auditorily). Participants would then respond with a left arrow key press if a sentence ended unexpectedly (incongruent) or a right arrow key press if it ended as expected (congruent). Between a response and the start of the next sentence stimulus was a two second pause. Participants were given optional breaks every 20 or 40 stimuli and could continue with the experiment by pressing the spacebar. 
> > Passive Paradigm: Individuals were instructed to simply ignore the auditory sentence stimuli and watch a show of their choice without sound or subtitles. No response was required for this paradigm and between each sentence stimulus was a four second pause. No breaks were given due to the quick task design. Contact information regarding analyses : First Author: Kathryn Toffolo : University Email: [kathryn_toffolo@urmc.rochester.edu](mailto:kathryn_toffolo@urmc.rochester.edu) Unaffiliated Email: [kattoffolo@gmail.com](mailto:kattoffolo@gmail.com) ORCID iD: orcid.org/0000-0002-5728-3174 LinkedIn: Kathryn Toffolo Sharing/Access Information : Raw file access: : There are many ways to open .bdf files (e.g. BESA, FieldTrip via MATLAB, converting .bdf to .edf to use BrainVision, etc.), but the way our lab accesses/analyzes our data is with EEGLAB via MATLAB. Stimulus presentation code is restricted to “Presentation” by Neurobehavioral Systems, Inc. All .tsv and .json files can be read with a standard text editor.
Software used: : JASP (JASP Team [2020], Version 0.12.2) –> Statistical analysis MATLAB (MathWorks Inc., Natick, MA) –> EEG preprocessing and analysis EEGLAB (Delorme & Makeig, 2004) –> EEG preprocessing and analysis FieldTrip toolbox (Oostenveld et al. 2010) –> Topography statistical plots Presentation® Software (Version 18.0, Neurobehavioral Systems, Inc. Berkeley, CA) –> Presenting stimuli to participants Description of file(s) and Dataset: Stimuli: > Outside of the “stimuli” folder is a file describing the parameters of the stimuli: > : > “N400PvsA_stimuli_parameters”- (TSV) The stimuli and parameters for this study were derived from the stimuli used in Toffolo et al. 2022. Information about the sentences in this stimulus set can be found at [https://doi.org/10.5061/dryad.9ghx3ffkg](https://doi.org/10.5061/dryad.9ghx3ffkg). Of the 442 sentences offered (sentences with congruent (221) and incongruent endings (221)), 404 were used in this study (202 congruent ending sentences and 202 incongruent ending sentences). This file lists the stimuli used in this experiment and the corresponding sentence parameters. The use of this .tsv file was imperative to this study, and as such is described below. The triggers in the raw EEG data record when each stimulus began, not the onset of the target word. The “target_onset” column in this file was added to the onset of each trigger in the EEG data. Before this value was added to the stimulus onset times, the “target_onset(s)” was converted to datapoints by multiplying by the sample rate (512). This file can also be used to see what is said in each sentence. The top half of the file contains the sentences with congruent endings, while the bottom half contains the sentences with incongruent endings. The first 3 columns are the order in which a stimulus was played, the stimulus key, and the stimulus file name so that each sentence can be matched to an audio file. Following this are the sentences separated by each word. 
This file may be useful to N400 investigations that want to have visual presentation of the stimuli in addition to or instead of auditory presentation. > > “stim_key”- is the order number of the stimulus representing when in the dataset the stimulus was presented (1st sound, 2nd sound, … etc.). >
> > > “stim_file”- is the audio file name for the presented stimulus. > > > : “1”- is the first word in the stimulus (sentence). > > > “2”- is the second word in the stimulus (sentence). > > > “3”- is the third word in the stimulus (sentence). > > > “4”- is the fourth word in the stimulus (sentence). > > > “5”- is the fifth word in the stimulus (sentence). > > > “6”- is the sixth word in the stimulus (sentence). > > > “7”- is the seventh word in the stimulus (sentence). > > > “8”- is the eighth word in the stimulus (sentence). >
> > > “stim_dur(s)”- is the entire length of each stimulus in seconds. > > > “target_onset(s)”- is the time from the beginning of the sentence (the raw trigger time in the raw eeg data) to the START of the target/ending word in seconds. > > > “target_end(s)”- is the time from the beginning of the sentence (where the trigger was placed) to the END of the target/ending word in seconds. > > > “target_dur(s)”- is the time between the START and the END of the target/ending word in seconds. > > > “time-quarter_div”- is the division to investigate for an effect of time. The stimuli are broken into 4 groups (1-4) by quarter. Because the stimuli in this file are in the order that they were presented to each participant, this column will be in order from 1-4. > > > “order-group_div”- is the division to investigate for an effect of order. The stimuli are broken into 4 groups: 1. Is congruent stimuli for the scenario in which the congruent stimulus pair was presented before the incongruent stimulus pair; 2. Is incongruent stimuli for the scenario in which the congruent stimulus pair is presented before the incongruent stimulus pair; 3. Is congruent stimuli for the scenario in which the incongruent stimulus pair was presented before the congruent stimulus pair; 4. Is incongruent stimuli for the scenario in which the incongruent stimulus pair was presented before the congruent stimulus pair. > > > “cloze-probability%_div”- are the cloze probability (CP) scores for each sentence. To investigate for an effect of CP, these scores were broken into 4 different groups: 1. Sentence pairs with CP greater than or equal to 96%; 2. Sentence pairs with greater than or equal to 90% CP and less than 96% CP; 3. Sentence pairs with greater than or equal to 80% CP and less than 90% CP; 4. Sentence pairs with less than 80% CP. > > > “linguistic-group_div”- is the division to investigate for an effect of linguistic error. The stimuli are broken into 5 groups: 1. 
Incongruent sentences that contain only semantic ending errors, along with their congruent pair; 2. Incongruent sentences with both semantic errors and a syntactic number error, along with their congruent pair; 3. Incongruent sentences with both semantic errors and syntactic adjective/noun errors, along with their congruent pair; 4. Incongruent sentence stimuli with both semantic errors and syntactic verb/noun ending errors, along with their congruent pair; and 5. The eliminated linguistic division group contained 19 sentence pairs whose endings could make sense to children, did not match in syllable number, were hyphenated phrases, or contained cultural references. Final analysis combined groups 2-4 into one larger group of semantic and syntactic error sentence pairs in order to contrast with sentence pairs containing just semantic errors. > > > “linguistic-group_reasoning”- are quick explanations/descriptions for why a stimulus was placed in a particular linguistic group. >
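The onset bookkeeping described for “target_onset(s)” (the EEG triggers mark sentence onset, so the target-word onset in seconds must be converted to data points at the 512 Hz sample rate and added to each trigger) can be sketched in a few lines. This is a minimal illustration; the function and variable names are ours, not part of the dataset:

```python
# Sketch of the onset correction described above. Triggers in the raw EEG
# mark sentence onset, so the target-word onset (seconds, from the
# "target_onset(s)" column) is converted to data points at 512 Hz and
# added to the trigger's sample index.
SFREQ = 512  # sampling rate stated in the README

def target_sample(trigger_sample: int, target_onset_s: float, sfreq: int = SFREQ) -> int:
    """Sample index of the target (sentence-final) word for one stimulus."""
    return trigger_sample + round(target_onset_s * sfreq)

# A sentence trigger at sample 10_000 with the target word 2.5 s in:
print(target_sample(10_000, 2.5))  # 11280
```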
> Within the “stimuli” folder are the stimuli used in this experiment, all of which were auditory. Stimuli were from a published stimulus set (Toffolo et al. 2022). The stimulus set and more detailed information about the creation of the sentences can be found here: [https://doi.org/10.5061/dryad.9ghx3ffkg](https://doi.org/10.5061/dryad.9ghx3ffkg). Of the 442 stimuli provided in this set, the current study used 404 exemplars (202 congruent and 202 incongruent sentence pairs). These non-prosodic sentences ranged from four to eight words in length, have associated cloze probability scores (i.e. the likelihood that a given sentence-ending word would be provided by typical observers (Kutas and Hillyard, 1984)), and were designed for use with children 5 years and older. > Within the “stimuli” folder are also 15 task related sentences. These audio files were recorded from a female speaker, who was instructed to voice the sentences as she would when talking to a young adult. > “(Audio_01)DidThisSentenceEndCorrectly” follows each sentence stimulus and is played so the participant knows it is time to respond. “(Audio_02)DoYouWantToTakeABreak” is played at the onset of each break period. > “(Audio_03)Congratulations” is played at the end of the task. Audio files with the prefix “(Intro_01)”-“(Intro_05)” are descriptions of the task, and should be played in order. Audio files with the prefix “(Intro_06)” through “(Intro_11)” are for the practice example section. This includes audio introducing the practice session “(Intro_06)Practice_Intro”, audio for between examples “(Intro_07)Practice_LetsTryAnotherOne”, and 4 possible responses depending on how the subject answers: 1. If the subject got the answer correct for the congruent example “(Intro_08)PracticeFeedback_Correct4Congruent”; 2. Correct for the incongruent example “(Intro_09)PracticeFeedback_Correct4Inongruent”; 3. Incorrect for incongruent example “(Intro_10)PracticeFeedback_Incorrect4Incongruent”; and 4. 
Incorrect for the congruent example “(Intro_11)PracticeFeedback_Incorrect4Congruent”. Lastly is the file that lets the participant know that the task is starting “(Intro_12)Practice_LetsStartTheTask”. Dataset: Data for the 22 subjects are provided in EEG BIDS format. Raw unfiltered data are in 22 subject folders (sub-###). Note, in the raw data folders of sub-004, sub-007, sub-012, and sub-019 are 2 BDF files for one of their paradigms (the recording had to be broken up due to the needs of the participant or a technical issue). Filtered EEG data and grand average ERP data of each subject (sub-###) can be found in the “eeg-processed” and “erps” folders respectively of the “derivatives” folder. Additionally, there are files in the “ChRej” folder of “derivatives” that contributed to the filtering process. The code used to create these data are provided in the “code” folder. : Outside of the subject folders are several files describing the stimuli, participants, and organization of the data within each subject folder. : “dataset_description”- (JSON) Description of the dataset. “Participants”- (JSON and TSV) Describes the 22 subjects: ID, gender, age, dominant hand, first language, other known languages, what paradigm they performed on the first visit, and the interim time between the first and second visit. “task-N400Active_eeg”- (JSON) Information about acquiring the raw EEG data from the Active Paradigm that is within each subject’s raw data folder. “task-N400Passive_eeg”- (JSON) Information about acquiring the raw EEG data from the Passive Paradigm that is within each subject’s raw data folder. “task-N400Active_electrodes”- (TSV) Information about electrode location (same for both paradigms). [NOW LOCATED IN DERIVATIVES SINCE COORDINATES ARE ESTIMATES WITHOUT FIDUCIAL DATA VIA “_coordsystem.json”] “task-N400Passive_electrodes”- (TSV) Information about electrode location (same for both paradigms). 
[NOW LOCATED IN DERIVATIVES SINCE COORDINATES ARE ESTIMATES WITHOUT FIDUCIAL DATA VIA “_coordsystem.json”] “task-N400Active_events”- (JSON) Information about the event file for the Active Paradigm that is within each subject’s raw data folder. “task-N400Passive_events”- (JSON) Information about the event file for the Passive Paradigm that is within each subject’s raw data folder.
“sub-###”: Subject’s RAW data folder. Within the “eeg” folder of each participant’s data folder are the unprocessed raw EEG data along with other files. These include: : “sub-###_task-N400Active_eeg”- (BDF) Raw EEG data from the Active Paradigm. “sub-###_task-N400Passive_eeg”- (BDF) Raw EEG data from the Passive Paradigm. “sub-###_task-N400Active_events”- (TSV) The event file for the Active Paradigm. “sub-###_task-N400Passive_events”- (TSV) The event file for the Passive Paradigm.
> Event files start with the “onset” and “duration” of the target word within a stimulus (the final word at the end of a sentence). For example, in the sentence “I baked a birthday cake/clue”, the congruent target word would be “cake” and the incongruent target word would be “clue”. The onset and duration of these target words are provided in the first 2 columns respectively. There is also “stim_onset” and “stim_dur” in seconds. “stim_onset” is the onset of the entire stimulus, essentially the trigger time at the beginning of the sentence. “stim_dur” is the duration of the entire stimulus (sentence). “type” includes information about how the stimulus was presented, whether just auditory, both auditory and written on the screen, or if the onset of a trigger was simply the participant responding to the question. “trial_type” is the primary experimental classification. This column tells you whether a stimulus was an introduction, feedback, an example stimulus, a break, a right/left arrow key press by the participant, and whether or not the experimental stimulus was congruent (NPC) or incongruent (NPI). Lastly, “stim_file” is the column that provides the name of the stimulus file that was played.
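As a minimal illustration of the event-file layout described above, the snippet below parses a toy events TSV with those columns and keeps only the experimental sentences (trial_type NPC/NPI). The sample rows are invented for the example, not taken from the dataset:

```python
import csv
import io

# Toy events TSV using the columns described above (all rows are invented).
sample_tsv = (
    "onset\tduration\tstim_onset\tstim_dur\ttype\ttrial_type\tstim_file\n"
    "12.50\t0.40\t10.00\t2.90\tauditory\tNPC\tsent_001.wav\n"
    "20.10\t0.50\t17.50\t3.10\tauditory\tNPI\tsent_002.wav\n"
    "25.00\t0.00\t25.00\t0.00\tresponse\tright_arrow\tn/a\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
# Keep only congruent (NPC) / incongruent (NPI) experimental sentences.
experimental = [r for r in rows if r["trial_type"] in {"NPC", "NPI"}]
print([r["stim_file"] for r in experimental])  # ['sent_001.wav', 'sent_002.wav']
```

For real files, replace `io.StringIO(sample_tsv)` with an open file handle to a `sub-###_task-N400Active_events.tsv`.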
“sub-###_task-N400Active_channels”- (TSV) Information about the finalized channels that were rejected in the Active Paradigm EEG data (both first round and additional channels). “sub-###_task-N400Passive_channels”- (TSV) Information about the finalized channels that were rejected in the Passive Paradigm EEG data (both first round and additional channels).
“derivatives”: If the user decides to use our preprocessed (EEG/ERP) data that utilized functions in EEGLAB (Delorme & Makeig, 2004) for MATLAB (MathWorks Inc., Natick, MA), read the following: Within the derivatives folder there are 3 folders that pertain to the filtered data, “ChRej”, “eeg-processed” and “erps”. : “ChRej”: Within the “ChRej” folder outside of subject folders are .JSON files that describe the organization and filtering of the data within each subject’s “ChRej” folder. These include: : “task-N400Active_eeg-filter-NoChRej” - (JSON) Information about how the raw EEG data from the Active Paradigm that is within each subject’s ChRej folder was filtered (not including ICA) for the study. “task-N400Passive_eeg-filter-NoChRej” - (JSON) Information about how the raw EEG data from the Passive Paradigm that is within each subject’s ChRej folder was filtered (not including ICA) for the study. “task-N400Active_erp-GA_filter-NoChRej” - (JSON) Information about how the ERP data from the Active Paradigm were derived from the filtered EEG data (not including ICA) including epoch windows and trial rejection parameters. “task-N400Passive_erp-GA_filter-NoChRej” - (JSON) Information about how the ERP data from the Passive Paradigm were derived from the filtered EEG data (not including ICA) including epoch windows and trial rejection parameters. “task-N400PvsA_erp-GA_trialrej”- (JSON) Information about the “erp-GA_trialrej” file for both paradigms that is within each subject’s ChRej folder. “sub-###”: Subject’s ChRej folder. Within each subject’s “ChRej” folder are files that were used in the filtering process before the final filtered data was saved to a subject’s “eeg-processed” data folder.
> These include: > Files from the first round of the manual channel rejection and data rejection process:
> > “sub-###_task-N400Active_eeg-filter”- (MAT) The filtered (not including ICA) EEG data from the Active Paradigm. Filtering included band pass, first round of manual bad channel rejection, and rejection of sections of data that had artifact/motion. > > “sub-###_task-N400Active_ChRej-firstbadchans”- (TSV) A list of numbered channels (numbered 1-128) that were rejected in the first round of manual bad channel selection. > > “sub-###_task-N400Active_channels”- (TSV) Info about the first round of channel rejection. > > “sub-###_task-N400Active_datarej”- (TSV) What windows of data (in data units) were rejected from the filtered (not including ICA) EEG data for the Active Paradigm due to artifact/movement. Files for the second round of the manual channel rejection process to assure there were no other bad channels: > > “sub-###_task-N400Active_erp-GA_filter-NoChRej”- (MAT) ERP data derived from the filtered (not including ICA) EEG data from the Active Paradigm. Filtering included band pass, first round of manual bad channel rejection, and rejection of sections of data that had artifact/motion. Used to generate the topography figures within this folder for additional channel rejection. > > “sub-###_task-N400Active_erp-GA_trialrej”- (TSV) Information about how many trials were accepted into grand average ERP analysis of the filtered (not including ICA) EEG data from the Active Paradigm relative to the total trials in each condition. > > “sub-###_task-N400Active_semconddiff_4ChRej-addbadchans”- (FIG) A figure of the average amplitude topographic difference between conditions (incongruent-congruent) across the epoch. Used to aid in the manual selection of additional bad channels. > > “sub-###_task-N400Active_semcondmean_4ChRej-addbadchans”- (FIG) A figure of the average amplitude topography regardless of condition ((incongruent+congruent)/2) across the epoch. Used to aid in the manual selection of additional bad channels. 
> > “sub-###_task-N400Active_spectra_4ChRej-addbadchans”- (FIG) A figure of the spectrum of all channels across frequencies from the filtered EEG data. Used to aid in the manual selection of additional bad channels. > > “sub-###_task-N400Active_ChRej-addbadchans”- (TSV) A list of additional channels (numbered 1-128) that were added to channel rejection process of the raw data for final analysis.
> File from the ICA weight transfer method: > : “sub-###_task-N400Active_ICAweights”- (MAT) A file that contains 2 variables “Weights” and “Weights2”. “Weights” is the weights or likelihoods (derived from running ICA, pop_runica) that each component fit into each category (brain, muscle, eye, heart rate, line noise, channel noise, or other) via EEGLAB’s labeling algorithm (pop_iclabel). The size of “weights” is equal to the number of channels included in the component analysis (i.e. not including bad channels as they were interpolated after ICA component rejection). “Weights2” is the same weight values, but only including components that remained after components were rejected for artifact (i.e. the component data had likelihoods of 0-15% brain, 85-100% muscle artifact, 85-100% eye blink artifact, 85-100% heart rate artifact, 85-100% line noise artifact, 85-100% channel noise artifact, or 85-100% other artifact). This assured that the data that was isolated was primarily brain data. \*The same corresponding files exist for the Passive Paradigm.
“eeg-processed”: Within the “eeg-processed” data folder outside of subject folders are .JSON files that describe the filtering parameters of the data within each subject’s “eeg-processed” folder. These include: : “task-N400Active_eeg”- (JSON) Information about acquiring the raw EEG data (in each subject’s raw data folder) from the Active Paradigm. “task-N400Passive_eeg”- (JSON) Information about acquiring the raw EEG data (in each subject’s raw data folder) from the Passive Paradigm. “task-N400Active_eeg-filter”- (JSON) Information about how the raw EEG data (in each subject’s raw data folder) from the Active Paradigm was filtered (including ICA). “task-N400Passive_eeg-filter”- (JSON) Information about how the raw EEG data (in each subject’s raw data folder) from the Passive Paradigm was filtered (including ICA). “sub-###”: Subject’s processed data folder. Within each subject’s “eeg-processed” folder are the raw EEG data converted to .mat and the finalized filtered ICA EEG data used in analysis. These include:
> “sub-###_task-N400Active_eeg”- (MAT) Raw EEG data from the Active Paradigm converted to .mat data for EEGLAB. > “sub-###_task-N400Passive_eeg”- (MAT) Raw EEG data from the Passive Paradigm converted to .mat data for EEGLAB.
> > Each unfiltered MAT file contains one variable, “EEG”, which is a struct containing many fields. The most relevant fields would be: “data” which are the amplitudes for 128 channels + 8 unused externals x every data unit (data units=ms\*fs/1000) for the continuous data; and “event” which is a 4 column struct that shows when each stimulus was presented. The “type” column of “event” indicates which stimulus was played, whether it was an experimental sentence (‘condition 1’), an introductory sound file (‘255’, ‘254’, or ‘253’), a response (‘244’ and ‘233’ [right and left arrow key press respectively]), instructional feedback (‘252’), or the congratulations at the end of the experiment (‘250’). The “latency” column shows when the start of each stimulus and experimental sentence was played (in data units).
> “sub-###_task-N400Active_eeg-filter”- (MAT) The EEG data from the Active Paradigm filtered via the study parameters including ICA component rejection. > “sub-###_task-N400Passive_eeg-filter”- (MAT) The EEG data from the Passive Paradigm filtered via the study parameters including ICA component rejection.
> > Each filtered MAT file contains two variables, “EEG” and “badchans”. “badchans” is simply the finalized list of channels (numbered 1-128) that were rejected during filtering. “EEG” is a struct with many fields. The most relevant fields are: “data”, the amplitudes for all 128 channels × every data unit (data units = ms\*fs/1000) of the continuous data; and “event”, a 3-column struct recording when each stimulus was presented. The “type” column of “event” indicates which stimulus was played, whether an experimental sentence (ordered 1-402 relative to the order of a subject's events file), an introductory sound file (2550, 2540, or 2530), a response (2440 and 2330 [right and left arrow key press, respectively]), instructional feedback (2520), or the congratulations at the end of the experiment (2500). The “latency” column shows when the stimulus was played (in data units). Here, the data units for the experimental stimuli were shifted to the onset of the last word in the sentence for subsequent analysis.
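The event “type” codes described above can be summarized in a small lookup. The following is a minimal, hypothetical Python sketch for illustration only; the code values are taken from the descriptions above, and the helper names are not part of this dataset's code:

```python
# Event "type" codes used in the raw and filtered MAT files, per the
# descriptions above. Hypothetical lookup tables for illustration.
RAW_EVENT_TYPES = {
    "condition 1": "experimental sentence",
    "255": "introductory sound file",
    "254": "introductory sound file",
    "253": "introductory sound file",
    "244": "response (right arrow key press)",
    "233": "response (left arrow key press)",
    "252": "instructional feedback",
    "250": "end-of-experiment congratulations",
}

FILTERED_EVENT_TYPES = {
    # In filtered files, experimental sentences are numbered 1-402
    # (handled separately below).
    "2550": "introductory sound file",
    "2540": "introductory sound file",
    "2530": "introductory sound file",
    "2440": "response (right arrow key press)",
    "2330": "response (left arrow key press)",
    "2520": "instructional feedback",
    "2500": "end-of-experiment congratulations",
}

def describe_filtered_event(event_type: str) -> str:
    """Label a "type" code from a filtered MAT file's event struct."""
    if event_type in FILTERED_EVENT_TYPES:
        return FILTERED_EVENT_TYPES[event_type]
    if event_type.isdigit() and 1 <= int(event_type) <= 402:
        return "experimental sentence"
    return "unknown event type"

print(describe_filtered_event("2440"))  # response (right arrow key press)
print(describe_filtered_event("17"))    # experimental sentence
```

Such a lookup makes it easy to sanity-check the “event” struct after loading a MAT file (e.g. with `scipy.io.loadmat`).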
“erps”: Within the “erps” folder, outside of the subject folders, are .JSON files that describe the organization and filtering of the data within each subject's “erps” folder. These include: : “task-N400Active_erp-GA_filter” - (JSON) Information about how the ERP data from the Active Paradigm were derived from the filtered EEG data (including ICA), including epoch windows, trial rejection, and ICA component parameters. “task-N400Passive_erp-GA_filter” - (JSON) Information about how the ERP data from the Passive Paradigm were derived from the filtered EEG data (including ICA), including epoch windows, trial rejection, and ICA component parameters. “task-N400PvsA_erp-GA_trialrej”- (JSON) Information about the “erp-GA_trialrej” file for both paradigms that is within each subject's ChRej folder. “sub-###”: Subject's ERP folder. Within each subject's “erps” folder are ERP files derived from the filtered (including ICA) EEG data used in the final analysis (i.e. derived from the files “sub-###_task-N400Passive_eeg-filter” or “sub-###_task-N400Active_eeg-filter” within a subject's data folder). These include:
> “sub-###_task-N400Active_erpICA-GA”- (MAT) The grand average ERP data derived from the filtered (including ICA) EEG data from the Active Paradigm. Each preprocessed MAT file per division contains 3 variables: 1. “fs”, the sampling rate (i.e. 512); 2. “t”, the total time of the epoch in data points (i.e. 615, since our epoch is -200 ms before the stimulus to 1000 ms after the stimulus; to convert ms to data points, use ms\*fs/1000); and 3. “ERPs”, the filtered, epoched ERP data. Within the ERPs file there are 2 cells, one for each condition: cell 1 is for the congruent condition and cell 2 is for the incongruent condition. To ensure the proper comparison is being made, look at the field “event” within the ERPs struct. The “type” column of “event” corresponds to the condition (i.e. if “type” contains the number “1000”, the struct is for the congruent condition, whereas “2000” is for the incongruent condition). > “sub-###_task-N400Active_erpICA-GA_trialrej”- (TSV) Information about how many trials were accepted into the final grand average ERP analysis of the filtered (including ICA) EEG data from the Active Paradigm, relative to the total trials in each condition. > “sub-###_task-N400Passive_erpICA-GA”- (MAT) The grand average ERP data derived from the filtered (including ICA) EEG data from the Passive Paradigm. Each preprocessed MAT file per division contains 3 variables: 1. “fs”, the sampling rate (i.e. 512); 2. “t”, the total time of the epoch in data points (i.e. 615, since our epoch is -200 ms before the stimulus to 1000 ms after the stimulus; to convert ms to data points, use ms\*fs/1000); and 3. “ERPs”, the filtered, epoched ERP data. Within the ERPs file there are 2 cells, one for each condition: cell 1 is for the congruent condition and cell 2 is for the incongruent condition. To ensure the proper comparison is being made, look at the field “event” within the ERPs struct. 
The “type” column of “event” corresponds to the condition (i.e. if “type” contains the number “1000”, the struct is for the congruent condition, whereas “2000” is for the incongruent condition). > “sub-###_task-N400Passive_erpICA-GA_trialrej”- (TSV) Information about how many trials were accepted into the final grand average ERP analysis of the filtered (including ICA) EEG data from the Passive Paradigm, relative to the total trials in each condition. CODE: : Within the “code” folder is the code used to filter the EEG data. If you are using it for your own data rather than this dataset, you will have to add lines for your specific study in each function used for data processing. These sections are denoted by the study name (“studynm”) and “task”. : First, download Matlab and then EEGLab. Within the “code” folder are the preprocessing code used to make ERPs for each derivative and the presentation code used for task administration. We recommend that researchers use their own code to process the data; however, we provide rough directions on how to apply sections of this code to your data. When using this code, adjust all file paths for your code, data, and EEGLab folders (lines 8-27). Next, if using it for your own data, as stated above, you will need to add your specific study name to each preprocessing function and to the function “variables”, and make any changes to filter parameters in “filtersettings”. I run a lot of studies, so several lines in my functions may not be relevant to you. After running “variables” (line 44 in master), the bdf2mat script will run through all the subjects and output a .mat file within each subject's “eeg” folder. It will then ask what stage of preprocessing you are at. I personally like to do extensive cleaning before using ICA, so I have 2 stages, “filter-NoChRej” and “filter”. Respond accordingly. It will also ask if any stimulus times need to be adjusted. 
With this N400 dataset, the stimulus onset time during recording was at the beginning of the sentence. To analyze the data, I had to shift this start to the onset of the last word via the times denoted in the “parameters” file. If your times do not need to be adjusted, many of the provided preprocessing functions will need to be modified for your specific needs. However, if using the code for this study, just type ‘yes’ for both responses.
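The data-unit conversion used throughout these files (data units = ms\*fs/1000) can be sketched in Python. This is an illustrative helper, not part of the dataset's code; treating the epoch length as inclusive of both endpoints is an assumption made to match the t = 615 stated above for the -200 ms to 1000 ms epoch at fs = 512:

```python
def ms_to_samples(ms: float, fs: float) -> float:
    """Convert milliseconds to data units (samples): data units = ms*fs/1000."""
    return ms * fs / 1000.0

fs = 512.0                 # sampling rate used in this dataset
epoch_ms = 1000 - (-200)   # -200 ms pre-stimulus to 1000 ms post-stimulus

# Number of data points in the epoch, counting both endpoints
# (assumption), which matches the t = 615 in the ERP file description.
n_points = round(ms_to_samples(epoch_ms, fs)) + 1
print(n_points)  # 615
```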
Note: Each subject's .mat file can be loaded into Matlab by dragging and dropping, clicking on the file within Matlab, or using the function “load”. The variable “EEG” will appear in the workspace as a struct; EEG.data contains the time-series data per channel. All analysis code is written in the MATLAB language and is thus restricted to MATLAB. These files can be opened and edited with a standard text editor, but cannot be run without MATLAB. Moving on to preprocessing, event files are made via “eventfile_PvsA”, and then the data undergo assisted manual channel and data rejection with “filter_NoChRej_raw”. This function uses two methods of channel selection for rejection (one from “Mitch and Jim” and another from a modified version of EEGLab's “clean_channels”; lines 149-164 of EEGLab's “clean_channels” were modified (commented out) so that it would not automatically reject channels without your approval). After both methods run, the function suggests which channels you should reject, with “clean_channels” being far more rigorous. Then “eegplot” will plot the subject's whole dataset so you can manually view the data and decide which channels are actually bad. The function will then ask you to list the channels you think should be rejected. Once you list the channels in brackets [1,2,3….], it will save them in the derivatives folder “ChRej” for the subject as “sub-###_task-TASK_ChRej-firstbadchans.tsv”. Eegplot will again plot your subject's full dataset, and you will also need to manually remove sections of data (if any). Once done removing sections, type any string (preferably ‘y’) for the program to save the rejection windows as “sub-###_task-TASK_datarej.tsv” to the same “ChRej” folder. After this first round of rejection, “channels_creation” will make a BIDS-formatted channel file denoting good and bad channels. Because this is not your final cleaned data, it will be put into a derivatives folder called “ChRej”. 
The average ERPs for this subject will be made via “preprocdat_GA” and also saved into the “ChRej” derivatives folder, and then you will go through the second/final channel rejection phase, “ChRej_selection”. Because the channel suggestions and manual rejection are only so good, I also like to plot the spectral density of the data (using EEGLab's “pop_spectopo”) at this point, along with two topography plots: 1. the average difference between the conditions during the entire epoch; and 2. the average signal amplitude of all trials regardless of condition during the entire epoch. This allows you to see if there are any erroneous channels you did not select in the previous step. It will prompt you to add channels in bracket format; if there are no more to add, enter [0]. Once this channel rejection stage has completed, you can finally move on to final data processing. Either run “variables” again, or change the variable “process” to ‘filter’, “decisionICA” to {‘ica’}, and list the appropriate “ICAweight_values” ([0 0.15;0.85 1;0.85 1;0.85 1;0.85 1;0.85 1;0.85 1]). Run lines 86-112 of “master” again; your ICA-filtered data will then be saved to the derivatives folder “eeg_processed”, the ERP data to the derivatives folder “erp”, and a new finalized channels file to the subject's data folder. Following final processing with ICA, you can make ERP figures, topography plots, statistical analyses, etc. via the functions on lines 136-147 of master. Again, if you are using this code for your own data, you will have to add your study to each of the functions listed here. If you have any questions about data processing, please email me, [Kathryn_toffolo@urmc.rochester.edu](mailto:Kathryn_toffolo@urmc.rochester.edu). 
## Dataset Information | Dataset ID | `DS004940` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. | | Author (year) | `Toffolo2024` | | Canonical | — | | Importable as | `DS004940`, `Toffolo2024` | | Year | 2024 | | Authors | Kathryn Toffolo, Edward G. Freedman, John J. Foxe | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004940.v1.0.1](https://doi.org/10.18112/openneuro.ds004940.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004940) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) | [Source URL](https://openneuro.org/datasets/ds004940) | ### Copy-paste BibTeX ```bibtex @dataset{ds004940, title = {Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study.}, author = {Kathryn Toffolo and Edward G. Freedman and John J. Foxe}, doi = {10.18112/openneuro.ds004940.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004940.v1.0.1}, } ``` ## Technical Details - Subjects: 22 - Recordings: 48 - Tasks: 2 - Channels: 128 - Sampling rate (Hz): 512.0 - Duration (hours): 36.86361111111111 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 118.5 GB - File count: 48 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004940.v1.0.1 - Source: openneuro - OpenNeuro: [ds004940](https://openneuro.org/datasets/ds004940) - NeMAR: [ds004940](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) ## API Reference Use the `DS004940` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. * **Study:** `ds004940` (OpenNeuro) * **Author (year):** `Toffolo2024` * **Canonical:** — Also importable as: `DS004940`, `Toffolo2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 22; recordings: 48; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004940](https://openneuro.org/datasets/ds004940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004940](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) DOI: [https://doi.org/10.18112/openneuro.ds004940.v1.0.1](https://doi.org/10.18112/openneuro.ds004940.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004940 >>> dataset = DS004940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004940) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004942: eeg dataset, 62 subjects *SpatialMemory* Access recordings and metadata through EEGDash. **Citation:** Paul Kieffaber, Makenna McGill (2024). *SpatialMemory*. 
[10.18112/openneuro.ds004942.v1.0.0](https://doi.org/10.18112/openneuro.ds004942.v1.0.0) Modality: eeg Subjects: 62 Recordings: 62 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004942 dataset = DS004942(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004942(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004942( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004942, title = {SpatialMemory}, author = {Paul Kieffaber and Makenna McGill}, doi = {10.18112/openneuro.ds004942.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004942.v1.0.0}, } ``` ## About This Dataset Visuo-spatial working memory (VSWM) for sequences is thought to be crucial for daily behaviors. Decades of research indicate that oscillations in the gamma and theta bands play important functional roles in the support of visuo-spatial working memory, but the vast majority of that research emphasizes measures of neural activity during memory retention. The primary aims of the present study were (1) to determine whether oscillatory dynamics in the Theta and Gamma ranges would reflect item-level sequence encoding during a computerized spatial span task, (2) to determine whether item-level sequence recall is also related to these neural oscillations, and (3) to determine the nature of potential changes to these processes in healthy cognitive aging. 
Results indicate that VSWM sequence encoding is related to later (~700 ms) gamma band oscillatory dynamics and may be preserved in healthy older adults; high gamma power over midline frontal and posterior sites increased monotonically as items were added to the spatial sequence in both age groups. Item-level oscillatory dynamics during the recall of VSWM sequences were related only to theta-gamma phase amplitude coupling (PAC), which increased monotonically with serial position in both age groups. Results suggest that, despite a general decrease in frontal theta power during VSWM sequence recall in older adults, gamma band dynamics during encoding and theta-gamma PAC during retrieval play unique roles in VSWM and that the processes they reflect may be spared in healthy aging. ## Dataset Information | Dataset ID | `DS004942` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SpatialMemory | | Author (year) | `Kieffaber2024` | | Canonical | — | | Importable as | `DS004942`, `Kieffaber2024` | | Year | 2024 | | Authors | Paul Kieffaber, Makenna McGill | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004942.v1.0.0](https://doi.org/10.18112/openneuro.ds004942.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004942) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) | [Source URL](https://openneuro.org/datasets/ds004942) | ### Copy-paste BibTeX ```bibtex @dataset{ds004942, title = {SpatialMemory}, author = {Paul Kieffaber and Makenna McGill}, doi = {10.18112/openneuro.ds004942.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004942.v1.0.0}, } ``` ## Technical Details - Subjects: 62 - Recordings: 62 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 28.282224444444445 - Pathology: Not specified - Modality: — - Type: — - Size on 
disk: 25.1 GB - File count: 62 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004942.v1.0.0 - Source: openneuro - OpenNeuro: [ds004942](https://openneuro.org/datasets/ds004942) - NeMAR: [ds004942](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) ## API Reference Use the `DS004942` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004942(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpatialMemory * **Study:** `ds004942` (OpenNeuro) * **Author (year):** `Kieffaber2024` * **Canonical:** — Also importable as: `DS004942`, `Kieffaber2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004942](https://openneuro.org/datasets/ds004942) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004942](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) DOI: [https://doi.org/10.18112/openneuro.ds004942.v1.0.0](https://doi.org/10.18112/openneuro.ds004942.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004942 >>> dataset = DS004942(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004942) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004944: ieeg dataset, 22 subjects *Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding* Access recordings and metadata through EEGDash. **Citation:** Filippo Costa, Niklaus Krayenbühl, Georgia Ramantani, Ece Boran, Kristina König, Johannes Sarnthein (2024). *Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding*. 
[10.18112/openneuro.ds004944.v1.1.0](https://doi.org/10.18112/openneuro.ds004944.v1.1.0) Modality: ieeg Subjects: 22 Recordings: 44 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004944 dataset = DS004944(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004944(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004944( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004944, title = {Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding}, author = {Filippo Costa and Niklaus Krayenbühl and Georgia Ramantani and Ece Boran and Kristina König and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004944.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004944.v1.1.0}, } ``` ## About This Dataset **Overview** This dataset comprises recordings of intraoperative Electrocorticography (ECoG) from 22 patients undergoing resective epilepsy surgery. For each patient, the dataset is organized into pre-resection recording (referred to as SITUATION1A) and post-resection recording (referred to as SITUATION2A). We provide raw ECoG recordings for each patient and a derivative folder that contains all the main processing stages performed with our neuromorphic processing pipeline: [https://doi.org/10.1038/s41467-024-47495-y](https://doi.org/10.1038/s41467-024-47495-y). The pipeline preprocesses ECoG recordings in real-time and performs Asynchronous Delta Modulator (ADM) encoding with a custom BCI2000 module. 
The ADM-encoded data are processed by a hardware Spiking Neural Network (SNN). The SNN-encoded data are then used to detect epileptiform patterns in the ECoG. The code to perform preprocessing and ADM encoding in BCI2000, together with the code to detect epileptiform patterns from SNN-encoded data, is provided at [https://github.com/CostaFilippo/BCI2000_DYNAP-SE.git](https://github.com/CostaFilippo/BCI2000_DYNAP-SE.git). ### View full README In a previous publication, this dataset was analyzed with an offline software algorithm: [https://doi.org/10.1016/j.clinph.2019.07.008](https://doi.org/10.1016/j.clinph.2019.07.008). The annotations of the epileptiform patterns detected with the offline approach are provided at: sub-\*/ses-SITUATION\*/sub-\*_ses-SITUATION\*_task-acute_events.tsv. 
The annotations of the epileptiform patterns detected with the online neuromorphic processing are provided at derivative/sub-\*/ses-SITUATION\*/sub-\*_ses-SITUATION\*_task-EV.csv. **Dataset Structure** The derivative folder is structured as follows: : - sub-\*
> - ses-SITUATION1A > - \*task-BCI.dat > - \*task-ADM.csv > - \*task-SNN.csv > - \*task-EV.csv > - ses-SITUATION2A > - \*task-BCI.dat > - \*task-ADM.csv > - \*task-SNN.csv > - \*task-EV.csv **File Descriptions** - derivative > - *task-BCI.dat:* BCI2000-compatible file containing the ECoG recording. > - *task-ADM.csv:* ADM encoding of the ECoG recording. > - *task-SNN.csv:* SNN encoding of the ECoG recording. > - *task-EV.csv:* Annotations of the detected epileptiform patterns in the ECoG recording. **Data Formats** Details of the neuromorphic processing pipeline can be found at [https://doi.org/10.1038/s41467-024-47495-y](https://doi.org/10.1038/s41467-024-47495-y). **task-BCI.dat** The BCI2000-compatible file contains the raw ECoG recording. It can be streamed in real-time using the ‘FilePlayback’ BCI2000 module. **task-ADM.csv** The ADM file is formatted as follows: : - \_pulseType_: -1 for DN pulse, +1 for UP pulse. - \_pulseTime_: Time at which the pulse occurred. - \_channel_: Channel in which the pulse occurred. - \_band_: 0 for EEG band, 1 for HFO band. **task-SNN.csv** The SNN file is formatted as follows: : - \_time_: Time at which the SNN neuron activated. - \_neuronId_: Number id of the SNN neuron (DYNAP-SE numbering from 0 to 1024). - \_neuronCounter_: Number id of the SNN neuron (sequential numbering from 0 to 40). - \_moduleName_: Population (ACC_4_0; ACC_0_4), band (EEG; HFO) and module number (ch 0-7) of the neuron. - ACC_4_0 = ACC UP - ACC_0_4 = ACC DN - \_moduleId_: Module number (0-7). - \_channelId_: Channel for which the SNN neuron activated. **task-EV.csv** The EV file contains annotations of the detected epileptiform patterns with the following format: : - \_time_: time of the detected event. - \_channelId_: channel id of the detected event. - \_location_: channel name of the detected event. 
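The ADM file layout above can be read with plain CSV tooling. The following is a minimal sketch with made-up rows; the exact header names (`pulseTime`, `pulseType`, `channel`, `band`) are assumptions based on the column descriptions above, so check the actual files before relying on them:

```python
import csv
import io

# Hypothetical ADM file contents for illustration; header names are
# assumed from the column descriptions above.
adm_csv = """pulseTime,pulseType,channel,band
0.001,1,3,0
0.002,-1,3,1
0.004,1,5,1
"""

# Keep only UP pulses (pulseType == +1) in the HFO band (band == 1).
up_hfo = [
    row for row in csv.DictReader(io.StringIO(adm_csv))
    if row["pulseType"] == "1" and row["band"] == "1"
]
print(len(up_hfo))  # 1
```

The same `csv.DictReader` pattern applies to the SNN and EV files, substituting their respective columns.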
**Contact Information** For inquiries or additional information, please contact [Filippo.Costa@usz.ch](mailto:Filippo.Costa@usz.ch) or [Johannes.Sarnthein@usz.ch](mailto:Johannes.Sarnthein@usz.ch) **Acknowledgements** We thank V. Dimakopoulos for help in reformatting the data to BIDS. We acknowledge a grant awarded by the Swiss National Science Foundation (funded by the SNSF 204651 to JS and GI with GR and NK as project partners). The funder had no role in the design or analysis of the study. ## Dataset Information | Dataset ID | `DS004944` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding | | Author (year) | `Costa2024` | | Canonical | `BCI2000_intraop` | | Importable as | `DS004944`, `Costa2024`, `BCI2000_intraop` | | Year | 2024 | | Authors | Filippo Costa, Niklaus Krayenbühl, Georgia Ramantani, Ece Boran, Kristina König, Johannes Sarnthein | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004944.v1.1.0](https://doi.org/10.18112/openneuro.ds004944.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004944) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) | [Source URL](https://openneuro.org/datasets/ds004944) | ### Copy-paste BibTeX ```bibtex @dataset{ds004944, title = {Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding}, author = {Filippo Costa and Niklaus Krayenbühl and Georgia Ramantani and Ece Boran and Kristina König and Johannes Sarnthein}, doi = {10.18112/openneuro.ds004944.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds004944.v1.1.0}, } ``` ## Technical Details - Subjects: 22 - Recordings: 44 - Tasks: 1 - Channels: 3 (10), 6 (6), 5 (6), 4 (5), 23 (3), 20 (2), 19 (2), 25, 22, 10, 15, 28, 27, 2, 17, 11, 21 - Sampling rate (Hz): 2000.0 - 
Duration (hours): 3.0518518055555552 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 451.1 MB - File count: 44 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004944.v1.1.0 - Source: openneuro - OpenNeuro: [ds004944](https://openneuro.org/datasets/ds004944) - NeMAR: [ds004944](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) ## API Reference Use the `DS004944` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding * **Study:** `ds004944` (OpenNeuro) * **Author (year):** `Costa2024` * **Canonical:** `BCI2000_intraop` Also importable as: `DS004944`, `Costa2024`, `BCI2000_intraop`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 22; recordings: 44; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004944](https://openneuro.org/datasets/ds004944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004944](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) DOI: [https://doi.org/10.18112/openneuro.ds004944.v1.1.0](https://doi.org/10.18112/openneuro.ds004944.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004944 >>> dataset = DS004944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004944) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004951: eeg dataset, 11 subjects *Braille letters - EEG* Access recordings and metadata through EEGDash. **Citation:** Marleen Haupt, Monika Graumann, Santani Teng, Carina Kaltenbach, Radoslaw M. Cichy (2024). *Braille letters - EEG*. 
[10.18112/openneuro.ds004951.v1.0.0](https://doi.org/10.18112/openneuro.ds004951.v1.0.0) Modality: eeg Subjects: 11 Recordings: 23 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004951 dataset = DS004951(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004951(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004951( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004951, title = {Braille letters - EEG}, author = {Marleen Haupt and Monika Graumann and Santani Teng and Carina Kaltenbach and Radoslaw M. Cichy}, doi = {10.18112/openneuro.ds004951.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004951.v1.0.0}, } ``` ## About This Dataset This dataset contains the raw EEG data accompanying the paper “The transformation of sensory to perceptual braille letter representations in the visually deprived brain”. Please cite the above paper if you use this data. The dataset includes: BrainVision files (.eeg, .vhdr, .vmrk) for all participants. Please note that for some participants the EEG recording had to be stopped and restarted within a session; in this case, the different files are indicated as separate runs. In addition, some participants completed a second session. The events files contain the onsets, durations, trial types and values for all trials in the corresponding run. Stimuli are Braille letters (B,C,D,L,M,N,V,Z) presented on Braille cells under the left and right index fingers of participants. 
Triggers S1-8 are letters presented to the left hand, triggers S9-16 are letters presented to the right hand. Other triggers: starttrigger = S100; trialonset = S101; stimulusonset = S222; catchtrial = S200; pedalpress_correct = S253; pedalpress_incorrect = S254; endtrigger = S255; For a full description of the paradigm and the employed procedures please see the paper. **References for MNE BIDS conversion** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004951` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Braille letters - EEG | | Author (year) | `Haupt2024_Braille` | | Canonical | `Haupt2025` | | Importable as | `DS004951`, `Haupt2024_Braille`, `Haupt2025` | | Year | 2024 | | Authors | Marleen Haupt, Monika Graumann, Santani Teng, Carina Kaltenbach, Radoslaw M. 
Cichy | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004951.v1.0.0](https://doi.org/10.18112/openneuro.ds004951.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004951) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) | [Source URL](https://openneuro.org/datasets/ds004951) | ### Copy-paste BibTeX ```bibtex @dataset{ds004951, title = {Braille letters - EEG}, author = {Marleen Haupt and Monika Graumann and Santani Teng and Carina Kaltenbach and Radoslaw M. Cichy}, doi = {10.18112/openneuro.ds004951.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004951.v1.0.0}, } ``` ## Technical Details - Subjects: 11 - Recordings: 23 - Tasks: 1 - Channels: 64 (13), 63 (10) - Sampling rate (Hz): 1000.0 - Duration (hours): 25.926832500000003 - Pathology: Other - Modality: Tactile - Type: Learning - Size on disk: 22.0 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004951.v1.0.0 - Source: openneuro - OpenNeuro: [ds004951](https://openneuro.org/datasets/ds004951) - NeMAR: [ds004951](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) ## API Reference Use the `DS004951` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004951(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Braille letters - EEG * **Study:** `ds004951` (OpenNeuro) * **Author (year):** `Haupt2024_Braille` * **Canonical:** `Haupt2025` Also importable as: `DS004951`, `Haupt2024_Braille`, `Haupt2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Other`. Subjects: 11; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004951](https://openneuro.org/datasets/ds004951) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004951](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) DOI: [https://doi.org/10.18112/openneuro.ds004951.v1.0.0](https://doi.org/10.18112/openneuro.ds004951.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004951 >>> dataset = DS004951(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004951) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004952: eeg dataset, 10 subjects *ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding* Access recordings and metadata through EEGDash. **Citation:** Xinyu Mou, Cuilin He, Liwei Tan, Junjie Yu, Huadong Liang, Jianyu Zhang, Tian Yan, Yu-Fang Yang, Ting Xu, Qing Wang, Miao Cao, Zijiao Chen, Chuan-Peng Hu, Xindi Wang, Quanying Liu, Haiyan Wu (2024). *ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding*. [10.18112/openneuro.ds004952.v1.2.2](https://doi.org/10.18112/openneuro.ds004952.v1.2.2) Modality: eeg Subjects: 10 Recordings: 245 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004952 dataset = DS004952(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004952(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004952( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds004952, title = {ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding}, author = {Xinyu Mou and Cuilin He and Liwei Tan and Junjie Yu and Huadong Liang and Jianyu Zhang and Tian Yan and Yu-Fang Yang and Ting Xu and Qing Wang and Miao Cao and Zijiao Chen and Chuan-Peng Hu and Xindi Wang and Quanying Liu and Haiyan Wu}, doi = {10.18112/openneuro.ds004952.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds004952.v1.2.2}, } ``` ## About This Dataset **ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding** **Introduction** “ChineseEEG” (Chinese Linguistic Corpora EEG Dataset) contains high-density EEG data and simultaneous eye-tracking data recorded from 10 participants, each silently reading Chinese text for about 11 hours. This dataset further comprises pre-processed EEG sensor-level data generated under different parameter settings, offering researchers a diverse range of selections. Additionally, we provide embeddings of the Chinese text materials encoded with the BERT-base-chinese model, a pre-trained NLP model for Chinese, aiding researchers in exploring the alignment between text embeddings from NLP models and brain information representations. **Participant Overview** In total, data from 10 participants were used (18-24 years old, mean age 20.68 years, 5 males). No participants reported a neurological or psychiatric history. All participants are right-handed and have normal or corrected-to-normal vision. **Experiment Materials** The experimental materials consist of two novels in Chinese, both in the genre of children’s literature. The first is **The Little Prince** and the second is **Garnett Dream**. For **The Little Prince**, the preface was used as material for the practice reading phase. The main body of the novel was then used for seven sessions in the formal reading phase. The first six sessions each included 4 chapters of the novel, while the seventh session included the last two chapters. For **Garnett Dream**, the first 18 chapters were used for 18 sessions in the formal reading stage, with each session including a complete chapter. To properly present the text on the screen during the experiments, the content of each session was segmented into a series of units, with each unit containing no more than 10 Chinese characters. These segmented contents were saved in Excel (.xlsx) format for subsequent usage. During the experiment, three adjacent units from each session’s content were displayed on the screen in three separate lines, with the middle line highlighted for the participant to read. In summary, a total of 115,233 characters (24,324 in **The Little Prince** and 90,909 in **Garnett Dream**), of which 2,985 characters were unique, were used as experimental stimuli in the ChineseEEG dataset. The original and segmented novels are saved in the `derivatives/novels` folder. 
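The segmentation scheme above (units of at most 10 Chinese characters) can be sketched as follows. This is a simplified illustration that counts every character, whereas the original procedure excluded punctuation from the limit; the sample string is an invented example, not taken from the stimuli files:

```python
def segment_units(text: str, max_len: int = 10) -> list[str]:
    """Split text into presentation units of at most max_len characters.

    Simplified sketch: the actual materials excluded punctuation when
    applying the 10-character limit.
    """
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

# A 16-character example sentence yields a 10-character unit and a 6-character unit.
units = segment_units("小王子从一颗很小的星球来到地球上")
print(units)
```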
The `segmented_novel` folder in the `novels` folder contains two types of Excel files: one type has names ending with “display,” while the other does not carry this suffix. The former stores units that have been segmented; the latter includes units that have been reassembled according to the experimental presentation format. The files ending with “display” are used by the presentation code to achieve effective stimulus presentation in the experiment. The code for generating these two types of files, as well as the code for experimental presentation, can be found in the GitHub repository: [https://github.com/ncclabsustech/Chinese_reading_task_eeg_processing](https://github.com/ncclabsustech/Chinese_reading_task_eeg_processing). **Experiment Procedures** Participants were tasked with reading a novel and were required to keep their heads still and their gaze on the highlighted (red) Chinese characters moving across the screen, reading at a pace set by the program. They were required to read an entire novel in multiple runs within a single session. Each run is divided into two phases: the eye-tracker calibration phase and the reading phase. The calibration phase comes at the beginning of each run and requires participants to keep their gaze on a fixation point, which sequentially appeared at the four corners and the center of the screen. In the reading phase, the screen initially displayed the serial number of the current chapter. Subsequently, the text appeared with three lines per page, ensuring each line contained no more than ten Chinese characters (excluding punctuation). On each page, the middle line was highlighted as the focal point, while the upper and lower lines were displayed with reduced intensity as the background. Each character in the middle line was sequentially highlighted in red for 0.35 s, and participants were required to read the novel content following the highlighted cues. 
For detailed information about the experiment settings and procedures, please refer to our paper at [https://doi.org/10.1101/2024.02.08.579481](https://doi.org/10.1101/2024.02.08.579481). **Markers** To precisely co-register EEG segments with individual characters during the experiment, we marked the EEG data with triggers. - EYES: Eyetracker starts to record - EYEE: Eyetracker stops recording - CALS: Eyetracker calibration starts - CALE: Eyetracker calibration stops - BEGN: EGI starts to record - STOP: EGI stops recording - CHxx: Beginning of a specific chapter (numbers correspond to chapters) - ROWS: Beginning of a row - ROWE: End of a row - PRES: Beginning of the preface - PREE: End of the preface **Data Record** The raw EEG data has a sampling rate of 1 kHz, while the filtered and pre-processed data have a sampling rate of 256 Hz. **Data Structure** The dataset is organized following the EEG-BIDS specification using the MNE-BIDS package. The dataset contains some regular BIDS files, 10 participants’ data folders, and a derivatives folder. The stand-alone files offer an overview of the dataset: i) dataset_description.json is a JSON file describing the dataset, such as its name, dataset type, and authors; ii) participants.tsv contains participants’ information, such as age, sex, and handedness; iii) participants.json describes the column attributes in participants.tsv; iv) README.md contains a detailed introduction to the dataset. Each participant’s folder contains two folders named ses-LittlePrince and ses-GarnettDream, which store the data of this participant reading the two novels, respectively. Each of the two folders contains a folder eeg and one file sub-xx_scans.tsv. The tsv file contains information about the scanning time of each file. The eeg folder contains the source raw EEG data of several runs, channels, and marker events files. 
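The marker scheme listed under **Markers** above can be captured as a small lookup table. `describe_marker` is a hypothetical helper for illustration only; it is not part of the dataset or of EEGDash:

```python
# Lookup table for the fixed ChineseEEG trigger codes described above.
MARKERS = {
    "EYES": "Eyetracker starts to record",
    "EYEE": "Eyetracker stops recording",
    "CALS": "Eyetracker calibration starts",
    "CALE": "Eyetracker calibration stops",
    "BEGN": "EGI starts to record",
    "STOP": "EGI stops recording",
    "ROWS": "Beginning of a row",
    "ROWE": "End of a row",
    "PRES": "Beginning of the preface",
    "PREE": "End of the preface",
}

def describe_marker(code: str) -> str:
    """Map a trigger code to its meaning; CHxx codes carry the chapter number."""
    if code.startswith("CH") and code[2:].isdigit():
        return f"Beginning of chapter {int(code[2:])}"
    return MARKERS.get(code, f"Unknown marker: {code}")
```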
Each run includes an eeg.json file, which encompasses detailed information for that run, such as the sampling rate and the number of channels. Events are stored in events.tsv with onset and event ID. The EEG data is converted from the raw metafile format (.mff file) to BrainVision format (.vhdr, .vmrk and .eeg files) since EEG-BIDS is not officially compatible with the .mff format. The derivatives folder contains six folders: eyetracking_data, filtered_0.5_80, filtered_0.5_30, preproc, novels, and text_embeddings. The eyetracking_data folder contains all the eye-tracking data. Each eye-tracking recording is packaged as a .zip file, with eye-movement trajectories and other parameters, such as the sampling rate, saved in separate files. The filtered_0.5_80 and filtered_0.5_30 folders contain data that has been processed up to the pre-processing step of 0.5-80 Hz and 0.5-30 Hz band-pass filtering, respectively. This data is suitable for researchers who have specific requirements and want to customize the subsequent pre-processing steps like ICA and re-referencing. The preproc folder contains minimally pre-processed EEG data processed with the whole pre-processing pipeline. It includes four additional types of files compared to the participants’ raw data folders in the root directory: i) bad_channels.json contains the bad channels marked during the bad-channel rejection phase. ii) ica_components.npy stores the values of all independent components in the ICA phase. iii) ica_components.json lists the independent components excluded in ICA (the ICA random seed is fixed, allowing for reproducible results). iv) ica_components_topography.png is a picture of the topographic maps of all independent components, where the excluded components are labeled in grey. The novels folder contains the original and segmented text stimuli materials. 
The original novels are saved in .txt format and the segmented novels corresponding to each experimental run are saved in Excel (.xlsx) files. The text_embeddings folder contains embeddings of the two novels. The embeddings corresponding to each experimental run are stored in NumPy (.npy) files. For an overview of the structure, please refer to our paper at [https://doi.org/10.1101/2024.02.08.579481](https://doi.org/10.1101/2024.02.08.579481). **Pre-processing** For the pre-processed data in the derivatives folder, we performed only minimal pre-processing to retain most of the useful information. The pre-processing steps include data segmentation, downsampling, filtering, bad channel interpolation, ICA, and average re-referencing. During the data segmentation phase, we retained only data from the formal reading phase of the experiment. Based on the event markers recorded during data collection, we segmented the data, removing sections irrelevant to the formal experiment such as calibration and preface reading. To minimize the impact of subsequent filtering steps on the beginning and end of the signal, an additional 10 seconds of data was retained before the start of the formal reading phase. Subsequently, the signal was downsampled to 256 Hz. Following this, a 50 Hz notch filter was applied to remove powerline noise from the signal. Next, we applied a band-pass overlap-add FIR filter to the signal to eliminate low-frequency direct-current components and high-frequency noise. Two versions of filtered data are offered: the first has a filter band of 0.5-80 Hz and the second a filter band of 0.5-30 Hz. Researchers can choose the appropriate version based on their specific needs. After filtering, we performed an interpolation of bad channels. Independent Component Analysis (ICA) was then applied to the data, utilizing the infomax algorithm. 
The number of independent components was set to 20, ensuring that they contain the majority of the information while not being so numerous as to increase the burden of manual processing. We excluded obvious noise components such as electrooculography (EOG) and electrocardiogram (ECG) artifacts. Finally, the data was re-referenced using the average method. Detailed information on the pre-processing can be found in our paper at [https://doi.org/10.1101/2024.02.08.579481](https://doi.org/10.1101/2024.02.08.579481). **Text Embeddings** The dataset provides embeddings of the two novels calculated using the pre-trained language model BERT-base-Chinese. During the experimental procedure, each displayed line of text contains n Chinese characters. The BERT-base-Chinese model processes these n Chinese characters, yielding an embedding of size (n, 768), where n represents the number of Chinese characters and 768 the dimensionality of the embedding. To ensure that displayed lines of varying length have embeddings of the same shape, the embeddings are averaged over the first dimension, standardizing the embedding size to (1, 768) for each instance. **Missing Data** Due to technical reasons, some raw data were lost: - EEG: - Sub-09 ses-LittlePrince run 1-3 - Sub-14 ses-GranettDream run 9 - Sub-15 ses-GranettDream run 12 - Sub-07 ses-GranettDream run 18 (substituted by run 19; note that Sub-07 ses-GranettDream run 19 read chapter 19 of GranettDream instead of chapter 18) 
- Eyetracking data: - Sub-08 ses-LittlePrince run 1-2 Due to poor data quality or other reasons, some pre-processed data were lost: - In the 0.5-30 Hz filtered version: - Sub-15 ses-LittlePrince run 1-7 - Sub-15 ses-GranettDream run 1, 2, 3, 7, 10, 16 - In the 0.5-80 Hz filtered version: - Sub-13 ses-LittlePrince run 4, 5 - Sub-15 ses-LittlePrince run 1-7 - Sub-13 ses-GranettDream run 14 - Sub-15 ses-GranettDream run 1, 2, 3, 7, 10, 16 **Usage Note** If you want to know more about the dataset, including the detailed parameter settings of our pre-processing steps or how to align the text with EEG segments, please refer to our paper at [https://doi.org/10.1101/2024.02.08.579481](https://doi.org/10.1101/2024.02.08.579481). You can find the relevant code for text presentation, data processing, and other functions in the GitHub repository: [https://github.com/ncclabsustech/Chinese_reading_task_eeg_processing](https://github.com/ncclabsustech/Chinese_reading_task_eeg_processing). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS004952` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding | | Author (year) | `Mou2024` | | Canonical | — | | Importable as | `DS004952`, `Mou2024` | | Year | 2024 | | Authors | Xinyu Mou, Cuilin He, Liwei Tan, Junjie Yu, Huadong Liang, Jianyu Zhang, Tian Yan, Yu-Fang Yang, Ting Xu, Qing Wang, Miao Cao, Zijiao Chen, Chuan-Peng Hu, Xindi Wang, Quanying Liu, Haiyan Wu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004952.v1.2.2](https://doi.org/10.18112/openneuro.ds004952.v1.2.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004952) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) | [Source URL](https://openneuro.org/datasets/ds004952) | ### Copy-paste BibTeX ```bibtex @dataset{ds004952, title = {ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding}, author = {Xinyu Mou and Cuilin He and Liwei Tan and Junjie Yu and Huadong Liang and Jianyu Zhang and Tian Yan and Yu-Fang Yang and Ting Xu and Qing Wang and Miao Cao and Zijiao Chen and Chuan-Peng Hu and Xindi Wang and Quanying Liu and Haiyan Wu}, doi = {10.18112/openneuro.ds004952.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds004952.v1.2.2}, } ``` ## Technical Details - Subjects: 10 - Recordings: 245 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 1000.0 - Duration (hours): 121.74074361111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 207.1 GB - File count: 245 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004952.v1.2.2 - Source: openneuro - OpenNeuro: 
[ds004952](https://openneuro.org/datasets/ds004952) - NeMAR: [ds004952](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) ## API Reference Use the `DS004952` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004952(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding * **Study:** `ds004952` (OpenNeuro) * **Author (year):** `Mou2024` * **Canonical:** — Also importable as: `DS004952`, `Mou2024`. Modality: `eeg`. Subjects: 10; recordings: 245; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004952](https://openneuro.org/datasets/ds004952) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004952](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) DOI: [https://doi.org/10.18112/openneuro.ds004952.v1.2.2](https://doi.org/10.18112/openneuro.ds004952.v1.2.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004952 >>> dataset = DS004952(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004952) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004973: fnirs dataset, 20 subjects *An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios* Access recordings and metadata through EEGDash. **Citation:** Xiaofei Zhang, Qiaoya Wang, Jun Li, Xiaorong Gao, Bowen Li, Bingbing Nie, Jianqiang Wang, Ziyuan Zhou, Yingkai Yang, Hong Wang (2024). *An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios*. 
[10.18112/openneuro.ds004973.v1.0.1](https://doi.org/10.18112/openneuro.ds004973.v1.0.1) Modality: fnirs Subjects: 20 Recordings: 222 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004973 dataset = DS004973(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004973(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004973( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004973, title = {An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios}, author = {Xiaofei Zhang and Qiaoya Wang and Jun Li and Xiaorong Gao and Bowen Li and Bingbing Nie and Jianqiang Wang and Ziyuan Zhou and Yingkai Yang and Hong Wang}, doi = {10.18112/openneuro.ds004973.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004973.v1.0.1}, } ``` ## About This Dataset The fNIRS dataset focuses on prefrontal cortex activity in fourteen types of highly automated driving scenarios, considering age, sex, and driving experience, and contains data from an 8-channel fNIRS device along with the driving scenarios. A total of 20 participants completed this driving simulator experiment, and each participant needed to finish 12 tasks. Their ages range from 21 to 46 years, with 5 females and 15 males. Our objective is to provide data support for finding differences in prefrontal cortex activity between low-risk and high-risk episodes by quantifying the risk of driving scenarios. 
In the future, this research may provide a way to prevent potential hazards and improve SOTIF (safety of the intended functionality) through brain-computer interface technology and fNIRS. **Notes** - A total of 240 tasks were to be completed by the 20 participants; the data for 20 of those tasks were removed because they were not recorded correctly. - Updated subjective evaluations of the danger level of each VTD segment are provided in the file “participants.tsv”. **How to cite?** doi:10.18112/openneuro.ds004973.v1.0.0 ## Dataset Information | Dataset ID | `DS004973` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios | | Author (year) | `Zhang2024_driving_risk_cognition` | | Canonical | — | | Importable as | `DS004973`, `Zhang2024_driving_risk_cognition` | | Year | 2024 | | Authors | Xiaofei Zhang, Qiaoya Wang, Jun Li, Xiaorong Gao, Bowen Li, Bingbing Nie, Jianqiang Wang, Ziyuan Zhou, Yingkai Yang, Hong Wang | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004973.v1.0.1](https://doi.org/10.18112/openneuro.ds004973.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004973) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) | [Source URL](https://openneuro.org/datasets/ds004973) | ### Copy-paste BibTeX ```bibtex @dataset{ds004973, title = {An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios}, author = {Xiaofei Zhang and Qiaoya Wang and Jun Li and Xiaorong Gao and Bowen Li and Bingbing Nie and Jianqiang Wang and Ziyuan Zhou and Yingkai Yang and Hong Wang}, doi = {10.18112/openneuro.ds004973.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds004973.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 222 -
Tasks: 12 - Channels: 16 - Sampling rate (Hz): 50.0 - Duration (hours): 50.14391111111111 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 2.3 GB - File count: 222 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004973.v1.0.1 - Source: openneuro - OpenNeuro: [ds004973](https://openneuro.org/datasets/ds004973) - NeMAR: [ds004973](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) ## API Reference Use the `DS004973` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004973(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios * **Study:** `ds004973` (OpenNeuro) * **Author (year):** `Zhang2024_driving_risk_cognition` * **Canonical:** — Also importable as: `DS004973`, `Zhang2024_driving_risk_cognition`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 222; tasks: 12. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004973](https://openneuro.org/datasets/ds004973) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004973](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) DOI: [https://doi.org/10.18112/openneuro.ds004973.v1.0.1](https://doi.org/10.18112/openneuro.ds004973.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004973 >>> dataset = DS004973(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004973) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) * [eegdash.dataset.DS005929](eegdash.dataset.DS005929.md) # DS004977: ieeg dataset, 4 subjects *CARLA: Adjusted common average referencing for cortico-cortical evoked potential data* Access recordings and metadata through EEGDash. **Citation:** Harvey Huang, Gabriela Ojeda Valencia, Nicholas M Gregg, Gamaleldin M Osman, Morgan N Montoya, Gregory A Worrell, Kai J Miller, Dora Hermes (2024). *CARLA: Adjusted common average referencing for cortico-cortical evoked potential data*. 
[10.18112/openneuro.ds004977.v1.2.0](https://doi.org/10.18112/openneuro.ds004977.v1.2.0) Modality: ieeg Subjects: 4 Recordings: 6 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004977 dataset = DS004977(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004977(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004977( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004977, title = {CARLA: Adjusted common average referencing for cortico-cortical evoked potential data}, author = {Harvey Huang and Gabriela Ojeda Valencia and Nicholas M Gregg and Gamaleldin M Osman and Morgan N Montoya and Gregory A Worrell and Kai J Miller and Dora Hermes}, doi = {10.18112/openneuro.ds004977.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004977.v1.2.0}, } ``` ## About This Dataset **CARLA: Adjusted common average referencing for cortico-cortical evoked potential data** This dataset contains intracranial EEG recordings from four patients during single pulse electrical stimulation as described in: \* H Huang, G Ojeda Valencia, NM Gregg, GM Osman, MN Montoya, GA Worrell, KJ Miller, and D Hermes. (2024). CARLA: Adjusted common average referencing for cortico-cortical evoked potential data. Journal of Neuroscience Methods, 110153. DOI: [https://doi.org/10.1016/j.jneumeth.2024.110153](https://doi.org/10.1016/j.jneumeth.2024.110153). 
Currently, this dataset contains the raw data needed to generate all results EXCEPT for those pertaining to figures 7 and 8 (unavailable data samples are censored with 0). The complete data are currently being used to answer other scientific questions, and will be released in time with other manuscripts. Please cite this work when using the data. These data were recorded at the Mayo Clinic in Rochester, MN, as part of the NIH Brain Initiative supported project R01 MH122258 “CRCNS: Processing speed in the human connectome across the lifespan”. Research reported in this publication was supported by the National Institute Of Mental Health of the National Institutes of Health under Award Number R01MH122258, by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number T32GM145408, and by the American Epilepsy Society under award number 937450. The project was also supported by the Mayo Clinic DERIVE Office and the Mayo Clinic Center for Biomedical Discovery. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The data were collected by Harvey Huang, Dora Hermes, Nicholas M. Gregg, Gamaleldin M. Osman, and Cindy Nelson. The BIDS formatting was performed by Harvey Huang, Dora Hermes, Gabriela Ojeda Valencia, and Morgan Montoya. The iEEG data collection was facilitated by Gregory A. Worrell and Kai J. Miller. Data can be analyzed using the Matlab code at: \* [https://github.com/hharveygit/CARLA_JNM](https://github.com/hharveygit/CARLA_JNM) **Format** Data are formatted according to BIDS version 1.14.0 **Single pulse stimulation** The patients were resting in the hospital bed while single-pulse stimulation was performed at a frequency of ~0.2 Hz. The stimulation was biphasic, with a pulse duration of 200 microseconds and an amplitude of 6 mA.
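As a rough illustration of the stimulation protocol just described, pulses arrive about once every 5 s (~0.2 Hz). A minimal sketch, in plain Python with hypothetical helper names (not the eegdash or MNE API), of where the nominal pulse onsets and cortico-cortical evoked potential (CCEP) epoch windows fall at this dataset's 4800 Hz sampling rate:

```python
# Illustrative sketch only: nominal single-pulse stimulation timing for a
# recording sampled at 4800 Hz with pulses delivered at ~0.2 Hz.
# Helper names are hypothetical, not part of eegdash.
SFREQ = 4800.0      # sampling rate (Hz), from the dataset metadata
STIM_RATE = 0.2     # pulses per second -> one pulse every 5 s

def pulse_onsets(duration_s: float) -> list[int]:
    """Nominal sample index of each stimulation onset within a recording."""
    interval_s = 1.0 / STIM_RATE
    n_pulses = int(duration_s // interval_s)
    return [round(k * interval_s * SFREQ) for k in range(n_pulses)]

def epoch_window(onset: int, tmin_s: float = -0.5, tmax_s: float = 1.0) -> tuple[int, int]:
    """Sample bounds of an epoch around one pulse, e.g. for a CCEP average."""
    return onset + round(tmin_s * SFREQ), onset + round(tmax_s * SFREQ)

# A 60 s recording at ~0.2 Hz yields 12 nominal pulses, 24000 samples apart.
onsets = pulse_onsets(60.0)
```

In practice the actual pulse times come from the BIDS events files, not from this nominal schedule; the sketch only conveys the scale of the timing.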
**Contact** Please contact Harvey Huang ([huang.harvey@mayo.edu](mailto:huang.harvey@mayo.edu)) or Dora Hermes ([hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu)) for questions. ## Dataset Information | Dataset ID | `DS004977` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CARLA: Adjusted common average referencing for cortico-cortical evoked potential data | | Author (year) | `Huang2024` | | Canonical | `CARLA` | | Importable as | `DS004977`, `Huang2024`, `CARLA` | | Year | 2024 | | Authors | Harvey Huang, Gabriela Ojeda Valencia, Nicholas M Gregg, Gamaleldin M Osman, Morgan N Montoya, Gregory A Worrell, Kai J Miller, Dora Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004977.v1.2.0](https://doi.org/10.18112/openneuro.ds004977.v1.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004977) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) | [Source URL](https://openneuro.org/datasets/ds004977) | ### Copy-paste BibTeX ```bibtex @dataset{ds004977, title = {CARLA: Adjusted common average referencing for cortico-cortical evoked potential data}, author = {Harvey Huang and Gabriela Ojeda Valencia and Nicholas M Gregg and Gamaleldin M Osman and Morgan N Montoya and Gregory A Worrell and Kai J Miller and Dora Hermes}, doi = {10.18112/openneuro.ds004977.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds004977.v1.2.0}, } ``` ## Technical Details - Subjects: 4 - Recordings: 6 - Tasks: 1 - Channels: 273 (4), 152, 232 - Sampling rate (Hz): 4800.0 - Duration (hours): 0.8750833333333332 - Pathology: Epilepsy - Modality: Other - Type: Other - Size on disk: 1.5 GB - File count: 6 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004977.v1.2.0 - Source: openneuro - OpenNeuro: [ds004977](https://openneuro.org/datasets/ds004977) - NeMAR: 
[ds004977](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) ## API Reference Use the `DS004977` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004977(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CARLA: Adjusted common average referencing for cortico-cortical evoked potential data * **Study:** `ds004977` (OpenNeuro) * **Author (year):** `Huang2024` * **Canonical:** `CARLA` Also importable as: `DS004977`, `Huang2024`, `CARLA`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 4; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004977](https://openneuro.org/datasets/ds004977) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004977](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) DOI: [https://doi.org/10.18112/openneuro.ds004977.v1.2.0](https://doi.org/10.18112/openneuro.ds004977.v1.2.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004977 >>> dataset = DS004977(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004977) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004980: eeg dataset, 17 subjects *EEG data set for a architectural affordances task* Access recordings and metadata through EEGDash. **Citation:** Wang,S., Oliveira,G.S., Djebbara,Z, Gramann, K. (2024). *EEG data set for a architectural affordances task*. 
[10.18112/openneuro.ds004980.v1.0.0](https://doi.org/10.18112/openneuro.ds004980.v1.0.0) Modality: eeg Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004980 dataset = DS004980(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004980(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004980( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004980, title = {EEG data set for a architectural affordances task}, author = {Wang,S. and Oliveira,G.S. and Djebbara,Z and Gramann, K.}, doi = {10.18112/openneuro.ds004980.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004980.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS004980` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG data set for a architectural affordances task | | Author (year) | `Wang2024_architectural_affordances` | | Canonical | — | | Importable as | `DS004980`, `Wang2024_architectural_affordances` | | Year | 2024 | | Authors | Wang,S., Oliveira,G.S., Djebbara,Z, Gramann, K. 
| | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004980.v1.0.0](https://doi.org/10.18112/openneuro.ds004980.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004980) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) | [Source URL](https://openneuro.org/datasets/ds004980) | ### Copy-paste BibTeX ```bibtex @dataset{ds004980, title = {EEG data set for a architectural affordances task}, author = {Wang,S. and Oliveira,G.S. and Djebbara,Z and Gramann, K.}, doi = {10.18112/openneuro.ds004980.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds004980.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 nominal (4 recordings at exactly 500.0; the remaining 13 report effective rates between 499.9912 and 499.9924) - Duration (hours): 36.85 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 15.8 GB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004980.v1.0.0 - Source: openneuro - OpenNeuro: [ds004980](https://openneuro.org/datasets/ds004980) - NeMAR: [ds004980](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) ## API Reference Use the `DS004980` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004980(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data set for a architectural affordances task * **Study:** `ds004980` (OpenNeuro) * **Author (year):** `Wang2024_architectural_affordances` * **Canonical:** — Also importable as: `DS004980`, `Wang2024_architectural_affordances`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004980](https://openneuro.org/datasets/ds004980) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004980](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) DOI: [https://doi.org/10.18112/openneuro.ds004980.v1.0.0](https://doi.org/10.18112/openneuro.ds004980.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004980 >>> dataset = DS004980(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004980) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004993: ieeg dataset, 3 subjects *WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS* Access recordings and metadata through EEGDash. **Citation:** Liberty S. Hamilton, Maansi Desai, Alyssa Field (2024). *WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS*. [10.18112/openneuro.ds004993.v1.1.2](https://doi.org/10.18112/openneuro.ds004993.v1.1.2) Modality: ieeg Subjects: 3 Recordings: 3 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004993 dataset = DS004993(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004993(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004993( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004993, title = {WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS}, author = {Liberty S. 
Hamilton and Maansi Desai and Alyssa Field}, doi = {10.18112/openneuro.ds004993.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004993.v1.1.2}, } ``` ## About This Dataset **WIRED ICM TUTORIAL DATA** *Contributors:* Liberty S. Hamilton, PhD, Maansi Desai, PhD, Alyssa Field, MEd *Email:* [liberty.hamilton@austin.utexas.edu](mailto:liberty.hamilton@austin.utexas.edu) This is a sample BIDS dataset for the WIRED ICM course in Paris, France in March 2024. This contains intracranial recordings collected by the Hamilton Lab at the University of Texas at Austin. These recordings include examples of evoked data during natural listening tasks along with some examples of seizure-related activity and vagus nerve stimulator (VNS) artifact for illustrative purposes. All procedures were approved by the University of Texas at Austin Institutional Review Board. *Funding:* Support was provided by the National Institutes of Health National Institute on Deafness and Other Communication Disorders (R01 DC018579, to LSH). **Tasks:** 1. `movietrailers` - this task involves patients listening to movie clips from various Pixar, Disney, Dreamworks, and other movies. We have published previously using these stimuli in EEG (Desai et al. 2021). 2. `timit4` and `timit5` - these tasks involve patients listening to subsets of the TIMIT acoustic phonetic corpus (Garofolo et al. 1993). The events provided in the dataset mark the onset and offset of each sentence. In `timit4`, each sentence is unique, while in `timit5`, 10 sentences are repeated 10 times. This is the same stimulus set used in Mesgarani et al. 2014, Hamilton et al. 2018, Hamilton et al. 2021, and Desai et al. 2021. **Notes:** \* The movie trailer data for subject W1 were acquired at the start of a generalized tonic clonic seizure, and the research session was terminated. Large, synchronized spikes can be observed on multiple channels on the right parietal grid throughout the iEEG data.
\* The TIMIT data for subject W2 is an example of fairly clean sentence evoked data. \* The TIMIT data for subject W3 is a good example of on-and-off VNS artifact. The VNS has a strong artifact at ~20 Hz. Some patients with epilepsy may have these implanted devices to help control their seizures, so you should know how to spot artifact-related activity. Despite these artifacts, the evoked responses to sentences are quite strong. \* The acquisition number (B3, B8, etc) has to do with the order in which this task was run relative to other tasks in an iEEG session, and can be ignored here. **References** \* Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) \* Desai, M., Holder, J., Villarreal, C., Clark, N., Hoang, B., & Hamilton, L. S. (2021). Generalizable EEG encoding models with naturalistic audiovisual stimuli. Journal of Neuroscience, 41(43), 8946-8962. \* Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., & Pallett, D. S. (1993). DARPA TIMIT acoustic-phonetic continous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93, 27403. \* Hamilton, L. S., Edwards, E., & Chang, E. F. (2018). A spatial map of onset and sustained responses to speech in the human superior temporal gyrus. Current Biology, 28(12), 1860-1871. \* Hamilton, L. S., Oganian, Y., Hall, J., & Chang, E. F. (2021). Parallel and distributed encoding of speech across human auditory cortex. Cell, 184(18), 4626-4639. \* Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). 
iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) \* Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343(6174), 1006-1010. ## Dataset Information | Dataset ID | `DS004993` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS | | Author (year) | `Hamilton2024` | | Canonical | `WIRED_ICM` | | Importable as | `DS004993`, `Hamilton2024`, `WIRED_ICM` | | Year | 2024 | | Authors | Liberty S. Hamilton, Maansi Desai, Alyssa Field | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004993.v1.1.2](https://doi.org/10.18112/openneuro.ds004993.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004993) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) | [Source URL](https://openneuro.org/datasets/ds004993) | ### Copy-paste BibTeX ```bibtex @dataset{ds004993, title = {WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS}, author = {Liberty S. 
Hamilton and Maansi Desai and Alyssa Field}, doi = {10.18112/openneuro.ds004993.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds004993.v1.1.2}, } ``` ## Technical Details - Subjects: 3 - Recordings: 3 - Tasks: 3 - Channels: 148, 106, 160 - Sampling rate (Hz): 512.0 (2), 2048.0 - Duration (hours): 0.2299173990885416 - Pathology: Epilepsy - Modality: Auditory - Type: Perception - Size on disk: 305.1 MB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004993.v1.1.2 - Source: openneuro - OpenNeuro: [ds004993](https://openneuro.org/datasets/ds004993) - NeMAR: [ds004993](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) ## API Reference Use the `DS004993` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004993(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS * **Study:** `ds004993` (OpenNeuro) * **Author (year):** `Hamilton2024` * **Canonical:** `WIRED_ICM` Also importable as: `DS004993`, `Hamilton2024`, `WIRED_ICM`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 3; recordings: 3; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004993](https://openneuro.org/datasets/ds004993) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004993](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) DOI: [https://doi.org/10.18112/openneuro.ds004993.v1.1.2](https://doi.org/10.18112/openneuro.ds004993.v1.1.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004993 >>> dataset = DS004993(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004993) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS004995: eeg dataset, 20 subjects *The Time-Course of Food Representation in the Human Brain* Access recordings and metadata through EEGDash. **Citation:** Denise Moerel, James Psihoyos, Thomas A. Carlson (2024). *The Time-Course of Food Representation in the Human Brain*. 
[10.18112/openneuro.ds004995.v1.0.2](https://doi.org/10.18112/openneuro.ds004995.v1.0.2) Modality: eeg Subjects: 20 Recordings: 20 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004995 dataset = DS004995(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004995(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004995( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004995, title = {The Time-Course of Food Representation in the Human Brain}, author = {Denise Moerel and James Psihoyos and Thomas A. Carlson}, doi = {10.18112/openneuro.ds004995.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004995.v1.0.2}, } ``` ## About This Dataset The main folder contains the raw EEG data in standard BIDS format. See references. Code and figures: [https://doi.org/10.17605/OSF.IO/PWC4K](https://doi.org/10.17605/OSF.IO/PWC4K) Manuscript: [https://doi.org/10.1101/2023.06.06.543985](https://doi.org/10.1101/2023.06.06.543985) References: Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T.
L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004995` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The Time-Course of Food Representation in the Human Brain | | Author (year) | `Moerel2024` | | Canonical | `Moerel2023` | | Importable as | `DS004995`, `Moerel2024`, `Moerel2023` | | Year | 2024 | | Authors | Denise Moerel, James Psihoyos, Thomas A. Carlson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004995.v1.0.2](https://doi.org/10.18112/openneuro.ds004995.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004995) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) | [Source URL](https://openneuro.org/datasets/ds004995) | ### Copy-paste BibTeX ```bibtex @dataset{ds004995, title = {The Time-Course of Food Representation in the Human Brain}, author = {Denise Moerel and James Psihoyos and Thomas A. 
Carlson}, doi = {10.18112/openneuro.ds004995.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds004995.v1.0.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 127 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.189522222222223 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 27.6 GB - File count: 20 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004995.v1.0.2 - Source: openneuro - OpenNeuro: [ds004995](https://openneuro.org/datasets/ds004995) - NeMAR: [ds004995](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) ## API Reference Use the `DS004995` class to access this dataset programmatically. ### *class* eegdash.dataset.DS004995(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Time-Course of Food Representation in the Human Brain * **Study:** `ds004995` (OpenNeuro) * **Author (year):** `Moerel2024` * **Canonical:** `Moerel2023` Also importable as: `DS004995`, `Moerel2024`, `Moerel2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004995](https://openneuro.org/datasets/ds004995) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004995](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) DOI: [https://doi.org/10.18112/openneuro.ds004995.v1.0.2](https://doi.org/10.18112/openneuro.ds004995.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004995 >>> dataset = DS004995(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004995) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS004998: meg dataset, 20 subjects *Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus.* Access recordings and metadata through EEGDash. 
**Citation:** Fayed Rassoulou, Alexandra Steina, Christian J. Hartmann, Jan Vesper, Markus Butz, Alfons Schnitzler, Jan Hirschmann (2024). *Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus.*. [10.18112/openneuro.ds004998.v1.2.2](https://doi.org/10.18112/openneuro.ds004998.v1.2.2) Modality: meg Subjects: 20 Recordings: 145 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS004998 dataset = DS004998(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS004998(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS004998( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds004998, title = {Exploring the electrophysiology of Parkinson's disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus.}, author = {Fayed Rassoulou and Alexandra Steina and Christian J. Hartmann and Jan Vesper and Markus Butz and Alfons Schnitzler and Jan Hirschmann}, doi = {10.18112/openneuro.ds004998.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds004998.v1.2.2}, } ``` ## About This Dataset This dataset contains data from externalized DBS patients undergoing simultaneous MEG - STN LFP recordings with (MedOn) and without (MedOff) dopaminergic medication. It has two movement conditions: 1) 5 min of rest followed by static forearm extension (hold) and 2) 5 min of rest followed by self-paced fist-clenching (move).
The movement parts contain pauses. Some patients were recorded in resting-state only (rest). The project aimed to understand the neurophysiology of basal ganglia-cortex loops and its modulation by movement and medication. Starter code is available here: [https://github.com/Fayed-Rsl/RHM_preprocessing](https://github.com/Fayed-Rsl/RHM_preprocessing) **References** Rassoulou, F., Steina, A., Hartmann, C. J., Vesper, J., Butz, M., Schnitzler, A., & Hirschmann, J. (2024). Exploring the electrophysiology of Parkinson’s disease with magnetoencephalography and deep brain recordings. Scientific Data, 11(1), 889. [https://doi.org/10.1038/s41597-024-03768-1](https://doi.org/10.1038/s41597-024-03768-1) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS004998` | |----------------|------------| | Title | Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus.
| | Author (year) | `Rassoulou2024` | | Canonical | — | | Importable as | `DS004998`, `Rassoulou2024` | | Year | 2024 | | Authors | Fayed Rassoulou, Alexandra Steina, Christian J. Hartmann, Jan Vesper, Markus Butz, Alfons Schnitzler, Jan Hirschmann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds004998.v1.2.2](https://doi.org/10.18112/openneuro.ds004998.v1.2.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds004998) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) | [Source URL](https://openneuro.org/datasets/ds004998) | ### Copy-paste BibTeX ```bibtex @dataset{ds004998, title = {Exploring the electrophysiology of Parkinson's disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus.}, author = {Fayed Rassoulou and Alexandra Steina and Christian J. Hartmann and Jan Vesper and Markus Butz and Alfons Schnitzler and Jan Hirschmann}, doi = {10.18112/openneuro.ds004998.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds004998.v1.2.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 145 - Tasks: 6 - Channels: 323 (122), 326 (6), 333 (6), 324 (6), 347 (4), 319 - Sampling rate (Hz): 2000.0 - Duration (hours): 10.766654722222222 - Pathology: Parkinson’s - Modality: Motor - Type: Motor - Size on disk: 161.8 GB - File count: 145 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds004998.v1.2.2 - Source: openneuro - OpenNeuro: [ds004998](https://openneuro.org/datasets/ds004998) - NeMAR: [ds004998](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) ## API Reference Use the `DS004998` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS004998(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus. * **Study:** `ds004998` (OpenNeuro) * **Author (year):** `Rassoulou2024` * **Canonical:** — Also importable as: `DS004998`, `Rassoulou2024`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Parkinson's`. Subjects: 20; recordings: 145; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
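The note above describes only in prose how `query` is ANDed with the fixed dataset filter and why it must not contain the key `dataset`. As a rough sketch of those semantics in plain Python (the merge rule and the tiny `$in` evaluator below are illustrative assumptions, not eegdash internals; the record fields are toy examples):

```python
# Illustrative sketch of the documented query semantics -- NOT eegdash
# internals. Record field names are toy examples.
def merge_query(dataset_id, user_query=None):
    """AND a user query with the fixed dataset selector."""
    user_query = user_query or {}
    if "dataset" in user_query:
        # Mirrors the documented restriction on the `query` parameter.
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

def matches(record, query):
    """Evaluate a tiny subset of MongoDB operators (equality and $in)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds004998", "subject": "01", "task": "rest"},
    {"dataset": "ds004998", "subject": "02", "task": "hold"},
    {"dataset": "ds004993", "subject": "01", "task": "rest"},
]
q = merge_query("ds004998", {"subject": {"$in": ["01"]}})
selected = [r for r in records if matches(r, q)]  # ds004998 / subject 01 only
```

Any field listed in `ALLOWED_QUERY_FIELDS` can be constrained this way; `dataset` itself is reserved for the dataset selector.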
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004998](https://openneuro.org/datasets/ds004998) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004998](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) DOI: [https://doi.org/10.18112/openneuro.ds004998.v1.2.2](https://doi.org/10.18112/openneuro.ds004998.v1.2.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004998 >>> dataset = DS004998(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds004998) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005007: ieeg dataset, 40 subjects *Auditory naming task with questions that begin or end with a wh-interrogative* Access recordings and metadata through EEGDash. **Citation:** Yu Kitazawa, Eishi Asano (2024). *Auditory naming task with questions that begin or end with a wh-interrogative*. 
[10.18112/openneuro.ds005007.v1.0.0](https://doi.org/10.18112/openneuro.ds005007.v1.0.0) Modality: ieeg Subjects: 40 Recordings: 42 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005007 dataset = DS005007(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005007(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005007( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005007, title = {Auditory naming task with questions that begin or end with a wh-interrogative}, author = {Yu Kitazawa and Eishi Asano}, doi = {10.18112/openneuro.ds005007.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005007.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005007` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory naming task with questions that begin or end with a wh-interrogative | | Author (year) | `Kitazawa2024` | | Canonical | `Kitazawa2025` | | Importable as | `DS005007`, `Kitazawa2024`, `Kitazawa2025` | | Year | 2024 | | Authors | Yu Kitazawa, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005007.v1.0.0](https://doi.org/10.18112/openneuro.ds005007.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005007) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) | [Source URL](https://openneuro.org/datasets/ds005007) | ### Copy-paste BibTeX ```bibtex @dataset{ds005007, title = {Auditory naming task with questions that begin or end with a wh-interrogative}, author = {Yu Kitazawa and Eishi Asano}, doi = {10.18112/openneuro.ds005007.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005007.v1.0.0}, } ``` ## Technical Details - Subjects: 40 - Recordings: 42 - Tasks: 1 - Channels: 100 (3), 74 (2), 78 (2), 156 (2), 86 (2), 58 (2), 66 (2), 116 (2), 82 (2), 122, 127, 155, 68, 94, 128, 140, 154, 138, 142, 91, 72, 102, 124, 137, 51, 120, 163, 48, 114, 129, 184, 88 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.423494444444446 - Pathology: Healthy - Modality: Auditory - Type: Other - Size on disk: 8.3 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005007.v1.0.0 - Source: openneuro - OpenNeuro: [ds005007](https://openneuro.org/datasets/ds005007) - NeMAR: [ds005007](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) ## API Reference Use the `DS005007` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005007(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming task with questions that begin or end with a wh-interrogative * **Study:** `ds005007` (OpenNeuro) * **Author (year):** `Kitazawa2024` * **Canonical:** `Kitazawa2025` Also importable as: `DS005007`, `Kitazawa2024`, `Kitazawa2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 40; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
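The Technical Details above list widely varying channel counts (48 to 184) across recordings. If one wanted to restrict a MongoDB-style query to a channel-count range, range operators would behave as in the sketch below; the field name `nchans` is hypothetical (consult `ALLOWED_QUERY_FIELDS` for the fields eegdash actually accepts), and the evaluator is plain Python, not eegdash internals:

```python
# Sketch of MongoDB-style range operators ($gte/$lte) over recording
# metadata. The field name "nchans" is hypothetical, not a confirmed
# eegdash query field.
OPS = {
    "$gte": lambda value, target: value >= target,
    "$lte": lambda value, target: value <= target,
    "$in": lambda value, target: value in target,
}

def match(record, query):
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Every operator in the condition must hold (AND semantics).
            if not all(OPS[op](value, target) for op, target in cond.items()):
                return False
        elif value != cond:
            return False
    return True

recordings = [
    {"subject": "01", "nchans": 100},
    {"subject": "02", "nchans": 184},
    {"subject": "03", "nchans": 48},
]
# Keep recordings with 64 to 128 channels inclusive.
kept = [r for r in recordings if match(r, {"nchans": {"$gte": 64, "$lte": 128}})]
```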
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005007](https://openneuro.org/datasets/ds005007) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005007](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) DOI: [https://doi.org/10.18112/openneuro.ds005007.v1.0.0](https://doi.org/10.18112/openneuro.ds005007.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005007 >>> dataset = DS005007(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005007) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005021: eeg dataset, 36 subjects *Tilt Illusion by Phase* Access recordings and metadata through EEGDash. **Citation:** Jessica G. Williams, William J. Harrison, Henry A. Beale, Jason B. Mattingley, Anthony M. Harris (2024). *Tilt Illusion by Phase*. 
[10.18112/openneuro.ds005021.v1.2.1](https://doi.org/10.18112/openneuro.ds005021.v1.2.1) Modality: eeg Subjects: 36 Recordings: 36 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005021 dataset = DS005021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005021, title = {Tilt Illusion by Phase}, author = {Jessica G. Williams and William J. Harrison and Henry A. Beale and Jason B. Mattingley and Anthony M. Harris}, doi = {10.18112/openneuro.ds005021.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds005021.v1.2.1}, } ``` ## About This Dataset **Overview** This is the “Tilt Illusion” dataset. In brief, it contains EEG data for 36 subjects responding to the perceived orientation of a central target grating, which is titrated to appear vertical on average, and is surrounded by an annular grating of ±30 degrees. We then looked at the prestimulus EEG correlates of an increased or decreased tilt illusion.
**Citing this dataset** Please cite as follows: > Williams, J.G., Harrison, W.J., Beale, H.A., Mattingley, J.B., & Harris, A.M. (2024). Effects of alpha oscillation power and phase on discrimination performance in a visual tilt illusion. Current Biology. For more information, see the `dataset_description.json` file. **License** The `tilt illusion` dataset is made available under the CC BY 4.0 license. Copyright (c) 2024, Jessica Williams, William Harrison, Henry Beale, Jason Mattingley, & Anthony Harris A human-readable summary of the license can be found at: [https://creativecommons.org/licenses/by/4.0/deed.en](https://creativecommons.org/licenses/by/4.0/deed.en) **Format** The dataset is formatted according to the Brain Imaging Data Structure (BIDS). See the `dataset_description.json` file for the specific version used. Generally, you can find metadata in the `.tsv` files and documentation thereof in the accompanying `.json` files. An important BIDS definition to consider is the “Inheritance Principle”, which is described in the BIDS specification under the following link: [https://bids-specification.readthedocs.io/en/latest/common-principles.html#the-inheritance-principle](https://bids-specification.readthedocs.io/en/latest/common-principles.html#the-inheritance-principle) In brief, the Inheritance Principle states that any metadata file (such as `.json`, `.tsv`) may be defined at any directory level, but no more than one applicable file may be defined at a given level […], and the values from the top level are inherited by all lower levels – unless they are overridden by a file at the lower level. **Details about the experiment** For a detailed description of the task, see Williams et al. (2024). What follows is a brief summary. Participants were seated in front of a computer screen placed on a desk. On each trial they were presented with a central target grating, surrounded by an annular grating of ±30 degrees.
This induced a ‘tilt illusion’ whereby the perceived angle of the central grating was biased away from the angle of the surround. We first titrated the angle of the central grating to each participant’s perceived vertical angle, separately for each surround. Perceived vertical was defined as the angle at which the participant reported the grating as tilted leftward and rightward equally often. Participants responded with their right hand by pressing the left and right arrow keys on a standard USB keyboard. Stimuli were presented very briefly (8.3 ms) at 60% contrast, and were clearly visible. Between trials, a mask made from the combination of several gratings was presented to prevent the buildup of tilt aftereffects across trials. Throughout the experiment, EEG data were recorded using a Biosemi Active 2 system with 64 scalp electrodes and 6 EOG electrodes (left and right HEOG, VEOG on left eye, and left and right mastoids - in positions EXG 3-8). For more information, you can also consult the events.tsv and events.json files. The original data were recorded in `.bdf` format using Actiview and are stored in the `/sourcedata` directory. To comply with the BIDS format, the `.bdf` files were converted to EEGLAB format, constituting a ‘.set’ file and an ‘.fdt’ file for each dataset. Participant 1’s data were corrupted by large artefacts that could not be corrected. Participants 8, 16, and 28 had no EEG data recorded, as their pre-task titration failed to converge. As such, the data for these 4 participants are not included in this dataset. ## Dataset Information | Dataset ID | `DS005021` | |----------------|------------| | Title | Tilt Illusion by Phase | | Author (year) | `Williams2024` | | Canonical | — | | Importable as | `DS005021`, `Williams2024` | | Year | 2024 | | Authors | Jessica G. Williams, William J.
Harrison, Henry A. Beale, Jason B. Mattingley, Anthony M. Harris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005021.v1.2.1](https://doi.org/10.18112/openneuro.ds005021.v1.2.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) | [Source URL](https://openneuro.org/datasets/ds005021) | ### Copy-paste BibTeX ```bibtex @dataset{ds005021, title = {Tilt Illusion by Phase}, author = {Jessica G. Williams and William J. Harrison and Henry A. Beale and Jason B. Mattingley and Anthony M. Harris}, doi = {10.18112/openneuro.ds005021.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds005021.v1.2.1}, } ``` ## Technical Details - Subjects: 36 - Recordings: 36 - Tasks: 1 - Channels: 72 - Sampling rate (Hz): 1024.0 - Duration (hours): 43.06305555555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 47.5 GB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005021.v1.2.1 - Source: openneuro - OpenNeuro: [ds005021](https://openneuro.org/datasets/ds005021) - NeMAR: [ds005021](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) ## API Reference Use the `DS005021` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005021(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tilt Illusion by Phase * **Study:** `ds005021` (OpenNeuro) * **Author (year):** `Williams2024` * **Canonical:** — Also importable as: `DS005021`, `Williams2024`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005021](https://openneuro.org/datasets/ds005021) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005021](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) DOI: [https://doi.org/10.18112/openneuro.ds005021.v1.2.1](https://doi.org/10.18112/openneuro.ds005021.v1.2.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005021 >>> dataset = DS005021(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005028: eeg dataset, 11 subjects *Comparing P300 Flashing paradigms in online typing with language models* Access recordings and metadata through EEGDash. **Citation:** Nand Chandravadia, Shrita Pendekanti, Dustin Roberts, Robert Tran, Saarang Panchavati, Corey Arnold, Nader Pouratian, William Speier (2024). *Comparing P300 Flashing paradigms in online typing with language models*. [10.18112/openneuro.ds005028.v1.0.0](https://doi.org/10.18112/openneuro.ds005028.v1.0.0) Modality: eeg Subjects: 11 Recordings: 105 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005028 dataset = DS005028(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005028(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005028( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005028, title = {Comparing P300 Flashing paradigms in online typing with language models}, author = {Nand Chandravadia and Shrita Pendekanti and Dustin Roberts and Robert Tran and Saarang Panchavati and Corey Arnold and Nader Pouratian and William Speier}, doi = {10.18112/openneuro.ds005028.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005028.v1.0.0}, } ``` ## About This Dataset This dataset was created using BCI2000. The goal of this study was to explore the online typing performance of the P300 speller using language models and various flashing paradigms. For more information see Chandravadia et al. ([https://www.medrxiv.org/content/10.1101/2022.06.24.22276882v1](https://www.medrxiv.org/content/10.1101/2022.06.24.22276882v1)). If you reference this dataset in your publications, please acknowledge its authors. This dataset is made available under CC0. Note: subject 5 was not included in the analysis because the testing stage did not include all three flashing paradigms. 
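The note above states that subject 5 was excluded from the original analysis. To reproduce such an exclusion with a MongoDB-style filter, a `$nin` condition is the natural fit. The sketch below is plain Python rather than eegdash internals, and whether the BIDS label is `"5"` or `"05"` is an assumption:

```python
# Sketch of excluding subjects with a MongoDB-style $nin filter.
# The subject label "05" is an assumption about the BIDS labeling.
def matches(record, query):
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            if "$nin" in cond and value in cond["$nin"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": f"{i:02d}"} for i in range(1, 12)]  # 11 subjects
kept = [r for r in records if matches(r, {"subject": {"$nin": ["05"]}})]
```

With eegdash itself this would presumably be passed as `query={"subject": {"$nin": ["05"]}}` to the dataset constructor, mirroring the `$in` example in the Quickstart above.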
## Dataset Information | Dataset ID | `DS005028` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Comparing P300 Flashing paradigms in online typing with language models | | Author (year) | `Chandravadia2024` | | Canonical | `Chandravadia2022` | | Importable as | `DS005028`, `Chandravadia2024`, `Chandravadia2022` | | Year | 2024 | | Authors | Nand Chandravadia, Shrita Pendekanti, Dustin Roberts, Robert Tran, Saarang Panchavati, Corey Arnold, Nader Pouratian, William Speier | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005028.v1.0.0](https://doi.org/10.18112/openneuro.ds005028.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005028) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) | [Source URL](https://openneuro.org/datasets/ds005028) | ### Copy-paste BibTeX ```bibtex @dataset{ds005028, title = {Comparing P300 Flashing paradigms in online typing with language models}, author = {Nand Chandravadia and Shrita Pendekanti and Dustin Roberts and Robert Tran and Saarang Panchavati and Corey Arnold and Nader Pouratian and William Speier}, doi = {10.18112/openneuro.ds005028.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005028.v1.0.0}, } ``` ## Technical Details - Subjects: 11 - Recordings: 105 - Tasks: 3 - Channels: 32 - Sampling rate (Hz): Varies - Duration (hours): 0.0291666666666666 - Pathology: Not specified - Modality: Visual - Type: Attention - Size on disk: 422.1 MB - File count: 105 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005028.v1.0.0 - Source: openneuro - OpenNeuro: [ds005028](https://openneuro.org/datasets/ds005028) - NeMAR: [ds005028](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) ## API Reference Use the `DS005028` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comparing P300 Flashing paradigms in online typing with language models * **Study:** `ds005028` (OpenNeuro) * **Author (year):** `Chandravadia2024` * **Canonical:** `Chandravadia2022` Also importable as: `DS005028`, `Chandravadia2024`, `Chandravadia2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 11; recordings: 105; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
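To make the MongoDB-style `query` semantics concrete, here is a minimal, self-contained matcher that mimics how a `$in` clause selects records. This is an illustration of the filter semantics only, not EEGDash's actual implementation — in EEGDash the filtering happens against the metadata database:

```python
# Illustrative sketch of MongoDB-style filter semantics as used in the
# `query` argument. NOT EEGDash's implementation; the real filtering is
# performed by the metadata database.

def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every clause in `query`."""
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Operator clause, e.g. {"$in": ["01", "02"]}
            for op, operand in condition.items():
                if op == "$in" and value not in operand:
                    return False
                if op == "$eq" and value != operand:
                    return False
        elif value != condition:
            # Plain equality clause, e.g. {"subject": "01"}
            return False
    return True

records = [
    {"dataset": "ds005028", "subject": "01"},
    {"dataset": "ds005028", "subject": "05"},
]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(len(selected))  # 1
```

The same semantics apply to the simpler keyword form: `subject="01"` is equivalent to the equality clause `{"subject": "01"}` ANDed with the dataset filter.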
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005028](https://openneuro.org/datasets/ds005028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005028](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) DOI: [https://doi.org/10.18112/openneuro.ds005028.v1.0.0](https://doi.org/10.18112/openneuro.ds005028.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005028 >>> dataset = DS005028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005028) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005034: eeg dataset, 25 subjects *The effect of theta tACS on working memory* Access recordings and metadata through EEGDash. **Citation:** Yuri G. Pavlov, Dauren Kasanov (2024). *The effect of theta tACS on working memory*. 
[10.18112/openneuro.ds005034.v1.0.1](https://doi.org/10.18112/openneuro.ds005034.v1.0.1) Modality: eeg Subjects: 25 Recordings: 100 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005034 dataset = DS005034(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005034(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005034( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005034, title = {The effect of theta tACS on working memory}, author = {Yuri G. Pavlov and Dauren Kasanov}, doi = {10.18112/openneuro.ds005034.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005034.v1.0.1}, } ``` ## About This Dataset Following either a 20-minute verum or sham stimulation applied to Fpz-CPz at 1 mA and 6 Hz, the participants performed WM tasks, while EEG was recorded. The task required participants to either mentally manipulate memory items or retain them in memory as they were originally presented. In addition, before the working memory task, resting state EEG with eyes closed was recorded for 3 minutes and with eyes open for 1.5 minutes. 
Behavioral performance data are available on OSF ([https://osf.io/v2qwc/](https://osf.io/v2qwc/)) ## Dataset Information | Dataset ID | `DS005034` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The effect of theta tACS on working memory | | Author (year) | `Pavlov2024_effect_theta_tACS` | | Canonical | — | | Importable as | `DS005034`, `Pavlov2024_effect_theta_tACS` | | Year | 2024 | | Authors | Yuri G. Pavlov, Dauren Kasanov | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005034.v1.0.1](https://doi.org/10.18112/openneuro.ds005034.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005034) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) | [Source URL](https://openneuro.org/datasets/ds005034) | ### Copy-paste BibTeX ```bibtex @dataset{ds005034, title = {The effect of theta tACS on working memory}, author = {Yuri G. Pavlov and Dauren Kasanov}, doi = {10.18112/openneuro.ds005034.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005034.v1.0.1}, } ``` ## Technical Details - Subjects: 25 - Recordings: 100 - Tasks: 2 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 34.91862888888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 61.4 GB - File count: 100 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005034.v1.0.1 - Source: openneuro - OpenNeuro: [ds005034](https://openneuro.org/datasets/ds005034) - NeMAR: [ds005034](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) ## API Reference Use the `DS005034` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of theta tACS on working memory * **Study:** `ds005034` (OpenNeuro) * **Author (year):** `Pavlov2024_effect_theta_tACS` * **Canonical:** — Also importable as: `DS005034`, `Pavlov2024_effect_theta_tACS`. Modality: `eeg`. Subjects: 25; recordings: 100; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005034](https://openneuro.org/datasets/ds005034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005034](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) DOI: [https://doi.org/10.18112/openneuro.ds005034.v1.0.1](https://doi.org/10.18112/openneuro.ds005034.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005034 >>> dataset = DS005034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005034) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005048: eeg dataset, 35 subjects *40Hz Auditory Entrainment* Access recordings and metadata through EEGDash. **Citation:** Mojtaba Lahijanian, Hamid Aghajan, Zahra Vahabi (2024). *40Hz Auditory Entrainment*. 
[10.18112/openneuro.ds005048.v1.0.1](https://doi.org/10.18112/openneuro.ds005048.v1.0.1) Modality: eeg Subjects: 35 Recordings: 35 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005048 dataset = DS005048(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005048(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005048( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005048, title = {40Hz Auditory Entrainment}, author = {Mojtaba Lahijanian and Hamid Aghajan and Zahra Vahabi}, doi = {10.18112/openneuro.ds005048.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005048.v1.0.1}, } ``` ## About This Dataset **Introduction** This experiment was designed to entrain brain oscillations through synthetic auditory stimulation in a group of elderly participants with dementia. Recently, gamma entrainment has been proposed and shown to be effective in improving several symptoms of Alzheimer’s Disease (AD). The aim of this study is to investigate the effect of entrainment on brain oscillations using EEG recorded during the auditory brain stimulation. This study was approved by the Review Board of Tehran University of Medical Sciences (Approval ID: IR.TUMS.MEDICINE.REC.1398.524). All methods were performed in accordance with the relevant guidelines and regulations, and all participants provided informed consent before participating and were free to withdraw at any time. 
To accommodate participants who preferred a shorter duration of data gathering, we designed both short and long sessions for entrainment. This approach aimed to minimize inconvenience for the participants who were less inclined to engage in lengthy procedures. **Entrainment session and auditory stimulation** Each session involved the presentation of a multi-trial auditory stimulus while simultaneously recording EEG data from the participant. To deliver the auditory stimulus, two speakers were placed in front of the participant 50cm apart from each other and directly pointed at the participant’s ears at a distance of 50cm. The sound intensity was around -40dB within a fixed range for all participants. To ascertain adequate hearing ability of the participants and to ensure individual comfort, each participant was asked before commencing the task if the sound was at a comfortable level, and adjustments were made to the volume. The auditory stimulus was a 5kHz carrier tone amplitude modulated with a 40Hz rectangular wave (40Hz On and Off cycles). Since a 40Hz tone cannot be easily heard, the 5kHz carrier frequency was used to render the 40Hz pulse train audible. In order to minimize the effect of the carrier sound, the duty cycle of the modulating 40Hz waveform was set to 4% (1ms of the 25ms cycle was On). The auditory stimulus was generated in MATLAB and played as a .wav file. This file consisted of multiple trials, with each trial lasting 40sec and interleaved by 20sec of rest (silence). The short session included six trials, while the long session comprised ten trials of the stimulus. **EEG recording and preprocessing** All EEG data were recorded using 19 monopolar channels based on the standard 10/20 system. For the short session, the reference electrodes were placed on the earlobes, while for the long session, referencing was done to the FCz channel. 
Notably, referencing to the average was implemented during preprocessing, ensuring data integrity and minimizing potential interference. The sampling rate was set to 250Hz, and the impedance of the electrodes was kept under 20kΩ. During the experiment, participants were seated comfortably with open eyes in a quiet room, and they were instructed to relax their body to avoid muscle artifacts and to move their head as little as possible. Data from all the participants were preprocessed identically following Makoto’s preprocessing pipeline: high-pass filtering above 1Hz; removal of line noise; rejecting potential bad channels; interpolating rejected channels; re-referencing data to the average; artifact subspace reconstruction (ASR); re-referencing data to the average again; estimating brain source activity using independent component analysis (ICA); dipole fitting; rejecting bad dipoles (sources) to further clean the data. These preprocessing steps were performed using the EEGLAB toolbox in MATLAB. 
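The stimulus described above is fully specified (5 kHz carrier, 40 Hz rectangular modulation, 4% duty cycle, 40 s trials interleaved with 20 s of silence), so it can be reconstructed numerically. A minimal NumPy sketch — the original was generated in MATLAB, and the audio sample rate used here (44.1 kHz) is an assumption, not stated in the dataset:

```python
import numpy as np

fs = 44_100          # audio sample rate (assumed; not stated in the dataset)
carrier_hz = 5_000   # carrier tone frequency
mod_hz = 40          # modulation frequency (25 ms cycle)
duty = 0.04          # 4% duty cycle -> 1 ms "On" per 25 ms cycle

def am_trial(duration_s: float) -> np.ndarray:
    """One trial: a 5 kHz carrier gated by a 40 Hz rectangular envelope."""
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Rectangular envelope: "On" during the first 4% of each 25 ms cycle.
    envelope = ((t * mod_hz) % 1.0) < duty
    return carrier * envelope

trial = am_trial(40.0)            # 40 s of stimulation
rest = np.zeros(int(20.0 * fs))   # 20 s of silence between trials
# Short session: six trials, each followed by rest -> 6 * 60 s total.
short_session = np.concatenate([np.concatenate([trial, rest]) for _ in range(6)])
print(short_session.size / fs)  # 360.0
```

Writing `short_session` to a .wav file (e.g. with `scipy.io.wavfile.write`) would reproduce the playback format described above.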
## Dataset Information | Dataset ID | `DS005048` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 40Hz Auditory Entrainment | | Author (year) | `Lahijanian2024` | | Canonical | — | | Importable as | `DS005048`, `Lahijanian2024` | | Year | 2024 | | Authors | Mojtaba Lahijanian, Hamid Aghajan, Zahra Vahabi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005048.v1.0.1](https://doi.org/10.18112/openneuro.ds005048.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005048) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) | [Source URL](https://openneuro.org/datasets/ds005048) | ### Copy-paste BibTeX ```bibtex @dataset{ds005048, title = {40Hz Auditory Entrainment}, author = {Mojtaba Lahijanian and Hamid Aghajan and Zahra Vahabi}, doi = {10.18112/openneuro.ds005048.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005048.v1.0.1}, } ``` ## Technical Details - Subjects: 35 - Recordings: 35 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 250.0 - Duration (hours): 5.2027777777777775 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 355.9 MB - File count: 35 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005048.v1.0.1 - Source: openneuro - OpenNeuro: [ds005048](https://openneuro.org/datasets/ds005048) - NeMAR: [ds005048](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) ## API Reference Use the `DS005048` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005048(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 40Hz Auditory Entrainment * **Study:** `ds005048` (OpenNeuro) * **Author (year):** `Lahijanian2024` * **Canonical:** — Also importable as: `DS005048`, `Lahijanian2024`. 
Modality: `eeg`. Subjects: 35; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005048](https://openneuro.org/datasets/ds005048) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005048](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) DOI: [https://doi.org/10.18112/openneuro.ds005048.v1.0.1](https://doi.org/10.18112/openneuro.ds005048.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005048 >>> dataset = DS005048(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005048) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005059: ieeg dataset, 69 subjects *Paired Associates Learning: Memory for Word Pairs in Cued Recall* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Paired Associates Learning: Memory for Word Pairs in Cued Recall*. [10.18112/openneuro.ds005059.v1.0.6](https://doi.org/10.18112/openneuro.ds005059.v1.0.6) Modality: ieeg Subjects: 69 Recordings: 282 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005059 dataset = DS005059(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005059(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005059( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005059, title = {Paired Associates Learning: Memory for Word Pairs in Cued Recall}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005059.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds005059.v1.0.6}, } ``` ## About This Dataset **Paired Associates Learning of Word Pairs** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a paired associates memory task. The experiment consists of participants studying pairs of visually presented words, solving simple arithmetic problems that function as a distractor, and then completing a cued recall task. The data were collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. Each session contains 25 lists of the structure: encoding, distractor, cued recall. During encoding, 6 pairs of words are presented one pair at a time. Each pair remains on screen for 4000 ms and is followed by a 1000 ms interstimulus interval. During the cued recall, one randomly chosen word from each pair is shown, and the participant is asked to vocally recall the other word from the pair. Participants have 5000 ms for each recall, and then the next cue (i.e., a word from another pair) is shown. All 6 pairs of words are tested on each list. **To Note:** - The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. - Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available, along with brain region annotations. - Recordings were made on multiple different systems, so the data have been scaled so that all voltage values are in volts (V). **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). 
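The re-referencing guidance in the notes above can be illustrated numerically: a common-average re-reference for the monopolar recordings, and a bipolar derivation following a pairs table. This is a hedged NumPy sketch, not EEGDash or dataset code, and the channel pairing is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 4, 1000
# Fake monopolar recording, channels x time, values in volts.
monopolar = rng.standard_normal((n_channels, n_samples))

# Common-average re-reference: subtract the mean across channels per sample.
car = monopolar - monopolar.mean(axis=0, keepdims=True)

# Bipolar derivation: difference of paired contacts, per a (hypothetical)
# bipolar channels table listing (anode, cathode) index pairs.
pairs = [(0, 1), (2, 3)]
bipolar = np.stack([monopolar[a] - monopolar[c] for a, c in pairs])

print(car.shape, bipolar.shape)  # (4, 1000) (2, 1000)
```

With MNE-Python, the equivalent operations on a loaded `raw` are `raw.set_eeg_reference("average")` and `mne.set_bipolar_reference`, using the anode/cathode names from the dataset's bipolar channels tables.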
## Dataset Information | Dataset ID | `DS005059` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Paired Associates Learning: Memory for Word Pairs in Cued Recall | | Author (year) | `Herrema2024_Paired` | | Canonical | `PAL` | | Importable as | `DS005059`, `Herrema2024_Paired`, `PAL` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005059.v1.0.6](https://doi.org/10.18112/openneuro.ds005059.v1.0.6) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005059) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) | [Source URL](https://openneuro.org/datasets/ds005059) | ### Copy-paste BibTeX ```bibtex @dataset{ds005059, title = {Paired Associates Learning: Memory for Word Pairs in Cued Recall}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005059.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds005059.v1.0.6}, } ``` ## Technical Details - Subjects: 69 - Recordings: 282 - Tasks: 1 - Channels: 112 (22), 126 (15), 85 (11), 110 (10), 128 (10), 104 (9), 88 (9), 100 (9), 72 (8), 64 (8), 186 (8), 102 (7), 116 (7), 121 (7), 92 (6), 142 (6), 119 (5), 97 (5), 95 (5), 94 (5), 106 (4), 140 (4), 124 (4), 96 (4), 123 (4), 139 (4), 86 (4), 130 (4), 68 (4), 87 (3), 107 (3), 188 (3), 84 (3), 120 (3), 58 (3), 74 (3), 114 (3), 83 (3), 108 (3), 55 (3), 80 (3), 117 (3), 173 (3), 118 (2), 141 (2), 73 (2), 138 (2), 115 (2), 122 (2), 111 (2), 149 (2), 60, 146, 77, 67, 93, 76, 46, 53, 14, 99, 177, 90, 98, 52, 133, 16 - Sampling rate (Hz): 1000.0 (193), 500.0 (71), 1024.0 (8), 499.7071 (6), 1600.0 (4) - Duration (hours): 261.3157287560348 - Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 167.3 GB - File count: 282 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005059.v1.0.6 - Source: openneuro - OpenNeuro: [ds005059](https://openneuro.org/datasets/ds005059) - NeMAR: [ds005059](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) ## API Reference Use the `DS005059` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005059(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Paired Associates Learning: Memory for Word Pairs in Cued Recall * **Study:** `ds005059` (OpenNeuro) * **Author (year):** `Herrema2024_Paired` * **Canonical:** `PAL` Also importable as: `DS005059`, `Herrema2024_Paired`, `PAL`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 69; recordings: 282; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005059](https://openneuro.org/datasets/ds005059) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005059](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) DOI: [https://doi.org/10.18112/openneuro.ds005059.v1.0.6](https://doi.org/10.18112/openneuro.ds005059.v1.0.6) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005059 >>> dataset = DS005059(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005059) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005065: meg dataset, 21 subjects *Heuristics in risky decision-making relate to preferential representation of information MEG data* Access recordings and metadata through EEGDash. **Citation:** Evan M. Russek, Rani Moran, Yunzhe Liu, Ray Dolan, Quentin Huys (2024). *Heuristics in risky decision-making relate to preferential representation of information MEG data*. [10.18112/openneuro.ds005065.v1.0.0](https://doi.org/10.18112/openneuro.ds005065.v1.0.0) Modality: meg Subjects: 21 Recordings: 275 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005065 dataset = DS005065(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005065(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005065( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005065, title = {Heuristics in risky decision-making relate to preferential representation of information MEG data}, author = {Evan M. 
Russek and Rani Moran and Yunzhe Liu and Ray Dolan and Quentin Huys}, doi = {10.18112/openneuro.ds005065.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005065.v1.0.0}, } ``` ## About This Dataset The task consisted of 13 scanner runs (except for subject 1 who completed 5 rather than 3 localizer runs). Runs 1-3 (1-5 for subject 1) are the localizer task. Runs 4-5 are non-analyzed data from the ‘probability learning’ task. Runs 6-13 (8-15 for subject 1) are the risky decision-making task. Event times were recorded with a photodiode, which is accessible as a MEG channel. This has been processed so that event times are listed in derivatives/Event_Info_Tables. Raw times of events in the scan are in column “onset_time”. The corresponding index into the unprocessed MEG data is in column “scanner_onset_idx”. The onset into the downsampled data is in “onset_idx_ds”. In the table, each row corresponds to an event. Block number denotes which scanner run that event belongs to. For the localizer task (denoted in the phase column), events are image onsets. “image_type” specifies the role of that image in the task (“CHOICE” or “OUTCOME”) and “image_number” denotes which choice or outcome it is (see paper Fig. 1). Finally, “image_name” denotes which image category was shown (e.g. “Hand”). For the task, events correspond to gamble information onset (Info), Probability stimulus presentation (“Choice”), response (“Gamble Response”) and outcome onset (“Outcome”). Columns denote which image was shown and what the response was (accept). derivatives/Epoched_Data contains epoched preprocessed data for each subject for the localizer task and then around each choice in the main choice task. Both are epoched from 0-500 ms following the event. 
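The relationship between the event-table columns can be sketched in a few lines: “scanner_onset_idx” is the raw “onset_time” mapped to samples at the acquisition rate, and “onset_idx_ds” is the same onset at the downsampled rate. A hypothetical illustration only — the 1200 Hz acquisition rate comes from this dataset's technical details, but the 100 Hz downsampled rate is an assumption, not taken from the dataset:

```python
# Hypothetical illustration of the Event_Info_Tables columns: mapping an
# event's raw onset time (s) to sample indices at the original and
# downsampled rates. The 100 Hz downsampled rate is an assumption.
sfreq_raw = 1200.0  # acquisition rate per the dataset's technical details
sfreq_ds = 100.0    # assumed downsampled rate (not stated in the dataset)

def onset_indices(onset_time_s: float) -> tuple[int, int]:
    scanner_onset_idx = round(onset_time_s * sfreq_raw)  # index into raw MEG
    onset_idx_ds = round(onset_time_s * sfreq_ds)        # index into downsampled data
    return scanner_onset_idx, onset_idx_ds

# Epoch window 0-500 ms after the event, as used for the epoched derivatives.
onset, onset_ds = onset_indices(12.5)
epoch_raw = (onset, onset + round(0.5 * sfreq_raw))  # 600 samples at 1200 Hz
print(epoch_raw)  # (15000, 15600)
```

In practice the indices would be read directly from the table rather than recomputed; the sketch just shows what the columns encode.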
Code to analyze the data along with additional behavioral data is available at [https://github.com/evanrussek/MEG_Heuristics_Risk_Preferential_Information](https://github.com/evanrussek/MEG_Heuristics_Risk_Preferential_Information) ## Dataset Information | Dataset ID | `DS005065` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Heuristics in risky decision-making relate to preferential representation of information MEG data | | Author (year) | `Russek2024` | | Canonical | — | | Importable as | `DS005065`, `Russek2024` | | Year | 2024 | | Authors | Evan M. Russek, Rani Moran, Yunzhe Liu, Ray Dolan, Quentin Huys | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005065.v1.0.0](https://doi.org/10.18112/openneuro.ds005065.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005065) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005065) | [Source URL](https://openneuro.org/datasets/ds005065) | ### Copy-paste BibTeX ```bibtex @dataset{ds005065, title = {Heuristics in risky decision-making relate to preferential representation of information MEG data}, author = {Evan M. 
Russek and Rani Moran and Yunzhe Liu and Ray Dolan and Quentin Huys}, doi = {10.18112/openneuro.ds005065.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005065.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 275 - Tasks: 1 - Channels: 415 (210), 341 (65) - Sampling rate (Hz): 1200.0 - Duration (hours): 68.0 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 425.8 GB - File count: 275 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005065.v1.0.0 - Source: openneuro - OpenNeuro: [ds005065](https://openneuro.org/datasets/ds005065) - NeMAR: [ds005065](https://nemar.org/dataexplorer/detail?dataset_id=ds005065) ## API Reference Use the `DS005065` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Heuristics in risky decision-making relate to preferential representation of information MEG data * **Study:** `ds005065` (OpenNeuro) * **Author (year):** `Russek2024` * **Canonical:** — Also importable as: `DS005065`, `Russek2024`. Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 275; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005065](https://openneuro.org/datasets/ds005065) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005065](https://nemar.org/dataexplorer/detail?dataset_id=ds005065) DOI: [https://doi.org/10.18112/openneuro.ds005065.v1.0.0](https://doi.org/10.18112/openneuro.ds005065.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005065 >>> dataset = DS005065(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005065) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005065) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005079: eeg dataset, 1 subject *The Effects of Directed Therapeutic Intent on Live and Damaged Cells* Access recordings and metadata through EEGDash.
**Citation:** Lorenzo Cohen, Arnaud Delorme, Peiying Yang, Andrew Cusimano, Sharmistha Chakraborty, Phuong Nguyen, Defeng Deng, Shafaqmuhammad Iqbal, Monica Nelson, Chris Fields (2024). *The Effects of Directed Therapeutic Intent on Live and Damaged Cells*. [10.18112/openneuro.ds005079.v2.0.0](https://doi.org/10.18112/openneuro.ds005079.v2.0.0) Modality: eeg Subjects: 1 Recordings: 60 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005079 dataset = DS005079(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005079(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005079( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005079, title = {The Effects of Directed Therapeutic Intent on Live and Damaged Cells}, author = {Lorenzo Cohen and Arnaud Delorme and Peiying Yang and Andrew Cusimano and Sharmistha Chakraborty and Phuong Nguyen and Defeng Deng and Shafaqmuhammad Iqbal and Monica Nelson and Chris Fields}, doi = {10.18112/openneuro.ds005079.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005079.v2.0.0}, } ``` ## About This Dataset **Summary**: In this case study, a self-described practitioner of energy medicine (PEM) participated in a study, engaging in multiple (n=60) treatment and control (non-treatment) sessions under double-blind conditions. **Protocol:** Data were collected during 40 sessions over 10 days, with ten sessions of about 25 minutes daily. 
Each session comprised one file divided into five segments. First, there was a 2-minute control period during which the PEM rested in the absence of cells (BaselinePre). Next, the cells (alive or control) were brought in, and the PEM conducted a 5-minute treatment of the cells while remaining still (TreatmentFirst5min). The PEM then performed a second 5-minute treatment of the cells during which movement was allowed (TreatmentMid5min). During a third 5-minute treatment period (TreatmentLast5min), the PEM remained still while treating the cells, as in the first treatment period. Finally, the cells were removed from the PEM’s vicinity, and physiology data were collected for another 2-minute control period (BaselinePost). The PEM was fully blind to the type of cells presented to him, and cell type presentation was randomized. The experimenter presenting the cells to the PEM was also blind to their type. In 40 sessions, live cells were presented to the PEM (CellPresent condition). In 10 sessions, no cells (medium only) were presented, and in the other 10 sessions, dead (x-rayed) cells were presented (Control1 and Control2 conditions). To provide control samples for the cellular outcomes, and to control for the passage of time and potential equipment effects, 40 matching sets of cells were treated in a different location by a sham therapist (these are available in the behavioral (BEH) files as control cell measures).
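The five-segment session structure can be sketched as a simple timeline. This is an illustrative sketch only: the segment labels are the five names appearing in the protocol description, the ordering of the three treatment labels is inferred from their names, and segments are assumed to run back-to-back, ignoring transitions while cells were moved.

```python
# Illustrative timeline of the five per-session segments described above.
# Durations (minutes) come from the protocol text; label order and the
# no-gap assumption are inferences, not stated facts.

SEGMENTS = [
    ("BaselinePre", 2),         # PEM rests, no cells present
    ("TreatmentFirst5min", 5),  # treatment while remaining still
    ("TreatmentMid5min", 5),    # treatment, movement allowed (assumed label)
    ("TreatmentLast5min", 5),   # treatment while remaining still
    ("BaselinePost", 2),        # cells removed, rest
]

starts = {}
t = 0
for name, minutes in SEGMENTS:
    starts[name] = t  # segment start, minutes from session onset
    t += minutes

print(starts, t)  # 19 minutes of labeled segments within a ~25-minute session
```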
Data curators: Data acquired at the MD Anderson Cancer Research Center ## Dataset Information | Dataset ID | `DS005079` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The Effects of Directed Therapeutic Intent on Live and Damaged Cells | | Author (year) | `Cohen2024` | | Canonical | — | | Importable as | `DS005079`, `Cohen2024` | | Year | 2024 | | Authors | Lorenzo Cohen, Arnaud Delorme, Peiying Yang, Andrew Cusimano, Sharmistha Chakraborty, Phuong Nguyen, Defeng Deng, Shafaqmuhammad Iqbal, Monica Nelson, Chris Fields | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005079.v2.0.0](https://doi.org/10.18112/openneuro.ds005079.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005079) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005079) | [Source URL](https://openneuro.org/datasets/ds005079) | ### Copy-paste BibTeX ```bibtex @dataset{ds005079, title = {The Effects of Directed Therapeutic Intent on Live and Damaged Cells}, author = {Lorenzo Cohen and Arnaud Delorme and Peiying Yang and Andrew Cusimano and Sharmistha Chakraborty and Phuong Nguyen and Defeng Deng and Shafaqmuhammad Iqbal and Monica Nelson and Chris Fields}, doi = {10.18112/openneuro.ds005079.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005079.v2.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 60 - Tasks: 15 - Channels: 65 - Sampling rate (Hz): 500.0 - Duration (hours): 3.800026666666666 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.7 GB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005079.v2.0.0 - Source: openneuro - OpenNeuro: [ds005079](https://openneuro.org/datasets/ds005079) - NeMAR: [ds005079](https://nemar.org/dataexplorer/detail?dataset_id=ds005079) ## API Reference Use the `DS005079` class to access this 
dataset programmatically. ### *class* eegdash.dataset.DS005079(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Effects of Directed Therapeutic Intent on Live and Damaged Cells * **Study:** `ds005079` (OpenNeuro) * **Author (year):** `Cohen2024` * **Canonical:** — Also importable as: `DS005079`, `Cohen2024`. Modality: `eeg`. Subjects: 1; recordings: 60; tasks: 15. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005079](https://openneuro.org/datasets/ds005079) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005079](https://nemar.org/dataexplorer/detail?dataset_id=ds005079) DOI: [https://doi.org/10.18112/openneuro.ds005079.v2.0.0](https://doi.org/10.18112/openneuro.ds005079.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005079 >>> dataset = DS005079(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005079) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005079) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005083: ieeg dataset, 61 subjects *Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy* Access recordings and metadata through EEGDash. **Citation:** Peter H Yang, Nathan Wulfekammer, Amanda V. Jenson, Elliot Neal, Stuart Tomko, John Zempel, Peter Brunner, Sean D McEvoy, Matthew D Smyth, Jarod L Roland (—). *Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy*. 
[10.18112/openneuro.ds005083.v1.0.0](https://doi.org/10.18112/openneuro.ds005083.v1.0.0) Modality: ieeg Subjects: 61 Recordings: 1357 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005083 dataset = DS005083(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005083(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005083( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005083, title = {Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy}, author = {Peter H Yang and Nathan Wulfekammer and Amanda V. Jenson and Elliot Neal and Stuart Tomko and John Zempel and Peter Brunner and Sean D McEvoy and Matthew D Smyth and Jarod L Roland}, doi = {10.18112/openneuro.ds005083.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005083.v1.0.0}, } ``` ## About This Dataset BIDS iEEG dataset for the SEEG electrode data used for analysis in the manuscript titled “Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy.” All coordinates are recorded in the individual native post-operative CT imaging space. There was no alignment to other imaging modalities or standardized atlases.
## Dataset Information | Dataset ID | `DS005083` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy | | Author (year) | `Yang2024` | | Canonical | — | | Importable as | `DS005083`, `Yang2024` | | Year | — | | Authors | Peter H Yang, Nathan Wulfekammer, Amanda V. Jenson, Elliot Neal, Stuart Tomko, John Zempel, Peter Brunner, Sean D McEvoy, Matthew D Smyth, Jarod L Roland | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005083.v1.0.0](https://doi.org/10.18112/openneuro.ds005083.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005083) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005083) | [Source URL](https://openneuro.org/datasets/ds005083/versions/1.0.0) | ### Copy-paste BibTeX ```bibtex @dataset{ds005083, title = {Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy}, author = {Peter H Yang and Nathan Wulfekammer and Amanda V. 
Jenson and Elliot Neal and Stuart Tomko and John Zempel and Peter Brunner and Sean D McEvoy and Matthew D Smyth and Jarod L Roland}, doi = {10.18112/openneuro.ds005083.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005083.v1.0.0}, } ``` ## Technical Details - Subjects: 61 - Recordings: 1357 - Tasks: 3 - Channels: 105 (2), 114 (2), 150, 102, 99, 62, 98, 148, 166, 138, 124, 129, 61, 117, 83, 230, 144, 95, 100, 134, 132, 112, 73, 123, 93, 152, 65, 103 - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Surgery - Modality: — - Type: Clinical/Intervention - Size on disk: 281.7 KB - File count: 1357 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005083.v1.0.0 - Source: openneuro - OpenNeuro: [ds005083](https://openneuro.org/datasets/ds005083) - NeMAR: [ds005083](https://nemar.org/dataexplorer/detail?dataset_id=ds005083) ## API Reference Use the `DS005083` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005083(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy * **Study:** `ds005083` (OpenNeuro) * **Author (year):** `Yang2024` * **Canonical:** — Also importable as: `DS005083`, `Yang2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 61; recordings: 1357; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005083](https://openneuro.org/datasets/ds005083) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005083](https://nemar.org/dataexplorer/detail?dataset_id=ds005083) DOI: [https://doi.org/10.18112/openneuro.ds005083.v1.0.0](https://doi.org/10.18112/openneuro.ds005083.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005083 >>> dataset = DS005083(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005083) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005083) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005087: eeg dataset, 20 subjects *rapid-hemifield-object-eeg* Access recordings and metadata through EEGDash. **Citation:** Amanda K Robinson, Tijl Grootswagers, Sophia M Shatek, Marlene Behrmann, Thomas A Carlson (2024). *rapid-hemifield-object-eeg*. [10.18112/openneuro.ds005087.v1.0.1](https://doi.org/10.18112/openneuro.ds005087.v1.0.1) Modality: eeg Subjects: 20 Recordings: 60 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005087 dataset = DS005087(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005087(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005087( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005087, title = {rapid-hemifield-object-eeg}, author = {Amanda K Robinson and Tijl Grootswagers and Sophia M Shatek and Marlene Behrmann and Thomas A Carlson}, doi = {10.18112/openneuro.ds005087.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005087.v1.0.1}, } ``` ## About This Dataset Object and word stimuli presented at 5 Hz to the left or right visual fields, or centrally, while participants performed an orthogonal red target detection task. **Publication:** Robinson A.K., Grootswagers T., Shatek S., Behrmann M., Carlson T.A. (2025). Dynamics of visual object coding within and across the hemispheres: Objects in the periphery. Science Advances, 11, eadq0889, [https://doi.org/10.1126/sciadv.adq0889](https://doi.org/10.1126/sciadv.adq0889) ## Dataset Information | Dataset ID | `DS005087` | |----------------|-----------------------------------------------| | Title | rapid-hemifield-object-eeg | | Author (year) | `Robinson2024_rapid` | | Canonical | — | | Importable as | `DS005087`, `Robinson2024_rapid` | | Year | 2024 | | Authors | Amanda K Robinson, Tijl Grootswagers, Sophia M Shatek, Marlene Behrmann, Thomas A Carlson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005087.v1.0.1](https://doi.org/10.18112/openneuro.ds005087.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005087) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005087) | [Source URL](https://openneuro.org/datasets/ds005087) | ### Copy-paste BibTeX ```bibtex @dataset{ds005087, title = {rapid-hemifield-object-eeg}, author = {Amanda K Robinson and Tijl Grootswagers and Sophia M Shatek and Marlene Behrmann and Thomas A Carlson}, doi = {10.18112/openneuro.ds005087.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005087.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 60 -
Tasks: 3 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.447911111111113 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 12.2 GB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005087.v1.0.1 - Source: openneuro - OpenNeuro: [ds005087](https://openneuro.org/datasets/ds005087) - NeMAR: [ds005087](https://nemar.org/dataexplorer/detail?dataset_id=ds005087) ## API Reference Use the `DS005087` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005087(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) rapid-hemifield-object-eeg * **Study:** `ds005087` (OpenNeuro) * **Author (year):** `Robinson2024_rapid` * **Canonical:** — Also importable as: `DS005087`, `Robinson2024_rapid`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 60; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005087](https://openneuro.org/datasets/ds005087) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005087](https://nemar.org/dataexplorer/detail?dataset_id=ds005087) DOI: [https://doi.org/10.18112/openneuro.ds005087.v1.0.1](https://doi.org/10.18112/openneuro.ds005087.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005087 >>> dataset = DS005087(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005087) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005087) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005089: eeg dataset, 36 subjects *Proactive selective attention across competition contexts* Access recordings and metadata through EEGDash. **Citation:** Blanca Aguado-Lopez, Ana F. Palenciano, Jose M. G. Penalver, Paloma Diaz-Gutierrez, David Lopez-Garcia, Chiara Avancini, Luis F. Ciria, Maria Ruz (2024). *Proactive selective attention across competition contexts*. 
[10.18112/openneuro.ds005089.v1.0.1](https://doi.org/10.18112/openneuro.ds005089.v1.0.1) Modality: eeg Subjects: 36 Recordings: 36 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005089 dataset = DS005089(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005089(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005089( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005089, title = {Proactive selective attention across competition contexts}, author = {Blanca Aguado-Lopez and Ana F. Palenciano and Jose M. G. Penalver and Paloma Diaz-Gutierrez and David Lopez-Garcia and Chiara Avancini and Luis F. Ciria and Maria Ruz}, doi = {10.18112/openneuro.ds005089.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005089.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005089` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Proactive selective attention across competition contexts | | Author (year) | `AguadoLopez2024` | | Canonical | — | | Importable as | `DS005089`, `AguadoLopez2024` | | Year | 2024 | | Authors | Blanca Aguado-Lopez, Ana F. Palenciano, Jose M. G. Penalver, Paloma Diaz-Gutierrez, David Lopez-Garcia, Chiara Avancini, Luis F. 
Ciria, Maria Ruz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005089.v1.0.1](https://doi.org/10.18112/openneuro.ds005089.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005089) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005089) | [Source URL](https://openneuro.org/datasets/ds005089) | ### Copy-paste BibTeX ```bibtex @dataset{ds005089, title = {Proactive selective attention across competition contexts}, author = {Blanca Aguado-Lopez and Ana F. Palenciano and Jose M. G. Penalver and Paloma Diaz-Gutierrez and David Lopez-Garcia and Chiara Avancini and Luis F. Ciria and Maria Ruz}, doi = {10.18112/openneuro.ds005089.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005089.v1.0.1}, } ``` ## Technical Details - Subjects: 36 - Recordings: 36 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 68.82001666666666 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 68.0 GB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005089.v1.0.1 - Source: openneuro - OpenNeuro: [ds005089](https://openneuro.org/datasets/ds005089) - NeMAR: [ds005089](https://nemar.org/dataexplorer/detail?dataset_id=ds005089) ## API Reference Use the `DS005089` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005089(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Proactive selective attention across competition contexts * **Study:** `ds005089` (OpenNeuro) * **Author (year):** `AguadoLopez2024` * **Canonical:** — Also importable as: `DS005089`, `AguadoLopez2024`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005089](https://openneuro.org/datasets/ds005089) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005089](https://nemar.org/dataexplorer/detail?dataset_id=ds005089) DOI: [https://doi.org/10.18112/openneuro.ds005089.v1.0.1](https://doi.org/10.18112/openneuro.ds005089.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005089 >>> dataset = DS005089(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005089) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005089) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005095: eeg dataset, 48 subjects *STERNBERG DIFFICULT* Access recordings and metadata through EEGDash. **Citation:** Natalia Zhozhikashvili, Maria Protopova, Tatiana Shkurenko, Marie Arsalidou, Ilya Zakharov, Boris Kotchoubey, Sergey Malykh, Yuri Pavlov (2024). *STERNBERG DIFFICULT*. [10.18112/openneuro.ds005095.v1.0.2](https://doi.org/10.18112/openneuro.ds005095.v1.0.2) Modality: eeg Subjects: 48 Recordings: 48 License: CC0 Source: openneuro Citations: 7.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005095 dataset = DS005095(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005095(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005095( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005095, title = {STERNBERG DIFFICULT}, author = {Natalia Zhozhikashvili and Maria Protopova and Tatiana Shkurenko and Marie Arsalidou and Ilya Zakharov and Boris Kotchoubey and Sergey Malykh and Yuri Pavlov}, doi = {10.18112/openneuro.ds005095.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005095.v1.0.2}, } ``` ## About This Dataset **Overview** This is the “Sternberg Difficult” dataset, containing raw EEG data. Raven's Standard Progressive Matrices scores for each participant are provided in `participants.tsv`. **Task** Participants completed a version of the Sternberg task (Sternberg, 1966; Fig 1) during EEG recording. Stimuli were all consonant letters of the Russian alphabet, except for “щ” [sch] and “й” [ij], presented in sets of 3, 6, 9, 12, and 15 letters. No letter repeated within a set. Each trial was preceded by a 500-1000 ms fixation cross. Encoding (letter set), retention (blank screen), and retrieval (probe letter) phases were allocated 1500 ms, 2000 ms, and 1500 ms, respectively. After the 1500 ms period, the probe letter disappeared from the screen. Participants were asked to recall whether the probe letter was in the letter set presented during the encoding phase. They had unlimited time to respond by pressing a button: the “left arrow” for “no” and the “right arrow” for “yes”. The trial concluded immediately after a response was made, regardless of the reaction time. Participants completed 200 trials in total, with 40 trials per difficulty block corresponding to each set size (i.e., 3, 6, 9, 12, and 15 letters) and an opportunity to take a break after each block. The order of blocks was random, and the number of positive and negative probes was equal in each block. All stimuli were presented and responses were recorded using PsychoPy2. **Event triggers** Important: Triggers in the dataset correspond only to the beginning of the stimulus presentation.
No additional triggers were implemented to mark the onset of the retention and retrieval periods. However, these timepoints can be computed based on the experimental design. Each sample was presented for 1500 ms, meaning that the retention time occurred strictly 1500 ms after the trigger point appeared in the data. Similarly, the time of retrieval (when participants had to explicitly state whether a new letter had been shown previously) could be marked at 3500 ms relative to the trial onset. ## Dataset Information | Dataset ID | `DS005095` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | STERNBERG DIFFICULT | | Author (year) | `Zhozhikashvili2024` | | Canonical | — | | Importable as | `DS005095`, `Zhozhikashvili2024` | | Year | 2024 | | Authors | Natalia Zhozhikashvili, Maria Protopova, Tatiana Shkurenko, Marie Arsalidou, Ilya Zakharov, Boris Kotchoubey, Sergey Malykh, Yuri Pavlov | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005095.v1.0.2](https://doi.org/10.18112/openneuro.ds005095.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005095) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005095) | [Source URL](https://openneuro.org/datasets/ds005095) | ### Copy-paste BibTeX ```bibtex @dataset{ds005095, title = {STERNBERG DIFFICULT}, author = {Natalia Zhozhikashvili and Maria Protopova and Tatiana Shkurenko and Marie Arsalidou and Ilya Zakharov and Boris Kotchoubey and Sergey Malykh and Yuri Pavlov}, doi = {10.18112/openneuro.ds005095.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005095.v1.0.2}, } ``` ## Technical Details - Subjects: 48 - Recordings: 48 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.901139444444443 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 14.3 GB - File count: 48 - Format: 
BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005095.v1.0.2 - Source: openneuro - OpenNeuro: [ds005095](https://openneuro.org/datasets/ds005095) - NeMAR: [ds005095](https://nemar.org/dataexplorer/detail?dataset_id=ds005095) ## API Reference Use the `DS005095` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STERNBERG DIFFICULT * **Study:** `ds005095` (OpenNeuro) * **Author (year):** `Zhozhikashvili2024` * **Canonical:** — Also importable as: `DS005095`, `Zhozhikashvili2024`. Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
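The trigger scheme described in the dataset overview can be reconstructed offline: only stimulus onsets are marked, but the task design fixes the retention onset at 1500 ms and the retrieval onset at 3500 ms after each trigger. A minimal sketch of that arithmetic (the function name and plain-seconds representation are illustrative, not part of the EEGDash API):

```python
import numpy as np

def derive_phase_onsets(stim_onsets_s):
    """Given stimulus-onset trigger times (seconds), return retention and
    retrieval onsets per the task design: encoding lasts 1500 ms and
    retention 2000 ms, so retrieval begins 3500 ms after trial onset."""
    stim = np.asarray(stim_onsets_s, dtype=float)
    retention = stim + 1.5   # retention starts when the letter set disappears
    retrieval = stim + 3.5   # probe letter appears after the retention period
    return retention, retrieval

# Example with three hypothetical trial-onset triggers
retention, retrieval = derive_phase_onsets([10.0, 25.0, 40.0])
print(retention)   # [11.5 26.5 41.5]
print(retrieval)   # [13.5 28.5 43.5]
```

With EEGDash, the stimulus-onset times could be taken from the loaded raw object's annotations (e.g. `raw.annotations.onset` in MNE) before applying these shifts.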
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005095](https://openneuro.org/datasets/ds005095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005095](https://nemar.org/dataexplorer/detail?dataset_id=ds005095) DOI: [https://doi.org/10.18112/openneuro.ds005095.v1.0.2](https://doi.org/10.18112/openneuro.ds005095.v1.0.2) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS005095 >>> dataset = DS005095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005095) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005095) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005106: eeg dataset, 42 subjects *200 Objects Infants EEG* Access recordings and metadata through EEGDash. **Citation:** Tijl Grootswagers, Genevieve Quek, Zhen Zeng, Manuel Varlet (2024). *200 Objects Infants EEG*. 
[10.18112/openneuro.ds005106.v1.5.0](https://doi.org/10.18112/openneuro.ds005106.v1.5.0) Modality: eeg Subjects: 42 Recordings: 42 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005106 dataset = DS005106(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005106(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005106( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005106, title = {200 Objects Infants EEG}, author = {Tijl Grootswagers and Genevieve Quek and Zhen Zeng and Manuel Varlet}, doi = {10.18112/openneuro.ds005106.v1.5.0}, url = {https://doi.org/10.18112/openneuro.ds005106.v1.5.0}, } ``` ## About This Dataset Data and code for the paper: Tijl Grootswagers, Genevieve Quek, Zhen Zeng, & Manuel Varlet. 2025. “Human Infant EEG Recordings for 200 Object Images Presented in Rapid Visual Streams.” Scientific Data. [https://doi.org/10.1038/s41597-025-04744-z](https://doi.org/10.1038/s41597-025-04744-z) See the linked paper for details. The “code” directory contains all the code to reproduce the figures in the paper. It requires FieldTrip and CoSMoMVPA; change the paths to these toolboxes at the top of each script (or remove those lines and add the toolboxes to the MATLAB path manually). Then run the scripts to reproduce each step reported in the paper: 1. run_preprocessing.m (preprocess and epoch data) 2. run_rsa.m (makes the individual RDMs) 3. stats_rsa.m (computes the RSA correlations) 4.
plot_design.m (produces Figure 1 in the paper) 5. plot_peaks.m (produces Figure 2 in the paper) 6. plot_rsa.m (produces Figure 3 in the paper) Each script can also be run standalone, as intermediate results are saved in the derivatives folder. ## Dataset Information | Dataset ID | `DS005106` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 200 Objects Infants EEG | | Author (year) | `Grootswagers2024` | | Canonical | — | | Importable as | `DS005106`, `Grootswagers2024` | | Year | 2024 | | Authors | Tijl Grootswagers, Genevieve Quek, Zhen Zeng, Manuel Varlet | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005106.v1.5.0](https://doi.org/10.18112/openneuro.ds005106.v1.5.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005106) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005106) | [Source URL](https://openneuro.org/datasets/ds005106) | ### Copy-paste BibTeX ```bibtex @dataset{ds005106, title = {200 Objects Infants EEG}, author = {Tijl Grootswagers and Genevieve Quek and Zhen Zeng and Manuel Varlet}, doi = {10.18112/openneuro.ds005106.v1.5.0}, url = {https://doi.org/10.18112/openneuro.ds005106.v1.5.0}, } ``` ## Technical Details - Subjects: 42 - Recordings: 42 - Tasks: 1 - Channels: 33 - Sampling rate (Hz): 500.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.2 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005106.v1.5.0 - Source: openneuro - OpenNeuro: [ds005106](https://openneuro.org/datasets/ds005106) - NeMAR: [ds005106](https://nemar.org/dataexplorer/detail?dataset_id=ds005106) ## API Reference Use the `DS005106` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 200 Objects Infants EEG * **Study:** `ds005106` (OpenNeuro) * **Author (year):** `Grootswagers2024` * **Canonical:** — Also importable as: `DS005106`, `Grootswagers2024`. Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
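The note above about `query` can be made concrete: the class ANDs the user-supplied MongoDB-style filters with its own fixed `dataset` selection, which is why a user query must not contain the `dataset` key. A behavioral sketch of that merge (a hypothetical helper illustrating the documented behavior, not the library's actual code):

```python
def merge_query(user_query, dataset_id):
    """Sketch of the documented behavior: AND the user's MongoDB-style
    filters with the fixed dataset selection. A user-supplied 'dataset'
    key is rejected because the class sets that key itself."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # Keys at the top level of a MongoDB filter are implicitly ANDed.
    return {"dataset": dataset_id, **user_query}

merged = merge_query({"subject": {"$in": ["01", "02"]}}, "ds005106")
print(merged)  # {'dataset': 'ds005106', 'subject': {'$in': ['01', '02']}}
```

This is why passing `query={"subject": {"$in": ["01", "02"]}}` in the quickstart selects only those subjects within this dataset rather than across the whole archive.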
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005106](https://openneuro.org/datasets/ds005106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005106](https://nemar.org/dataexplorer/detail?dataset_id=ds005106) DOI: [https://doi.org/10.18112/openneuro.ds005106.v1.5.0](https://doi.org/10.18112/openneuro.ds005106.v1.5.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005106 >>> dataset = DS005106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005106) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005106) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005107: meg dataset, 21 subjects *FACE-DEC* Access recordings and metadata through EEGDash. **Citation:** Wei Xu, et al. (2024). *FACE-DEC*. 
[10.18112/openneuro.ds005107.v2.0.0](https://doi.org/10.18112/openneuro.ds005107.v2.0.0) Modality: meg Subjects: 21 Recordings: 350 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005107 dataset = DS005107(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005107(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005107( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005107, title = {FACE-DEC}, author = {Wei Xu and et al.}, doi = {10.18112/openneuro.ds005107.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005107.v2.0.0}, } ``` ## About This Dataset Analysis scripts: face_0_main (main entry), face_1_prep (preprocessing), face_2_dec (decoding), face_3_rsa (RSA), face_4_stat (statistics), face_6_bayes.m (BMS). During the original OPM-MEG data acquisition, individual facial point clouds and structural MRIs were not collected due to the unavailability of optical scanning equipment. Hence, all analyses were conducted at the whole-brain & sensor level. The raw data were originally stored in an in-house LabVIEW format and were later converted to the FIF format. Note that the sensor coordinates were approximated by selecting corresponding locations from the Elekta layout and do not reflect the actual sensor positions (they are intended only for visualizing topographic maps). We are currently checking all the data to ensure that everything has been uploaded correctly.
:) Correspondence: [weixu@mail.bnu.edu.cn](mailto:weixu@mail.bnu.edu.cn) ## Dataset Information | Dataset ID | `DS005107` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FACE-DEC | | Author (year) | `Xu2024_DEC` | | Canonical | — | | Importable as | `DS005107`, `Xu2024_DEC` | | Year | 2024 | | Authors | Wei Xu, et al. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005107.v2.0.0](https://doi.org/10.18112/openneuro.ds005107.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005107) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005107) | [Source URL](https://openneuro.org/datasets/ds005107) | ### Copy-paste BibTeX ```bibtex @dataset{ds005107, title = {FACE-DEC}, author = {Wei Xu and et al.}, doi = {10.18112/openneuro.ds005107.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005107.v2.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 350 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 31.625122222222224 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 27.6 GB - File count: 350 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005107.v2.0.0 - Source: openneuro - OpenNeuro: [ds005107](https://openneuro.org/datasets/ds005107) - NeMAR: [ds005107](https://nemar.org/dataexplorer/detail?dataset_id=ds005107) ## API Reference Use the `DS005107` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FACE-DEC * **Study:** `ds005107` (OpenNeuro) * **Author (year):** `Xu2024_DEC` * **Canonical:** — Also importable as: `DS005107`, `Xu2024_DEC`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 350; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005107](https://openneuro.org/datasets/ds005107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005107](https://nemar.org/dataexplorer/detail?dataset_id=ds005107) DOI: [https://doi.org/10.18112/openneuro.ds005107.v2.0.0](https://doi.org/10.18112/openneuro.ds005107.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005107 >>> dataset = DS005107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005107) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005107) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005114: eeg dataset, 91 subjects *EEG: DPX Cog Ctl Task in Acute Mild TBI* Access recordings and metadata through EEGDash. **Citation:** James F Cavanagh (2024). *EEG: DPX Cog Ctl Task in Acute Mild TBI*. [10.18112/openneuro.ds005114.v1.0.0](https://doi.org/10.18112/openneuro.ds005114.v1.0.0) Modality: eeg Subjects: 91 Recordings: 223 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005114 dataset = DS005114(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005114(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005114( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005114, title = {EEG: DPX Cog Ctl Task in Acute Mild TBI}, author = {James F Cavanagh}, doi = {10.18112/openneuro.ds005114.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005114.v1.0.0}, } ``` ## About This Dataset Dot Probe Continuous Performance Task in control & sub-acute mild TBI. 
Published here: [https://pubmed.ncbi.nlm.nih.gov/31368085/](https://pubmed.ncbi.nlm.nih.gov/31368085/) For CTL and sub-acute mTBI: Session 1 was from 3 to 14 days post-injury and was the only session with MRI. MRI will be uploaded later (bug issues on upload). Session 2 was ~2 months (1.5 to 3) and Session 3 was ~4 months (3 to 5) following Session 1. There was A LOT of subject attrition over timepoints. Same samples as reported here: [https://psycnet.apa.org/record/2020-66677-001](https://psycnet.apa.org/record/2020-66677-001) [https://pubmed.ncbi.nlm.nih.gov/31344589/](https://pubmed.ncbi.nlm.nih.gov/31344589/) 10.1016/j.neuropsychologia.2019.107125 The task code (MATLAB) is included. Data were collected 2016-2018 in the Center for Brain Recovery and Repair at the UNM Health Sciences Center. Check the .xls sheet under the code folder for *LOTS* more metadata. Analysis scripts are included. - James F Cavanagh 04/29/2024 ## Dataset Information | Dataset ID | `DS005114` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: DPX Cog Ctl Task in Acute Mild TBI | | Author (year) | `Cavanagh2024` | | Canonical | — | | Importable as | `DS005114`, `Cavanagh2024` | | Year | 2024 | | Authors | James F Cavanagh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005114.v1.0.0](https://doi.org/10.18112/openneuro.ds005114.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005114) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005114) | [Source URL](https://openneuro.org/datasets/ds005114) | ### Copy-paste BibTeX ```bibtex @dataset{ds005114, title = {EEG: DPX Cog Ctl Task in Acute Mild TBI}, author = {James F Cavanagh}, doi = {10.18112/openneuro.ds005114.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005114.v1.0.0}, } ``` ## Technical Details - Subjects: 91 -
Recordings: 223 - Tasks: 1 - Channels: 65 (217), 64 (6) - Sampling rate (Hz): 500.0 - Duration (hours): 125.70132055555555 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 55.9 GB - File count: 223 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005114.v1.0.0 - Source: openneuro - OpenNeuro: [ds005114](https://openneuro.org/datasets/ds005114) - NeMAR: [ds005114](https://nemar.org/dataexplorer/detail?dataset_id=ds005114) ## API Reference Use the `DS005114` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: DPX Cog Ctl Task in Acute Mild TBI * **Study:** `ds005114` (OpenNeuro) * **Author (year):** `Cavanagh2024` * **Canonical:** — Also importable as: `DS005114`, `Cavanagh2024`. Modality: `eeg`. Subjects: 91; recordings: 223; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
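The Technical Details above list mixed channel counts (65 channels for 217 recordings, 64 for the remaining 6), so pooling recordings blindly can fail on shape mismatches. A small sketch for grouping recordings by channel count before pooling; with EEGDash you would feed it `rec.raw.info['nchan']` per recording (the helper name and toy pairs below are illustrative):

```python
from collections import defaultdict

def group_by_nchan(recordings):
    """Group recording identifiers by channel count so heterogeneous
    montages (here 64 vs 65 channels) are not pooled accidentally.
    `recordings` is an iterable of (name, nchan) pairs; with EEGDash you
    could build these from each recording's raw.info['nchan']."""
    groups = defaultdict(list)
    for name, nchan in recordings:
        groups[nchan].append(name)
    return dict(groups)

# Toy example standing in for iterating over the downloaded dataset
print(group_by_nchan([("sub-01", 65), ("sub-02", 64), ("sub-03", 65)]))
```

After grouping, each channel-count subset can be epoched and stacked separately, or the minority montage can be re-referenced or channel-pruned to match the majority.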
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005114](https://openneuro.org/datasets/ds005114) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005114](https://nemar.org/dataexplorer/detail?dataset_id=ds005114) DOI: [https://doi.org/10.18112/openneuro.ds005114.v1.0.0](https://doi.org/10.18112/openneuro.ds005114.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005114 >>> dataset = DS005114(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005114) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005114) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005121: eeg dataset, 34 subjects *Siefert2024* Access recordings and metadata through EEGDash. **Citation:** Elizabeth M. Siefert, Sindhuja Uppuluri, Jianing Mu, Marlie C. Tandoc, James W. Antony, Anna C. Schapiro (2024). *Siefert2024*. 
[10.18112/openneuro.ds005121.v1.0.2](https://doi.org/10.18112/openneuro.ds005121.v1.0.2) Modality: eeg Subjects: 34 Recordings: 39 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005121 dataset = DS005121(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005121(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005121( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005121, title = {Siefert2024}, author = {Elizabeth M. Siefert and Sindhuja Uppuluri and Jianing Mu and Marlie C. Tandoc and James W. Antony and Anna C. Schapiro}, doi = {10.18112/openneuro.ds005121.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005121.v1.0.2}, } ``` ## About This Dataset Overview: This is the “Siefert2024” dataset. It is the sleep EEG data from Siefert et al., 2024 ([https://doi.org/10.1523/JNEUROSCI.0022-24.2024](https://doi.org/10.1523/JNEUROSCI.0022-24.2024)). In brief, it contains sleep EEG data from 34 participants recorded while Targeted Memory Reactivation (TMR) was administered. Please cite the following paper: E.M. Siefert, S. Uppuluri, J. Mu, M.C. Tandoc, J.W. Antony, A.C. Schapiro (2024). Memory reactivation during sleep does not act holistically on object memory. Journal of Neuroscience, 10.1523/JNEUROSCI.0022-24.2024 The dataset is formatted according to the Brain Imaging Data Structure (BIDS). Data organization was performed using the FieldTrip data2bids function.
Additional details: Events.tsv files contain information about the different cueing events. - item_value is the sound index number used by the TMR system. This number corresponds to the literal code name of the satellite (i.e., “nivex” or “sorex”). Here, 33 indicates no sound was played but a SO was tagged. - SatNum is the corresponding satellite number. This number is the same satellite number that is used in the behavioral data. > - 0 indicates no sound was played. > - Satellites 1-5 are the studied satellites from the blocked category. > - Satellites 6-10 are the studied satellites from the interleaved category. > - Satellites 11-15 are the studied satellites from the uncued category. System crashed and had to be restarted for participants 6, 7, 15, 17, 21, resulting in two EEG files for these participants. Please contact Liz Siefert ([sieferte@pennmedicine.upenn.edu](mailto:sieferte@pennmedicine.upenn.edu)) with any additional questions. ## Dataset Information | Dataset ID | `DS005121` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Siefert2024 | | Author (year) | `Siefert2024` | | Canonical | — | | Importable as | `DS005121`, `Siefert2024` | | Year | 2024 | | Authors | Elizabeth M. Siefert, Sindhuja Uppuluri, Jianing Mu, Marlie C. Tandoc, James W. Antony, Anna C. Schapiro | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005121.v1.0.2](https://doi.org/10.18112/openneuro.ds005121.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005121) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005121) | [Source URL](https://openneuro.org/datasets/ds005121) | ### Copy-paste BibTeX ```bibtex @dataset{ds005121, title = {Siefert2024}, author = {Elizabeth M. Siefert and Sindhuja Uppuluri and Jianing Mu and Marlie C. Tandoc and James W. Antony and Anna C. 
Schapiro}, doi = {10.18112/openneuro.ds005121.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005121.v1.0.2}, } ``` ## Technical Details - Subjects: 34 - Recordings: 39 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 512.0 - Duration (hours): 40.52590386284722 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.0 GB - File count: 39 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005121.v1.0.2 - Source: openneuro - OpenNeuro: [ds005121](https://openneuro.org/datasets/ds005121) - NeMAR: [ds005121](https://nemar.org/dataexplorer/detail?dataset_id=ds005121) ## API Reference Use the `DS005121` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Siefert2024 * **Study:** `ds005121` (OpenNeuro) * **Author (year):** `Siefert2024` * **Canonical:** — Also importable as: `DS005121`, `Siefert2024`. Modality: `eeg`. Subjects: 34; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005121](https://openneuro.org/datasets/ds005121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005121](https://nemar.org/dataexplorer/detail?dataset_id=ds005121) DOI: [https://doi.org/10.18112/openneuro.ds005121.v1.0.2](https://doi.org/10.18112/openneuro.ds005121.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005121 >>> dataset = DS005121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005121) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005121) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005131: eeg dataset, 58 subjects *Evoked responses to elevated sounds* Access recordings and metadata through EEGDash. **Citation:** Ole Bialas, Marc Schoewiesner (2024). *Evoked responses to elevated sounds*. 
[10.18112/openneuro.ds005131.v1.0.1](https://doi.org/10.18112/openneuro.ds005131.v1.0.1) Modality: eeg Subjects: 58 Recordings: 63 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005131 dataset = DS005131(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005131(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005131( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005131, title = {Evoked responses to elevated sounds}, author = {Ole Bialas and Marc Schoewiesner}, doi = {10.18112/openneuro.ds005131.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005131.v1.0.1}, } ``` ## About This Dataset **Overview** The dataset consists of data from two experiments in which subjects were presented with bursts of noise from loudspeakers at different elevations. Subjects who participated in either experiment were initially tested in their ability to localize elevated sound sources. Both experiments were conducted in a hemi-anechoic chamber. **Localization Tests** Bursts of pink noise were presented from loudspeakers at different elevations and 10° azimuth (to the listener’s right). In the localization test preceding experiment I, these loudspeakers were positioned at elevations of +50°, +25°, 0°, and -25°, while the localization test preceding experiment II also included a loudspeaker at -50° elevation. 
Localization test data are missing for sub-001, sub-002, and sub-003. **Deviant Detection (Experiment I)** Subjects 001-023 participated in this experiment. Subjects heard a long trail of noise from one loudspeaker (adapter), followed by a short burst of noise from another loudspeaker (probe). The elevations of the adapter and probe are encoded in the event values:

- 2: adapter at 37.5°, probe at 12.5°
- 3: adapter at 37.5°, probe at -12.5°
- 4: adapter at 37.5°, probe at -37.5°
- 5: adapter at -37.5°, probe at 37.5°
- 6: adapter at -37.5°, probe at 12.5°
- 7: adapter at -37.5°, probe at -12.5°
- 8: no adapter, any non-target location (deviant)

The behavioral data contain the trial numbers where a deviant was presented and whether the subject responded within one second by pressing a button. **One-Back (Experiment II)** Subjects 100-134 participated in this experiment. Subjects heard a long trail of white noise through open headphones, followed by a short burst of noise from one of the loudspeakers. The loudspeaker’s elevation is encoded in the event values: 1: 37.5°, 2: 12.5°, 3: -23.5°, 4: -37.5°. Roughly five percent of trials were targets, where subjects heard a beep after the trial prompting them to localize the previously heard sound. The number of those target trials, as well as the target’s elevation and the subject’s response, can be found in the behavioral data. A subset (sub-130 to sub-134) participated in a second session of the experiment. This session was identical to the first, with the difference that the subjects had molds inserted that disrupted their ability to perceive sound elevation. 
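The event-value encodings above map directly onto small lookup tables, which is handy when labeling epochs. A minimal sketch (the dictionary and function names are illustrative, not part of the EEGDash API; the values are those listed above):

```python
# Experiment I event values -> (adapter elevation, probe elevation) in degrees.
# Code 8 marks a deviant: no adapter, probe at any non-target location.
EXP1_EVENTS = {
    2: (37.5, 12.5),
    3: (37.5, -12.5),
    4: (37.5, -37.5),
    5: (-37.5, 37.5),
    6: (-37.5, 12.5),
    7: (-37.5, -12.5),
    8: (None, None),  # deviant
}

# Experiment II event values -> loudspeaker elevation in degrees.
EXP2_EVENTS = {1: 37.5, 2: 12.5, 3: -23.5, 4: -37.5}

def describe_exp1(code: int) -> str:
    """Return a human-readable label for an Experiment I event code."""
    adapter, probe = EXP1_EVENTS[code]
    if adapter is None:
        return "deviant (no adapter)"
    return f"adapter {adapter}°, probe {probe}°"

print(describe_exp1(3))
```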
## Dataset Information | Dataset ID | `DS005131` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Evoked responses to elevated sounds | | Author (year) | `Bialas2024` | | Canonical | — | | Importable as | `DS005131`, `Bialas2024` | | Year | 2024 | | Authors | Ole Bialas, Marc Schoewiesner | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005131.v1.0.1](https://doi.org/10.18112/openneuro.ds005131.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005131) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) | [Source URL](https://openneuro.org/datasets/ds005131) | ### Copy-paste BibTeX ```bibtex @dataset{ds005131, title = {Evoked responses to elevated sounds}, author = {Ole Bialas and Marc Schoewiesner}, doi = {10.18112/openneuro.ds005131.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005131.v1.0.1}, } ``` ## Technical Details - Subjects: 58 - Recordings: 63 - Tasks: 2 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 52.035291666666666 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 22.3 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005131.v1.0.1 - Source: openneuro - OpenNeuro: [ds005131](https://openneuro.org/datasets/ds005131) - NeMAR: [ds005131](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) ## API Reference Use the `DS005131` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Evoked responses to elevated sounds * **Study:** `ds005131` (OpenNeuro) * **Author (year):** `Bialas2024` * **Canonical:** — Also importable as: `DS005131`, `Bialas2024`. Modality: `eeg`. 
Subjects: 58; recordings: 63; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005131](https://openneuro.org/datasets/ds005131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005131](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) DOI: [https://doi.org/10.18112/openneuro.ds005131.v1.0.1](https://doi.org/10.18112/openneuro.ds005131.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005131 >>> dataset = DS005131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005131) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005169: ieeg dataset, 20 subjects *Dataset of intracranial EEG during cortical stimulation evoking visual effects* Access recordings and metadata through EEGDash. **Citation:** Andrei Barborica, Felicia Mihai, Laurentiu Tofan, Irina Oane, Ioana Mindruta (2024). *Dataset of intracranial EEG during cortical stimulation evoking visual effects*. [10.18112/openneuro.ds005169.v1.0.0](https://doi.org/10.18112/openneuro.ds005169.v1.0.0) Modality: ieeg Subjects: 20 Recordings: 112 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005169 dataset = DS005169(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005169(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005169( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005169, title = {Dataset of intracranial EEG during cortical stimulation evoking visual effects}, author = {Andrei Barborica and Felicia Mihai and Laurentiu Tofan and Irina Oane and Ioana Mindruta}, doi = {10.18112/openneuro.ds005169.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005169.v1.0.0}, } ``` ## About This Dataset In this dataset we included iEEG recordings of responses to 115 intracranial high-frequency stimulations evoking visual hallucinations, in 22 patients undergoing stereo-EEG presurgical evaluation for drug-resistant epilepsy. The dataset contains 21 seconds of iEEG data around each stimulation: 8 seconds before the start of the stimulation, up to 5 seconds of intracranial stimulation, and 8 seconds after the end of the stimulation. We used high-frequency bipolar stimulations of different areas of the brain, using alternating-polarity biphasic pulses with a duration of 1 ms, at 43.2 Hz or 50 Hz, with current intensity between 0.25 and 3 mA, for up to 5 s. The alternating-polarity protocol allows disambiguating neuronal responses time-locked to the stimulation pulses from the artefactual components, according to Barborica et al., 2022 (doi: 10.1002/hbm.25749). It is therefore possible to identify the brain networks underlying the clinical effects, and to create symptom-related activation/connectivity maps. The contact pair on which stimulation is applied, the current intensity level, and the evoked effect are specified in the events.tsv. 
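Because the stimulation parameters are stored per recording in events.tsv, they can be filtered with the standard library’s `csv` module. A minimal sketch over a toy file — the column names used here (`stim_contacts`, `current_mA`, `evoked_effect`) are assumptions for illustration only; check the headers of the actual sidecar files:

```python
import csv
import io

# Toy events.tsv content; the real column names may differ -- inspect the file.
tsv = io.StringIO(
    "onset\tduration\tstim_contacts\tcurrent_mA\tevoked_effect\n"
    "8.0\t5.0\tO1-O2\t1.5\tplus hallucination\n"
    "8.0\t3.0\tF3-F4\t0.25\telementary\n"
)
rows = list(csv.DictReader(tsv, delimiter="\t"))

# Keep only low-intensity stimulations (<= 1 mA).
low = [r for r in rows if float(r["current_mA"]) <= 1.0]
print([r["evoked_effect"] for r in low])
```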
The responses are classified in 14 clinical categories: elementary (unstructured flashes of light), plus hallucination (presence of light in different forms or colors overlaying the background vision), minus hallucination (negative elementary phenomena described as scotoma, quadrantanopia, hemianopia or amaurosis), static, dynamic, continuous hallucination, intermittent hallucination, peripheric, central, whole visual field, color, non-color, combined visual symptoms, multimodal hallucinations. Not all patients in which stimulations evoked visual hallucinations met the inclusion criteria for network analysis that requires running the freesurfer pipeline, for instance patients having prior resections, therefore there are subjects that do not contain ieeg data. However, they were kept in order to match the number of patients in the companion manuscript. Contact: [andrei.barborica@fizica.unibuc.ro](mailto:andrei.barborica@fizica.unibuc.ro) ## Dataset Information | Dataset ID | `DS005169` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of intracranial EEG during cortical stimulation evoking visual effects | | Author (year) | `Barborica2024` | | Canonical | — | | Importable as | `DS005169`, `Barborica2024` | | Year | 2024 | | Authors | Andrei Barborica, Felicia Mihai, Laurentiu Tofan, Irina Oane, Ioana Mindruta | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005169.v1.0.0](https://doi.org/10.18112/openneuro.ds005169.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005169) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) | [Source URL](https://openneuro.org/datasets/ds005169) | ### Copy-paste BibTeX ```bibtex @dataset{ds005169, title = {Dataset of intracranial EEG during cortical stimulation evoking visual effects}, author = {Andrei Barborica 
and Felicia Mihai and Laurentiu Tofan and Irina Oane and Ioana Mindruta}, doi = {10.18112/openneuro.ds005169.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005169.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 112 - Tasks: 1 - Channels: 82 (19), 94 (9), 101 (6), 136 (5), 95 (5), 102 (5), 193 (5), 70 (4), 40 (4), 83 (3), 85 (3), 184 (3), 79 (3), 84 (3), 106 (2), 143 (2), 58 (2), 104 (2), 91 (2), 113, 71, 88, 96, 100, 80, 39, 144, 105, 99, 92, 160, 29, 186, 69, 98, 76, 109, 38, 81, 114, 188, 86, 89, 103 - Sampling rate (Hz): 4096.0 - Duration (hours): 0.6533333333333333 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 4.0 GB - File count: 112 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005169.v1.0.0 - Source: openneuro - OpenNeuro: [ds005169](https://openneuro.org/datasets/ds005169) - NeMAR: [ds005169](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) ## API Reference Use the `DS005169` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulation evoking visual effects * **Study:** `ds005169` (OpenNeuro) * **Author (year):** `Barborica2024` * **Canonical:** — Also importable as: `DS005169`, `Barborica2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 20; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005169](https://openneuro.org/datasets/ds005169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005169](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) DOI: [https://doi.org/10.18112/openneuro.ds005169.v1.0.0](https://doi.org/10.18112/openneuro.ds005169.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005169 >>> dataset = DS005169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005169) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005170: eeg dataset, 5 subjects *Chisco* Access recordings and metadata through EEGDash. **Citation:** Zihan Zhang, Yi Zhao, Yu Bao, Xiao Ding (2024). *Chisco*. [10.18112/openneuro.ds005170.v1.1.2](https://doi.org/10.18112/openneuro.ds005170.v1.1.2) Modality: eeg Subjects: 5 Recordings: 225 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005170 dataset = DS005170(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005170(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005170( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005170, title = {Chisco}, author = {Zihan Zhang and Yi Zhao and Yu Bao and Xiao Ding}, doi = {10.18112/openneuro.ds005170.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds005170.v1.1.2}, } ``` ## About This Dataset **Chisco Dataset** This dataset is a Chinese imagined speech dataset with five participants, identified as sub-01 to sub-05. 
The dataset includes raw data and preprocessed data in both fif and pkl formats. Information can also be found at [https://github.com/zhangzihan-is-good/Chisco](https://github.com/zhangzihan-is-good/Chisco) **Supplementary Information** The initial dataset release encompassed data from three participants (sub-01 to sub-03) as detailed in related Chisco publications. Subsequently, data from two additional subjects (sub-04 and sub-05) were incorporated. During the interval between the original dataset release and the addition of the new data, the BIDS protocol underwent updates. To preserve the integrity of the data processing code presented in our publications, the supplementary data continue to adhere to the previous version of the BIDS protocol. Consequently, the BIDS validator on our website may report errors; however, these do not compromise the usability of the dataset. Future releases will include data from sub-06 and sub-07, who participated under a new experimental paradigm. These will be published as part of a new dataset, Chisco 2.0. We invite you to stay tuned for further updates. **Dataset Structure** **Root Directory** - `dataset_description.json` - `participants.tsv` - `README` - `derivatives/` - `sub-01/` to `sub-05/` - `textdataset/` - `json/` **Raw Data** The root directory contains folders `sub-01` to `sub-05` with raw data. Each participant’s folder contains 5-6 session folders, corresponding to data collected over 5-6 days. **Preprocessed Data** Preprocessed data are stored in the `derivatives` folder in both fif and pkl formats. **Text Data** The `textdataset` folder and `json` folder contain the text data used to stimulate the participants. **File Structure** ```text /Chisco /sub-01 /ses-01 /eeg sub-01_ses-01_task-imagine_eeg.edf ... /sub-02 ... /sub-03 ... /derivatives /fif /sub-01 ... /sub-02 ... /sub-03 ... /pkl /sub-01 ... /sub-02 ... /sub-03 ... /textdataset ... /json ... dataset_description.json README participants.tsv ``` **License** This dataset is licensed under the CC0 license. You are free to use the dataset for non-commercial purposes, but the original author needs to be properly indicated. **Citation** If you use this dataset in your research, please cite the following link: [https://github.com/zhangzihan-is-good/Chisco](https://github.com/zhangzihan-is-good/Chisco) **Contact Information** For any questions, please contact the dataset authors. Thank you for using Chisco! 
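Following the file structure shown above, individual raw-EEG paths can be assembled from subject and session labels. A minimal sketch using `pathlib` — pure string handling based on the tree above (`task-imagine`, EDF suffix), not an EEGDash helper:

```python
from pathlib import Path

def chisco_raw_path(root: str, subject: str, session: str) -> Path:
    """Path to a raw EDF in the layout shown above (e.g. sub-01/ses-01/eeg/...)."""
    sub, ses = f"sub-{subject}", f"ses-{session}"
    return Path(root) / sub / ses / "eeg" / f"{sub}_{ses}_task-imagine_eeg.edf"

p = chisco_raw_path("Chisco", "01", "01")
print(p.as_posix())
# -> Chisco/sub-01/ses-01/eeg/sub-01_ses-01_task-imagine_eeg.edf
```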
## Dataset Information | Dataset ID | `DS005170` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Chisco | | Author (year) | `Zhang2024_Chisco` | | Canonical | `Chisco` | | Importable as | `DS005170`, `Zhang2024_Chisco`, `Chisco` | | Year | 2024 | | Authors | Zihan Zhang, Yi Zhao, Yu Bao, Xiao Ding | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005170.v1.1.2](https://doi.org/10.18112/openneuro.ds005170.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005170) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) | [Source URL](https://openneuro.org/datasets/ds005170) | ### Copy-paste BibTeX ```bibtex @dataset{ds005170, title = {Chisco}, author = {Zihan Zhang and Yi Zhao and Yu Bao and Xiao Ding}, doi = {10.18112/openneuro.ds005170.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds005170.v1.1.2}, } ``` ## Technical Details - Subjects: 5 - Recordings: 225 - Tasks: 1 - Channels: 134 - Sampling rate (Hz): Varies - Duration (hours): 104.28733333333334 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 90.7 GB - File count: 225 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005170.v1.1.2 - Source: openneuro - OpenNeuro: [ds005170](https://openneuro.org/datasets/ds005170) - NeMAR: [ds005170](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) ## API Reference Use the `DS005170` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco * **Study:** `ds005170` (OpenNeuro) * **Author (year):** `Zhang2024_Chisco` * **Canonical:** `Chisco` Also importable as: `DS005170`, `Zhang2024_Chisco`, `Chisco`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 225; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005170](https://openneuro.org/datasets/ds005170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005170](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) DOI: [https://doi.org/10.18112/openneuro.ds005170.v1.1.2](https://doi.org/10.18112/openneuro.ds005170.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005170 >>> dataset = DS005170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005170) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005178: eeg dataset, 10 subjects *Ear-EEG Sleep Monitoring 2023 (EESM23)* Access recordings and metadata through EEGDash. **Citation:** Yousef Rezaei Tabar, Kaare Mikkelsen, Laura Birch, Nelly Shenton, Simon L Kappel, Astrid R Bertelsen, Reza Nikbakht, Hans O Toft, Chris H Henriksen, Martin C Hemmsen, Mike L Rank, Marit Otto, Preben Kidmose (2024). *Ear-EEG Sleep Monitoring 2023 (EESM23)*. [10.18112/openneuro.ds005178.v1.0.0](https://doi.org/10.18112/openneuro.ds005178.v1.0.0) Modality: eeg Subjects: 10 Recordings: 140 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005178 dataset = DS005178(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005178(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005178( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005178, title = {Ear-EEG Sleep Monitoring 2023 (EESM23)}, author = {Yousef Rezaei Tabar and Kaare Mikkelsen and Laura Birch and Nelly Shenton and Simon L Kappel and Astrid R Bertelsen and Reza Nikbakht and Hans O Toft and Chris H Henriksen and Martin C Hemmsen and Mike L Rank and Marit Otto and Preben Kidmose}, doi = {10.18112/openneuro.ds005178.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005178.v1.0.0}, } ``` ## About This Dataset Ear-EEG Sleep Monitoring 2023 (EESM23) data set **Overview** This dataset was collected as part of a research project on ear-EEG sleep monitoring that took place in 2020-2022. The data set contains nightly EEG recordings from 10 healthy participants (‘subjects’). The first two recordings consist of polysomnography (PSG) measurements and ear-EEG measurements. The remaining ten recordings consist of only ear-EEG measurements, though a few subjects were asked to repeat a recording. Only the accepted recordings can be found in the BIDS-formatted data set. Each file consists of a video sequence followed by a sleep sequence. After the video sequence, the subject sent triggers to distinguish between the two sequences. Due to potential variability in triggering the device, the sequences remain in one file, though it should be possible to manually sort the file into distinct video and sleep sequences. There are no events.tsv files for ear-EEG. **Task description** The subject performed tasks prior to going to bed. These recordings are labeled with ‘video’ as task. After this, the real recording started, which took place during the night and began when the subject went to bed. These recordings are labeled as having task ‘sleep’. For the first two recordings, the recording equipment was mounted in the afternoon. For the remaining recordings, the subject mounted the ear-EEG equipment by themselves immediately prior to going to bed. All recordings took place at the subject’s home. 
As can be seen in the diaries accompanying the recordings, the subjects wrote down recording start, electrode test start, when they went to bed, lights-out and recording end, and marked these in the data files using the trigger button on the equipment. **Format** The dataset is formatted according to the Brain Imaging Data Structure. See the ‘dataset_description.json’ file for the specific BIDS version used. The EEG data format chosen is the ‘.set’ format of EEGLAB. For more information, see the following link: [https://bids-specification.readthedocs.io/en/stable/01-introduction.html](https://bids-specification.readthedocs.io/en/stable/01-introduction.html) **Contact** For questions regarding this data set, contact: Preben Kidmose, [pki@ece.au.dk](mailto:pki@ece.au.dk), [https://orcid.org/0000-0001-8628-8057](https://orcid.org/0000-0001-8628-8057) Kaare Mikkelsen, [Mikkelsen.kaare@ece.au.dk](mailto:Mikkelsen.kaare@ece.au.dk), [https://orcid.org/0000-0002-7360-8629](https://orcid.org/0000-0002-7360-8629) ## Dataset Information | Dataset ID | `DS005178` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Ear-EEG Sleep Monitoring 2023 (EESM23) | | Author (year) | `Tabar2024` | | Canonical | `EESM23` | | Importable as | `DS005178`, `Tabar2024`, `EESM23` | | Year | 2024 | | Authors | Yousef Rezaei Tabar, Kaare Mikkelsen, Laura Birch, Nelly Shenton, Simon L Kappel, Astrid R Bertelsen, Reza Nikbakht, Hans O Toft, Chris H Henriksen, Martin C Hemmsen, Mike L Rank, Marit Otto, Preben Kidmose | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005178.v1.0.0](https://doi.org/10.18112/openneuro.ds005178.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005178) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) | [Source 
URL](https://openneuro.org/datasets/ds005178) | ### Copy-paste BibTeX ```bibtex @dataset{ds005178, title = {Ear-EEG Sleep Monitoring 2023 (EESM23)}, author = {Yousef Rezaei Tabar and Kaare Mikkelsen and Laura Birch and Nelly Shenton and Simon L Kappel and Astrid R Bertelsen and Reza Nikbakht and Hans O Toft and Chris H Henriksen and Martin C Hemmsen and Mike L Rank and Marit Otto and Preben Kidmose}, doi = {10.18112/openneuro.ds005178.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005178.v1.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 140 - Tasks: 1 - Channels: 4 (120), 13 (20) - Sampling rate (Hz): 250.0 - Duration (hours): 1012.5174533333332 - Pathology: Healthy - Modality: Sleep - Type: Sleep - Size on disk: 25.7 GB - File count: 140 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005178.v1.0.0 - Source: openneuro - OpenNeuro: [ds005178](https://openneuro.org/datasets/ds005178) - NeMAR: [ds005178](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) ## API Reference Use the `DS005178` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005178(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2023 (EESM23) * **Study:** `ds005178` (OpenNeuro) * **Author (year):** `Tabar2024` * **Canonical:** `EESM23` Also importable as: `DS005178`, `Tabar2024`, `EESM23`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 10; recordings: 140; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005178](https://openneuro.org/datasets/ds005178) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005178](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) DOI: [https://doi.org/10.18112/openneuro.ds005178.v1.0.0](https://doi.org/10.18112/openneuro.ds005178.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005178 >>> dataset = DS005178(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005178) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005185: eeg dataset, 20 subjects *Ear-EEG Sleep Monitoring 2019 (EESM19)* Access recordings and metadata through EEGDash. **Citation:** Kaare B. Mikkelsen, Preben Kidmose, Yousef Rezaei Tabar (2024). *Ear-EEG Sleep Monitoring 2019 (EESM19)*. [10.18112/openneuro.ds005185.v1.0.2](https://doi.org/10.18112/openneuro.ds005185.v1.0.2) Modality: eeg Subjects: 20 Recordings: 356 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005185 dataset = DS005185(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005185(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005185( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005185, title = {Ear-EEG Sleep Monitoring 2019 (EESM19)}, author = {Kaare B. 
Mikkelsen and Preben Kidmose and Yousef Rezaei Tabar}, doi = {10.18112/openneuro.ds005185.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005185.v1.0.2}, } ``` ## About This Dataset EESM19: Ear-EEG Sleep Monitoring data set This data set was collected as part of development and quality assessment of the ear-EEG as a sleep monitoring platform. Data collection took place between 2018 and 2020. First publication was in 2019 ([https://doi.org/10.1038/s41598-019-53115-3](https://doi.org/10.1038/s41598-019-53115-3)), hence the ‘19’ in the name. The data set consists of 2 parts (a & b): a: 20 subjects who each spent 4 nights sleeping with a partial PSG (EEG, EOG and chin EMG electrodes), ear-EEG and a wrist-worn actigraph, in their own homes. b: Of these 20 subjects, 10 also slept a further 12 nights wearing only ear-EEG, actigraph and a single EOG electrode. Each night is saved as a separate ‘session’, meaning that some subjects have 4 sessions while others have 16. The PSG nights are always sessions 1-4. Each PSG night has an additional ‘scoring’ event file, where ‘scoring’ is the ‘acquisition’ type. Questionnaires: After each night’s recording, the subject answered a short questionnaire regarding the quality of the night’s sleep. This has been archived as behavioral data (task=’comfort’). Diaries: Besides the comfort questionnaire, the subjects also kept a standardized diary regarding the events of the night. These have been imported too; however, only the required fields ‘Syncronization’, ‘Electrodetest’, ‘Went to bed’, ‘Lights out’ and ‘Got up’ have been translated from Danish to English. We suggest using an online translation tool for any additional entries. The diaries have a column ‘pressedTrigger’, which indicates that the subject marked the precise time of the event on their wrist-worn actigraph.
As there is some interpretation necessary due to both spurious extra trigger presses and also missing trigger presses, and these event markings eventually turned out not to be important for our own research, we have not exported these trigger times in the data set. However, as the full actigraphy file is included in this data set, any interested future user can do the matching themselves. For consistency, we have chosen to use the starting time written in the scored edf file (‘edf1’) as the starting time of each PSG recording. For non-PSG recordings, the starting time is what is written in the diary. An alternative would be using the start time as seen in the wrist actigraph, described below. Actigraphy: Subjects wore GENEactive actigraphs (‘actiwatches’ for short). These record 3-axis acceleration as well as temperature, light and user button presses. Given that the temperature and light readings are strongly affected by whether the subjects had their hand above or below the covers, we found that only the actigraphy and button presses had much use. However, all data is found in the actigraphy files (in the behavior folders). To ensure the possibility of perfect alignment between actiwatch and EEG recorder (TMSI ‘mobita’), at the beginning of each recording, the subjects shook the mobita and the actiwatch together in a repeated rhythmical pattern. By accessing the mobita actigraphy data from the .set file (EEG.etc.acc.data) it is possible to get perfect alignment. This is advantageous if very high precision of various sleep events is desired, since the clock in the actiwatch was very reliable. In practice, we have not used this option, and hence the actigraphy alignment is left up to the user. Electrode test: As a quality check on the electrode connections, subjects viewed a short video containing various instructions: repeated jaw clenching, open/closed eyes, horizontal eye movements.
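The shake-based synchronization described above can be exploited by cross-correlating the actiwatch acceleration trace with the mobita acceleration (EEG.etc.acc.data) and taking the lag at the correlation peak. A minimal NumPy sketch; the helper and signal names are illustrative (not part of EEGDash), and both traces are assumed to have been resampled to a common rate beforehand.

```python
import numpy as np


def estimate_lag(delayed, reference):
    """Number of samples by which `delayed` trails `reference`,
    taken from the peak of the full cross-correlation
    (positive lag = `delayed` starts later)."""
    c = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(c)) - (len(reference) - 1)


# Synthetic check: a copy of the reference delayed by 25 samples
rng = np.random.default_rng(0)
ref = rng.standard_normal(500)
delayed = np.concatenate([np.zeros(25), ref])
lag = estimate_lag(delayed, ref)  # 25
```

Dividing the lag by the shared sampling rate gives the clock offset in seconds, which can then be applied to the actiwatch timestamps.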
These are marked in the diaries, and can be used as a simple test that the EEG equipment is working as intended. An analysis of these responses can be found in [https://doi.org/10.3389/fncom.2021.565244](https://doi.org/10.3389/fncom.2021.565244). Note regarding artifact rejection: We advise against using the data directly from the .poly5 files. The primary reason for this is that we had some issues with faulty shielding on some of the electrodes (good shielding is necessary for dry-contact electrodes). This caused signal leakage between electrodes, which is highly unwanted, and which could make the ear-EEG channels contain PSG data, even after rereferencing. We went to great lengths to identify these electrodes, using both algorithms and physical inspection of all electrodes between recordings, and are confident that there are no issues in the .set files (for which these electrodes have been set to ‘NaN’). Note that this identification and discarding is the only preprocessing which has been done to the EEG data. For questions regarding this data set, contact: Kaare Mikkelsen, [Mikkelsen.kaare@ece.au.dk](mailto:Mikkelsen.kaare@ece.au.dk), [https://orcid.org/0000-0002-7360-8629](https://orcid.org/0000-0002-7360-8629) ## Dataset Information | Dataset ID | `DS005185` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Ear-EEG Sleep Monitoring 2019 (EESM19) | | Author (year) | `Mikkelsen2024_Ear_Sleep_Monitoring` | | Canonical | `EESM19` | | Importable as | `DS005185`, `Mikkelsen2024_Ear_Sleep_Monitoring`, `EESM19` | | Year | 2024 | | Authors | Kaare B.
Mikkelsen, Preben Kidmose, Yousef Rezaei Tabar | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005185.v1.0.2](https://doi.org/10.18112/openneuro.ds005185.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005185) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) | [Source URL](https://openneuro.org/datasets/ds005185) | ### Copy-paste BibTeX ```bibtex @dataset{ds005185, title = {Ear-EEG Sleep Monitoring 2019 (EESM19)}, author = {Kaare B. Mikkelsen and Preben Kidmose and Yousef Rezaei Tabar}, doi = {10.18112/openneuro.ds005185.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005185.v1.0.2}, } ``` ## Technical Details - Subjects: 20 - Recordings: 356 - Tasks: 3 - Channels: 25 - Sampling rate (Hz): 500.0 - Duration (hours): 1365.566388888889 - Pathology: Healthy - Modality: Sleep - Type: Sleep - Size on disk: 267.6 GB - File count: 356 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005185.v1.0.2 - Source: openneuro - OpenNeuro: [ds005185](https://openneuro.org/datasets/ds005185) - NeMAR: [ds005185](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) ## API Reference Use the `DS005185` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2019 (EESM19) * **Study:** `ds005185` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Ear_Sleep_Monitoring` * **Canonical:** `EESM19` Also importable as: `DS005185`, `Mikkelsen2024_Ear_Sleep_Monitoring`, `EESM19`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 20; recordings: 356; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005185](https://openneuro.org/datasets/ds005185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005185](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) DOI: [https://doi.org/10.18112/openneuro.ds005185.v1.0.2](https://doi.org/10.18112/openneuro.ds005185.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005185 >>> dataset = DS005185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005185) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005189: eeg dataset, 30 subjects *Search Superiority Recollection Familiarity* Access recordings and metadata through EEGDash. **Citation:** Jason Helbing, Dejan Draschkow, Melissa L.-H. Võ (2024). *Search Superiority Recollection Familiarity*. [10.18112/openneuro.ds005189.v1.0.1](https://doi.org/10.18112/openneuro.ds005189.v1.0.1) Modality: eeg Subjects: 30 Recordings: 30 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005189 dataset = DS005189(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005189(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005189( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005189, title = {Search Superiority Recollection Familiarity}, author = {Jason Helbing and Dejan Draschkow and Melissa L.-H. 
Võ}, doi = {10.18112/openneuro.ds005189.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005189.v1.0.1}, } ``` ## About This Dataset In this experiment, participants searched for objects in some scenes and intentionally memorized others. We then tested their memory of these objects, finding stronger (quantitative difference) and different (qualitative difference: recollection benefit) memory representations for search targets. We recorded both EEG and eye movements. Behavioral data is split into encoding (Encode_beh) and memory testing (Test_beh). Analysis scripts and preprocessed data as well as additional materials are available on the OSF at [https://osf.io/esr5q/](https://osf.io/esr5q/). Project Abstract: Most memory is not formed deliberately but as a by-product of natural behavior. These incidental representations, when generated during visual search, can be stronger than intentionally memorized content (search superiority effect). In this study, we investigate whether this effect is purely quantitative (stronger memory) or also due to qualitative memory differences; more precisely, differences in recollection and familiarity, two processes supporting recognition memory. In an EEG study with eye tracking, 30 participants searched for objects in scenes and intentionally memorized others before completing a surprise recognition memory test. We find that compared to new objects, both search targets and intentionally memorized objects elicit a more positive-going mid-frontal negativity peaking at around 400 ms post stimulus onset (FN400), which is associated with familiarity, as well as a more positive-going parietal late component (LPC), indicative of recollection. Both components show no differences between tasks, indicating equal contributions of recollection and familiarity to remembering searched and memorized objects. 
Behavioral data from remember–know judgments and receiver operating characteristics (ROCs), however, contrasts with the EEG findings: Search targets are more often reported as recollected and their ROCs show higher intercepts, indicating more recollection, whereas there are essentially no behavioral differences in familiarity between tasks. These results indicate that search superiority relies on increased recollection rather than familiarity. The absent LPC effect despite the behavioral task difference challenges existing assumptions about the neural correlates of recognition memory, raising the question whether they hold when investigated using real-world scenes and incidental encoding during naturalistic tasks. ## Dataset Information | Dataset ID | `DS005189` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Search Superiority Recollection Familiarity | | Author (year) | `Helbing2024` | | Canonical | — | | Importable as | `DS005189`, `Helbing2024` | | Year | 2024 | | Authors | Jason Helbing, Dejan Draschkow, Melissa L.-H. Võ | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005189.v1.0.1](https://doi.org/10.18112/openneuro.ds005189.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005189) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) | [Source URL](https://openneuro.org/datasets/ds005189) | ### Copy-paste BibTeX ```bibtex @dataset{ds005189, title = {Search Superiority Recollection Familiarity}, author = {Jason Helbing and Dejan Draschkow and Melissa L.-H. 
Võ}, doi = {10.18112/openneuro.ds005189.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005189.v1.0.1}, } ``` ## Technical Details - Subjects: 30 - Recordings: 30 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 1000.0 - Duration (hours): 19.32608055555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 16.1 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005189.v1.0.1 - Source: openneuro - OpenNeuro: [ds005189](https://openneuro.org/datasets/ds005189) - NeMAR: [ds005189](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) ## API Reference Use the `DS005189` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Search Superiority Recollection Familiarity * **Study:** `ds005189` (OpenNeuro) * **Author (year):** `Helbing2024` * **Canonical:** — Also importable as: `DS005189`, `Helbing2024`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005189](https://openneuro.org/datasets/ds005189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005189](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) DOI: [https://doi.org/10.18112/openneuro.ds005189.v1.0.1](https://doi.org/10.18112/openneuro.ds005189.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005189 >>> dataset = DS005189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005189) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005207: eeg dataset, 20 subjects *Surrey cEEGrid sleep data set* Access recordings and metadata through EEGDash. **Citation:** Kaare B. Mikkelsen, James K Ebajemito, Maria A Bonmati-Carrion, Nayantara Santhi, Victoria L Revell, Giuseppe Atzori, Laura Birch, Ciro Della Monica, Stefan Debener, Derk-Jan Dijk, Annette Sterr, Maarten De Vos (2024). *Surrey cEEGrid sleep data set*. 
[10.18112/openneuro.ds005207.v1.0.0](https://doi.org/10.18112/openneuro.ds005207.v1.0.0) Modality: eeg Subjects: 20 Recordings: 39 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005207 dataset = DS005207(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005207(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005207( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005207, title = {Surrey cEEGrid sleep data set}, author = {Kaare B. Mikkelsen and James K Ebajemito and Maria A Bonmati-Carrion and Nayantara Santhi and Victoria L Revell and Giuseppe Atzori and Laura Birch and Ciro Della Monica and Stefan Debener and Derk-Jan Dijk and Annette Sterr and Maarten De Vos}, doi = {10.18112/openneuro.ds005207.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005207.v1.0.0}, } ``` ## About This Dataset Surrey sleep data set **Overview** This dataset was collected as part of a research project on wearable sleep monitoring which took place in spring 2017. The data set contains nightly EEG recordings from 20 healthy participants (‘subjects’). Some recordings are full polysomnography (PSG) measurements, others are cEEGrid measurements. Most subjects have both PSG and cEEGrid recordings from the same night, though a few are missing one or the other. **Format** The dataset is formatted according to the Brain Imaging Data Structure. See the ‘dataset_description.json’ file for the specific BIDS version used.
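The per-subject PSG/cEEGrid pairing mentioned in the overview (most subjects have both; a few are missing one) can be tabulated from recording metadata. A minimal pure-Python sketch; the record fields and labels are hypothetical stand-ins, and with EEGDash the recording-level metadata would come from `dataset.description`.

```python
from collections import defaultdict


def modalities_by_subject(records):
    """Map each subject to the set of recording kinds it has."""
    table = defaultdict(set)
    for rec in records:
        table[rec["subject"]].add(rec["kind"])
    return dict(table)


# Hypothetical records: subject 02 is missing its PSG night
records = [
    {"subject": "01", "kind": "psg"},
    {"subject": "01", "kind": "ceegrid"},
    {"subject": "02", "kind": "ceegrid"},
]
table = modalities_by_subject(records)
incomplete = sorted(s for s, kinds in table.items() if len(kinds) < 2)
```

Subjects in `incomplete` are the ones for which a PSG/cEEGrid comparison is not possible.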
The EEG data format chosen is the ‘.set’ format of EEGLAB. For more information, see the following link: [https://bids-specification.readthedocs.io/en/stable/01-introduction.html](https://bids-specification.readthedocs.io/en/stable/01-introduction.html) **Task description** The participants performed no tasks. The recording equipment was mounted immediately prior to bedtime, and the recordings took place at the sleep laboratory of the Surrey Clinical Research Centre. Note that due to a miscommunication during the study, alignment information between cEEGrid and PSG recordings has not been saved. This means that to obtain a useful comparison between the two methods, for instance to align the manual scoring with the cEEGrid recordings, some post-processing has to be performed. In the derivative dataset, ‘aligned1’, we have shared our own best attempt at alignment. The data set was previously described in the paper ‘Machine-learning-derived sleep–wake staging from around-the-ear electroencephalogram outperforms manual scoring and actigraphy’, Mikkelsen et al 2018, [https://doi.org/10.1111/jsr.12786](https://doi.org/10.1111/jsr.12786) **Contact** For questions regarding this data set, contact: Kaare Mikkelsen, [Mikkelsen.kaare@ece.au.dk](mailto:Mikkelsen.kaare@ece.au.dk), [https://orcid.org/0000-0002-7360-8629](https://orcid.org/0000-0002-7360-8629) ## Dataset Information | Dataset ID | `DS005207` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Surrey cEEGrid sleep data set | | Author (year) | `Mikkelsen2024_Surrey_cEEGrid_sleep` | | Canonical | `Surrey_cEEGrid_sleep` | | Importable as | `DS005207`, `Mikkelsen2024_Surrey_cEEGrid_sleep`, `Surrey_cEEGrid_sleep` | | Year | 2024 | | Authors | Kaare B.
Mikkelsen, James K Ebajemito, Maria A Bonmati-Carrion, Nayantara Santhi, Victoria L Revell, Giuseppe Atzori, Laura Birch, Ciro Della Monica, Stefan Debener, Derk-Jan Dijk, Annette Sterr, Maarten De Vos | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005207.v1.0.0](https://doi.org/10.18112/openneuro.ds005207.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005207) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) | [Source URL](https://openneuro.org/datasets/ds005207) | ### Copy-paste BibTeX ```bibtex @dataset{ds005207, title = {Surrey cEEGrid sleep data set}, author = {Kaare B. Mikkelsen and James K Ebajemito and Maria A Bonmati-Carrion and Nayantara Santhi and Victoria L Revell and Giuseppe Atzori and Laura Birch and Ciro Della Monica and Stefan Debener and Derk-Jan Dijk and Annette Sterr and Maarten De Vos}, doi = {10.18112/openneuro.ds005207.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005207.v1.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 39 - Tasks: 1 - Channels: 13 (8), 24 (6), 20 (5), 11 (5), 27 (4), 18 (3), 21 (3), 15 (2), 23 (2), 22 - Sampling rate (Hz): 128.0 (20), 250.0 (19) - Duration (hours): 422.5796577083333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 28.5 GB - File count: 39 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005207.v1.0.0 - Source: openneuro - OpenNeuro: [ds005207](https://openneuro.org/datasets/ds005207) - NeMAR: [ds005207](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) ## API Reference Use the `DS005207` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Surrey cEEGrid sleep data set * **Study:** `ds005207` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Surrey_cEEGrid_sleep` * **Canonical:** `Surrey_cEEGrid_sleep` Also importable as: `DS005207`, `Mikkelsen2024_Surrey_cEEGrid_sleep`, `Surrey_cEEGrid_sleep`. Modality: `eeg`. Subjects: 20; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005207](https://openneuro.org/datasets/ds005207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005207](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) DOI: [https://doi.org/10.18112/openneuro.ds005207.v1.0.0](https://doi.org/10.18112/openneuro.ds005207.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005207 >>> dataset = DS005207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005207) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005241: meg dataset, 24 subjects *NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis* Access recordings and metadata through EEGDash. **Citation:** Amilleah Rodriguez, Dan Zhao, Kyra Wilson, Ritika Saboo, Sergey V Samsonau, Alec Marantz (2024). *NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis*. 
[10.18112/openneuro.ds005241.v1.1.0](https://doi.org/10.18112/openneuro.ds005241.v1.1.0) Modality: meg Subjects: 24 Recordings: 117 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005241 dataset = DS005241(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005241(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005241( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005241, title = {NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis}, author = {Amilleah Rodriguez and Dan Zhao and Kyra Wilson and Ritika Saboo and Sergey V Samsonau and Alec Marantz}, doi = {10.18112/openneuro.ds005241.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds005241.v1.1.0}, } ``` ## About This Dataset KIT/Yokogawa MEG system with 208 magnetometer channels. 24 subjects, amounting to over 17 hours of data. Supplementary code can be found [here](https://github.com/amilleah/neuromorph). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896 Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R.
N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110 ## Dataset Information | Dataset ID | `DS005241` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis | | Author (year) | `Rodriguez2024` | | Canonical | `NeuroMorph`, `neuromorph` | | Importable as | `DS005241`, `Rodriguez2024`, `NeuroMorph`, `neuromorph` | | Year | 2024 | | Authors | Amilleah Rodriguez, Dan Zhao, Kyra Wilson, Ritika Saboo, Sergey V Samsonau, Alec Marantz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005241.v1.1.0](https://doi.org/10.18112/openneuro.ds005241.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005241) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) | [Source URL](https://openneuro.org/datasets/ds005241) | ### Copy-paste BibTeX ```bibtex @dataset{ds005241, title = {NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis}, author = {Amilleah Rodriguez and Dan Zhao and Kyra Wilson and Ritika Saboo and Sergey V Samsonau and Alec Marantz}, doi = {10.18112/openneuro.ds005241.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds005241.v1.1.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 117 - Tasks: 2 - Channels: 256 - Sampling rate (Hz): 1000.0 - Duration (hours): 3.731936944444445 - Pathology: Healthy - Modality: — - Type: Other - Size on disk: 140.5 GB - File count: 117 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005241.v1.1.0 - Source: openneuro - OpenNeuro: [ds005241](https://openneuro.org/datasets/ds005241) - NeMAR:
[ds005241](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) ## API Reference Use the `DS005241` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis * **Study:** `ds005241` (OpenNeuro) * **Author (year):** `Rodriguez2024` * **Canonical:** `NeuroMorph`, `neuromorph` Also importable as: `DS005241`, `Rodriguez2024`, `NeuroMorph`, `neuromorph`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 24; recordings: 117; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005241](https://openneuro.org/datasets/ds005241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005241](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) DOI: [https://doi.org/10.18112/openneuro.ds005241.v1.1.0](https://doi.org/10.18112/openneuro.ds005241.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005241 >>> dataset = DS005241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005241) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005261: meg dataset, 17 subjects *Gloups_MEG* Access recordings and metadata through EEGDash. **Citation:** Snezana Todorovic, Elin Runnqvist, Valerie Chanoine, Jean-Michel Badier (2024). *Gloups_MEG*. 
[10.18112/openneuro.ds005261.v3.0.0](https://doi.org/10.18112/openneuro.ds005261.v3.0.0) Modality: meg Subjects: 17 Recordings: 128 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005261 dataset = DS005261(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005261(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005261( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005261, title = {Gloups_MEG}, author = {Snezana Todorovic and Elin Runnqvist and Valerie Chanoine and Jean-Michel Badier}, doi = {10.18112/openneuro.ds005261.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds005261.v3.0.0}, } ``` ## About This Dataset README Seventeen adult participants completed a learning task and a resting-state condition during MEG recording (4D NeuroImaging system with 248 magnetometer channels). Current dataset: OpenNeuro MEG Dataset ds005261 (Gloups_MEG, [https://openneuro.org/datasets/ds005261/versions/2.0.0](https://openneuro.org/datasets/ds005261/versions/2.0.0); see Todorović et al., in revision). The same participants performed an identical learning task during fMRI scanning. Related dataset: OpenNeuro fMRI Dataset ds004597 (Gloups, [https://openneuro.org/datasets/ds004597/versions/2.0.0](https://openneuro.org/datasets/ds004597/versions/2.0.0); see Todorović et al., 2023). Note: Participant identifiers differ between the fMRI and MEG datasets. For details, refer to Table 1 in Todorović et al., in revision. 
**References MNE-BIDS** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) Todorović, S., Anton, J.-L., Sein, J., Nazarian, B., Chanoine, V., Rauchbauer, B., Kotz, S. A., & Runnqvist, E. (2023). Cortico-Cerebellar Monitoring of Speech Sequence Production. Neurobiology of Language, 1–21. Todorović, S., Chanoine, V., Nazarian, B., Badier, J-M., Kanzari, K., Brovelli, A., Kotz, S. A., & Runnqvist, E. (in revision). Dataset for Evaluating the Production of Phonotactically Legal and Illegal Pseudowords. Scientific Data. 
## Dataset Information | Dataset ID | `DS005261` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Gloups_MEG | | Author (year) | `Todorovic2024` | | Canonical | `Todorovic2023` | | Importable as | `DS005261`, `Todorovic2024`, `Todorovic2023` | | Year | 2024 | | Authors | Snezana Todorovic, Elin Runnqvist, Valerie Chanoine, Jean-Michel Badier | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005261.v3.0.0](https://doi.org/10.18112/openneuro.ds005261.v3.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005261) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) | [Source URL](https://openneuro.org/datasets/ds005261) | ### Copy-paste BibTeX ```bibtex @dataset{ds005261, title = {Gloups_MEG}, author = {Snezana Todorovic and Elin Runnqvist and Valerie Chanoine and Jean-Michel Badier}, doi = {10.18112/openneuro.ds005261.v3.0.0}, url = {https://doi.org/10.18112/openneuro.ds005261.v3.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 128 - Tasks: 2 - Channels: 248 (71), 278 (31), 245 (24) - Sampling rate (Hz): 2034.5100996195154 (31), 2034.5101318359375 (7) - Duration (hours): 3.0415982996288142 - Pathology: Healthy - Modality: — - Type: Learning - Size on disk: 137.2 GB - File count: 128 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005261.v3.0.0 - Source: openneuro - OpenNeuro: [ds005261](https://openneuro.org/datasets/ds005261) - NeMAR: [ds005261](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) ## API Reference Use the `DS005261` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005261(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gloups_MEG * **Study:** `ds005261` (OpenNeuro) * **Author (year):** `Todorovic2024` * **Canonical:** `Todorovic2023` Also importable as: `DS005261`, `Todorovic2024`, `Todorovic2023`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 17; recordings: 128; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
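As the `data_dir` attribute above indicates, downloads land under `cache_dir / dataset_id`. A quick stdlib-only way to check what is already cached locally for this dataset (the layout is the documented one; the file contents are whatever the BIDS download produced):

```python
from pathlib import Path

cache_dir = Path("./data")
data_dir = cache_dir / "ds005261"  # documented layout: cache_dir / dataset_id

# Count cached files; empty (or missing) before the first download.
cached = [p for p in data_dir.rglob("*") if p.is_file()] if data_dir.exists() else []
print(f"{len(cached)} cached files under {data_dir}")
```

This is handy before re-running a pipeline on a large dataset like this one (137.2 GB on disk), to confirm which recordings need to be fetched.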
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005261](https://openneuro.org/datasets/ds005261) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005261](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) DOI: [https://doi.org/10.18112/openneuro.ds005261.v3.0.0](https://doi.org/10.18112/openneuro.ds005261.v3.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005261 >>> dataset = DS005261(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005261) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005262: eeg dataset, 12 subjects *ArEEG: Arabic Inner Speech EEG dataset* Access recordings and metadata through EEGDash. **Citation:** Donia Metwalli, Eslam Ahmed, Antony Emil, Yousef A. Radwan, Mariam Barakat, Anas Ahmed, Amro Omar, Sahar Selim (2024). *ArEEG: Arabic Inner Speech EEG dataset*. 
[10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1) Modality: eeg Subjects: 12 Recordings: 186 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005262 dataset = DS005262(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005262(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005262( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005262, title = {ArEEG: Arabic Inner Speech EEG dataset}, author = {Donia Metwalli and Eslam Ahmed and Antony Emil and Yousef A. Radwan and Mariam Barakat and Anas Ahmed and Amro Omar and Sahar Selim}, doi = {10.18112/openneuro.ds005262.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005262.v1.0.1}, } ``` ## About This Dataset **ArEEG: Arabic EEG Dataset** This dataset is a collection of Inner Speech EEG recordings from 12 subjects (7 males and 5 females), with visual cues written in Modern Standard Arabic. Go to [GitHub Repository](https://github.com/Eslam21/ArEEG-an-Open-Access-Arabic-Inner-Speech-EEG-Dataset) for usage instructions.
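With only 8 channels at 250 Hz, ArEEG is light enough to window on the fly for inner-speech classification. The sample arithmetic for fixed-length sliding windows looks like this (the 2 s window and 1 s stride are arbitrary illustrative choices, not values taken from the dataset):

```python
sfreq = 250.0        # sampling rate from the technical details
window_s = 2.0       # hypothetical window length
stride_s = 1.0       # hypothetical stride (50% overlap)

window_samples = int(window_s * sfreq)   # 500 samples per window
stride_samples = int(stride_s * sfreq)   # 250 samples between window starts

recording_s = 60.0   # e.g. a one-minute recording
n_samples = int(recording_s * sfreq)
n_windows = 1 + (n_samples - window_samples) // stride_samples
print(window_samples, n_windows)  # → 500 59
```

In practice, braindecode's windowing utilities perform this bookkeeping for you; the sketch only shows where the window counts come from.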
## Dataset Information | Dataset ID | `DS005262` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ArEEG: Arabic Inner Speech EEG dataset | | Author (year) | `Metwalli2024` | | Canonical | `ArEEG` | | Importable as | `DS005262`, `Metwalli2024`, `ArEEG` | | Year | 2024 | | Authors | Donia Metwalli, Eslam Ahmed, Antony Emil, Yousef A. Radwan, Mariam Barakat, Anas Ahmed, Amro Omar, Sahar Selim | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005262) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) | [Source URL](https://openneuro.org/datasets/ds005262) | ### Copy-paste BibTeX ```bibtex @dataset{ds005262, title = {ArEEG: Arabic Inner Speech EEG dataset}, author = {Donia Metwalli and Eslam Ahmed and Antony Emil and Yousef A. Radwan and Mariam Barakat and Anas Ahmed and Amro Omar and Sahar Selim}, doi = {10.18112/openneuro.ds005262.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005262.v1.0.1}, } ``` ## Technical Details - Subjects: 12 - Recordings: 186 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 250.0 - Duration (hours): 25.04766111111111 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 688.8 MB - File count: 186 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005262.v1.0.1 - Source: openneuro - OpenNeuro: [ds005262](https://openneuro.org/datasets/ds005262) - NeMAR: [ds005262](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) ## API Reference Use the `DS005262` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ArEEG: Arabic Inner Speech EEG dataset * **Study:** `ds005262` (OpenNeuro) * **Author (year):** `Metwalli2024` * **Canonical:** `ArEEG` Also importable as: `DS005262`, `Metwalli2024`, `ArEEG`. Modality: `eeg`. Subjects: 12; recordings: 186; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005262](https://openneuro.org/datasets/ds005262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005262](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) DOI: [https://doi.org/10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005262 >>> dataset = DS005262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005262) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005273: eeg dataset, 33 subjects *Neural representation of consciously seen and unseen information* Access recordings and metadata through EEGDash. **Citation:** Pablo Rodríguez-San Esteban, Ana B. Chica, José A. González-López (2024). *Neural representation of consciously seen and unseen information*. 
[10.18112/openneuro.ds005273.v1.0.0](https://doi.org/10.18112/openneuro.ds005273.v1.0.0) Modality: eeg Subjects: 33 Recordings: 33 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005273 dataset = DS005273(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005273(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005273( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005273, title = {Neural representation of consciously seen and unseen information}, author = {Pablo Rodríguez-San Esteban and Ana B. Chica and José A. González-López}, doi = {10.18112/openneuro.ds005273.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005273.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005273` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neural representation of consciously seen and unseen information | | Author (year) | `Esteban2024` | | Canonical | — | | Importable as | `DS005273`, `Esteban2024` | | Year | 2024 | | Authors | Pablo Rodríguez-San Esteban, Ana B. Chica, José A. 
González-López | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005273.v1.0.0](https://doi.org/10.18112/openneuro.ds005273.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005273) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) | [Source URL](https://openneuro.org/datasets/ds005273) | ### Copy-paste BibTeX ```bibtex @dataset{ds005273, title = {Neural representation of consciously seen and unseen information}, author = {Pablo Rodríguez-San Esteban and Ana B. Chica and José A. González-López}, doi = {10.18112/openneuro.ds005273.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005273.v1.0.0}, } ``` ## Technical Details - Subjects: 33 - Recordings: 33 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 58.05492972222223 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 44.4 GB - File count: 33 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005273.v1.0.0 - Source: openneuro - OpenNeuro: [ds005273](https://openneuro.org/datasets/ds005273) - NeMAR: [ds005273](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) ## API Reference Use the `DS005273` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005273(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural representation of consciously seen and unseen information * **Study:** `ds005273` (OpenNeuro) * **Author (year):** `Esteban2024` * **Canonical:** — Also importable as: `DS005273`, `Esteban2024`. Modality: `eeg`. Subjects: 33; recordings: 33; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005273](https://openneuro.org/datasets/ds005273) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005273](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) DOI: [https://doi.org/10.18112/openneuro.ds005273.v1.0.0](https://doi.org/10.18112/openneuro.ds005273.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005273 >>> dataset = DS005273(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005273) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005274: eeg dataset, 22 subjects *UV_EEG* Access recordings and metadata through EEGDash. **Citation:** Yukako Ito (2024). *UV_EEG*. [10.18112/openneuro.ds005274.v1.0.0](https://doi.org/10.18112/openneuro.ds005274.v1.0.0) Modality: eeg Subjects: 22 Recordings: 22 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005274 dataset = DS005274(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005274(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005274( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005274, title = {UV_EEG}, author = {Yukako Ito}, doi = {10.18112/openneuro.ds005274.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005274.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005274` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | UV_EEG | | Author (year) | `Ito2024` | | Canonical | — | | Importable as | `DS005274`, `Ito2024` | | Year | 2024 | | Authors | Yukako Ito | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005274.v1.0.0](https://doi.org/10.18112/openneuro.ds005274.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005274) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) | [Source URL](https://openneuro.org/datasets/ds005274) | ### Copy-paste BibTeX ```bibtex @dataset{ds005274, title = {UV_EEG}, author = {Yukako Ito}, doi = {10.18112/openneuro.ds005274.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005274.v1.0.0}, } ``` ## Technical Details - Subjects: 22 - Recordings: 22 - Tasks: 1 - Channels: 6 - Sampling rate (Hz): 500.0 - Duration (hours): 1.6619166666666665 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 71.9 MB - File count: 22 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005274.v1.0.0 - Source: openneuro - OpenNeuro: [ds005274](https://openneuro.org/datasets/ds005274) - NeMAR: [ds005274](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) ## API Reference Use the `DS005274` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005274(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UV_EEG * **Study:** `ds005274` (OpenNeuro) * **Author (year):** `Ito2024` * **Canonical:** — Also importable as: `DS005274`, `Ito2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 22; recordings: 22; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005274](https://openneuro.org/datasets/ds005274) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005274](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) DOI: [https://doi.org/10.18112/openneuro.ds005274.v1.0.0](https://doi.org/10.18112/openneuro.ds005274.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005274 >>> dataset = DS005274(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005274) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005279: meg dataset, 30 subjects *Picture-Word Interference Dataset* Access recordings and metadata through EEGDash. **Citation:** Hsi T. Wei, Farhan B. Faisal, Tamara Beck, Claire Shao, Jed A. Meltzer (2024). *Picture-Word Interference Dataset*. [10.18112/openneuro.ds005279.v1.0.3](https://doi.org/10.18112/openneuro.ds005279.v1.0.3) Modality: meg Subjects: 30 Recordings: 90 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005279 dataset = DS005279(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005279(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005279( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005279, title = {Picture-Word Interference Dataset}, author = {Hsi T. Wei and Farhan B. Faisal and Tamara Beck and Claire Shao and Jed A.
Meltzer}, doi = {10.18112/openneuro.ds005279.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005279.v1.0.3}, } ``` ## About This Dataset This study was conducted at the Rotman Research Institute at Baycrest Hospital in Toronto, Canada. The dataset contains 30 healthy young adults’ MEG (CTF), sMRI, and behavioural data on a picture-word interference (PWI) task. Subjects were shown images of objects one by one and were instructed to covertly retrieve the name of each picture and judge, by pressing the yes or no button with their right hand, whether the name ended in a target sound given at the beginning of each task block. Whenever they saw an image, they often also heard a distractor word played through their earphones. The picture and word could be phonologically related, semantically related, or unrelated. There were 3 runs of the PWI task for each participant. Each run contained 120 trials, with an equal number of trials for each picture-word condition. Behaviourally, the reaction time and accuracy of the button-press response were recorded. The MEG data were epoched to the picture onset and the response onset for event-related analyses. Each subject’s structural MRI was acquired for MEG source localization. The corresponding analysis code can be found under the code folder, with the “analysis walkthrough” providing a more detailed explanation of the analysis. ## Dataset Information | Dataset ID | `DS005279` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Picture-Word Interference Dataset | | Author (year) | `Wei2024` | | Canonical | — | | Importable as | `DS005279`, `Wei2024` | | Year | 2024 | | Authors | Hsi T. Wei, Farhan B. Faisal, Tamara Beck, Claire Shao, Jed A.
Meltzer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005279.v1.0.3](https://doi.org/10.18112/openneuro.ds005279.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005279) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) | [Source URL](https://openneuro.org/datasets/ds005279) | ### Copy-paste BibTeX ```bibtex @dataset{ds005279, title = {Picture-Word Interference Dataset}, author = {Hsi T. Wei and Farhan B. Faisal and Tamara Beck and Claire Shao and Jed A. Meltzer}, doi = {10.18112/openneuro.ds005279.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005279.v1.0.3}, } ``` ## Technical Details - Subjects: 30 - Recordings: 90 - Tasks: — - Channels: Varies - Sampling rate (Hz): 1200 - Duration (hours): 3.42 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 58.9 GB - File count: 90 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005279.v1.0.3 - Source: openneuro - OpenNeuro: [ds005279](https://openneuro.org/datasets/ds005279) - NeMAR: [ds005279](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) ## API Reference Use the `DS005279` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture-Word Interference Dataset * **Study:** `ds005279` (OpenNeuro) * **Author (year):** `Wei2024` * **Canonical:** — Also importable as: `DS005279`, `Wei2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 90; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005279](https://openneuro.org/datasets/ds005279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005279](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) DOI: [https://doi.org/10.18112/openneuro.ds005279.v1.0.3](https://doi.org/10.18112/openneuro.ds005279.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005279 >>> dataset = DS005279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005279) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005280: eeg dataset, 223 subjects *223 By BP* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *223 By BP*. [10.18112/openneuro.ds005280.v1.0.0](https://doi.org/10.18112/openneuro.ds005280.v1.0.0) Modality: eeg Subjects: 223 Recordings: 669 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005280 dataset = DS005280(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005280(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005280( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005280, title = {223 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005280.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005280.v1.0.0}, } ``` ## About This Dataset

1. Study introduction: In this experiment, participants received fixed-intensity pain stimuli at 3 J / 3.5 J (low pain) and 3.5 J / 4 J (high pain). Each participant underwent stimulation in 3 blocks, each comprising 10 stimuli, for 30 stimuli in total. High- and low-pain stimuli were evenly distributed within each block. After each stimulation, participants provided a pain rating. Ratings were as follows: 0 indicated no sensation at all, 4 indicated the onset of pain, 6 represented moderate pain, 8 indicated severe pain, and 10 denoted unbearable pain.
2. Participant task information (description of the experiment): Participants received laser stimulation and subsequently provided pain intensity ratings one by one.
3. Participant instructions (as exact as possible): Participants were instructed to focus on the laser stimulation, keep their eyes open, and fix their gaze on the crosshairs displayed on the screen. After each laser stimulation, there was a five-second pause, and participants then rated the intensity of the pain. Subsequent trials began at a random interval about 5 seconds after the score was provided.
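The 0–10 rating scale above anchors pain onset at 4 and moderate pain at 6. For decoding analyses, such ratings are often binned into coarse classes; a minimal Python sketch, where the thresholds and label names are illustrative assumptions and not part of the dataset:

```python
def bin_rating(rating: float) -> str:
    """Map a 0-10 pain rating to a coarse label (illustrative thresholds)."""
    if rating < 4:
        return "no-pain"    # below the scale's pain-onset anchor
    elif rating < 6:
        return "low-pain"   # painful but below the moderate-pain anchor
    else:
        return "high-pain"

labels = [bin_rating(r) for r in (2, 4, 5.5, 7, 10)]
print(labels)  # ['no-pain', 'low-pain', 'low-pain', 'high-pain', 'high-pain']
```

Such labels could then serve as classification targets when pairing the EEG epochs with the behavioural ratings.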
## Dataset Information | Dataset ID | `DS005280` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 223 By BP | | Author (year) | `Xiangyue2024_223_BP` | | Canonical | — | | Importable as | `DS005280`, `Xiangyue2024_223_BP` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005280.v1.0.0](https://doi.org/10.18112/openneuro.ds005280.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005280) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) | [Source URL](https://openneuro.org/datasets/ds005280) | ### Copy-paste BibTeX ```bibtex @dataset{ds005280, title = {223 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005280.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005280.v1.0.0}, } ``` ## Technical Details - Subjects: 223 - Recordings: 669 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 98.77 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 42.4 GB - File count: 669 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005280.v1.0.0 - Source: openneuro - OpenNeuro: [ds005280](https://openneuro.org/datasets/ds005280) - NeMAR: [ds005280](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) ## API Reference Use the `DS005280` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005280(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 223 By BP * **Study:** `ds005280` (OpenNeuro) * **Author (year):** `Xiangyue2024_223_BP` * **Canonical:** — Also importable as: `DS005280`, `Xiangyue2024_223_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 223; recordings: 669; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
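The Notes above state that a user-supplied `query` is AND-combined with the fixed dataset filter and must not contain the key `dataset`. Conceptually, the merge can be pictured as a MongoDB `$and`; the following is an illustrative sketch of the idea, not EEGDash's actual implementation:

```python
def merge_with_dataset_filter(user_query: dict, dataset_id: str) -> dict:
    """Illustrative sketch: AND a user query with the fixed dataset filter."""
    if "dataset" in user_query:
        # The class reserves this key for its own selection.
        raise ValueError("query must not contain the key 'dataset'")
    if not user_query:
        return {"dataset": dataset_id}
    return {"$and": [{"dataset": dataset_id}, user_query]}

merged = merge_with_dataset_filter({"subject": {"$in": ["01", "02"]}}, "ds005280")
print(merged)
# {'$and': [{'dataset': 'ds005280'}, {'subject': {'$in': ['01', '02']}}]}
```

Whatever filters you pass can therefore only narrow the selection within `ds005280`, never widen it to other datasets.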
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005280](https://openneuro.org/datasets/ds005280) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005280](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) DOI: [https://doi.org/10.18112/openneuro.ds005280.v1.0.0](https://doi.org/10.18112/openneuro.ds005280.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005280 >>> dataset = DS005280(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005280) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005284: eeg dataset, 26 subjects *26 By Biosemi* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *26 By Biosemi*. 
[10.18112/openneuro.ds005284.v1.0.0](https://doi.org/10.18112/openneuro.ds005284.v1.0.0) Modality: eeg Subjects: 26 Recordings: 26 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005284 dataset = DS005284(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005284(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005284( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005284, title = {26 By Biosemi}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005284.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005284.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005284` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 26 By Biosemi | | Author (year) | `Xiangyue2024_26_Biosemi` | | Canonical | — | | Importable as | `DS005284`, `Xiangyue2024_26_Biosemi` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005284.v1.0.0](https://doi.org/10.18112/openneuro.ds005284.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005284) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) | [Source URL](https://openneuro.org/datasets/ds005284) | ### Copy-paste BibTeX ```bibtex @dataset{ds005284, title = {26 By Biosemi}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005284.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005284.v1.0.0}, } ``` ## Technical Details - Subjects: 26 - Recordings: 26 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1024.0 (25 recordings), 2048.0 - Duration (hours): 2.39 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 1.7 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005284.v1.0.0 - Source: openneuro - OpenNeuro: [ds005284](https://openneuro.org/datasets/ds005284) - NeMAR: [ds005284](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) ## API Reference Use the `DS005284` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 26 By Biosemi * **Study:** `ds005284` (OpenNeuro) * **Author (year):** `Xiangyue2024_26_Biosemi` * **Canonical:** — Also importable as: `DS005284`, `Xiangyue2024_26_Biosemi`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
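Because this dataset's recordings do not all share one sampling rate (the technical details list both 1024.0 Hz and 2048.0 Hz), you may want to group recordings by rate, or resample to a common rate, before batching for training. A minimal sketch over hypothetical metadata records; real per-recording metadata come from `dataset.description`:

```python
from collections import defaultdict

# Hypothetical per-recording metadata, standing in for dataset.description.
records = [
    {"subject": "01", "sfreq": 1024.0},
    {"subject": "02", "sfreq": 1024.0},
    {"subject": "03", "sfreq": 2048.0},
]

# Bucket subjects by sampling rate so each batch has a uniform rate.
by_rate = defaultdict(list)
for rec in records:
    by_rate[rec["sfreq"]].append(rec["subject"])

print(dict(by_rate))  # {1024.0: ['01', '02'], 2048.0: ['03']}
```

Alternatively, MNE-Python's resampling can bring all recordings to one rate at preprocessing time.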
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005284](https://openneuro.org/datasets/ds005284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005284](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) DOI: [https://doi.org/10.18112/openneuro.ds005284.v1.0.0](https://doi.org/10.18112/openneuro.ds005284.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005284 >>> dataset = DS005284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005284) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005285: eeg dataset, 29 subjects *29 By ANT* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *29 By ANT*. 
[10.18112/openneuro.ds005285.v1.0.0](https://doi.org/10.18112/openneuro.ds005285.v1.0.0) Modality: eeg Subjects: 29 Recordings: 116 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005285 dataset = DS005285(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005285(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005285( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005285, title = {29 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005285.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005285.v1.0.0}, } ``` ## About This Dataset

1. Study introduction: Participants first underwent a series of laser stimulations of varying intensities, from which the experimenters determined the energy intensities corresponding to average ratings of 4 and 7. Each participant then received a fixed-intensity laser stimulation approximately every 20 seconds, in blocks of 40 trials, half high intensity and half low intensity. There were 4 blocks in total, for 160 stimulations. Throughout, participants provided pain ratings from 0 to 10: a rating of 0 indicated no sensation, 4 denoted the onset of pain perception, 6 represented moderate pain, 8 indicated severe pain, and 10 signified intolerable pain.
2. Participant task information (description of the experiment): Participants received laser stimulation and used a computer mouse to click on the appropriate position on the screen, corresponding to a scale of 0 to 10.
3. Participant instructions (as exact as possible): Participants were instructed to focus their attention on the laser stimuli, keep their eyes open, and fixate their gaze on the cross displayed on the screen. Following each laser stimulation, there was a 3-second pause; participants then used the computer screen and keyboard to rate the intensity of pain within a 5-second window. The subsequent trial commenced randomly 1-3 seconds after the rating was provided.
4. References and links: Bi Y, Liu X, Zhao X, et al. Enhancing pain modulation: the efficacy of synchronous combination of virtual reality and transcutaneous electrical nerve stimulation. General Psychiatry 2023;36:e101164. doi:10.1136/gpsych-2023-101164.
5. Comments: In the raw data, “32” is used to represent “s32” and “64” is used to represent “s64”.
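The README comment above notes that the raw marker codes “32” and “64” stand for “s32” and “s64”. When building an event mapping, you may want to normalize such codes up front; a small sketch, where the helper name is ours and not an EEGDash API:

```python
# Documented aliases from the dataset README: raw codes -> marker names.
CODE_ALIASES = {"32": "s32", "64": "s64"}

def normalize_event(code: str) -> str:
    """Map a raw marker code to its documented name, passing others through."""
    return CODE_ALIASES.get(code, code)

events = ["32", "64", "s32"]
print([normalize_event(e) for e in events])  # ['s32', 's64', 's32']
```

The same mapping could be applied to MNE annotation descriptions before epoching.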
## Dataset Information | Dataset ID | `DS005285` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 29 By ANT | | Author (year) | `Xiangyue2024_29_ANT` | | Canonical | — | | Importable as | `DS005285`, `Xiangyue2024_29_ANT` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005285.v1.0.0](https://doi.org/10.18112/openneuro.ds005285.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005285) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) | [Source URL](https://openneuro.org/datasets/ds005285) | ### Copy-paste BibTeX ```bibtex @dataset{ds005285, title = {29 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005285.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005285.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 116 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.73 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 11.8 GB - File count: 116 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005285.v1.0.0 - Source: openneuro - OpenNeuro: [ds005285](https://openneuro.org/datasets/ds005285) - NeMAR: [ds005285](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) ## API Reference Use the `DS005285` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005285(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By ANT * **Study:** `ds005285` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_ANT` * **Canonical:** — Also importable as: `DS005285`, `Xiangyue2024_29_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005285](https://openneuro.org/datasets/ds005285) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005285](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) DOI: [https://doi.org/10.18112/openneuro.ds005285.v1.0.0](https://doi.org/10.18112/openneuro.ds005285.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005285 >>> dataset = DS005285(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005285) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005286: eeg dataset, 30 subjects *30 By ANT* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *30 By ANT*. 
[10.18112/openneuro.ds005286.v1.0.0](https://doi.org/10.18112/openneuro.ds005286.v1.0.0) Modality: eeg Subjects: 30 Recordings: 30 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005286 dataset = DS005286(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005286(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005286( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005286, title = {30 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005286.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005286.v1.0.0}, } ``` ## About This Dataset 1.Study introduction: In this experiment, participants were initially exposed to a series of laser stimulations of varying intensities. Researchers identified the energy intensity corresponding to an average rating of 7 from the participants. Subsequently, each participant underwent 30 laser stimulis and provided verbal pain ratings one by one. The pain ratings were on a scale where 0 indicated no sensation at all, 4 indicated the onset of pain, 6 represented moderate pain, 8 indicated severe pain, and 10 denoted unbearable pain. 2.Participant task information(description of the experiment): Participants underwent laser stimulation and subsequently verbally rated the intensity of pain. 
3. Participant instructions (as exact as possible): Participants were instructed to focus on the laser stimulation, keep their eyes open, and fix their gaze on the crosshairs displayed on the screen. After each laser stimulation, there was a five-second pause. Participants then rated the intensity of the pain. Subsequent trials began at random, 5 seconds after the score was provided. 4. References and links: None 5. Comment: All laser markers are delayed by 100 ms ## Dataset Information | Dataset ID | `DS005286` | |----------------|----------------| | Title | 30 By ANT | | Author (year) | `Xiangyue2024_30_ANT` | | Canonical | — | | Importable as | `DS005286`, `Xiangyue2024_30_ANT` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005286.v1.0.0](https://doi.org/10.18112/openneuro.ds005286.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005286) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) | [Source URL](https://openneuro.org/datasets/ds005286) | ### Copy-paste BibTeX ```bibtex @dataset{ds005286, title = {30 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005286.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005286.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 30 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 21.157749444444445 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 9.4 GB - File count: 30 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005286.v1.0.0 - Source: openneuro - OpenNeuro: [ds005286](https://openneuro.org/datasets/ds005286) - 
NeMAR: [ds005286](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) ## API Reference Use the `DS005286` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005286(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 30 By ANT * **Study:** `ds005286` (OpenNeuro) * **Author (year):** `Xiangyue2024_30_ANT` * **Canonical:** — Also importable as: `DS005286`, `Xiangyue2024_30_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
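The 100 ms marker delay flagged in this dataset's comment can be compensated after loading by shifting event onsets. Below is a minimal pure-Python sketch of that correction; the onset values are illustrative, and with MNE-Python the same constant shift would be applied to `raw.annotations.onset`:

```python
# Sketch: compensate a fixed hardware delay on event markers.
# The 100 ms value comes from the dataset comment; the sample
# onset times below are made up for illustration.

LASER_MARKER_DELAY_S = 0.100  # markers fire 100 ms after the true stimulus

def correct_onsets(onsets, delay=LASER_MARKER_DELAY_S):
    """Shift recorded marker times back by a fixed acquisition delay."""
    return [max(0.0, t - delay) for t in onsets]

recorded = [1.25, 7.80, 14.32]          # marker times as stored (seconds)
true_onsets = correct_onsets(recorded)  # estimated true stimulus times
print([round(t, 3) for t in true_onsets])  # → [1.15, 7.7, 14.22]
```

Whether to clamp negative onsets to zero (as above) or drop them is a judgment call for markers near the start of a recording.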
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005286](https://openneuro.org/datasets/ds005286) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005286](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) DOI: [https://doi.org/10.18112/openneuro.ds005286.v1.0.0](https://doi.org/10.18112/openneuro.ds005286.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005286 >>> dataset = DS005286(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005286) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005289: eeg dataset, 39 subjects *39 By BP* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *39 By BP*. 
[10.18112/openneuro.ds005289.v1.0.0](https://doi.org/10.18112/openneuro.ds005289.v1.0.0) Modality: eeg Subjects: 39 Recordings: 195 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005289 dataset = DS005289(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005289(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005289( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005289, title = {39 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005289.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005289.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005289` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 39 By BP | | Author (year) | `Xiangyue2024_39_BP` | | Canonical | — | | Importable as | `DS005289`, `Xiangyue2024_39_BP` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005289.v1.0.0](https://doi.org/10.18112/openneuro.ds005289.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005289) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) | [Source URL](https://openneuro.org/datasets/ds005289) | ### Copy-paste BibTeX ```bibtex @dataset{ds005289, title = {39 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005289.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005289.v1.0.0}, } ``` ## Technical Details - Subjects: 39 - Recordings: 195 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.56392138888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.1 GB - File count: 195 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005289.v1.0.0 - Source: openneuro - OpenNeuro: [ds005289](https://openneuro.org/datasets/ds005289) - NeMAR: [ds005289](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) ## API Reference Use the `DS005289` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005289(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 39 By BP * **Study:** `ds005289` (OpenNeuro) * **Author (year):** `Xiangyue2024_39_BP` * **Canonical:** — Also importable as: `DS005289`, `Xiangyue2024_39_BP`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 39; recordings: 195; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
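As a rough illustration of the MongoDB-style matching that the Notes describe, here is a pure-Python sketch of the equality and `$in` semantics only; it is not EEGDash's actual query engine, and the record dicts are illustrative:

```python
# Sketch of MongoDB-style matching as used in `query` arguments.
# Mimics the semantics only; EEGDash delegates real queries to its
# metadata backend.

def matches(record, query):
    """Return True if `record` satisfies a flat MongoDB-style query."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):       # operator clause, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:              # plain equality
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in kept])  # → ['01', '02']
```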
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005289](https://openneuro.org/datasets/ds005289) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005289](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) DOI: [https://doi.org/10.18112/openneuro.ds005289.v1.0.0](https://doi.org/10.18112/openneuro.ds005289.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005289 >>> dataset = DS005289(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005289) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005291: eeg dataset, 65 subjects *65 By ANT* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *65 By ANT*. 
[10.18112/openneuro.ds005291.v1.0.0](https://doi.org/10.18112/openneuro.ds005291.v1.0.0) Modality: eeg Subjects: 65 Recordings: 65 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005291 dataset = DS005291(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005291(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005291( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005291, title = {65 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005291.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005291.v1.0.0}, } ``` ## About This Dataset 1. Study introduction: In this experiment, participants were initially exposed to a series of laser stimulations of varying intensities. Researchers identified the energy intensity corresponding to an average rating of 7 from the participants. Subsequently, each participant underwent 30 laser stimuli and provided verbal pain ratings one by one. The pain ratings were on a scale where 0 indicated no sensation at all, 4 indicated the onset of pain, 6 represented moderate pain, 8 indicated severe pain, and 10 denoted unbearable pain. 2. Participant task information (description of the experiment): Participants underwent laser stimulation and subsequently verbally rated the intensity of pain. 
3. Participant instructions (as exact as possible): Participants were instructed to focus on the laser stimulation, keep their eyes open, and fix their gaze on the crosshairs displayed on the screen. After each laser stimulation, there was a five-second pause. Participants then rated the intensity of the pain. Subsequent trials began at random, 5 seconds after the score was provided. 4. References and links: None 5. Comment: All laser markers are delayed by 100 ms ## Dataset Information | Dataset ID | `DS005291` | |----------------|----------------| | Title | 65 By ANT | | Author (year) | `Xiangyue2024_65_ANT` | | Canonical | — | | Importable as | `DS005291`, `Xiangyue2024_65_ANT` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005291.v1.0.0](https://doi.org/10.18112/openneuro.ds005291.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005291) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) | [Source URL](https://openneuro.org/datasets/ds005291) | ### Copy-paste BibTeX ```bibtex @dataset{ds005291, title = {65 By ANT}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005291.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005291.v1.0.0}, } ``` ## Technical Details - Subjects: 65 - Recordings: 65 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 47.67444555555556 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 20.5 GB - File count: 65 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005291.v1.0.0 - Source: openneuro - OpenNeuro: [ds005291](https://openneuro.org/datasets/ds005291) - 
NeMAR: [ds005291](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) ## API Reference Use the `DS005291` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005291(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 65 By ANT * **Study:** `ds005291` (OpenNeuro) * **Author (year):** `Xiangyue2024_65_ANT` * **Canonical:** — Also importable as: `DS005291`, `Xiangyue2024_65_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 65; recordings: 65; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
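The verbal rating scale in this dataset's description has fixed anchors (0 = no sensation, 4 = onset of pain, 6 = moderate, 8 = severe, 10 = unbearable). When summarizing behavioral ratings, those anchors can be turned into categorical labels; a small sketch, noting that the bin edges between anchors are an assumption, since the dataset stores raw numeric ratings:

```python
# Sketch: label verbal pain ratings using the anchors from the dataset
# description. The exact cut points between anchors are an assumption.

def label_rating(r):
    """Map a 0-10 NRS rating onto the dataset's anchor labels."""
    if r < 4:
        return "below pain threshold" if r > 0 else "no sensation"
    if r < 6:
        return "mild pain"      # at/above the pain-onset anchor (4)
    if r < 8:
        return "moderate pain"  # around the moderate anchor (6)
    if r < 10:
        return "severe pain"    # around the severe anchor (8)
    return "unbearable pain"

print([label_rating(r) for r in [0, 3, 7, 10]])
```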
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005291](https://openneuro.org/datasets/ds005291) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005291](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) DOI: [https://doi.org/10.18112/openneuro.ds005291.v1.0.0](https://doi.org/10.18112/openneuro.ds005291.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005291 >>> dataset = DS005291(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005291) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005292: eeg dataset, 142 subjects *142 by Biosemi* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *142 by Biosemi*. 
[10.18112/openneuro.ds005292.v1.0.0](https://doi.org/10.18112/openneuro.ds005292.v1.0.0) Modality: eeg Subjects: 142 Recordings: 426 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005292 dataset = DS005292(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005292(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005292( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005292, title = {142 by Biosemi}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005292.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005292.v1.0.0}, } ``` ## About This Dataset 1. Study introduction: In this experiment, the first 135 participants received fixed-intensity pain stimuli at 3 J (low pain) and 3.5 J (high pain), while participants 136-142 received fixed-intensity pain stimuli at 3.5 J (low pain) and 4 J (high pain). Each participant underwent stimulation in 3 blocks, with each block comprising 10 stimuli, totaling 30 stimuli. High and low pain stimuli were evenly distributed within each block. After each stimulation, participants provided pain ratings individually. Pain ratings were as follows: 0 indicated no sensation at all, 4 indicated the onset of pain, 6 represented moderate pain, 8 indicated severe pain, and 10 denoted unbearable pain. 
2. Participant task information (description of the experiment): Participants received laser stimulation and subsequently provided pain intensity ratings one by one. 3. Participant instructions (as exact as possible): Participants were instructed to focus on the laser stimulation, keep their eyes open, and fix their gaze on the crosshairs displayed on the screen. After each laser stimulation, there was a five-second pause. Participants then rated the intensity of the pain. Subsequent trials began at random, 5 seconds after the score was provided. ## Dataset Information | Dataset ID | `DS005292` | |----------------|----------------| | Title | 142 by Biosemi | | Author (year) | `Xiangyue2024_142_Biosemi` | | Canonical | — | | Importable as | `DS005292`, `Xiangyue2024_142_Biosemi` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005292.v1.0.0](https://doi.org/10.18112/openneuro.ds005292.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005292) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) | [Source URL](https://openneuro.org/datasets/ds005292) | ### Copy-paste BibTeX ```bibtex @dataset{ds005292, title = {142 by Biosemi}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005292.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005292.v1.0.0}, } ``` ## Technical Details - Subjects: 142 - Recordings: 426 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1024.0 (420 recordings), 2048.0 (6 recordings) - Duration (hours): 64.33333333333333 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 50.9 GB - File count: 426 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds005292.v1.0.0 - Source: openneuro - OpenNeuro: [ds005292](https://openneuro.org/datasets/ds005292) - NeMAR: [ds005292](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) ## API Reference Use the `DS005292` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005292(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 142 by Biosemi * **Study:** `ds005292` (OpenNeuro) * **Author (year):** `Xiangyue2024_142_Biosemi` * **Canonical:** — Also importable as: `DS005292`, `Xiangyue2024_142_Biosemi`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 142; recordings: 426; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
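Because this dataset mixes two sampling rates (1024 Hz for most recordings, 2048 Hz for six), analyses that pool recordings usually resample to a common rate first. A pure-Python sketch of identifying which recordings need resampling; the record dicts are illustrative, and with MNE-Python the actual step would be `raw.resample(1024)` on each flagged recording:

```python
# Sketch: group recordings by sampling rate and pick a majority target.
# The counts mirror the Technical Details (420 files at 1024 Hz, 6 at
# 2048 Hz); the record dicts themselves are made up.
from collections import Counter

recordings = [{"id": f"rec{i}", "sfreq": 1024.0} for i in range(420)]
recordings += [{"id": f"rec{420 + i}", "sfreq": 2048.0} for i in range(6)]

rate_counts = Counter(r["sfreq"] for r in recordings)
target_sfreq = rate_counts.most_common(1)[0][0]   # majority rate

needs_resampling = [r["id"] for r in recordings if r["sfreq"] != target_sfreq]
print(target_sfreq, len(needs_resampling))  # → 1024.0 6
```

Downsampling 2048 Hz to 1024 Hz is an integer factor, so it preserves the common band of interest with a standard anti-aliasing filter.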
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005292](https://openneuro.org/datasets/ds005292) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005292](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) DOI: [https://doi.org/10.18112/openneuro.ds005292.v1.0.0](https://doi.org/10.18112/openneuro.ds005292.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005292 >>> dataset = DS005292(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005292) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005293: eeg dataset, 95 subjects *95 By BP* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *95 By BP*. 
[10.18112/openneuro.ds005293.v1.0.0](https://doi.org/10.18112/openneuro.ds005293.v1.0.0) Modality: eeg Subjects: 95 Recordings: 570 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005293 dataset = DS005293(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005293(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005293( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005293, title = {95 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005293.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005293.v1.0.0}, } ``` ## About This Dataset 1. Study introduction: In this experiment, the intensity of laser stimulation varied individually based on participants’ pain thresholds. Prior to the formal commencement of the experiment, participants underwent a series of stimuli of increasing intensity, incrementally rising from low to high in 0.25 J steps until reaching the maximum tolerable intensity for each individual. Participants were instructed to verbally report the perceived pain intensity of each laser stimulation using a numerical rating scale (NRS) ranging from 0 (no sensation) to 10 (the maximum tolerable level of pain), with 4 indicating the pain perception threshold akin to a pricking sensation. 
In this study, each participant received four levels of stimulation intensity, corresponding to ratings of 2, 4, 6, and 8 on the NRS (E1: 2.0 ± 0.2 J; E2: 2.7 ± 0.3 J; E3: 3.4 ± 0.3 J; E4: 4.1 ± 0.4 J). 2. Participant task information (description of the experiment): Participants received laser stimulation and subsequently provided pain intensity ratings one by one. 3. Participant instructions (as exact as possible): The participants were instructed to relax and sit comfortably on a chair, focusing their attention on the sensation of laser stimulation. The experimental environment was quiet, with a constant room temperature, and no unrelated individuals were present. Both the participants and the experimenter wore protective goggles. The experimental design employed a two-factor repeated measures within-subject design, with 4 levels of stimulation intensity crossed with 2 levels of stimulation location (left hand dorsum and right hand dorsum), resulting in a total of 8 conditions. There were 10 trials for each condition, totaling 80 trials. Participants received pain stimulation and provided ratings for each trial individually. 
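The 4 × 2 within-subject design described above (four intensity levels crossed with two stimulation sites, 10 trials per cell) can be enumerated directly; a small sketch, with condition labels paraphrased from the text:

```python
# Sketch: enumerate the 4 intensity × 2 location within-subject design
# described in the dataset README.
from itertools import product

intensities = ["E1", "E2", "E3", "E4"]          # NRS targets 2, 4, 6, 8
locations = ["left hand dorsum", "right hand dorsum"]
trials_per_condition = 10

conditions = list(product(intensities, locations))
total_trials = len(conditions) * trials_per_condition
print(len(conditions), total_trials)  # → 8 80
```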
## Dataset Information | Dataset ID | `DS005293` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 95 By BP | | Author (year) | `Xiangyue2024_95_BP` | | Canonical | — | | Importable as | `DS005293`, `Xiangyue2024_95_BP` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005293.v1.0.0](https://doi.org/10.18112/openneuro.ds005293.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005293) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) | [Source URL](https://openneuro.org/datasets/ds005293) | ### Copy-paste BibTeX ```bibtex @dataset{ds005293, title = {95 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005293.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005293.v1.0.0}, } ``` ## Technical Details - Subjects: 95 - Recordings: 570 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 1000.0 - Duration (hours): 234.03106055555557 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 98.9 GB - File count: 570 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005293.v1.0.0 - Source: openneuro - OpenNeuro: [ds005293](https://openneuro.org/datasets/ds005293) - NeMAR: [ds005293](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) ## API Reference Use the `DS005293` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005293(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 95 By BP * **Study:** `ds005293` (OpenNeuro) * **Author (year):** `Xiangyue2024_95_BP` * **Canonical:** — Also importable as: `DS005293`, `Xiangyue2024_95_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 95; recordings: 570; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005293](https://openneuro.org/datasets/ds005293) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005293](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) DOI: [https://doi.org/10.18112/openneuro.ds005293.v1.0.0](https://doi.org/10.18112/openneuro.ds005293.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005293 >>> dataset = DS005293(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005293) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005296: eeg dataset, 62 subjects *Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study* Access recordings and metadata through EEGDash. **Citation:** Karen Emmorey, Emily M. Akers, Priscilla Martinez, Katherine J. Midgley, Phillip J. Holcomb (2024). *Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study*. 
[10.18112/openneuro.ds005296.v1.0.1](https://doi.org/10.18112/openneuro.ds005296.v1.0.1) Modality: eeg Subjects: 62 Recordings: 62 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS005296

dataset = DS005296(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS005296(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS005296(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds005296,
  title = {Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study},
  author = {Karen Emmorey and Emily M. Akers and Priscilla Martinez and Katherine J. Midgley and Phillip J. Holcomb},
  doi = {10.18112/openneuro.ds005296.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds005296.v1.0.1},
}
```

## About This Dataset

Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California, under the supervision of Dr. Phillip Holcomb. This project followed San Diego State University's IRB guidelines. Participants sat in a comfortable chair in a darkened, sound-attenuated room throughout the experiment. They were given a keyboard for button pressing and wore a lightweight headset to record their verbal responses. They were instructed to watch an LCD video monitor at a viewing distance of 60 in. Participants were presented with 180 sentences in white font on a black background.
Conditions consisted of 30 subject-verb agreement violations, 30 semantic violations, 30 double (subject-verb agreement + semantic) violations, 30 word-order violations, and 60 control (correct) sentences. Sentences were presented in an RSVP design, one word at a time, in the middle of the screen for a duration of 600 ms with an ISI of 200 ms.

## Dataset Information

| Field | Value |
|---|---|
| Dataset ID | `DS005296` |
| Title | Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study |
| Author (year) | `Emmorey2024` |
| Canonical | — |
| Importable as | `DS005296`, `Emmorey2024` |
| Year | 2024 |
| Authors | Karen Emmorey, Emily M. Akers, Priscilla Martinez, Katherine J. Midgley, Phillip J. Holcomb |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds005296.v1.0.1](https://doi.org/10.18112/openneuro.ds005296.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds005296) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) · [Source URL](https://openneuro.org/datasets/ds005296) |

### Copy-paste BibTeX ```bibtex @dataset{ds005296, title = {Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study}, author = {Karen Emmorey and Emily M. Akers and Priscilla Martinez and Katherine J. Midgley and Phillip J.
Holcomb}, doi = {10.18112/openneuro.ds005296.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005296.v1.0.1}, } ``` ## Technical Details - Subjects: 62 - Recordings: 62 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 37.20480388888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.5 GB - File count: 62 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005296.v1.0.1 - Source: openneuro - OpenNeuro: [ds005296](https://openneuro.org/datasets/ds005296) - NeMAR: [ds005296](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) ## API Reference Use the `DS005296` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005296(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study * **Study:** `ds005296` (OpenNeuro) * **Author (year):** `Emmorey2024` * **Canonical:** — Also importable as: `DS005296`, `Emmorey2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005296](https://openneuro.org/datasets/ds005296) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005296](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) DOI: [https://doi.org/10.18112/openneuro.ds005296.v1.0.1](https://doi.org/10.18112/openneuro.ds005296.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005296 >>> dataset = DS005296(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005296) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005305: eeg dataset, 165 subjects *EEG Resting-state Microstates Correlates of Executive Functions* Access recordings and metadata through EEGDash. **Citation:** Chenot Quentin, Hamery Caroline, Truninger Moritz, De Boissezon Xavier, Langer Nicolas, Scannella Sébastien (2024). *EEG Resting-state Microstates Correlates of Executive Functions*. 
[10.18112/openneuro.ds005305.v1.0.1](https://doi.org/10.18112/openneuro.ds005305.v1.0.1) Modality: eeg Subjects: 165 Recordings: 165 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS005305

dataset = DS005305(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS005305(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS005305(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds005305,
  title = {EEG Resting-state Microstates Correlates of Executive Functions},
  author = {Chenot Quentin and Hamery Caroline and Truninger Moritz and De Boissezon Xavier and Langer Nicolas and Scannella Sébastien},
  doi = {10.18112/openneuro.ds005305.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds005305.v1.0.1},
}
```

## About This Dataset

**README**

Project name: Project Microstates & Executive Functions (Project_microstates_EFs)

Years: 2021-2023

Contact person: Quentin Chenot ([quentinchenot@gmail.com](mailto:quentinchenot@gmail.com))

**Overview**

Summary: This study aimed to specifically explore the relationship between intrinsic brain spatio-temporal dynamics and Executive Functions.
To do so, resting-state EEG microstates were used to assess brain spatio-temporal dynamics in 140 healthy participants, while a comprehensive battery of nine cognitive function tasks was employed to evaluate their executive functions. Correlations were computed between the EEG microstate metrics at rest and the mean score in the executive function tasks.

**Dataset content**

1160 files, 6.38 GB

165 subjects, 2 sessions

Available Tasks: Resting-State, Antisaccade, Category-Switch, Color-Shape, Dual N-back, Keep-Track, Letter-Memory, Number-Letter, Stop-Signal, Stroop

Available Modalities: EEG

Independent variables: NA

Dependent variables:

> DV1: Mean score in the nine executive function tasks (z-scored)
> DV2: resting-state EEG microstate metrics (number of occurrences, mean duration)

Control variables: Age, Gender, Education, Handedness

**Methods**

**Subjects**

165 participants were recruited for this experiment; 140 constitute the final sample. Recruitment procedure: participants were recruited via flyers, mailing lists, and word of mouth on the Toulouse University campuses.
Inclusion criteria: age (18-35 years); affiliation to social insurance; having read the information document about the experiment; signed informed consent form; native French language.

Exclusion criteria: addiction (alcohol, drugs); major hearing loss; major visual deficit, including hemianopsia and color blindness; neurological or psychiatric pathology; known brain injury; drug intake targeting the central nervous system; refusal to sign the consent form.

**Material**

Participants performed the experiment in a windowless room at a stable temperature, seated and facing a 24-inch screen. They underwent two sessions:

Session 1: EEG measures with a 5-min resting state alternating between eyes closed and eyes open (30 s each). EEG apparatus: 64-electrode BioSemi ActiveTwo amplifier (data acquired at 512 Hz).

Session 2: behavioral measures, with the nine cognitive tasks in the following order: antisaccade, letter-memory, color–shape, number–letter, Stroop, keep-track, dual n-back, category-switch, stop-signal.

**Experimental location and acquisition timeframe**

The experiment took place in the Centre de Neuroergonomie at ISAE-SUPAERO (Toulouse, France), from February 2022 to July 2023.

**Installation and Setup**

The repository contains scripts for preprocessing and analyzing behavioral and EEG data, as well as the project's documentation and results. See code_documentation.pdf in docs (available at [https://osf.io/fm58p/](https://osf.io/fm58p/)).
**Publications** Registered Report stage 1 IPA: EEG resting-state microstates correlates of executive functions ([https://osf.io/dwz2r](https://osf.io/dwz2r)) Registered Report stage 2: Investigating the relationship between Resting-State EEG Microstates and Executive Functions: A Null Finding ([https://doi.org/10.1016/j.cortex.2024.05.019](https://doi.org/10.1016/j.cortex.2024.05.019)) **Authors** Quentin Chenot, Caroline Hamery, Moritz Truninger, Xavier De Boissezon, Nicolas Langer, Sébastien Scannella **CRediT author statement** QC: conceptualization, methodology, software, formal analysis, investigation, data curation, writing – original draft, writing – review & editing, Visualization. CH: methodology, software, validation, formal analysis, data curation, writing – review & editing, visualization. MT: methodology, software, validation, formal analysis, data curation, writing – review & editing, visualization. NL: methodology, software, validation, formal analysis, resources, writing – review & editing. XDB: resources, writing – review & editing, supervision, funding acquisition. SS: conceptualization, methodology, resources, writing – review & editing, supervision, project administration, funding acquisition. **License** This work is licensed under a Creative Commons Attribution 4.0 International License. 
**Funding** This work is supported by the French National Research Agency (ANR) and the Defense Procurement Agency (DGA), ASTRID program [grant numbers ANR-17-ASTR-0005] **Notes - Participant and sessions timeframe** Participant Session 1 Session 2 (DD/MM/YYYY - HH) sub-001 21/02/2022 - 10h 24/02/2022 - 14h sub-002 21/02/2022 - 17h 01/03/2022 - 17h sub-003 22/02/2022 - 8h 28/02/2022 - 11h sub-004 22/02/2022 - 13h 01/03/2022 - 9h sub-005 08/03/2022 - 9h 14/03/2022 - 9h15 sub-006 08/03/2022 - 13h 10/03/2022 - 13h30 sub-007 08/03/2022 - 15h30 16/03/2022 - 9h sub-008 09/03/2022 - 13h 11/03/2022 - 10h sub-009 09/03/2022 - 17h 17/03/2022 - 10h30 sub-010 10/03/2022 - 13h 11/03/2022 - 13h sub-011 10/03/2022 - 19h 17/03/2022 - 18h30 sub-012 14/03/2022 - 8h 21/03/2022 - 8h30 sub-013 14/03/2022 - 17h 18/03/2022 - 17h sub-014 15/03/2022 - 8h 18/03/2022 - 14h30 sub-015 15/03/2022 - 13h 21/03/2022 - 15h30 sub-016 16/03/2022 - 8h 23/03/2022 - 8h sub-017 16/03/2022 - 13h 22/03/2022 - 15h00 sub-018 16/03/2022 - 15h30 22/03/2022 - 9h sub-019 17/03/2022 - 8h 25/03/2022 - 8h15 sub-020 17/03/2022 - 16h30 24/03/2022 - 14h sub-021 18/03/2022 - 9h 22/03/2022 - 13h sub-022 18/03/2022 - 13h 24/03/2022 - 11h30 sub-023 18/03/2022 - 16h 22/03/2022 - 17h sub-024 21/03/2022 - 16h 28/03/2022 - 16h sub-025 22/03/2022 - 9h 28/03/2022 - 9h sub-026 22/03/2022 - 13h 25/03/2022 - 15h sub-027 23/03/2022 - 18h 30/03/2022 - 18h sub-028 24/03/2022 - 17h 25/03/2022 - 13h sub-029 25/03/2022 - 13h 31/03/2022 - 17h30 sub-030 28/03/2022 - 9h 31/03/2022 - 9h sub-031 29/03/2022 - 17h 31/03/2022 - 13h sub-032 30/03/2022 - 17h 01/04/2022 - 14h sub-033 04/04/2022 - 9h 05/04/2022 - 14h sub-034 04/04/2022 - 13h 05/04/2022 - 10h sub-035 04/04/2022 - 16h 11/04/2022 - 16h sub-036 06/04/2022 - 9h 08/04/2022 - 9h sub-037 07/04/2022 - 15h 08/04/2022 - 13h sub-038 08/04/2022 - 13h 25/04/2022 - 9h sub-039 12/04/2022 - 9h 15/04/2022 - 14h sub-040 14/04/2022 - 9h 25/04/2022 - 17h sub-041 15/04/2022 - 13h 27/04/2022 - 14h 
sub-042 19/04/2022 - 16h 20/04/2022 - 16h sub-043 20/04/2022 - 9h 21/04/2022 - 9h sub-044 21/04/2022 - 13h 22/04/2022 - 17h sub-045 21/04/2022 - 16h 28/04/2022 - 17h sub-046 22/04/2022 - 16h 27/04/2022 - 17h sub-047 25/04/2022 - 9h 28/04/2022 - 10h sub-048 25/04/2022 - 10h30 04/05/2022 - 14h sub-049 25/04/2022 - 16h 29/04/2022 - 9h sub-050 26/04/2022 - 13h 27/04/2022 - 10h sub-051 27/04/2022 - 9h 03/05/2022 - 14h30 sub-052 27/04/2022 - 16h 02/05/2022 - 17h sub-053 28/04/2022 - 13h 13/05/2022 - 14h sub-054 28/04/2022 - 16h 04/05/2022 - 16h sub-055 29/04/2022 - 13h 05/05/2022 - 16h sub-056 29/04/2022 - 17h 03/05/2022 - 17h15 sub-057 06/05/2022 - 13h 10/05/2022 - 14h sub-058 09/05/2022 - 14h 16/05/2022 - 16h sub-059 09/05/2022 - 16h 16/05/2022 - 9h30 sub-060 10/05/2022 - 9h 11/05/2022 - 15h30 sub-061 10/05/2022 - 16h 12/05/2022 - 14h sub-062 11/05/2022 - 16h 17/05/2022 - 16h sub-063 12/05/2022 - 13h 19/05/2022 - 15h sub-064 13/05/2022 - 13h 20/05/2022 - 16h sub-065 17/05/2022 - 9h 18/05/2022 - 16h sub-066 18/05/2022 - 9h 19/05/2022 - 9h sub-067 18/05/2022 - 11h 20/05/2022 - 9h sub-068 19/05/2022 - 13h 25/05/2022 - 10h sub-069 20/05/2022 - 9h 23/05/2022 - 9h00 sub-070 24/05/2022 - 9h 31/05/2022 - 9h00 sub-071 25/05/2022 - 9h 31/05/2022 - 16h sub-072 31/05/2022 - 13h 01/06/2022 - 13h sub-073 01/05/2022 - 9h30 10/06/2022 - 14h sub-074 08/06/2022 - 13h30 15/06/2022 - 15h30 sub-075 10/06/2022 - 13h30 15/06/2022 - 10h sub-076 13/06/2022 - 16h 16/06/2022 - 17h sub-077 13/06/2022 - 18h30 15/06/2022 - 18h30 sub-078 15/06/2022 - 13h30 20/06/2022 - 16h sub-079 15/06/2022 - 16h 21/06/2022 - 16h sub-080 16/06/2022 - 16h 22/06/2022 - 17h sub-081 17/06/2022 - 9h30 21/06/2022 - 9h sub-082 17/06/2022 - 13h30 24/06/2022 - 11h sub-083 20/06/2022 - 13h30 23/06/2022 - 9h sub-084 21/06/2022 - 10h30 23/06/2022 - 14h sub-085 21/06/2022 - 16h 27/06/2022 - 16h sub-086 22/06/2022 - 16h 29/06/2022 - 11h30 sub-087 23/06/2022 - 9h30 12/07/2022 - 10h sub-088 24/06/2022 - 10h 30/06/2022 - 10h 
sub-089 24/06/2022 - 16h 27/06/2022 - 18h sub-090 27/06/2022 - 9h30 28/06/2022 - 14h sub-091 27/06/2022 - 16h 04/07/2022 - 10h sub-092 29/06/2022 - 16h 04/07/2022 - 16h sub-093 01/07/2022 - 16h 05/07/2022 - 17h30 sub-094 04/07/2022 - 9h30 05/07/2022 - 12h sub-095 08/07/2022 - 16h 13/07/2022 - 17h sub-096 11/07/2022 - 17h 26/07/2022 - 17h sub-097 12/07/2022 - 16h 19/07/2022 - 9h30 sub-098 20/07/2022 - 16h 25/07/2022 - 14h sub-099 25/07/2022 - 16h 01/08/2022 - 16h sub-100 26/07/2022 - 10h 01/08/2022 - 10h sub-101 05/09/2022 - 13h30 07/09/2022 - 13h30 sub-102 07/09/2022 - 13h30 08/09/2022 - 14h sub-103 26/09/2022 - 13h 27/09/2022 - 9h sub-104 03/10/2022 - 9h30 04/10/2022 - 9h30 sub-105 07/10/2022 - 14h 11/10/2022 - 17h30 sub-106 07/10/2022 - 16h No session sub-107 17/10/2022 - 16h30 20/10/2022 - 16h45 sub-108 19/10/2022 - 16h 20/10/2022 - 10h sub-109 20/10/2022 - 13h30 21/10/2022 - 15h sub-110 21/10/2022 - 9h30 25/10/2022 - 9h30 sub-111 26/10/2022 - 18h30 03/11/2022 - 18h30 sub-112 27/10/2022 - 16h 07/11/2022 - 14h sub-113 28/10/2022 - 17h30 02/11/2022 - 9h sub-114 10/11/2022 - 13h30 10/11/2022 - 15h sub-115 24/11/2022 - 13h 01/12/2022 - 13h30 sub-116 29/11/2022 - 10h 08/12/2022 - 13h sub-117 02/12/2022 - 15h30 09/12/2022 - 16h30 sub-118 05/12/2022 - 13h 20/12/2022 - 14h30 sub-119 05/12/2022 - 10h 12/12/2022 - 18h30 sub-120 07/12/2022 - 15h30 09/12/2022 - 15h00 sub-121 20/12/2022 - 10h 21/12/2022 - 10h sub-122 18/01/2023 - 10h 25/01/2023 - 13h sub-123 18/01/2023 - 13h 19/01/2023 - 14h sub-124 19/01/2023 - 14h 23/01/2023 - 14h sub-125 19/01/2023 - 16h 23/01/2023 - 9h sub-126 20/01/2023 - 16h 01/02/2023 - 15h sub-127 30/01/2023 - 17h 06/02/2023 - 15h30 sub-128 31/01/2023 - 15h30 06/02/2023 - 9h sub-129 01/02/2023 - 17h 02/02/2023 - 10h sub-130 03/02/2023 - 15h30 08/02/2023 - 16h sub-131 08/02/2023 - 10h 09/02/2023 - 10h sub-132 08/02/2023 - 15h30 15/02/2023 - 16h30 sub-133 14/02/2023 - 17h 20/02/2023 - 10h sub-134 15/03/2023 - 15h30 22/03/2023 - 15h30 sub-135 16/03/2023 
- 15h00 17/03/2023 - 8h30 sub-136 23/03/2023 - 15h30 27/03/2023 - 10h30 sub-137 28/03/2023 - 13h30 29/03/2023 - 17h sub-138 29/03/2023 - 17h 03/04/2023 - 17h sub-139 30/03/2023 - 9h 31/03/2023 - 9h sub-140 31/03/2023 - 15h30 05/04/2023 - 8h30 sub-141 05/04/2023 - 9h 12/04/2023 - 15h30 sub-142 12/04/2023 - 11h 19/04/2023 - 11h sub-143 13/04/2023 - 15h30 19/04/2023 - 9h sub-144 17/04/2023 - 10h 20/04/2023 - 13h30 sub-145 19/04/2023 - 13h30 20/04/2023 - 9h sub-146 21/04/2023 - 10h 21/04/2023 - 10h sub-147 21/04/2023 - 17h 04/05/2023 - 14h sub-148 24/04/2023 - 14h 26/04/2023 - 14h sub-149 25/04/2023 - 15h 26/04/2023 - 15h30 sub-150 26/04/2023 - 15h 04/05/2023 - 14h sub-151 27/04/2023 - 15h30 02/05/2023 - 14h sub-152 05/05/2023 - 8h 09/05/2023 - 17h sub-153 05/05/2023 - 14h 11/05/2023 - 16h sub-154 11/05/2023 - 17h 15/05/2023 - 17h sub-155 12/05/2023 - 14h 22/05/2023 - 16h30 sub-156 12/05/2023 - 17h30 17/05/2023 - 17h30 sub-157 23/05/2023 - 10h 26/05/2023 - 14h sub-158 05/06/2023 - 17h 07/06/2023 - 16h15 sub-159 06/06/2023 - 11h 07/06/2023 - 10h sub-160 29/06/2023 - 16h30 30/06/2023 - 17h sub-161 06/07/2023 - 15h 10/07/2023 - 15h sub-162 10/07/2023 - 9h30 11/07/2023 - 15h sub-163 11/07/2023 - 10h No session sub-164 12/07/2023 - 16h30 13/07/2023 - 17h sub-165 17/07/2023 - 16h30 18/07/2023 - 17h ## Dataset Information | Dataset ID | `DS005305` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Resting-state Microstates Correlates of Executive Functions | | Author (year) | `Quentin2024` | | Canonical | — | | Importable as | `DS005305`, `Quentin2024` | | Year | 2024 | | Authors | Chenot Quentin, Hamery Caroline, Truninger Moritz, De Boissezon Xavier, Langer Nicolas, Scannella Sébastien | | License | CC0 | | Citation / DOI | 
[doi:10.18112/openneuro.ds005305.v1.0.1](https://doi.org/10.18112/openneuro.ds005305.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005305) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005305) | [Source URL](https://openneuro.org/datasets/ds005305) | ### Copy-paste BibTeX ```bibtex @dataset{ds005305, title = {EEG Resting-state Microstates Correlates of Executive Functions}, author = {Chenot Quentin and Hamery Caroline and Truninger Moritz and De Boissezon Xavier and Langer Nicolas and Scannella Sébastien}, doi = {10.18112/openneuro.ds005305.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005305.v1.0.1}, } ``` ## Technical Details - Subjects: 165 - Recordings: 165 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512.0 (164), 2048.0 - Duration (hours): 14.136398111979169 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 6.4 GB - File count: 165 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005305.v1.0.1 - Source: openneuro - OpenNeuro: [ds005305](https://openneuro.org/datasets/ds005305) - NeMAR: [ds005305](https://nemar.org/dataexplorer/detail?dataset_id=ds005305) ## API Reference Use the `DS005305` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005305(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Resting-state Microstates Correlates of Executive Functions * **Study:** `ds005305` (OpenNeuro) * **Author (year):** `Quentin2024` * **Canonical:** — Also importable as: `DS005305`, `Quentin2024`. Modality: `eeg`. Subjects: 165; recordings: 165; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005305](https://openneuro.org/datasets/ds005305) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005305](https://nemar.org/dataexplorer/detail?dataset_id=ds005305) DOI: [https://doi.org/10.18112/openneuro.ds005305.v1.0.1](https://doi.org/10.18112/openneuro.ds005305.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005305 >>> dataset = DS005305(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds005305)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005305)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# DS005307: eeg dataset, 7 subjects

*Laser-evoked potentials in the human spinal cord and cortex*

Access recordings and metadata through EEGDash.

**Citation:** Birgit Nierula, Tilman Stephani, Emma Bailey, Merve Kaptan, Lisa-Marie Pohle, Ulrike Horn, Andre Mouraux, Burkhard Maess, Arno Villringer, Gabriel Curio, Vadim Nikulin, Falk Eippert (2024). *Laser-evoked potentials in the human spinal cord and cortex*. [10.18112/openneuro.ds005307.v1.0.1](https://doi.org/10.18112/openneuro.ds005307.v1.0.1) Modality: eeg Subjects: 7 Recordings: 73 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS005307

dataset = DS005307(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS005307(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS005307(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.
**BibTeX** ```bibtex @dataset{ds005307, title = {Laser-evoked potentials in the human spinal cord and cortex}, author = {Birgit Nierula and Tilman Stephani and Emma Bailey and Merve Kaptan and Lisa-Marie Pohle and Ulrike Horn and Andre Mouraux and Burkhard Maess and Arno Villringer and Gabriel Curio and Vadim Nikulin and Falk Eippert}, doi = {10.18112/openneuro.ds005307.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005307.v1.0.1}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8)

## Dataset Information

| Field | Value |
|---|---|
| Dataset ID | `DS005307` |
| Title | Laser-evoked potentials in the human spinal cord and cortex |
| Author (year) | `Nierula2024` |
| Canonical | `Nierula2019` |
| Importable as | `DS005307`, `Nierula2024`, `Nierula2019` |
| Year | 2024 |
| Authors | Birgit Nierula, Tilman Stephani, Emma Bailey, Merve Kaptan, Lisa-Marie Pohle, Ulrike Horn, Andre Mouraux, Burkhard Maess, Arno Villringer, Gabriel Curio, Vadim Nikulin, Falk Eippert |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds005307.v1.0.1](https://doi.org/10.18112/openneuro.ds005307.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds005307) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) · [Source URL](https://openneuro.org/datasets/ds005307) |

### Copy-paste BibTeX

```bibtex
@dataset{ds005307,
  title = {Laser-evoked potentials in the human spinal cord and cortex},
  author = {Birgit Nierula and Tilman Stephani and Emma Bailey and Merve Kaptan and Lisa-Marie Pohle and Ulrike Horn and Andre Mouraux and Burkhard Maess and Arno Villringer and Gabriel Curio and Vadim Nikulin and Falk Eippert},
  doi = {10.18112/openneuro.ds005307.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds005307.v1.0.1},
}
```

## Technical Details

- Subjects: 7
- Recordings: 73
- Tasks: 1
- Channels: 77 (50), 109 (18), 110 (5)
- Sampling rate (Hz): 10000.0
- Duration (hours): 1.5768097222222224
- Pathology: Healthy
- Modality: Tactile
- Type: Perception
- Size on disk: 18.1 GB
- File count: 73
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds005307.v1.0.1
- Source: openneuro
- OpenNeuro: [ds005307](https://openneuro.org/datasets/ds005307)
- NeMAR:
[ds005307](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) ## API Reference Use the `DS005307` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005307(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Laser-evoked potentials in the human spinal cord and cortex * **Study:** `ds005307` (OpenNeuro) * **Author (year):** `Nierula2024` * **Canonical:** `Nierula2019` Also importable as: `DS005307`, `Nierula2024`, `Nierula2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 7; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
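The `ALLOWED_QUERY_FIELDS` check mentioned in the Notes can be pictured as a simple whitelist. The field set and `validate_query` helper below are illustrative stand-ins, not eegdash's actual field list or implementation:

```python
# Illustrative whitelist in the spirit of ALLOWED_QUERY_FIELDS.
# The field names here are examples, not the library's actual set.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}

def validate_query(query):
    """Reject filters that reference fields outside the whitelist."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise KeyError(f"unsupported query fields: {sorted(unknown)}")
    return query

validate_query({"subject": {"$in": ["01", "02"]}, "task": "rest"})  # passes
```

Validating field names before sending the query keeps typos from silently matching zero records.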
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005307](https://openneuro.org/datasets/ds005307) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005307](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) DOI: [https://doi.org/10.18112/openneuro.ds005307.v1.0.1](https://doi.org/10.18112/openneuro.ds005307.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005307 >>> dataset = DS005307(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005307) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005340: eeg dataset, 15 subjects *Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech* Access recordings and metadata through EEGDash. **Citation:** Melissa J. Polonenko, Ross K. Maddox (2024). *Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech*. 
[10.18112/openneuro.ds005340.v1.0.4](https://doi.org/10.18112/openneuro.ds005340.v1.0.4) Modality: eeg Subjects: 15 Recordings: 15 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005340 dataset = DS005340(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005340(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005340( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005340, title = {Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech}, author = {Melissa J. Polonenko and Ross K. Maddox}, doi = {10.18112/openneuro.ds005340.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds005340.v1.0.4}, } ``` ## About This Dataset **README** **Details related to access to the data** Please contact the following authors for further information: Melissa Polonenko (email: [mpolonen@umn.edu](mailto:mpolonen@umn.edu)), Ross Maddox (email: [rkmaddox@med.umich.edu](mailto:rkmaddox@med.umich.edu)) **Overview** This is the “peaky_pitchshift” dataset for the paper Polonenko MJ & Maddox RK (2024), cited below. Peer-reviewed manuscript: Melissa J. Polonenko, Ross K.
Maddox; Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. JASA Express Lett. 1 November 2024; 4 (11): 114401. [https://doi.org/10.1121/10.0034329](https://doi.org/10.1121/10.0034329) BioRxiv pre-print: Melissa Jane Polonenko, Ross K Maddox (2024). Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. bioRxiv 2024.07.12.603125; doi: [https://doi.org/10.1101/2024.07.12.603125](https://doi.org/10.1101/2024.07.12.603125) Auditory brainstem responses (ABRs) were derived to continuous peaky speech from two talkers with different fundamental frequencies (f0s) and from clicks with mean stimulus rates set to the mean f0s. Data were collected from May to June 2021. Aims: 1. replicate the male/female talker effect with each narrator at their natural f0; 2. systematically determine whether f0 is the main driver of this talker difference; 3. evaluate whether the f0 effect resembles the click rate effect. The details of the experiment can be found in Polonenko & Maddox (2024). Stimuli: 1) randomized click trains at 3 stimulus rates (123, 150, 183 Hz), 30 x 10 s trials per rate, for a total of 90 trials (15 min, 5 min per rate); 2) peaky speech from a male and a female narrator at 3 f0s (123, 150, 183 Hz), 120 x 10 s trials for each of the 6 narrator-f0 combinations, for a total of 720 trials (2 hours, 20 min each). NOTE: the f0s used were each narrator’s original f0 (low and high, respectively), the f0 shifted to the other narrator’s f0, and an f0 at the midpoint between the two; the click rates were set to the mean f0s used for the speech. The code for stimulus preprocessing and EEG analysis is available on GitHub: [https://github.com/polonenkolab/peaky_pitchshift](https://github.com/polonenkolab/peaky_pitchshift) **Format** The dataset is formatted according to the EEG Brain Imaging Data Structure.
It includes EEG recordings from participants 01 to 15 in raw BrainVision format (3 files each: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. The stimulus files contain the audio (‘x’) and regressors for the deconvolution (‘pinds’ are the pulse indices; ‘anm’ is an auditory nerve model regressor, which was used during analyses but was not included as part of the article). Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format. **Participants** 15 participants, mean ± SD age of 24.1 ± 6.1 years (19-35 years). Inclusion criteria: 1. age between 18 and 40 years; 2. normal hearing (audiometric thresholds 20 dB HL or better from 500 to 8000 Hz); 3. English as the primary language. Please see participants.tsv for more information. **Apparatus** Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom Python scripts using expyfun were used to control the experiment and stimulus presentation. **Details about the experiment** For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied `task-peaky_pitch_eeg.json` file. The 6 peaky speech conditions (2 narrators x 3 f0s) were randomly interleaved for each block of trials (i.e., for trial 1, the 6 conditions were randomized) and the story token was randomized. This means that the participant would not be able to follow the story. For clicks, trial order was not randomized (the click trains themselves were already random). Trigger onset times in the .tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files).
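According to this README, the overall trial number is stamped as a sequence of triggers, with each binary digit b of the trial number encoded as the trigger value 2 \*\* (b + 2), so bit 0 maps to trigger 4 and bit 1 maps to trigger 8. A minimal round-trip sketch of that rule (the bit order and bit width chosen here are assumptions for illustration, not documented by the dataset):

```python
# Sketch of the documented bit-to-trigger rule: trigger = 2 ** (bit + 2).
# MSB-first ordering and a fixed 7-bit width are assumptions of this sketch.
def encode_trial(n: int, width: int = 7) -> list[int]:
    bits = [(n >> i) & 1 for i in range(width - 1, -1, -1)]
    return [2 ** (b + 2) for b in bits]

def decode_trial(triggers: list[int]) -> int:
    n = 0
    for t in triggers:
        b = t.bit_length() - 3  # trigger 4 -> bit 0, trigger 8 -> bit 1
        n = (n << 1) | b
    return n

print(encode_trial(5))                       # [4, 4, 4, 4, 8, 4, 8]
print(decode_trial([4, 4, 4, 4, 8, 4, 8]))   # 5
```

In practice the `*_eeg_events.tsv` files already spell out the trial numbers, so this decoding is only needed when working from the raw trigger stream.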
Triggers with values of “1” were recorded at the onset of the 10 s audio, and shortly afterwards triggers with values of “4” or “8” were stamped to indicate the overall trial number out of 120 for each speech condition and out of 30 for each click condition. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 \*\* (b + 2). We’ve specified these trial numbers and more event metadata in each ‘\*_eeg_events.tsv’ file, which is sufficient to know which trial corresponded to which type of stimulus (clicks, male narrator, female narrator), which f0 (low, mid, high), and which file - e.g., male_low_000_regress.hdf5 for the male narrator with the low f0. ## Dataset Information | Dataset ID | `DS005340` | |----------------|----------------| | Title | Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech | | Author (year) | `Polonenko2024_Fundamental` | | Canonical | — | | Importable as | `DS005340`, `Polonenko2024_Fundamental` | | Year | 2024 | | Authors | Melissa J. Polonenko, Ross K. Maddox | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005340.v1.0.4](https://doi.org/10.18112/openneuro.ds005340.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005340) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) | [Source URL](https://openneuro.org/datasets/ds005340) | ### Copy-paste BibTeX ```bibtex @dataset{ds005340, title = {Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech}, author = {Melissa J. Polonenko and Ross K.
Maddox}, doi = {10.18112/openneuro.ds005340.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds005340.v1.0.4}, } ``` ## Technical Details - Subjects: 15 - Recordings: 15 - Tasks: 1 - Channels: 2 - Sampling rate (Hz): 10000.0 - Duration (hours): 35.29713844444445 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 9.5 GB - File count: 15 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005340.v1.0.4 - Source: openneuro - OpenNeuro: [ds005340](https://openneuro.org/datasets/ds005340) - NeMAR: [ds005340](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) ## API Reference Use the `DS005340` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech * **Study:** `ds005340` (OpenNeuro) * **Author (year):** `Polonenko2024_Fundamental` * **Canonical:** — Also importable as: `DS005340`, `Polonenko2024_Fundamental`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005340](https://openneuro.org/datasets/ds005340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005340](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) DOI: [https://doi.org/10.18112/openneuro.ds005340.v1.0.4](https://doi.org/10.18112/openneuro.ds005340.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005340 >>> dataset = DS005340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005340) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005342: eeg dataset, 32 subjects *EEG data offline and online during motor imagery for standing and sitting* Access recordings and metadata through EEGDash. **Citation:** Nayid Triana-Guzman, Alvaro D Orjuela-Cañon, Andres L Jutinico, Omar Mendoza-Montoya, Javier M Antelis (2024). 
*EEG data offline and online during motor imagery for standing and sitting*. [10.18112/openneuro.ds005342.v1.0.3](https://doi.org/10.18112/openneuro.ds005342.v1.0.3) Modality: eeg Subjects: 32 Recordings: 32 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005342 dataset = DS005342(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005342(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005342( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005342, title = {EEG data offline and online during motor imagery for standing and sitting}, author = {Nayid Triana-Guzman and Alvaro D Orjuela-Cañon and Andres L Jutinico and Omar Mendoza-Montoya and Javier M Antelis}, doi = {10.18112/openneuro.ds005342.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005342.v1.0.3}, } ``` ## About This Dataset The experiments were conducted in an acoustically isolated room where only the participant and the experimenter were present. Participants voluntarily signed an informed consent form in accordance with the experimental protocol approved by the ethics committee of the Universidad Antonio Nariño. The participant was seated in a chair in a posture that was comfortable for him/her but did not affect data collection. In front of the participant, a 40-inch TV screen was placed at about 3 m. On this screen, a graphical user interface (GUI) displayed images that guided the participant through the experiment.
Each experimental session was divided into two phases: an offline phase and an online phase. The offline experiments consisted of recording participants’ EEG signals during motor imagery trials for standing and sitting that were guided by the GUI presented on the TV screen. Six offline runs were conducted, in which the participants were standing in three runs and sitting in the other three. In each run, the participant had to repeat a block of 30 trials of mental tasks indicated by visual cues continuously presented on the screen in a pseudo-random sequence. The first phase of the experimental session was conducted to construct the offline parts of the dataset: (A) Sit-to-stand and (B) Stand-to-sit. The participant’s EEG data were collected from 90 sequences for part A (45 trials of MotorImageryA tasks and 45 trials of IdleStateA tasks) and 90 sequences for part B (45 trials of MotorImageryB tasks and 45 trials of IdleStateB tasks). For each participant, the two machine learning models obtained in the offline phase were used to carry out the online experiment parts of the dataset: (C) Sit-to-stand and (D) Stand-to-sit. Each participant was instructed to select, in no particular order, 30 sequences for part C (15 trials of MotorImageryA tasks and 15 trials of IdleStateA tasks) and 30 other sequences for part D (15 trials of MotorImageryB tasks and 15 trials of IdleStateB tasks). Each trial was unique and was generated pseudo-randomly before the experiment. The database consisted of 32 electroencephalographic files corresponding to the 32 participants. All recordings were collected on channels F3, Fz, F4, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P3, Pz, and P4 according to the 10-20 EEG electrode placement standard, grounded to the AFz channel and referenced to the right mastoid (M2). Each data file contained the data stream in a 2D matrix where rows corresponded to channels and columns corresponded to time samples, with a sampling frequency of 250 Hz.
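Given the layout just described (rows are channels, columns are time samples at 250 Hz), extracting a fixed window around a marker onset reduces to array slicing. A generic NumPy sketch with synthetic data (the window length and marker position below are illustrative choices, not taken from the dataset):

```python
import numpy as np

fs = 250                                  # sampling frequency (Hz), as described
n_channels, n_samples = 17, fs * 60       # synthetic 60 s recording, 17 channels
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, n_samples))  # rows = channels, cols = samples

def epoch(data: np.ndarray, onset_sample: int,
          tmin: float = -0.5, tmax: float = 2.0, fs: int = fs) -> np.ndarray:
    """Slice a [tmin, tmax) window (in seconds) around a marker onset sample."""
    start = onset_sample + int(tmin * fs)
    stop = onset_sample + int(tmax * fs)
    return data[:, start:stop]

seg = epoch(data, onset_sample=fs * 10)   # hypothetical marker at t = 10 s
print(seg.shape)                          # (17, 625): 17 channels, 2.5 s of samples
```

Marker samples themselves would come from the event annotations distributed with the BIDS recordings.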
The following marker numbers encoded information about the execution of the experiment. Marker numbers 200, 201, 202, and 203 indicated the beginning and end of the four steps of the sequence in a trial (resting, fixation, action observation, and imagining). Marker numbers 1, 2, 3, and 4 indicated the figure activated on the screen, prompting the participant to perform the corresponding task: 1. actively imagining the sit-to-stand movement (labeled as MotorImageryA), 2. sitting motionless without imagining the sit-to-stand movement (labeled as IdleStateA), 3. standing motionless while actively imagining the stand-to-sit movement (labeled as MotorImageryB), or 4. standing motionless without imagining the stand-to-sit movement (labeled as IdleStateB). Finally, marker numbers 101, 102, 103, and 104 indicated the task detected by the BCI in real time during the online experiment: 101. MotorImageryA, 102. IdleStateA, 103. MotorImageryB, or 104. IdleStateB. ## Dataset Information | Dataset ID | `DS005342` | |----------------|----------------| | Title | EEG data offline and online during motor imagery for standing and sitting | | Author (year) | `TrianaGuzman2024` | | Canonical | — | | Importable as | `DS005342`, `TrianaGuzman2024` | | Year | 2024 | | Authors | Nayid Triana-Guzman, Alvaro D Orjuela-Cañon, Andres L Jutinico, Omar Mendoza-Montoya, Javier M Antelis | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005342.v1.0.3](https://doi.org/10.18112/openneuro.ds005342.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005342) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) | [Source URL](https://openneuro.org/datasets/ds005342) | ### Copy-paste BibTeX ```bibtex @dataset{ds005342, title = {EEG data offline and online during motor imagery for standing and
sitting}, author = {Nayid Triana-Guzman and Alvaro D Orjuela-Cañon and Andres L Jutinico and Omar Mendoza-Montoya and Javier M Antelis}, doi = {10.18112/openneuro.ds005342.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005342.v1.0.3}, } ``` ## Technical Details - Subjects: 32 - Recordings: 32 - Tasks: 1 - Channels: 17 - Sampling rate (Hz): 250.0 - Duration (hours): 33.01657222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 2.0 GB - File count: 32 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005342.v1.0.3 - Source: openneuro - OpenNeuro: [ds005342](https://openneuro.org/datasets/ds005342) - NeMAR: [ds005342](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) ## API Reference Use the `DS005342` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data offline and online during motor imagery for standing and sitting * **Study:** `ds005342` (OpenNeuro) * **Author (year):** `TrianaGuzman2024` * **Canonical:** — Also importable as: `DS005342`, `TrianaGuzman2024`. Modality: `eeg`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005342](https://openneuro.org/datasets/ds005342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005342](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) DOI: [https://doi.org/10.18112/openneuro.ds005342.v1.0.3](https://doi.org/10.18112/openneuro.ds005342.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005342 >>> dataset = DS005342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005342) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005343: eeg dataset, 43 subjects *Gaffrey Lab Infant Microstates and Attention* Access recordings and metadata through EEGDash. **Citation:** Armen Bagdasarov, Michael S. Gaffrey (2024). *Gaffrey Lab Infant Microstates and Attention*. 
[10.18112/openneuro.ds005343.v1.0.0](https://doi.org/10.18112/openneuro.ds005343.v1.0.0) Modality: eeg Subjects: 43 Recordings: 43 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005343 dataset = DS005343(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005343(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005343( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005343, title = {Gaffrey Lab Infant Microstates and Attention}, author = {Armen Bagdasarov and Michael S. Gaffrey}, doi = {10.18112/openneuro.ds005343.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005343.v1.0.0}, } ``` ## About This Dataset Participants were 43 infants aged 5 to 10 months. Their caregivers provided informed consent, and compensation was provided for participation. Infant-caregiver dyads were part of a larger study investigating the impact of bias and discrimination on prenatal and postnatal maternal health and infant development. All research was approved by the Duke University Health System Institutional Review Board and carried out in accordance with the Declaration of Helsinki. Infants sat on their caregiver’s lap and watched up to 15 minutes of relaxing videos with sound (i.e., ten 90-second videos separated by breaks during which caregivers could play with their infant). Before each video started, an attention grabber (a three-second video of a noisy rattle) directed the infant’s attention to the screen.
Videos were presented with E-Prime software (Psychological Software Tools, Pittsburgh, PA). Caregivers were instructed to silently sit still during videos. If infants shifted their attention away from the screen, caregivers were permitted to re-direct their attention only by pointing to the screen. EEG was recorded at 1000 Hertz (Hz) and referenced to the vertex (channel Cz) using a 128-channel HydroCel Geodesic Sensor Net (Electrical Geodesics, Eugene, OR). Impedances were maintained below 50 kilohms throughout the EEG session. For more information, visit: [https://github.com/gaffreylab/](https://github.com/gaffreylab/) ## Dataset Information | Dataset ID | `DS005343` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Gaffrey Lab Infant Microstates and Attention | | Author (year) | `Bagdasarov2024` | | Canonical | — | | Importable as | `DS005343`, `Bagdasarov2024` | | Year | 2024 | | Authors | Armen Bagdasarov, Michael S. Gaffrey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005343.v1.0.0](https://doi.org/10.18112/openneuro.ds005343.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005343) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) | [Source URL](https://openneuro.org/datasets/ds005343) | ### Copy-paste BibTeX ```bibtex @dataset{ds005343, title = {Gaffrey Lab Infant Microstates and Attention}, author = {Armen Bagdasarov and Michael S. 
Gaffrey}, doi = {10.18112/openneuro.ds005343.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005343.v1.0.0}, } ``` ## Technical Details - Subjects: 43 - Recordings: 43 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.927455833333331 - Pathology: Development - Modality: Multisensory - Type: Perception - Size on disk: 22.7 GB - File count: 43 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005343.v1.0.0 - Source: openneuro - OpenNeuro: [ds005343](https://openneuro.org/datasets/ds005343) - NeMAR: [ds005343](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) ## API Reference Use the `DS005343` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates and Attention * **Study:** `ds005343` (OpenNeuro) * **Author (year):** `Bagdasarov2024` * **Canonical:** — Also importable as: `DS005343`, `Bagdasarov2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Development`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005343](https://openneuro.org/datasets/ds005343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005343](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) DOI: [https://doi.org/10.18112/openneuro.ds005343.v1.0.0](https://doi.org/10.18112/openneuro.ds005343.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005343 >>> dataset = DS005343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005343) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005345: eeg dataset, 26 subjects *Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset* Access recordings and metadata through EEGDash. **Citation:** Zhengwu Ma, Nan Wang, Jixing Li (2024). *Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset*. 
[10.18112/openneuro.ds005345.v1.0.1](https://doi.org/10.18112/openneuro.ds005345.v1.0.1) Modality: eeg Subjects: 26 Recordings: 26 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005345 dataset = DS005345(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005345(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005345( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005345, title = {Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset}, author = {Zhengwu Ma and Nan Wang and Jixing Li}, doi = {10.18112/openneuro.ds005345.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005345.v1.0.1}, } ``` ## About This Dataset **Participants** This dataset includes 25 native Mandarin Chinese speakers (14 females, mean age = 24.04 ± 2.28 years) who participated in both EEG and fMRI experiments. The participants were all right-handed, with no reported history of neurological disorders. They were enrolled in undergraduate or graduate programs in Shanghai. All participants gave informed consent, and the experiments were approved by the Ethics Committee of the Ninth People’s Hospital, affiliated with Shanghai Jiao Tong University School of Medicine (SH9H-2019-T33-2 and SH9H-2022-T379-2). In the case of French participants, due to legal constraints, additional session considerations were taken into account, such as shorter session durations. 
**Experiment Procedure** MRI Scanning Sessions Participants underwent both EEG and fMRI experiments while listening to the Chinese version of \*Le Petit Prince\*. During the MRI session, participants were instructed to maintain fixation on a crosshair on the screen and minimize eye movements and head motions. The task involved attending to different talkers in the multitalker condition (single male, single female, mixed male, and mixed female talkers). Session Breakdown - The entire session lasted approximately 70 minutes for fMRI participants, including a series of 4 conditions (single-talker, mixed-attended, and mixed-unattended conditions). - Quiz questions were administered after each run to assess participants’ comprehension of the narrative.
In the French cohort, due to legal time constraints, the experiment durations were adjusted.

**Stimuli**

The stimuli were selected excerpts from the Chinese version of *Le Petit Prince* (available at [xiaowangzi.org](http://www.xiaowangzi.org/)). These audio clips were previously used in both EEG (Li et al., 2024) and fMRI (Li et al., 2022) studies. The English and Chinese versions were enhanced with visual stimuli (e.g., images of scenes from the book) to align with the storyline. However, visual stimuli were not presented in the French version to comply with legal restrictions.

**Acquisition**

MRI Hardware & Scanning Parameters

- EEG: Data were collected using a 64-channel actiCAP system, sampled at 500 Hz, and filtered between 0.016 and 80 Hz.
- fMRI: Scanning was performed on a 7.0 T Terra Siemens MRI scanner at the Zhangjiang International Brain Imaging Centre. The scanning parameters differed slightly between the English/Chinese and French studies due to equipment availability.
  - Functional MRI: 85 interleaved axial slices (1.6×1.6×1.6 mm voxel size, TR = 1000 ms, TE = 22.2 ms)
  - Anatomical MRI: MP-RAGE sequence, T1-weighted images (voxel size = 0.7×0.7×0.7 mm).

**Preprocessing**

MRI Data Processing

1. DICOM to NIfTI Conversion: All raw MRI data were converted to NIfTI format using `dcm2niix` (version 1.0.20220505) and processed using the `fMRIPrep` pipeline (version 20.2.0).
2. Anatomical Preprocessing:
   - Skull stripping
   - Segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)
   - Registration to the Montreal Neurological Institute (MNI) space using the MNI152NLin2009cAsym:res-2 template.
3. Functional Preprocessing:
   - Motion correction
   - Slice-timing correction
   - Multi-echo ICA for denoising
   - Voxel resampling to native and MNI spaces.

Note: Visual stimuli processing for the English and Chinese conditions was handled separately to avoid potential biases in the analysis.
## Dataset Information | Dataset ID | `DS005345` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset | | Author (year) | `Ma2024` | | Canonical | `LPP` | | Importable as | `DS005345`, `Ma2024`, `LPP` | | Year | 2024 | | Authors | Zhengwu Ma, Nan Wang, Jixing Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005345.v1.0.1](https://doi.org/10.18112/openneuro.ds005345.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005345) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) | [Source URL](https://openneuro.org/datasets/ds005345) | ### Copy-paste BibTeX ```bibtex @dataset{ds005345, title = {Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset}, author = {Zhengwu Ma and Nan Wang and Jixing Li}, doi = {10.18112/openneuro.ds005345.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005345.v1.0.1}, } ``` ## Technical Details - Subjects: 26 - Recordings: 26 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 19.983922222222223 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 162.5 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005345.v1.0.1 - Source: openneuro - OpenNeuro: [ds005345](https://openneuro.org/datasets/ds005345) - NeMAR: [ds005345](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) ## API Reference Use the `DS005345` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset * **Study:** `ds005345` (OpenNeuro) * **Author (year):** `Ma2024` * **Canonical:** `LPP` Also importable as: `DS005345`, `Ma2024`, `LPP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005345](https://openneuro.org/datasets/ds005345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005345](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) DOI: [https://doi.org/10.18112/openneuro.ds005345.v1.0.1](https://doi.org/10.18112/openneuro.ds005345.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005345 >>> dataset = DS005345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005345) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005346: meg dataset, 30 subjects *Naturalistic fMRI and MEG recordings during viewing of a reality TV show* Access recordings and metadata through EEGDash. **Citation:** Jixing Li, Yike Wang, Chengcheng Wang, Zhengwu Ma (2024). *Naturalistic fMRI and MEG recordings during viewing of a reality TV show*. 
[10.18112/openneuro.ds005346.v1.0.5](https://doi.org/10.18112/openneuro.ds005346.v1.0.5) Modality: meg Subjects: 30 Recordings: 90 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005346 dataset = DS005346(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005346(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005346( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005346, title = {Naturalistic fMRI and MEG recordings during viewing of a reality TV show}, author = {Jixing Li and Yike Wang and Chengcheng Wang and Zhengwu Ma}, doi = {10.18112/openneuro.ds005346.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds005346.v1.0.5}, } ``` ## About This Dataset **Participants** Thirty participants (17 females, mean age=23.17±2.31 years) were recruited for the fMRI experiment at Shanghai International Studies University, Shanghai, China. An additional thirty participants (16 females, mean age=22.67±1.99 years) were recruited from the West China Hospital of Sichuan University, Chengdu, China for MEG experiment. All participants were right-handed, had normal or corrected-to-normal vision, and reported no history of neurological disorders. Before the experiment, all participants provided written informed consent and were compensated for their participation. 
Data from 6 participants in the MEG experiment exhibited distinct PSD patterns that diverged from the other 24 participants (10 females, mean age=22.75±1.94 years; see figure below), so we excluded their data from the ISC and regression analyses for the MEG data. However, all datasets remain available in the OpenNeuro repository for other researchers’ use. [Power Spectrum Analysis](https://raw.githubusercontent.com/compneurolinglab/baba/main/psd.png)

**Experiment Procedure**

The experimental procedures for both fMRI and MEG experiments were identical. Participants watched the video while inside the scanner. The video was presented via a mirror attached to the head coil in both the fMRI and MEG experiments. Audio was delivered through MRI-compatible headphones (Sinorad, Shenzhen, China) during the fMRI experiment and MEG-compatible insert earphones (ComfortBuds 24, Sinorad, Shenzhen, China) during the MEG experiment. Following the video, participants were visually presented with 5 multiple-choice questions on the screen to assess their comprehension and ensure engagement with the stimuli. Participants responded using a button press, with a maximum response time of 10 seconds per question. If no response was recorded within this time, the experiment proceeded to the next question automatically. After the quiz, participants were instructed to close their eyes for 15 minutes without an explicit task. This period allowed for the recording of neural activity, capturing spontaneous mental replay of the video stimulus. The entire experimental procedure lasted approximately 45 minutes per participant. The fMRI experiment was approved by the Ethics Committee of Shanghai Key Laboratory of Brain-Machine Intelligence for Information Behavior (No. 2024BC028), and the MEG experiment was approved by the West China Hospital of Sichuan University Biomedical Research Ethics Committee (No. 2024[657]).
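The PSD-based exclusion described above maps naturally onto a MongoDB-style `$nin` filter. A minimal sketch under stated assumptions — the subject labels below are placeholders, since the README does not list which six MEG participants were flagged:

```python
# Hypothetical IDs of the 6 flagged MEG participants (placeholders only;
# the dataset README does not list the excluded subject labels).
flagged = ["07", "11", "15", "21", "23", "28"]

# MongoDB-style filter that keeps everyone except the flagged subjects.
# With EEGDash this could be passed as, e.g.:
#   dataset = DS005346(cache_dir="./data", query=exclusion_query)
exclusion_query = {"subject": {"$nin": flagged}}

# Pure-Python equivalent of what $nin selects, over the 30 MEG subjects:
all_subjects = [f"{i:02d}" for i in range(1, 31)]
kept = [s for s in all_subjects if s not in flagged]
print(len(kept))  # 24 participants remain, matching the analysis above
```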
**Stimuli**

The video stimulus was extracted from the first episode of the Chinese reality TV show “Where Are We Going, Dad? (Season 1)” (openly available at [https://www.youtube.com/watch?v=ZgRdRHmYuN8](https://www.youtube.com/watch?v=ZgRdRHmYuN8)), which originally aired in 2013. The show features unscripted interactions between fathers and their children as they travel to a rural village and engage in daily activities. The selected excerpt has a total duration of 25 minutes and 19 seconds. The original video had a resolution of 640×368 pixels with a frame rate of 15 frames per second. It was presented in full-color (RGB) format, without embedded subtitles or captions.

**Acquisition**

The fMRI data were collected in a 3.0 T Siemens Prisma MRI scanner at Shanghai International Studies University, Shanghai. Anatomical scans were obtained using a Magnetization Prepared RApid Gradient-Echo (MP-RAGE) ANDI iPAT2 pulse sequence with T1-weighted contrast (192 single-shot interleaved sagittal slices with A/P phase encoding direction; voxel size=1×1×1 mm; FOV=256 mm; TR=2300 ms; TE=2.98 ms; TI=900 ms; flip angle=9°; acquisition time=6 min; GRAPPA in-plane acceleration factor=2). Functional scans were acquired using T2-weighted echo planar imaging (63 interleaved axial slices with A/P phase encoding direction; voxel size=2.5×2.5×2.5 mm; FOV=220 mm; TR=2000 ms; TE=30 ms; acceleration factor=3; flip angle=60°).

MEG data were recorded at West China Hospital of Sichuan University in Chengdu, China, using a 64-channel optically pumped magnetometer (OPM) MEG system (Quanmag, Beijing, China). The system consists of 64 single-axis OPM sensors (radial direction, fixed helmet) with a 1000 Hz sampling rate, <20 fT/√Hz sensitivity, and >100 Hz bandwidth. Each sensor (16 × 19 × 66 mm³) contains a 4 × 4 × 4 mm³ rubidium vapor cell and an integrated laser. The sensitive volume is located ~6 mm from the sensor’s outer surface.
Sensors were mounted on a rigid, adult-sized helmet providing full-brain coverage. The system was housed in a six-layer magnetically shielded cylinder (1.5 mm permalloy, 10 mm aluminum), with residual magnetic field ≤1 nT and typical system noise of 20–30 fT/√Hz. Participants lay on a scanning bed inserted into the cylinder, wearing air-conduction headphones during the auditory task. Sensor positions were fixed by the helmet geometry, without additional digitization. OPM-MEG is a new type of MEG instrumentation that offers several advantages over conventional MEG systems. These include higher signal sensitivity, improved spatial resolution, and more uniform scalp coverage. Additionally, OPM-MEG allows for greater participant comfort and compliance, supports free movement during scanning, and features lower system complexity, making it a promising tool for more flexible and accessible neuroimaging. The MEG data were sampled at 1000 Hz and bandpass-filtered online between 0 and 500 Hz. To facilitate source localization, T1-weighted MRI scans were acquired from the participants using a 3.0 T Siemens TrioTim MRI scanner at West China Hospital of Sichuan University (176 single-shot interleaved sagittal slices with A/P phase encoding direction; voxel size=1×1×1 mm; FOV=256 mm; TR=1900 ms; TE=2.3 ms; TI=900 ms; flip angle=9°; acquisition time=7 min). All participants provided written informed consent outlining the experimental procedures and the data sharing plan prior to participation. They were compensated for their time and contribution.

**Preprocessing**

All Digital Imaging and Communications in Medicine (DICOM) files of the raw fMRI data were first converted into the Brain Imaging Data Structure (BIDS) format using dcm2bids (v3.1.1) and subsequently transformed into Neuroimaging Informatics Technology Initiative (NIfTI) format via dcm2niix (v1.0.20220505). Facial features were removed from anatomical images using PyDeface (v2.0.2).
Preprocessing was carried out with fMRIPrep (v20.2.0), following standard neuroimaging pipelines. For anatomical images, T1-weighted scans underwent bias field correction, skull stripping, and tissue segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). These images were then spatially normalized to the Montreal Neurological Institute (MNI) space using the MNI152NLin2009cAsym:res-2 template, ensuring consistent alignment across participants. Functional MRI preprocessing included skull stripping, motion correction, slice-timing correction, and co-registration to the T1-weighted anatomical reference. For each BOLD run, head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using ‘mcflirt’ (FSL 5.0.9) and slice timing correction was applied using 3dTshift (AFNI 20160207). Co-registration to the anatomical image was done with flirt using boundary-based registration (6 degrees of freedom). No susceptibility distortion correction was applied. Confound regressors included motion parameters (and their derivatives/quadratics), framewise displacement (FD), DVARS, global signals, and t/aCompCor components computed from white matter and CSF after high-pass filtering (128s cutoff). Volumes exceeding FD>0.5 mm or standardized DVARS>1.5 were flagged as motion outliers. All transforms were applied in a single interpolation step using antsApplyTransforms with Lanczos interpolation. We further performed spatial smoothing on the preprocessed fMRI data (post-fMRIPrep) using an isotropic Gaussian kernel with an 8 mm FWHM. However, the versions uploaded to OpenNeuro remain unsmoothed so that researchers can choose whether to apply smoothing. MEG data preprocessing was conducted using MNE-Python (v1.8.0). We first applied a bandpass filter (1–38 Hz) to remove low-frequency drifts and high-frequency noise. 
We then identified bad channels through visual inspection, cross-validated them using PyPREP (v0.4.3), and interpolated these bad channels to maintain data integrity. To mitigate physiological artifacts, we performed independent component analysis (ICA) and removed components corresponding to heartbeat and eye movements. The data were then segmented into three task-related epochs corresponding to the video watching, question answering, and post-task replay conditions. Because our paradigm uses naturalistic video viewing rather than discrete event trials, there is no true pre‐stimulus baseline period for noise covariance estimation. Instead, we computed the noise covariance from the mean over each full epoch. T1-weighted MRI data were converted to NIfTI format and processed with FreeSurfer (v7.3.2) to reconstruct cortical surfaces and generate boundary element model (BEM) surfaces using a single-layer conductivity of 0.3 S/m. MEG-MRI coregistration was performed with fiducial points and refined via MNE-Python’s graphical interface. A source space (resolution=5 mm) was generated using a fourth-order icosahedral mesh, and a BEM solution was computed to model head conductivity. A forward model was then created based on anatomical MRI and digitized head shape. Noise covariance matrices were estimated from raw MEG recordings, and inverse operators were constructed using minimum norm estimation (SNR=3). Source reconstruction employed dynamic statistical parametric mapping (dSPM) for noise-normalized estimates. Task-related epochs (video watching, question answering, post-task replay) were used to compute source estimates, which were morphed onto the FreeSurfer average brain template for group-level comparisons.
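The fMRI motion-outlier rule described earlier (volumes flagged at FD > 0.5 mm or standardized DVARS > 1.5) reduces to a simple per-volume check. A minimal sketch with made-up values, not data from this dataset:

```python
# Flag fMRI volumes as motion outliers when framewise displacement (FD)
# exceeds 0.5 mm or standardized DVARS exceeds 1.5, mirroring the
# thresholds described above. The confound values are illustrative only.
FD_THRESH = 0.5      # mm
DVARS_THRESH = 1.5   # standardized units

fd =    [0.1, 0.6, 0.2, 0.3, 0.05]
dvars = [1.0, 1.2, 1.8, 1.4, 1.1]

outliers = [i for i, (f, d) in enumerate(zip(fd, dvars))
            if f > FD_THRESH or d > DVARS_THRESH]
print(outliers)  # [1, 2]: volume 1 exceeds FD, volume 2 exceeds DVARS
```

In practice, fMRIPrep emits these confound time series in its `*_desc-confounds_timeseries.tsv` outputs, and flagged volumes are typically censored or regressed out downstream.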
## Dataset Information | Dataset ID | `DS005346` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Naturalistic fMRI and MEG recordings during viewing of a reality TV show | | Author (year) | `Li2024_Naturalistic_fMRI_viewing` | | Canonical | — | | Importable as | `DS005346`, `Li2024_Naturalistic_fMRI_viewing` | | Year | 2024 | | Authors | Jixing Li, Yike Wang, Chengcheng Wang, Zhengwu Ma | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005346.v1.0.5](https://doi.org/10.18112/openneuro.ds005346.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005346) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) | [Source URL](https://openneuro.org/datasets/ds005346) | ### Copy-paste BibTeX ```bibtex @dataset{ds005346, title = {Naturalistic fMRI and MEG recordings during viewing of a reality TV show}, author = {Jixing Li and Yike Wang and Chengcheng Wang and Zhengwu Ma}, doi = {10.18112/openneuro.ds005346.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds005346.v1.0.5}, } ``` ## Technical Details - Subjects: 30 - Recordings: 90 - Tasks: 3 - Channels: 66 (72), 65 (18) - Sampling rate (Hz): 1000.0 - Duration (hours): 20.35939638888889 - Pathology: Healthy - Modality: Multisensory - Type: Memory - Size on disk: 38.9 GB - File count: 90 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005346.v1.0.5 - Source: openneuro - OpenNeuro: [ds005346](https://openneuro.org/datasets/ds005346) - NeMAR: [ds005346](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) ## API Reference Use the `DS005346` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic fMRI and MEG recordings during viewing of a reality TV show * **Study:** `ds005346` (OpenNeuro) * **Author (year):** `Li2024_Naturalistic_fMRI_viewing` * **Canonical:** — Also importable as: `DS005346`, `Li2024_Naturalistic_fMRI_viewing`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 90; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005346](https://openneuro.org/datasets/ds005346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005346](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) DOI: [https://doi.org/10.18112/openneuro.ds005346.v1.0.5](https://doi.org/10.18112/openneuro.ds005346.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS005346 >>> dataset = DS005346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005346) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005356: meg dataset, 85 subjects *MEG: Major Depression & Probabilistic Learning Task* Access recordings and metadata through EEGDash. **Citation:** [Unspecified] (2024). *MEG: Major Depression & Probabilistic Learning Task*. 
[10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0) Modality: meg Subjects: 85 Recordings: 116 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005356 dataset = DS005356(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005356(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005356( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005356, title = {MEG: Major Depression & Probabilistic Learning Task}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds005356.v1.5.0}, url = {https://doi.org/10.18112/openneuro.ds005356.v1.5.0}, } ``` ## About This Dataset Howdy y’all. Here’s data from: Pirrung, C.J.H., Singh G., Hogeveen, J., Quinn, D. & Cavanagh, J.F. (2025) Hypoactivation of ventromedial frontal cortex in major depressive disorder: an MEG study of the Reward Positivity. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging An MEG study (306-sensor Elekta Neuromag System) of the Reward Positivity during reinforcement learning. Participants were all SCID interviewed to meet either control (CTL, non-depressed, n=38) or major depressive disorder (MDD, n=52) group criteria. Task was an MEG-compatible probabilistic selection task. We’ll upload their T1s and resting state soon. 
<[jcavanagh@unm.edu](mailto:jcavanagh@unm.edu)> ## Dataset Information | Dataset ID | `DS005356` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MEG: Major Depression & Probabilistic Learning Task | | Author (year) | `DS5356_MajorDepression` | | Canonical | — | | Importable as | `DS005356`, `DS5356_MajorDepression` | | Year | 2024 | | Authors | [Unspecified] | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005356) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) | [Source URL](https://openneuro.org/datasets/ds005356) | ### Copy-paste BibTeX ```bibtex @dataset{ds005356, title = {MEG: Major Depression & Probabilistic Learning Task}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds005356.v1.5.0}, url = {https://doi.org/10.18112/openneuro.ds005356.v1.5.0}, } ``` ## Technical Details - Subjects: 85 - Recordings: 116 - Tasks: 1 - Channels: 396 (113), 450 (2) - Sampling rate (Hz): 1000.0 - Duration (hours): 18.24137361111111 - Pathology: Depression - Modality: Visual - Type: Learning - Size on disk: 161.6 GB - File count: 116 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005356.v1.5.0 - Source: openneuro - OpenNeuro: [ds005356](https://openneuro.org/datasets/ds005356) - NeMAR: [ds005356](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) ## API Reference Use the `DS005356` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG: Major Depression & Probabilistic Learning Task * **Study:** `ds005356` (OpenNeuro) * **Author (year):** `DS5356_MajorDepression` * **Canonical:** — Also importable as: `DS005356`, `DS5356_MajorDepression`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Depression`. Subjects: 85; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005356](https://openneuro.org/datasets/ds005356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005356](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) DOI: [https://doi.org/10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0) ### Examples ```pycon >>> from eegdash.dataset import DS005356 >>> dataset = DS005356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005356) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005363: eeg dataset, 43 subjects *Object recognition in healthy aging (ORHA) - EEG* Access recordings and metadata through EEGDash. **Citation:** Marleen Haupt, Douglas D. Garrett, Radoslaw M. Cichy (2024). *Object recognition in healthy aging (ORHA) - EEG*. 
[10.18112/openneuro.ds005363.v1.0.0](https://doi.org/10.18112/openneuro.ds005363.v1.0.0) Modality: eeg Subjects: 43 Recordings: 43 License: CC0 Source: openneuro Citations: 1 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005363 dataset = DS005363(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005363(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005363( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005363, title = {Object recognition in healthy aging (ORHA) - EEG}, author = {Marleen Haupt and Douglas D. Garrett and Radoslaw M. Cichy}, doi = {10.18112/openneuro.ds005363.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005363.v1.0.0}, } ``` ## About This Dataset This dataset contains the raw EEG data accompanying the paper “Healthy aging delays and dedifferentiates high-level visual representations”. Please cite the above paper if you use this data. The dataset includes: Brainvision files (.eeg, .vhdr, .vmrk) for all participants. The events files contain the onsets, durations, trial types and values for all trials in the corresponding run.

Stimuli are images presented on a grey background with a central fixation:

- images of faces = S1-16
- images of animals = S17-32
- images of places = S33-48
- images of objects = S49-64
- catch trials = S65-69

Other triggers:

- button_press = S99
- run_onset = S100+run_number (8 runs in total)
- run_end = S199

For a full description of the paradigm and the employed procedures please see the paper.
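As a reading aid, the trigger scheme above can be decoded with a small helper. The function below is an illustrative sketch (not part of EEGDash); the code ranges follow the README, with the leading “S” stripped to a plain integer:

```python
# Illustrative decoder for the trigger codes listed above (S1-S199).
# Not part of EEGDash; the ranges are taken from the dataset README.
def decode_trigger(code: int) -> str:
    if 1 <= code <= 16:
        return "face"
    if 17 <= code <= 32:
        return "animal"
    if 33 <= code <= 48:
        return "place"
    if 49 <= code <= 64:
        return "object"
    if 65 <= code <= 69:
        return "catch"
    if code == 99:
        return "button_press"
    if 101 <= code <= 108:  # S100+run_number, 8 runs in total
        return f"run_onset_{code - 100}"
    if code == 199:
        return "run_end"
    return "unknown"

print(decode_trigger(20))   # animal
print(decode_trigger(103))  # run_onset_3
```

A mapping like this can be used, for example, to build the `event_id` dictionary when epoching the Brainvision recordings with MNE-Python.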
**References for MNE BIDS conversion** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS005363` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Object recognition in healthy aging (ORHA) - EEG | | Author (year) | `Haupt2024_Object` | | Canonical | `ORHA` | | Importable as | `DS005363`, `Haupt2024_Object`, `ORHA` | | Year | 2024 | | Authors | Marleen Haupt, Douglas D. Garrett, Radoslaw M. Cichy | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005363.v1.0.0](https://doi.org/10.18112/openneuro.ds005363.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005363) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) | [Source URL](https://openneuro.org/datasets/ds005363) | ### Copy-paste BibTeX ```bibtex @dataset{ds005363, title = {Object recognition in healthy aging (ORHA) - EEG}, author = {Marleen Haupt and Douglas D. Garrett and Radoslaw M. 
Cichy}, doi = {10.18112/openneuro.ds005363.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005363.v1.0.0}, } ``` ## Technical Details - Subjects: 43 - Recordings: 43 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 43.08531583333333 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 17.7 GB - File count: 43 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005363.v1.0.0 - Source: openneuro - OpenNeuro: [ds005363](https://openneuro.org/datasets/ds005363) - NeMAR: [ds005363](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) ## API Reference Use the `DS005363` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005363(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Object recognition in healthy aging (ORHA) - EEG * **Study:** `ds005363` (OpenNeuro) * **Author (year):** `Haupt2024_Object` * **Canonical:** `ORHA` Also importable as: `DS005363`, `Haupt2024_Object`, `ORHA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005363](https://openneuro.org/datasets/ds005363) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005363](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) DOI: [https://doi.org/10.18112/openneuro.ds005363.v1.0.0](https://doi.org/10.18112/openneuro.ds005363.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005363 >>> dataset = DS005363(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005363) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005383: eeg dataset, 30 subjects *TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments* Access recordings and metadata through EEGDash. 
**Citation:** Yanru Bai, Qi Tang, Ran Zhao, Hongxing Liu, Mingkun Guo, Shuming Zhang, Minghan Guo, Junjie Wang, Changjian Wang, Mu Xing, Guangjian Ni, Dong Ming (2024). *TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments*. [10.18112/openneuro.ds005383.v1.0.0](https://doi.org/10.18112/openneuro.ds005383.v1.0.0) Modality: eeg Subjects: 30 Recordings: 240 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005383 dataset = DS005383(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005383(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005383( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005383, title = {TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments}, author = {Yanru Bai and Qi Tang and Ran Zhao and Hongxing Liu and Mingkun Guo and Shuming Zhang and Minghan Guo and Junjie Wang and Changjian Wang and Mu Xing and Guangjian Ni and Dong Ming}, doi = {10.18112/openneuro.ds005383.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005383.v1.0.0}, } ``` ## About This Dataset **TMNRED Dataset - Chinese Natural Reading EEG for Fuzzy Semantic Target Identification** **Overview** This dataset, named TMNRED, consists of electroencephalogram (EEG) recordings obtained from 30 participants engaged in natural reading tasks. 
The aim is to investigate the mechanisms of semantic processing in the Chinese language within a natural reading environment. **Data Collection** - Participants: 30 healthy, right-handed individuals (average age: 22.07 years, standard deviation: 2.7 years; 18 females, 12 males) who are native Chinese speakers. - Materials: Text ranging from 15 to 20 characters, presented as news headlines or short sentences. Materials include target semantic items and non-target semantic items. - Procedure: Participants read sentences displayed on a screen at their own pace. Each participant completed 8 blocks of 400 trials in total, with each trial lasting approximately 2.2 seconds, including a fixation cross and inter-stimulus intervals. **Data Structure** The dataset is organized according to the BIDS standard: - Main Folder: > - `dataset_description.json`: Description of the dataset. > - `participants.tsv`: Participant information. > - `participants.json`: Details of columns in `participants.tsv`. > - `README`: General information about the dataset. > - `data_all.mat`: Labeled EEG data of all subjects in MAT format. - Derivative Data: - `final_bids/`: EEG data stored in JSON, TSV, and EDF formats. - `preproc/`: Preprocessed data, including subfolders for each subject (`sub-01`, etc.), with data in various formats (BDF, SET, FDT, ERP, MAT). **Technical Validation** Sensor-level EEG analyses were performed, showing distinct responses to target and non-target words at different time points, with notable changes in potential distribution across the scalp.
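Given the BIDS layout above, a cached copy of the dataset can be inspected with the standard library alone. This is a minimal sketch, assuming the EEGDash cache convention `cache_dir / dataset_id` noted in the API reference (here `./data/ds005383`):

```python
import csv
from pathlib import Path


def read_participants(bids_root):
    """Parse participants.tsv from a BIDS dataset root into a list of dicts,
    one per participant row, keyed by the TSV column headers."""
    tsv_path = Path(bids_root) / "participants.tsv"
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))


# e.g. read_participants("./data/ds005383") -> [{'participant_id': 'sub-01', ...}, ...]
```

The column names in the returned dicts are whatever `participants.tsv` declares; `participants.json` documents their meaning.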
**Distribution** The raw and preprocessed EEG data are openly available online at [https://github.com/tym5049/TMNRED_Dataset](https://github.com/tym5049/TMNRED_Dataset) under the Creative Commons Attribution 4.0 International Public License ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)). **Usage Notes** - Researchers should cite the dataset appropriately when using it. - For any questions or issues, please refer to the `README` file or contact the corresponding authors: Yanru Bai ([yr56bai@tju.edu.cn](mailto:yr56bai@tju.edu.cn)), Guangjian Ni ([niguangjian@tju.edu.cn](mailto:niguangjian@tju.edu.cn)). **Acknowledgments** This work was mainly supported by the National Key R&D Program of China (2023YFF1203503) and the National Natural Science Foundation of China (82202290). We also thank all research assistants who provided general support in participant recruiting and data collection. ## Dataset Information | Dataset ID | `DS005383` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments | | Author (year) | `Bai2024` | | Canonical | `TMNRED` | | Importable as | `DS005383`, `Bai2024`, `TMNRED` | | Year | 2024 | | Authors | Yanru Bai, Qi Tang, Ran Zhao, Hongxing Liu, Mingkun Guo, Shuming Zhang, Minghan Guo, Junjie Wang, Changjian Wang, Mu Xing, Guangjian Ni, Dong Ming | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005383.v1.0.0](https://doi.org/10.18112/openneuro.ds005383.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005383) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) | [Source URL](https://openneuro.org/datasets/ds005383) | ### Copy-paste BibTeX ```bibtex @dataset{ds005383, title = {TMNRED, A
Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments}, author = {Yanru Bai and Qi Tang and Ran Zhao and Hongxing Liu and Mingkun Guo and Shuming Zhang and Minghan Guo and Junjie Wang and Changjian Wang and Mu Xing and Guangjian Ni and Dong Ming}, doi = {10.18112/openneuro.ds005383.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005383.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 240 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 200.0 - Duration (hours): 8.32688888888889 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 358.2 MB - File count: 240 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005383.v1.0.0 - Source: openneuro - OpenNeuro: [ds005383](https://openneuro.org/datasets/ds005383) - NeMAR: [ds005383](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) ## API Reference Use the `DS005383` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005383(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments * **Study:** `ds005383` (OpenNeuro) * **Author (year):** `Bai2024` * **Canonical:** `TMNRED` Also importable as: `DS005383`, `Bai2024`, `TMNRED`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005383](https://openneuro.org/datasets/ds005383) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005383](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) DOI: [https://doi.org/10.18112/openneuro.ds005383.v1.0.0](https://doi.org/10.18112/openneuro.ds005383.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005383 >>> dataset = DS005383(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005383) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005385: eeg dataset, 608 subjects *Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up* Access recordings and metadata through EEGDash. **Citation:** Edmund Wascher, Daniel Schneider, Patrick D. Gajewski, Stephan Getzmann (2024). *Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up*. [10.18112/openneuro.ds005385.v1.0.3](https://doi.org/10.18112/openneuro.ds005385.v1.0.3) Modality: eeg Subjects: 608 Recordings: 3264 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005385 dataset = DS005385(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005385(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005385( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005385, title = {Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up}, author = {Edmund Wascher and Daniel Schneider and Patrick D. Gajewski and Stephan Getzmann}, doi = {10.18112/openneuro.ds005385.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005385.v1.0.3}, } ``` ## About This Dataset **README** **Details related to access to the data** - Data user agreement Dataset publicly available under the Creative Commons CC0 license after a grace period of 36 months. - Contact person Edmund Wascher, IfADo, [wascher@ifado.de](mailto:wascher@ifado.de), ORCID: 0000-0003-3616-9767 Daniel Schneider, IfADo, [schneiderd@ifado.de](mailto:schneiderd@ifado.de), ORCID: 0000-0002-2867-2613 Patrick D. Gajewski, IfADo, [gajewski@ifado.de](mailto:gajewski@ifado.de), ORCID: 0000-0001-8240-1702 Stephan Getzmann, IfADo, [getzmann@ifado.de](mailto:getzmann@ifado.de), ORCID: 0000-0002-6382-0183 - Practical information to access the data The data are provided at OpenNeuro (dataset accession number: ds005385, DOI: [https://doi.org/10.18112/openneuro.ds005385.v1.0.0](https://doi.org/10.18112/openneuro.ds005385.v1.0.0)) in BIDS format. **Overview** - Project name Resting-state EEG activity before and after cognitive activity at baseline and a 5-year follow-up. - Year(s) that the project ran 2016-2024 - Brief overview of the tasks in the experiment Resting-state EEG (rs-EEG) is a non-invasive measure of the spontaneous electrical activity of the brain, measured while remaining still and relaxed, and without performing any assigned cognitive tasks. Changes in rs-EEG are associated with numerous psychiatric disorders, but also with normal aging and with factors such as fatigue and motivation.
Analyses of longitudinal rs-EEG measurements in healthy subjects over the entire adult lifespan could help to better understand the underlying brain processes, their development across the lifespan, and differences in brain activity between healthy and clinically relevant groups. The data set is part of the Dortmund Vital Study (ClinicalTrials.gov Identifier: NCT05155397), a prospective study on the determinants of healthy cognitive aging. The experiments comprised the recording of resting-state EEG data before and after a 2-hour block of cognitive experimental tasks. There are baseline measurements and approximately 5-year follow-up measurements of a subsample of healthy adult participants (for more information, see Gajewski et al., 2022, doi: 10.2196/32352). - Description of the contents of the dataset The dataset consists of 64-channel resting-state EEG recordings of 608 participants aged between 20 and 70 years, measured for three minutes with eyes open and eyes closed before and after a 2-hour block of demanding cognitive experimental tasks. Additional follow-up measurements are available from 208 subjects who also took part in the baseline measurement. The baseline measurements took place between 2016 and 2023, and the follow-up measurements at intervals of around 5 years, starting in 2021. The years of the baseline and follow-up measurements are specified in the sub-xxx_sessions.tsv file for each subject. The procedure for this (ongoing) follow-up measurement is exactly the same as for the first measurement. - Independent variables Information on age, sex, and handedness of the participants is provided. - Dependent variables Spontaneous EEG activity is measured. - Control variables n/a - Quality assessment of the data The data was checked for completeness and includes the non-preprocessed raw EEG.
An estimate of the reliability of the rs-EEG data was provided by a study in which the intra-class correlation (ICC) in absolute EEG alpha power (8-13 Hz) of all four recordings at the first measurement (session 1) was examined on selected frontal and parietal electrodes in a subgroup of 370 participants (Metzen et al., 2022, doi: 10.1007/s00429-021-02399-1). The ICC ranged between 0.92 and 0.94 in the eyes-closed condition and between 0.87 and 0.90 in the eyes-open condition, indicating good to excellent ratings of alpha power reliability. A recent analysis of the reliability of EEG microstates indicated good to excellent short-term retest reliability of microstate durations, occurrences, and coverages in a subgroup of 583 participants, as well as good overall short-, intermediate-, and long-term retest reliability of these microstate characteristics across session 1 and session 2, covering a period of more than half a year (Kleinert et al., 2024, doi: 10.1007/s10548-023-00982-9). **Methods** **Subjects** The subject pool consists of participants in the Dortmund Vital Study and includes people of working age between 20 and 70 years. 61.8% of the subjects of the baseline measurement are female, and 93.1% are right-handed. The participants reported being healthy and free of medication that might affect their attention during the experimental sessions. In general, the study population can be considered representative in terms of age distribution, genetics, cognitive performance parameters, and occupation, whereas there were differences in gender distribution and educational qualifications compared to the general population in Germany (for details, see Gajewski et al., 2022, doi: 10.2196/32352). - Information about the recruitment procedure The participants were recruited from local companies and public institutions, and through advertisements in newspapers and public media.
- Subject inclusion criteria (if relevant) - Subject exclusion criteria (if relevant) Exclusion criteria were a history of severe diseases, namely neurological diseases (such as dementia, Parkinson disease, or stroke); cardiovascular, oncological, and eye diseases; psychiatric and affective disorders; head injuries, head surgery, and head implants; use of psychotropic drugs and neuroleptics; and limited physical fitness and mobility. **Apparatus** The measurements took place in a quiet laboratory room while the subject was sitting. The resting-state EEG was recorded using a 64-channel elastic cap (actiCap system, Brain Products GmbH; Munich, Germany) arranged based on the 10-20 system with the FCz electrode as online reference, and a BrainVision Brainamp DC amplifier and BrainVision Recorder software (BrainProducts GmbH). The EEG signal was recorded at a 1000-Hz sampling rate and filtered online by a 250-Hz low-pass filter. Impedances were kept below 10 kOhm. **Initial setup** After arriving at the institute, there was an introductory meeting to clarify the procedure, explain the aim of the study, and answer open questions regarding the informed consent forms and the anonymization of the data. The EEG cap was then mounted and the tasks were explained. **Task organization** - Was task order counter-balanced? The resting-state EEG was always recorded first with the eyes closed and then with the eyes open. - What other activities were interspersed between tasks? The resting-state EEG with eyes closed and eyes open was measured before and after a 2-hour block of cognitive tasks. - In what order were the tasks and other activities performed? The cognitive block comprised five tasks on visual attention, vigilance, stimulus-response conflict processing, updating and statistical learning, and speech-in-noise perception and auditory selective attention.
The tasks were carried out one after the other with short breaks (for details, see Gajewski et al., 2022, doi: 10.2196/32352). **Task details** Recordings consist of resting-state EEG epochs measured for three minutes with eyes open and eyes closed before and after a 2-hour block of cognitive experimental tasks. During the resting-state EEG measurement, the subjects were instructed to sit quietly and relaxed. **Additional data acquired** n/a **Experimental location** The measurements took place at the Leibniz Research Centre for Working Environment and Human Factors at Dortmund University (IfADo) in Dortmund, Germany. **Missing data** - Information on handedness is missing for two subjects. **Notes** Please note that the physical max/min specifications in the header of the EDF files may contain invalid values. These should be ignored. ## Dataset Information | Dataset ID | `DS005385` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up | | Author (year) | `Wascher2024` | | Canonical | — | | Importable as | `DS005385`, `Wascher2024` | | Year | 2024 | | Authors | Edmund Wascher, Daniel Schneider, Patrick D. Gajewski, Stephan Getzmann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005385.v1.0.3](https://doi.org/10.18112/openneuro.ds005385.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005385) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) | [Source URL](https://openneuro.org/datasets/ds005385) | ### Copy-paste BibTeX ```bibtex @dataset{ds005385, title = {Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up}, author = {Edmund Wascher and Daniel Schneider and Patrick D.
Gajewski and Stephan Getzmann}, doi = {10.18112/openneuro.ds005385.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005385.v1.0.3}, } ``` ## Technical Details - Subjects: 608 - Recordings: 3264 - Tasks: 2 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 169.35916666666665 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 74.1 GB - File count: 3264 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005385.v1.0.3 - Source: openneuro - OpenNeuro: [ds005385](https://openneuro.org/datasets/ds005385) - NeMAR: [ds005385](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) ## API Reference Use the `DS005385` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005385(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up * **Study:** `ds005385` (OpenNeuro) * **Author (year):** `Wascher2024` * **Canonical:** — Also importable as: `DS005385`, `Wascher2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 608; recordings: 3264; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005385](https://openneuro.org/datasets/ds005385) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005385](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) DOI: [https://doi.org/10.18112/openneuro.ds005385.v1.0.3](https://doi.org/10.18112/openneuro.ds005385.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005385 >>> dataset = DS005385(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005385) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005397: eeg dataset, 26 subjects *Affordances of stairs* Access recordings and metadata through EEGDash. **Citation:** Christopher Hilton, Lilian Befort, Ronja Brinkmann, Matthias Ballestrem, Joerg Fingerhut, Klaus Gramann (2024). *Affordances of stairs*. 
[10.18112/openneuro.ds005397.v1.0.4](https://doi.org/10.18112/openneuro.ds005397.v1.0.4) Modality: eeg Subjects: 26 Recordings: 26 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005397 dataset = DS005397(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005397(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005397( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005397, title = {Affordances of stairs}, author = {Christopher Hilton and Lilian Befort and Ronja Brinkmann and Matthias Ballestrem and Joerg Fingerhut and Klaus Gramann}, doi = {10.18112/openneuro.ds005397.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds005397.v1.0.4}, } ``` ## About This Dataset An EEG dataset and behavioural response data for a task that required participants to view images of scenes and rate their aesthetic properties (beauty, complexity, interestingness), or rate their appropriateness for either a reading activity, or a social activity. You can also find the behavioural data already extracted from the EEG events for convenience, and the full stimuli set with identifiable file names. For detailed information about the methods and an analysis of the data please see the published article: [https://doi.org/10.1016/j.jenvp.2025.102528](https://doi.org/10.1016/j.jenvp.2025.102528) Contact: [c.hilton@tu-berlin.de](mailto:c.hilton@tu-berlin.de) in case of questions. 
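Once the behavioural ratings are extracted from the EEG events, aggregating them is straightforward. The sketch below assumes a hypothetical flat `(image_id, dimension, rating)` record structure for illustration; it is not the dataset's actual schema:

```python
from collections import defaultdict


def mean_ratings(responses):
    """Average rating per (image_id, dimension) pair.

    `responses` is an iterable of (image_id, dimension, rating) tuples,
    e.g. ("scene_012", "beauty", 4). Returns {(image_id, dimension): mean}.
    """
    sums = defaultdict(lambda: [0.0, 0])  # (image, dim) -> [sum, count]
    for image, dim, rating in responses:
        acc = sums[(image, dim)]
        acc[0] += rating
        acc[1] += 1
    return {key: total / n for key, (total, n) in sums.items()}
```

For example, two beauty ratings of 4 and 2 for the same scene yield a mean of 3.0 for that `(scene, "beauty")` key.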
## Dataset Information | Dataset ID | `DS005397` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Affordances of stairs | | Author (year) | `Hilton2024` | | Canonical | — | | Importable as | `DS005397`, `Hilton2024` | | Year | 2024 | | Authors | Christopher Hilton, Lilian Befort, Ronja Brinkmann, Matthias Ballestrem, Joerg Fingerhut, Klaus Gramann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005397.v1.0.4](https://doi.org/10.18112/openneuro.ds005397.v1.0.4) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005397) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) | [Source URL](https://openneuro.org/datasets/ds005397) | ### Copy-paste BibTeX ```bibtex @dataset{ds005397, title = {Affordances of stairs}, author = {Christopher Hilton and Lilian Befort and Ronja Brinkmann and Matthias Ballestrem and Joerg Fingerhut and Klaus Gramann}, doi = {10.18112/openneuro.ds005397.v1.0.4}, url = {https://doi.org/10.18112/openneuro.ds005397.v1.0.4}, } ``` ## Technical Details - Subjects: 26 - Recordings: 26 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 27.923140555555555 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 12.0 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005397.v1.0.4 - Source: openneuro - OpenNeuro: [ds005397](https://openneuro.org/datasets/ds005397) - NeMAR: [ds005397](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) ## API Reference Use the `DS005397` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005397(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Affordances of stairs * **Study:** `ds005397` (OpenNeuro) * **Author (year):** `Hilton2024` * **Canonical:** — Also importable as: `DS005397`, `Hilton2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
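The AND semantics described in these notes (a user query merged with the fixed dataset filter, with the `dataset` key reserved) can be sketched in plain Python. This is an illustrative sketch of the documented behavior, not EEGDash's internal implementation; the helper name `merge_query` is ours:

```python
def merge_query(dataset_id, user_query):
    """Combine a user-supplied MongoDB-style filter with the fixed
    dataset filter, mirroring the AND semantics described above."""
    if user_query and "dataset" in user_query:
        # The dataset key is reserved for the dataset class itself.
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

# A subject filter is simply AND-ed with the dataset selection:
merge_query("ds005397", {"subject": {"$in": ["01", "02"]}})
# -> {"dataset": "ds005397", "subject": {"$in": ["01", "02"]}}
```

Because the merge is a plain AND, any field in `ALLOWED_QUERY_FIELDS` can be narrowed further, but the dataset selection itself cannot be overridden.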
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005397](https://openneuro.org/datasets/ds005397) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005397](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) DOI: [https://doi.org/10.18112/openneuro.ds005397.v1.0.4](https://doi.org/10.18112/openneuro.ds005397.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005397 >>> dataset = DS005397(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005397) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005398: ieeg dataset, 185 subjects *Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA)* Access recordings and metadata through EEGDash. **Citation:** Yipeng Zhang, Atsuro Daida, Lawrence Liu, Naoto Kuroda, Yuanyi Ding, Shingo Oana, Tonmoy Monsoor, Chenda Duan, Shaun A. Hussain, Joe X Qiao, Noriko Salamon, Aria Fallah, Myung Shin Sim, Raman Sankar, Richard J. Staba, Jerome Engel Jr., Eishi Asano, Vwani Roychowdhury, Hiroki Nariai (2024). *Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA)*. 
[10.18112/openneuro.ds005398.v1.1.1](https://doi.org/10.18112/openneuro.ds005398.v1.1.1) Modality: ieeg Subjects: 185 Recordings: 185 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005398 dataset = DS005398(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005398(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005398( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005398, title = {Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA)}, author = {Yipeng Zhang and Atsuro Daida and Lawrence Liu and Naoto Kuroda and Yuanyi Ding and Shingo Oana and Tonmoy Monsoor and Chenda Duan and Shaun A. Hussain and Joe X Qiao and Noriko Salamon and Aria Fallah and Myung Shin Sim and Raman Sankar and Richard J. Staba and Jerome Engel Jr. and Eishi Asano and Vwani Roychowdhury and Hiroki Nariai}, doi = {10.18112/openneuro.ds005398.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds005398.v1.1.1}, } ``` ## About This Dataset This dataset was used in the manuscript by Zhang et al. [1]. A subset of the data has been employed in [2], [3], and [4]. Summary: This dataset comprises de-identified subjects with interictal iEEG recordings during sleep from the University of California, Los Angeles (UCLA) Mattel Children’s Hospital and the Children’s Hospital of Michigan, Detroit. Subject-wise information is contained in each folder, including iEEGs collected from 185 subjects during sleep. 
The channel names and variables, such as the anatomical label and the resection status, are attached to each folder. The outcome and background information of all the subjects are summarized in ‘participants.tsv’, located in the parent directory. Derivatives: The processed data for HFO detection and classification are provided in the derivatives/ folder. The HFO analysis contains detections from two methods: the RMS and MNI detectors. References: [1] Zhang Y, Daida A, Liu L, Kuroda N, Ding Y, Oana S, Kanai S, Monsoor T, Duan C, Hussain SA, Qiao JX, Salamon N, Fallah A, Sim MS, Sankar R, Staba RJ, Engel J Jr, Asano E, Roychowdhury V, Nariai H. Self-supervised data-driven approach defines pathological high-frequency oscillations in epilepsy. Epilepsia. 2025 Nov;66(11):4434-4450. doi: 10.1111/epi.18545. [2] Monsoor T, Kanai S, Daida A, Kuroda N, Sinha P, Oana S, Zhang Y, Liu L, Singh G, Duan C, Sim MS, Fallah A, Speier W, Asano E, Roychowdhury V, Nariai H. Mini-Seizures: Novel Interictal iEEG Biomarker Capturing Synchronization Network Dynamics at the Epileptogenic Zone. medRxiv. 2025 Feb 2:2025.01.31.25321482. doi: 10.1101/2025.01.31.25321482. [3] Zhang Y, Lu Q, Monsoor T, Hussain SA, Qiao JX, Salamon N, Fallah A, Sim MS, Asano E, Sankar R, Staba RJ, Engel J Jr, Speier W, Roychowdhury V, Nariai H. Refining epileptogenic high-frequency oscillations using deep learning: a reverse engineering approach. Brain Commun. 2021 Nov 3;4(1):fcab267. doi: 10.1093/braincomms/fcab267. [4] Kuroda N, Sonoda M, Miyakoshi M, Nariai H, Jeong JW, Motoi H, Luat AF, Sood S, Asano E. Objective interictal electrophysiology biomarkers optimize prediction of epilepsy surgery outcome. Brain Commun. 2021 Mar 14;3(2):fcab042. doi: 10.1093/braincomms/fcab042. 
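The RMS detector mentioned above follows a well-known recipe: band-pass the iEEG into the HFO range, compute a sliding RMS envelope, and flag segments whose envelope exceeds a threshold. The sketch below is an illustrative simplification of that idea, not the authors' derivative pipeline; the band edges, window length, and threshold multiplier are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rms_hfo_candidates(x, sfreq, band=(80.0, 500.0), win_s=0.01, n_sd=5.0):
    """Flag samples whose sliding-window RMS in the HFO band exceeds
    mean + n_sd * SD of the RMS envelope (illustrative thresholds)."""
    sos = butter(4, band, btype="bandpass", fs=sfreq, output="sos")
    filtered = sosfiltfilt(sos, x)  # zero-phase band-pass
    win = max(1, int(win_s * sfreq))
    rms = np.sqrt(np.convolve(filtered**2, np.ones(win) / win, mode="same"))
    return rms > rms.mean() + n_sd * rms.std()

# Synthetic example: slow background plus a short 250 Hz burst.
sfreq = 2000.0
t = np.arange(0, 2.0, 1.0 / sfreq)
x = 0.1 * np.sin(2 * np.pi * 10 * t)
x[2000:2100] += np.sin(2 * np.pi * 250 * t[2000:2100])
mask = rms_hfo_candidates(x, sfreq)  # True inside the burst
```

A real detector would additionally enforce a minimum event duration and a minimum number of oscillation cycles before accepting a candidate.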
## Dataset Information | Dataset ID | `DS005398` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA) | | Author (year) | `Zhang2024_Open_Pediatric_Wayne` | | Canonical | — | | Importable as | `DS005398`, `Zhang2024_Open_Pediatric_Wayne` | | Year | 2024 | | Authors | Yipeng Zhang, Atsuro Daida, Lawrence Liu, Naoto Kuroda, Yuanyi Ding, Shingo Oana, Tonmoy Monsoor, Chenda Duan, Shaun A. Hussain, Joe X Qiao, Noriko Salamon, Aria Fallah, Myung Shin Sim, Raman Sankar, Richard J. Staba, Jerome Engel Jr., Eishi Asano, Vwani Roychowdhury, Hiroki Nariai | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005398.v1.1.1](https://doi.org/10.18112/openneuro.ds005398.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005398) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) | [Source URL](https://openneuro.org/datasets/ds005398) | ### Copy-paste BibTeX ```bibtex @dataset{ds005398, title = {Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA)}, author = {Yipeng Zhang and Atsuro Daida and Lawrence Liu and Naoto Kuroda and Yuanyi Ding and Shingo Oana and Tonmoy Monsoor and Chenda Duan and Shaun A. Hussain and Joe X Qiao and Noriko Salamon and Aria Fallah and Myung Shin Sim and Raman Sankar and Richard J. Staba and Jerome Engel Jr. 
and Eishi Asano and Vwani Roychowdhury and Hiroki Nariai}, doi = {10.18112/openneuro.ds005398.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds005398.v1.1.1}, } ``` ## Technical Details - Subjects: 185 - Recordings: 185 - Tasks: 1 - Channels: 128 (30), 112 (20), 108 (8), 104 (8), 118 (6), 102 (5), 124 (5), 106 (5), 132 (4), 120 (4), 64 (4), 138 (4), 100 (4), 122 (3), 114 (3), 130 (3), 110 (3), 116 (3), 74 (2), 58 (2), 98 (2), 86 (2), 94 (2), 73 (2), 140 (2), 79 (2), 96 (2), 70 (2), 107 (2), 77 (2), 150 (2), 76 (2), 126 (2), 144 (2), 62, 44, 149, 45, 40, 80, 60, 136, 32, 99, 63, 101, 93, 33, 133, 69, 127, 56, 92, 81, 84, 109, 34, 156, 68, 67, 95, 83, 72, 111, 164 - Sampling rate (Hz): 1000.0 (135), 2000.0 (49), 200.0 - Duration (hours): 90.98912097222222 - Pathology: Epilepsy - Modality: Sleep - Type: Clinical/Intervention - Size on disk: 102.2 GB - File count: 185 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005398.v1.1.1 - Source: openneuro - OpenNeuro: [ds005398](https://openneuro.org/datasets/ds005398) - NeMAR: [ds005398](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) ## API Reference Use the `DS005398` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA) * **Study:** `ds005398` (OpenNeuro) * **Author (year):** `Zhang2024_Open_Pediatric_Wayne` * **Canonical:** — Also importable as: `DS005398`, `Zhang2024_Open_Pediatric_Wayne`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 185; recordings: 185; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005398](https://openneuro.org/datasets/ds005398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005398](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) DOI: [https://doi.org/10.18112/openneuro.ds005398.v1.1.1](https://doi.org/10.18112/openneuro.ds005398.v1.1.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005398 >>> dataset = DS005398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005398) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005403: eeg dataset, 32 subjects *Delayed Auditory Feedback EEG/EGG* Access recordings and metadata through EEGDash. **Citation:** Veillette, J., Rosen, J., Margoliash, D., Nusbaum, H. (2024). *Delayed Auditory Feedback EEG/EGG*. [10.18112/openneuro.ds005403.v1.0.1](https://doi.org/10.18112/openneuro.ds005403.v1.0.1) Modality: eeg Subjects: 32 Recordings: 32 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005403 dataset = DS005403(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005403(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005403( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005403, title = {Delayed Auditory Feedback EEG/EGG}, author = {Veillette, J. and Rosen, J. and Margoliash, D. 
and Nusbaum, H.}, doi = {10.18112/openneuro.ds005403.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005403.v1.0.1}, } ``` ## About This Dataset **Notes** Electroglottography (EGG) and audio are included in the EEG files themselves, rather than in sidecar files, as they were converted from analog to digital on the same hardware. The audio is the audio the subject heard, i.e., their delayed auditory feedback. If you want the speech waveform aligned to the time the subject produced it, you can shift the audio back by the timestamps recorded (for each trial) in the delay field of the events sidecar file. EGG has already been minimally preprocessed to correct for phase delays induced by the built-in hardware filter of the EGG amplifier by applying an equivalent software filter in the opposite temporal direction. (This is the same strategy employed by “zero phase shift” filters in MATLAB and scipy.) Data were organized according to the BIDS standard for EEG data using the MNE-BIDS software (Appelhoff et al., 2019; Pernet et al., 2019). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4: 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS005403` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Delayed Auditory Feedback EEG/EGG | | Author (year) | `Veillette2024` | | Canonical | `Veillette2019` | | Importable as | `DS005403`, `Veillette2024`, `Veillette2019` | | Year | 2024 | | Authors | Veillette, J., Rosen, J., Margoliash, D., Nusbaum, H. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005403.v1.0.1](https://doi.org/10.18112/openneuro.ds005403.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005403) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) | [Source URL](https://openneuro.org/datasets/ds005403) | ### Copy-paste BibTeX ```bibtex @dataset{ds005403, title = {Delayed Auditory Feedback EEG/EGG}, author = {Veillette, J. and Rosen, J. and Margoliash, D. and Nusbaum, H.}, doi = {10.18112/openneuro.ds005403.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005403.v1.0.1}, } ``` ## Technical Details - Subjects: 32 - Recordings: 32 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 10000.0 - Duration (hours): 13.38265186111111 - Pathology: Healthy - Modality: Auditory - Type: Motor - Size on disk: 118.5 GB - File count: 32 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005403.v1.0.1 - Source: openneuro - OpenNeuro: [ds005403](https://openneuro.org/datasets/ds005403) - NeMAR: [ds005403](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) ## API Reference Use the `DS005403` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005403(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Auditory Feedback EEG/EGG * **Study:** `ds005403` (OpenNeuro) * **Author (year):** `Veillette2024` * **Canonical:** `Veillette2019` Also importable as: `DS005403`, `Veillette2024`, `Veillette2019`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005403](https://openneuro.org/datasets/ds005403) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005403](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) DOI: [https://doi.org/10.18112/openneuro.ds005403.v1.0.1](https://doi.org/10.18112/openneuro.ds005403.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005403 >>> dataset = DS005403(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005403) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005406: eeg dataset, 29 subjects *EEG frequency tagging reveals the integration of dissimilar observed actions* Access recordings and metadata through EEGDash. **Citation:** Silvia Formica, Anna Chaiken, Jan R. Wiersema, Emiel Cracco (2024). *EEG frequency tagging reveals the integration of dissimilar observed actions*. 
[10.18112/openneuro.ds005406.v1.0.0](https://doi.org/10.18112/openneuro.ds005406.v1.0.0) Modality: eeg Subjects: 29 Recordings: 29 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005406 dataset = DS005406(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005406(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005406( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005406, title = {EEG frequency tagging reveals the integration of dissimilar observed actions}, author = {Silvia Formica and Anna Chaiken and Jan R. Wiersema and Emiel Cracco}, doi = {10.18112/openneuro.ds005406.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005406.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4: 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS005406` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG frequency tagging reveals the integration of dissimilar observed actions | | Author (year) | `Formica2024` | | Canonical | `Formica2025` | | Importable as | `DS005406`, `Formica2024`, `Formica2025` | | Year | 2024 | | Authors | Silvia Formica, Anna Chaiken, Jan R. Wiersema, Emiel Cracco | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005406.v1.0.0](https://doi.org/10.18112/openneuro.ds005406.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005406) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) | [Source URL](https://openneuro.org/datasets/ds005406) | ### Copy-paste BibTeX ```bibtex @dataset{ds005406, title = {EEG frequency tagging reveals the integration of dissimilar observed actions}, author = {Silvia Formica and Anna Chaiken and Jan R. Wiersema and Emiel Cracco}, doi = {10.18112/openneuro.ds005406.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005406.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 29 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 15.4524875 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 13.3 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005406.v1.0.0 - Source: openneuro - OpenNeuro: [ds005406](https://openneuro.org/datasets/ds005406) - NeMAR: [ds005406](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) ## API Reference Use the `DS005406` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG frequency tagging reveals the integration of dissimilar observed actions * **Study:** `ds005406` (OpenNeuro) * **Author (year):** `Formica2024` * **Canonical:** `Formica2025` Also importable as: `DS005406`, `Formica2024`, `Formica2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005406](https://openneuro.org/datasets/ds005406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005406](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) DOI: [https://doi.org/10.18112/openneuro.ds005406.v1.0.0](https://doi.org/10.18112/openneuro.ds005406.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005406 >>> dataset = DS005406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005406) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005407: eeg dataset, 25 subjects *The effect of speech masking on the subcortical response to speech* Access recordings and metadata through EEGDash. **Citation:** Melissa J. Polonenko, Ross K. Maddox (2024). *The effect of speech masking on the subcortical response to speech*. 
[10.18112/openneuro.ds005407.v1.0.1](https://doi.org/10.18112/openneuro.ds005407.v1.0.1) Modality: eeg Subjects: 25 Recordings: 29 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005407 dataset = DS005407(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005407(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005407( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005407, title = {The effect of speech masking on the subcortical response to speech}, author = {Melissa J. Polonenko and Ross K. Maddox}, doi = {10.18112/openneuro.ds005407.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005407.v1.0.1}, } ``` ## About This Dataset **README** **Details related to access to the data** Please contact the following authors for further information: Melissa Polonenko (email: [mpolonen@umn.edu](mailto:mpolonen@umn.edu)) or Ross Maddox (email: [rkmaddox@med.umich.edu](mailto:rkmaddox@med.umich.edu)). **Overview** This is the “peaky_snr” dataset for the paper Polonenko MJ & Maddox RK (2024), cited below. eNeuro: Polonenko, M. J., & Maddox, R. K. (2025). 
eNeuro 24 March 2025, 12 (4) ENEURO.0561-24.2025; [https://doi.org/10.1523/ENEURO.0561-24.2025](https://doi.org/10.1523/ENEURO.0561-24.2025) bioRxiv: The effect of speech masking on the subcortical response to speech. [https://www.biorxiv.org/content/10.1101/2024.12.10.627771v1](https://www.biorxiv.org/content/10.1101/2024.12.10.627771v1) Auditory brainstem responses (ABRs) were derived to continuous peaky speech from one to five simultaneously presented talkers and from clicks. Data were collected from June to July 2021. Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech in human listeners. To do this we leveraged our recently developed method for determining the auditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021). Whereas our previous work was aimed at encoding of single talkers, here we determined the ABR to speech in quiet as well as in the presence of varying numbers of other talkers. The details of the experiment can be found in Polonenko & Maddox (2024). Stimuli: 1) randomized click trains at an average rate of 40 Hz, 60 x 10 s trials for a total of 10 minutes; 2) peaky speech for up to 5 male narrators: 30 minutes at each SNR (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers presented simultaneously, each set to 65 dB. NOTE: files for each story were completely randomized. Random combinations were created so that each story was equally represented in the data. The code for stimulus preprocessing and EEG analysis is available on GitHub: [https://github.com/polonenkolab/peaky_snr](https://github.com/polonenkolab/peaky_snr) **Format** The dataset is formatted according to the EEG Brain Imaging Data Structure. It includes EEG recordings from participants 01 to 25 in raw BrainVision format (3 files: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. 
The stimulus files contain the audio (‘audio’) and regressors for the deconvolution (‘pinds’ are the pulse indices; ‘anm’ is an auditory nerve model regressor, which was used during analyses but was not included as part of the article). Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format. **Participants** 25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years). Inclusion criteria: 1. Age between 18-40 years; 2. Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz; 3. Speak English as their primary language. Please see participants.tsv for more information. **Apparatus** Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom python scripts using expyfun were used to control the experiment and stimulus presentation. **Details about the experiment** For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied `task-peaky_snr_eeg.json` file. The 4 SNR speech conditions and the story tokens were randomized. This means that the participant would not be able to follow the stories. For clicks the trials were not randomized (already random clicks). Trigger onset times in the tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files). Triggers with values of “1” were recorded at the onset of the 10 s audio, and shortly after, triggers with values of “4” or “8” were stamped to indicate info about the trial. This was done by converting the decimal trial number to bits, denoted b, then calculating `2 ** (b + 2)`. 
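Read literally, the scheme above stamps one trigger per set bit of the trial number: bit b becomes the trigger value 2 ** (b + 2), so values 4 and 8 encode bits 0 and 1. A small sketch of encoding and decoding under that reading — the function names are ours, not part of the dataset:

```python
def encode_trial(trial_number: int) -> list[int]:
    """One trigger value 2 ** (b + 2) per set bit b of the trial number."""
    return [2 ** (b + 2) for b in range(trial_number.bit_length())
            if (trial_number >> b) & 1]

def decode_trial(triggers: list[int]) -> int:
    """Invert the encoding: 2 ** (b + 2) >> 2 == 2 ** b."""
    return sum(v >> 2 for v in triggers)

encode_trial(5)        # bits 0 and 2 set -> [4, 16]
decode_trial([4, 16])  # -> 5
```

In practice you would read the trigger values from the events file and sum the decoded bits per trial; the events `.tsv` files described below already spell the trial metadata out, so decoding by hand is only needed when working from the raw trigger channel.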
We’ve specified these trial triggers and further event metadata in each of the `*_eeg_events.tsv` files, which is sufficient to determine which trial corresponded to which stimulus type (clicks or speech), which SNR, and which files of which stories were presented, e.g., alice_000_peaky_diotic_regress.hdf5 for the first file of the story called ‘alice’ (Alice in Wonderland). ## Dataset Information | Dataset ID | `DS005407` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The effect of speech masking on the subcortical response to speech | | Author (year) | `Polonenko2024_effect` | | Canonical | — | | Importable as | `DS005407`, `Polonenko2024_effect` | | Year | 2024 | | Authors | Melissa J. Polonenko, Ross K. Maddox | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005407.v1.0.1](https://doi.org/10.18112/openneuro.ds005407.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005407) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) | [Source URL](https://openneuro.org/datasets/ds005407) | ### Copy-paste BibTeX ```bibtex @dataset{ds005407, title = {The effect of speech masking on the subcortical response to speech}, author = {Melissa J. Polonenko and Ross K. 
Maddox}, doi = {10.18112/openneuro.ds005407.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005407.v1.0.1}, } ``` ## Technical Details - Subjects: 25 - Recordings: 29 - Tasks: 1 - Channels: 2 - Sampling rate (Hz): 10000.0 - Duration (hours): 56.87177694444444 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 37.8 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005407.v1.0.1 - Source: openneuro - OpenNeuro: [ds005407](https://openneuro.org/datasets/ds005407) - NeMAR: [ds005407](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) ## API Reference Use the `DS005407` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005407(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005407` (OpenNeuro) * **Author (year):** `Polonenko2024_effect` * **Canonical:** — Also importable as: `DS005407`, `Polonenko2024_effect`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005407](https://openneuro.org/datasets/ds005407) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005407](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) DOI: [https://doi.org/10.18112/openneuro.ds005407.v1.0.1](https://doi.org/10.18112/openneuro.ds005407.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005407 >>> dataset = DS005407(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005407) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005408: eeg dataset, 25 subjects *The effect of speech masking on the subcortical response to speech* Access recordings and metadata through EEGDash. **Citation:** Melissa J. Polonenko, Ross K. Maddox (2024). *The effect of speech masking on the subcortical response to speech*. 
[10.18112/openneuro.ds005408.v1.0.0](https://doi.org/10.18112/openneuro.ds005408.v1.0.0) Modality: eeg Subjects: 25 Recordings: 29 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005408 dataset = DS005408(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005408(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005408( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005408, title = {The effect of speech masking on the subcortical response to speech}, author = {Melissa J. Polonenko and Ross K. Maddox}, doi = {10.18112/openneuro.ds005408.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005408.v1.0.0}, } ``` ## About This Dataset **README** **Details related to access to the data** Please contact the following authors for further information: Melissa Polonenko (email: [mpolonen@umn.edu](mailto:mpolonen@umn.edu)) and Ross Maddox (email: [rkmaddox@med.umich.edu](mailto:rkmaddox@med.umich.edu)) **Overview** This is the “peaky_snr” dataset for the paper Polonenko MJ & Maddox RK (2024), with citation listed below. 
BioRxiv: The effect of speech masking on the subcortical response to speech Auditory brainstem responses (ABRs) were derived to continuous peaky speech from one to five simultaneously presented talkers and from clicks. Data were collected from June to July 2021. Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech in human listeners. To do this we leveraged our recently developed method for determining the auditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021). Whereas our previous work was aimed at the encoding of single talkers, here we determined the ABR to speech in quiet as well as in the presence of varying numbers of other talkers. The details of the experiment can be found in Polonenko & Maddox (2024). Stimuli: > 1) randomized click trains at an average rate of 40 Hz, > 60 x 10 s trials for a total of 10 minutes; > 2) peaky speech from up to 5 male narrators. 30 minutes of each SNR > (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers > presented simultaneously, each set to 65 dB. > NOTE: files for each story were completely randomized. Random combinations > were created so that each story was equally represented in the data. The code for stimulus preprocessing and EEG analysis is available on GitHub: [https://github.com/polonenkolab/peaky_snr](https://github.com/polonenkolab/peaky_snr) **Format** The dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from participants 01 to 25 in raw BrainVision format (3 files: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. The stimuli files contain the audio (‘audio’) and regressors for the deconvolution (‘pinds’ are the pulse indices; ‘anm’ is an auditory nerve model regressor, which was used during analyses but was not included as part of the article). Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. 
Raw EEG files are provided in the Brain Products format. **Participants** 25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years) Inclusion criteria: > 1. Age between 18-40 years > 2. Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz > 3. Speak English as their primary language Please see participants.tsv for more information. **Apparatus** Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom Python scripts using expyfun were used to control the experiment and stimulus presentation. **Details about the experiment** For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied `task-peaky_snr_eeg.json` file. The 4 SNR speech conditions and the story tokens were randomized, which means that participants would not have been able to follow the stories. For clicks the trials were not randomized (the clicks were already random). Trigger onset times in the tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files). Triggers with values of “1” were recorded at the onset of the 10 s audio, and shortly after, triggers with values of “4” or “8” were stamped to indicate info about the trial. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 \*\* (b + 2). We’ve specified these trial triggers and further event metadata in each of the `*_eeg_events.tsv` files, which is sufficient to determine which trial corresponded to which stimulus type (clicks or speech), which SNR, and which files of which stories were presented, e.g., alice_000_peaky_diotic_regress.hdf5 for the first file of the story called ‘alice’ (Alice in Wonderland). 
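A quick sanity check on the SNR conditions listed in the README above: with every talker set to the same level, one target among n simultaneous talkers faces (n − 1) equal-power maskers, so SNR = 10·log10(1 / (n − 1)) dB, which reproduces the clean/0/−3/−6 dB conditions for 1, 2, 3, and 5 talkers:

```python
import math

def masked_snr_db(n_talkers: int) -> float:
    """SNR (dB) of one target talker among (n_talkers - 1) equal-level maskers."""
    return 10 * math.log10(1 / (n_talkers - 1))

for n in (2, 3, 5):
    print(n, "talkers ->", round(masked_snr_db(n), 1), "dB")
# 2 talkers -> 0.0 dB, 3 talkers -> -3.0 dB, 5 talkers -> -6.0 dB
```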
## Dataset Information | Dataset ID | `DS005408` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The effect of speech masking on the subcortical response to speech | | Author (year) | `Polonenko2024_effect_speech` | | Canonical | — | | Importable as | `DS005408`, `Polonenko2024_effect_speech` | | Year | 2024 | | Authors | Melissa J. Polonenko, Ross K. Maddox | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005408.v1.0.0](https://doi.org/10.18112/openneuro.ds005408.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005408) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) | [Source URL](https://openneuro.org/datasets/ds005408) | ### Copy-paste BibTeX ```bibtex @dataset{ds005408, title = {The effect of speech masking on the subcortical response to speech}, author = {Melissa J. Polonenko and Ross K. Maddox}, doi = {10.18112/openneuro.ds005408.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005408.v1.0.0}, } ``` ## Technical Details - Subjects: 25 - Recordings: 29 - Tasks: 1 - Channels: 2 - Sampling rate (Hz): 10000.0 - Duration (hours): 56.87177694444444 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 15.3 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005408.v1.0.0 - Source: openneuro - OpenNeuro: [ds005408](https://openneuro.org/datasets/ds005408) - NeMAR: [ds005408](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) ## API Reference Use the `DS005408` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005408` (OpenNeuro) * **Author (year):** `Polonenko2024_effect_speech` * **Canonical:** — Also importable as: `DS005408`, `Polonenko2024_effect_speech`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005408](https://openneuro.org/datasets/ds005408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005408](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) DOI: [https://doi.org/10.18112/openneuro.ds005408.v1.0.0](https://doi.org/10.18112/openneuro.ds005408.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005408 >>> dataset = DS005408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005408) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005410: eeg dataset, 81 subjects *Semantic_conditioning* Access recordings and metadata through EEGDash. **Citation:** Yuri G. Pavlov (2024). *Semantic_conditioning*. 
[10.18112/openneuro.ds005410.v1.0.1](https://doi.org/10.18112/openneuro.ds005410.v1.0.1) Modality: eeg Subjects: 81 Recordings: 81 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005410 dataset = DS005410(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005410(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005410( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005410, title = {Semantic_conditioning}, author = {Yuri G. Pavlov}, doi = {10.18112/openneuro.ds005410.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005410.v1.0.1}, } ``` ## About This Dataset Semantic conditioning task The dataset was used in this article: Pavlov YG, Menger NS, Keil A, Kotchoubey B. 2024. Contingency awareness shapes neural responses in fear conditioning. [https://doi.org/10.1101/2024.08.13.607803](https://doi.org/10.1101/2024.08.13.607803) ## Dataset Information | Dataset ID | `DS005410` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Semantic_conditioning | | Author (year) | `Pavlov2024_Semantic_conditioning` | | Canonical | — | | Importable as | `DS005410`, `Pavlov2024_Semantic_conditioning` | | Year | 2024 | | Authors | Yuri G. 
Pavlov | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005410.v1.0.1](https://doi.org/10.18112/openneuro.ds005410.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005410) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) | [Source URL](https://openneuro.org/datasets/ds005410) | ### Copy-paste BibTeX ```bibtex @dataset{ds005410, title = {Semantic_conditioning}, author = {Yuri G. Pavlov}, doi = {10.18112/openneuro.ds005410.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005410.v1.0.1}, } ``` ## Technical Details - Subjects: 81 - Recordings: 81 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 22.97631888888889 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 19.8 GB - File count: 81 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005410.v1.0.1 - Source: openneuro - OpenNeuro: [ds005410](https://openneuro.org/datasets/ds005410) - NeMAR: [ds005410](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) ## API Reference Use the `DS005410` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005410(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Semantic_conditioning * **Study:** `ds005410` (OpenNeuro) * **Author (year):** `Pavlov2024_Semantic_conditioning` * **Canonical:** — Also importable as: `DS005410`, `Pavlov2024_Semantic_conditioning`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 81; recordings: 81; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005410](https://openneuro.org/datasets/ds005410) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005410](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) DOI: [https://doi.org/10.18112/openneuro.ds005410.v1.0.1](https://doi.org/10.18112/openneuro.ds005410.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005410 >>> dataset = DS005410(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005410) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005411: ieeg dataset, 47 subjects *Free Recall of Word Lists with Repeated Items* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Free Recall of Word Lists with Repeated Items*. [10.18112/openneuro.ds005411.v1.0.0](https://doi.org/10.18112/openneuro.ds005411.v1.0.0) Modality: ieeg Subjects: 47 Recordings: 193 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005411 dataset = DS005411(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005411(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005411( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005411, title = {Free Recall of Word Lists with Repeated Items}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005411.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005411.v1.0.0}, } ``` ## About This Dataset **Free Recall of Word Lists with Repeated Items** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a repeated-item free recall task. The experiment consists of participants studying a list of words, presented visually one at a time, and then freely recalling the words from the just-presented list in any order. On each list, there is a 7-second delay period between the encoding and recall phases. The data were collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. The main manipulation in this paradigm is the repetition of items in the studied list. In total, each list contains 27 encoding events, but only 12 unique words: 3 are presented one time, 3 are presented two times, and 6 are presented three times. **To Note** \* The duration of the encoding events (i.e., length of word presentation) varies among sessions. For some sessions, the words remained on screen for 750 ms, while in other sessions presentation lasted for 1600 ms. The `duration` column of the events tsv files contains this information. \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically to a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. \* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV. We have completed the scaling to provide values in V. 
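The note above says the monopolar recordings should be re-referenced before analysis, for example following the paired scheme in the bipolar channels tables. A minimal plain-Python sketch of that pairing (the channel indices and pair list here are illustrative only, not taken from any subject's montage; in practice read the pairs from the accompanying bipolar channels table):

```python
def bipolar_reference(data, pairs):
    """Re-reference monopolar signals by subtracting paired channels.

    data  : list of per-channel sample lists (monopolar recordings)
    pairs : list of (anode_idx, cathode_idx) tuples, as given by the
            dataset's bipolar channels table (illustrative indices below)
    """
    return [
        [a - c for a, c in zip(data[i], data[j])]
        for i, j in pairs
    ]

monopolar = [
    [1.0, 2.0, 3.0],  # channel 0
    [0.5, 1.0, 1.5],  # channel 1
    [0.0, 0.5, 1.0],  # channel 2
]
bipolar = bipolar_reference(monopolar, pairs=[(0, 1), (1, 2)])
print(bipolar)  # [[0.5, 1.0, 1.5], [0.5, 0.5, 0.5]]
```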
**Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS005411` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Free Recall of Word Lists with Repeated Items | | Author (year) | `Herrema2024_Free` | | Canonical | — | | Importable as | `DS005411`, `Herrema2024_Free` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005411.v1.0.0](https://doi.org/10.18112/openneuro.ds005411.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005411) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) | [Source URL](https://openneuro.org/datasets/ds005411) | ### Copy-paste BibTeX ```bibtex @dataset{ds005411, title = {Free Recall of Word Lists with Repeated Items}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005411.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005411.v1.0.0}, } ``` ## Technical Details - Subjects: 47 - Recordings: 193 - Tasks: 1 - Channels: 120 (12), 118 (10), 109 (9), 153 (7), 168 (7), 122 (6), 116 (6), 126 (6), 110 (6), 182 (5), 106 (5), 192 (4), 115 (4), 169 (4), 211 (4), 108 (4), 200 (4), 167 (4), 134 (4), 155 (4), 152 (4), 133 (3), 132 (3), 105 (3), 127 (3), 141 (3), 121 (3), 184 (2), 166 (2), 44 (2), 55 (2), 186 (2), 213 (2), 187 (2), 99 (2), 173 (2), 181 (2), 96 (2), 160 (2), 94 (2), 195 (2), 140 (2), 84 (2), 154, 98, 230, 107, 159, 111, 138, 236, 123, 158, 210, 202, 128, 101, 232, 176, 165, 218, 239, 104, 142, 119, 215, 129 - Sampling rate (Hz): 1000.0 (150), 2048.0 (21), 2000.0 (10), 512.0 (8), 1024.0 (4) - Duration (hours): 140.0236284722222 - Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 157.4 GB - File count: 193 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005411.v1.0.0 - Source: openneuro - OpenNeuro: [ds005411](https://openneuro.org/datasets/ds005411) - NeMAR: [ds005411](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) ## API Reference Use the `DS005411` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005411(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall of Word Lists with Repeated Items * **Study:** `ds005411` (OpenNeuro) * **Author (year):** `Herrema2024_Free` * **Canonical:** — Also importable as: `DS005411`, `Herrema2024_Free`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 47; recordings: 193; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005411](https://openneuro.org/datasets/ds005411) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005411](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) DOI: [https://doi.org/10.18112/openneuro.ds005411.v1.0.0](https://doi.org/10.18112/openneuro.ds005411.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005411 >>> dataset = DS005411(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005411) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005415: ieeg dataset, 13 subjects *Numbers* Access recordings and metadata through EEGDash. **Citation:** Alexander P. Rockhill, Ahmed M. Raslan (2024). *Numbers*. [10.18112/openneuro.ds005415.v1.0.0](https://doi.org/10.18112/openneuro.ds005415.v1.0.0) Modality: ieeg Subjects: 13 Recordings: 13 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005415 dataset = DS005415(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005415(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005415( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005415, title = {Numbers}, author = {Alexander P. Rockhill and Ahmed M. Raslan}, doi = {10.18112/openneuro.ds005415.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005415.v1.0.0}, } ``` ## About This Dataset Welcome to the numbers dataset. 
These data are stereoelectroencephalography recordings from epilepsy patients at Oregon Health & Science University, collected while they were in the epilepsy monitoring unit awaiting seizures. They were shown auditory and visual numbers that were symbolic (Arabic numerals + spoken) or non-symbolic (dots + beeps). ## Dataset Information | Dataset ID | `DS005415` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Numbers | | Author (year) | `Rockhill2024` | | Canonical | — | | Importable as | `DS005415`, `Rockhill2024` | | Year | 2024 | | Authors | Alexander P. Rockhill, Ahmed M. Raslan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005415.v1.0.0](https://doi.org/10.18112/openneuro.ds005415.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005415) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) | [Source URL](https://openneuro.org/datasets/ds005415) | ### Copy-paste BibTeX ```bibtex @dataset{ds005415, title = {Numbers}, author = {Alexander P. Rockhill and Ahmed M. Raslan}, doi = {10.18112/openneuro.ds005415.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005415.v1.0.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 13 - Tasks: 1 - Channels: 182 (2), 228, 230, 210, 194, 246, 188, 266, 192, 224, 202, 200 - Sampling rate (Hz): 1000.0 (10), 2000.0 (3) - Duration (hours): 4.21944125 - Pathology: Epilepsy - Modality: Multisensory - Type: Perception - Size on disk: 7.5 GB - File count: 13 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005415.v1.0.0 - Source: openneuro - OpenNeuro: [ds005415](https://openneuro.org/datasets/ds005415) - NeMAR: [ds005415](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) ## API Reference Use the `DS005415` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005415(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers * **Study:** `ds005415` (OpenNeuro) * **Author (year):** `Rockhill2024` * **Canonical:** — Also importable as: `DS005415`, `Rockhill2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
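The MongoDB-style matching described in the notes above can be pictured with a toy matcher: a record either equals the queried value directly or satisfies an operator such as `$in`. This is a minimal sketch under assumed semantics, not the actual EEGDash implementation (which queries the metadata database):

```python
def matches(record: dict, query: dict) -> bool:
    """Hypothetical illustration of MongoDB-style filtering.

    Only the `$in` operator is sketched; the real query language
    supports more operators and restricts fields to ALLOWED_QUERY_FIELDS.
    """
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Operator form, e.g. {"$in": [...]}.
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:
            # Plain equality form, e.g. {"subject": "01"}.
            return False
    return True

records = [{"subject": "01"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(kept)  # [{'subject': '01'}]
```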
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005415](https://openneuro.org/datasets/ds005415) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005415](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) DOI: [https://doi.org/10.18112/openneuro.ds005415.v1.0.0](https://doi.org/10.18112/openneuro.ds005415.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005415 >>> dataset = DS005415(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005415) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005416: eeg dataset, 23 subjects *Fatigue Characterization of EEG under Mixed Reality Stereo Vision* Access recordings and metadata through EEGDash. **Citation:** Yan Wu, Chunguang Tao, Qi Li (2024). *Fatigue Characterization of EEG under Mixed Reality Stereo Vision*. 
[10.18112/openneuro.ds005416.v1.0.1](https://doi.org/10.18112/openneuro.ds005416.v1.0.1) Modality: eeg Subjects: 23 Recordings: 23 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005416 dataset = DS005416(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005416(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005416( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005416, title = {Fatigue Characterization of EEG under Mixed Reality Stereo Vision}, author = {Yan Wu and Chunguang Tao and Qi Li}, doi = {10.18112/openneuro.ds005416.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005416.v1.0.1}, } ``` ## About This Dataset In this study, we selected 24 electrodes for EEG recording: Fp1, Fp2, AF3, AF4, F7, Fz, F8, FC5, FC6 (frontal), FT7, FT8 (temporal), C3, Cz, C4, CP3, CP4 (central), P3, Pz, P4, PO3, PO4 (parietal), and O1, Oz, O2 (occipital). Each participant was required to complete watching 2 resting scenes and 15 movement scenes. A rating scene appeared to rate each exercise scene watched. Each movement scene consisted of 20 trials of reciprocal periodic movements at a fixed depth and velocity. We focused on analyzing EEG data from watching resting scenes. Researchers can use this EEG data to do resting-state analysis (corresponding to events ‘11’ and ‘13’) as well as task-state analysis (corresponding to event ‘12’). 
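The event-code convention above (events ‘11’ and ‘13’ for resting scenes, ‘12’ for movement scenes) lends itself to a simple split of a recording's event list into resting-state and task-state subsets. A toy sketch with made-up sample numbers (not actual dataset values):

```python
# Toy (sample, event_code) pairs; per the dataset description, codes
# "11" and "13" mark resting scenes and "12" marks movement scenes.
events = [(1000, "11"), (250000, "12"), (500000, "13")]

rest_events = [e for e in events if e[1] in ("11", "13")]
task_events = [e for e in events if e[1] == "12"]
print(len(rest_events), len(task_events))  # 2 1
```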
## Dataset Information | Dataset ID | `DS005416` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Fatigue Characterization of EEG under Mixed Reality Stereo Vision | | Author (year) | `Wu2024` | | Canonical | — | | Importable as | `DS005416`, `Wu2024` | | Year | 2024 | | Authors | Yan Wu, Chunguang Tao, Qi Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005416.v1.0.1](https://doi.org/10.18112/openneuro.ds005416.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005416) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) | [Source URL](https://openneuro.org/datasets/ds005416) | ### Copy-paste BibTeX ```bibtex @dataset{ds005416, title = {Fatigue Characterization of EEG under Mixed Reality Stereo Vision}, author = {Yan Wu and Chunguang Tao and Qi Li}, doi = {10.18112/openneuro.ds005416.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005416.v1.0.1}, } ``` ## Technical Details - Subjects: 23 - Recordings: 23 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 24.67972222222222 - Pathology: Healthy - Modality: Visual - Type: Resting-state - Size on disk: 21.3 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005416.v1.0.1 - Source: openneuro - OpenNeuro: [ds005416](https://openneuro.org/datasets/ds005416) - NeMAR: [ds005416](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) ## API Reference Use the `DS005416` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005416(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fatigue Characterization of EEG under Mixed Reality Stereo Vision * **Study:** `ds005416` (OpenNeuro) * **Author (year):** `Wu2024` * **Canonical:** — Also importable as: `DS005416`, `Wu2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005416](https://openneuro.org/datasets/ds005416) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005416](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) DOI: [https://doi.org/10.18112/openneuro.ds005416.v1.0.1](https://doi.org/10.18112/openneuro.ds005416.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005416 >>> dataset = DS005416(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005416) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005420: eeg dataset, 37 subjects *Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old* Access recordings and metadata through EEGDash. **Citation:** Miriam de Jesús Sánchez Gama, Luis Alberto Barradas Chacón, Leticia Chacón Gutiérrez, Thalía Fernández Harmony, Carlos Augusto Novo Olivas, José María De La Roca Chiapas (2024). *Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old*. 
[10.18112/openneuro.ds005420.v1.0.0](https://doi.org/10.18112/openneuro.ds005420.v1.0.0) Modality: eeg Subjects: 37 Recordings: 72 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005420 dataset = DS005420(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005420(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005420( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005420, title = {Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old}, author = {Miriam de Jesús Sánchez Gama and Luis Alberto Barradas Chacón and Leticia Chacón Gutiérrez and Thalía Fernández Harmony and Carlos Augusto Novo Olivas and José María De La Roca Chiapas}, doi = {10.18112/openneuro.ds005420.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005420.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS005420` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old | | Author (year) | `Gama2024` | | Canonical | `Gama2019` | | Importable as | `DS005420`, `Gama2024`, `Gama2019` | | Year | 2024 | | Authors | Miriam de Jesús Sánchez Gama, Luis Alberto Barradas Chacón, Leticia Chacón Gutiérrez, Thalía Fernández Harmony, Carlos Augusto Novo Olivas, José María De La Roca Chiapas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005420.v1.0.0](https://doi.org/10.18112/openneuro.ds005420.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005420) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) | [Source URL](https://openneuro.org/datasets/ds005420) | ### Copy-paste BibTeX ```bibtex @dataset{ds005420, title = {Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old}, author = {Miriam de Jesús Sánchez Gama and Luis Alberto Barradas Chacón and Leticia Chacón Gutiérrez and Thalía Fernández Harmony and Carlos Augusto Novo Olivas and José María De La Roca Chiapas}, doi = {10.18112/openneuro.ds005420.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005420.v1.0.0}, } ``` ## Technical Details - Subjects: 37 - Recordings: 72 - Tasks: 2 - Channels: 20 - Sampling rate (Hz): 500.0 - Duration (hours): 5.411903888888888 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 372.1 MB - File count: 72 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005420.v1.0.0 - Source: openneuro - 
OpenNeuro: [ds005420](https://openneuro.org/datasets/ds005420) - NeMAR: [ds005420](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) ## API Reference Use the `DS005420` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old * **Study:** `ds005420` (OpenNeuro) * **Author (year):** `Gama2024` * **Canonical:** `Gama2019` Also importable as: `DS005420`, `Gama2024`, `Gama2019`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 37; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005420](https://openneuro.org/datasets/ds005420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005420](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) DOI: [https://doi.org/10.18112/openneuro.ds005420.v1.0.0](https://doi.org/10.18112/openneuro.ds005420.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005420 >>> dataset = DS005420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005420) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005429: eeg dataset, 15 subjects *Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm)* Access recordings and metadata through EEGDash. **Citation:** Renate Rutiku, Chiara Fiscone, Marcello Massimini, Simone Sarasso (2024). *Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm)*. 
[10.18112/openneuro.ds005429.v1.0.0](https://doi.org/10.18112/openneuro.ds005429.v1.0.0) Modality: eeg Subjects: 15 Recordings: 61 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005429 dataset = DS005429(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005429(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005429( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005429, title = {Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm)}, author = {Renate Rutiku and Chiara Fiscone and Marcello Massimini and Simone Sarasso}, doi = {10.18112/openneuro.ds005429.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005429.v1.0.0}, } ``` ## About This Dataset **Introduction** This is the raw EEG data used in: Rutiku, R., Fiscone, C., Massimini, M., & Sarasso, S. (2024). Assessing mismatch negativity (MMN) and P3b within‐individual sensitivity — A comparison between the local–global paradigm and two specialized oddball sequences. European Journal of Neuroscience, 59(5), 842-859. **What’s in this dataset** Each participant (n=15) completed three different auditory oddball sequences: the Optimum-1 for MMN, the learning-oddball for P3b, and the local–global paradigm for the local and global effect. The tasks are formatted as different sessions but they were all recorded consecutively within one EEG experiment (order differed between participants). 
The local-global sequence was recorded in two separate EEG files (except for participant 5; see below for exception notes). Note that whereas the .vmrk files contain the original triggers for each recording, the \_events files contain the correct event samples used in the analysis (in the FieldTrip cfg.trl format): occasionally some triggers were skipped by the recording system, and these had to be interpolated using the event timestamps from the Psychtoolbox output that was used to run the stimulation sequence (see below). Note also that the local-global sequence contains triggers for every single sound, but trials should be cut only for the first sound of every quintlet; the \_events files already take that into account.

```text
| Subject | Session      | Run   |
| ------- |--------------|-------|
| sub-01  | ses-MMN      |       |
| sub-01  | ses-P3b      |       |
| sub-01  | ses-LGeffect | run-1 |
| sub-01  | ses-LGeffect | run-2 |
```

**Auditory stimulation specs** The stimulation sequence information is provided in the original .mat format in the sourcedata folder. There are two files for each sequence: a file containing the sound definitions (_stimulation_SEQUENCE) and a file containing the timestamps for each sound (_critical_events). The code used to run these sequences is included in the paradigms folder.

**Exceptions** Participant 13 was recorded with a 5000 Hz EEG sampling rate, whereas all other participants were recorded at 2500 Hz. Participants 13, 14, and 15 were recorded chronologically first and have slightly more trials for the oddball sequences. After inspecting their data, it was decided that trial numbers could be reduced for the remaining participants to keep the recording time as short as possible while still having good sensitivity for the effects of interest. Participant 5 has three runs for the local-global task because the participant needed an extra break.
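Although the \_events files already encode this, the quintlet rule described above amounts to keeping every fifth sound trigger when starting from the raw local-global triggers. A toy sketch with made-up onset samples (not actual dataset values):

```python
# 4 quintlets of 5 sounds each; onset samples are toy values.
sound_onsets = [i * 1500 for i in range(20)]

# Keep only the first sound of every quintlet as the trial onset.
trial_onsets = sound_onsets[::5]
print(trial_onsets)  # [0, 7500, 15000, 22500]
```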
## Dataset Information | Dataset ID | `DS005429` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm) | | Author (year) | `Rutiku2024` | | Canonical | — | | Importable as | `DS005429`, `Rutiku2024` | | Year | 2024 | | Authors | Renate Rutiku, Chiara Fiscone, Marcello Massimini, Simone Sarasso | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005429.v1.0.0](https://doi.org/10.18112/openneuro.ds005429.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005429) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) | [Source URL](https://openneuro.org/datasets/ds005429) | ### Copy-paste BibTeX ```bibtex @dataset{ds005429, title = {Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm)}, author = {Renate Rutiku and Chiara Fiscone and Marcello Massimini and Simone Sarasso}, doi = {10.18112/openneuro.ds005429.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005429.v1.0.0}, } ``` ## Technical Details - Subjects: 15 - Recordings: 61 - Tasks: 3 - Channels: 64 - Sampling rate (Hz): 2500.0 (57), 5000.0 (4) - Duration (hours): 14.390138722222222 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 16.5 GB - File count: 61 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005429.v1.0.0 - Source: openneuro - OpenNeuro: [ds005429](https://openneuro.org/datasets/ds005429) - NeMAR: [ds005429](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) ## API Reference Use the `DS005429` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005429(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm) * **Study:** `ds005429` (OpenNeuro) * **Author (year):** `Rutiku2024` * **Canonical:** — Also importable as: `DS005429`, `Rutiku2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 61; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005429](https://openneuro.org/datasets/ds005429) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005429](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) DOI: [https://doi.org/10.18112/openneuro.ds005429.v1.0.0](https://doi.org/10.18112/openneuro.ds005429.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005429 >>> dataset = DS005429(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005429) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005448: ieeg dataset, 13 subjects *STReEF* Access recordings and metadata through EEGDash. **Citation:** Jelsma S.B., Zijlmans M., Heijink I.B., Hoefnagels F.W.A., Raemakers M, Bourez-Swart M.D., Otte W.M, van Blooijs D., van Klink N.E.C. (2024). *STReEF*. 
[10.18112/openneuro.ds005448.v1.0.0](https://doi.org/10.18112/openneuro.ds005448.v1.0.0) Modality: ieeg Subjects: 13 Recordings: 18 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005448 dataset = DS005448(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005448(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005448( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005448, title = {STReEF}, author = {Jelsma S.B. and Zijlmans M. and Heijink I.B. and Hoefnagels F.W.A. and Raemakers M and Bourez-Swart M.D. and Otte W.M and van Blooijs D. and van Klink N.E.C.}, doi = {10.18112/openneuro.ds005448.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005448.v1.0.0}, } ``` ## About This Dataset Dataset description This dataset is part of a bigger dataset of intracranial EEG (iEEG) called RESPect (Registry for Epilepsy Surgery Patients), a dataset recorded at the University Medical Center of Utrecht, the Netherlands. This dataset consists of 13 patients with long-term recordings (5 patients recorded with electrocorticography and 8 patients recorded with stereo-electroencephalography). For a detailed description see Jelsma S.B. et al 2024, Structural and effective brain connectivity in focal epilepsy. This data is organized according to the Brain Imaging Data Structure specification: A community-driven specification for organizing neurophysiology data along with its metadata.
For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each patient has their own folder (e.g., `sub-STREEF01`) which contains the iEEG recordings of that patient, as well as the metadata to understand the raw data and event timing. In long-term recordings, data that are recorded within one monitoring period are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording in the monitoring period. We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient's state during the recording of this file. A specific task called "SPESclin" is defined when the clinical SPES protocol has been performed. License This dataset is made available under the CC0 1.0 Universal Public Domain Dedication, whose full text can be found at [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/). We hope that all users will follow the ODC Attribution/Share-Alike Community Norms ([http://www.opendatacommons.org/norms/odc-by-sa/](http://www.opendatacommons.org/norms/odc-by-sa/)). In particular, while not legally required, we hope that all users of the data will acknowledge, in any publications, by citing:

1. Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. "A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS", published in NeuroInformatics in 2022
2. Jelsma S.B. et al 2024, Structural and effective brain connectivity in focal epilepsy

Code available at: [https://github.com/UMCU-EpiLAB/umcuEpi_CCEP_DTI](https://github.com/UMCU-EpiLAB/umcuEpi_CCEP_DTI).
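The run naming scheme described above (e.g. `run-021315` = day 2 after implantation, starting at 13:15) can be unpacked with a small helper. This is a hypothetical illustration, not part of EEGDash or the dataset's own tooling:

```python
def parse_run(run: str) -> tuple[int, str]:
    """Split a 6-digit run label 'DDHHMM' into the day after implantation
    and the recording start time. Hypothetical helper for illustration."""
    day = int(run[:2])
    start = f"{run[2:4]}:{run[4:6]}"
    return day, start

day, start = parse_run("021315")
print(day, start)  # 2 13:15
```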
Acknowledgements We thank the SEIN-UMCU RESPect database group (C.J.J. van Asch, L. van de Berg, S. Blok, M.D. Bourez, K.P.J. Braun, J.W. Dankbaar, C.H. Ferrier, T.A. Gebbink, P.H. Gosselaar, R. van Griethuysen, M.G.G. Hobbelink, F.W.A. Hoefnagels, N.E.C. van Klink, M.A. van ‘t Klooster, G.A.P. deKort, M.H.M. Mantione, A. Muhlebner, J.M. Ophorst, P.C. van Rijen, S.M.A. van der Salm, E.V. Schaft, M.M.J. van Schooneveld, H. Smeding, D. Sun, A. Velders, M.J.E. van Zandvoort, G.J.M. Zijlmans, E. Zuidhoek and J. Zwemmer) for their contributions and help in collecting the data. ## Dataset Information | Dataset ID | `DS005448` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | STReEF | | Author (year) | `Jelsma2024` | | Canonical | `STReEF` | | Importable as | `DS005448`, `Jelsma2024`, `STReEF` | | Year | 2024 | | Authors | Jelsma S.B., Zijlmans M., Heijink I.B., Hoefnagels F.W.A., Raemakers M, Bourez-Swart M.D., Otte W.M, van Blooijs D., van Klink N.E.C. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005448.v1.0.0](https://doi.org/10.18112/openneuro.ds005448.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005448) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) | [Source URL](https://openneuro.org/datasets/ds005448) | ### Copy-paste BibTeX ```bibtex @dataset{ds005448, title = {STReEF}, author = {Jelsma S.B. and Zijlmans M. and Heijink I.B. and Hoefnagels F.W.A. and Raemakers M and Bourez-Swart M.D. and Otte W.M and van Blooijs D. 
and van Klink N.E.C.}, doi = {10.18112/openneuro.ds005448.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005448.v1.0.0}, } ``` ## Technical Details - Subjects: 13 - Recordings: 18 - Tasks: 1 - Channels: 133 (14), 109 (2), 95, 161 - Sampling rate (Hz): 2048.0 - Duration (hours): 12.382256944444444 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 44.7 GB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005448.v1.0.0 - Source: openneuro - OpenNeuro: [ds005448](https://openneuro.org/datasets/ds005448) - NeMAR: [ds005448](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) ## API Reference Use the `DS005448` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STReEF * **Study:** `ds005448` (OpenNeuro) * **Author (year):** `Jelsma2024` * **Canonical:** `STReEF` Also importable as: `DS005448`, `Jelsma2024`, `STReEF`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 13; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005448](https://openneuro.org/datasets/ds005448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005448](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) DOI: [https://doi.org/10.18112/openneuro.ds005448.v1.0.0](https://doi.org/10.18112/openneuro.ds005448.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005448 >>> dataset = DS005448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005448) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005473: eeg dataset, 29 subjects *29 By BP* Access recordings and metadata through EEGDash. **Citation:** Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li (2024). *29 By BP*. 
[10.18112/openneuro.ds005473.v1.0.0](https://doi.org/10.18112/openneuro.ds005473.v1.0.0) Modality: eeg Subjects: 29 Recordings: 58 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005473 dataset = DS005473(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005473(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005473( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005473, title = {29 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005473.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005473.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005473` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 29 By BP | | Author (year) | `Xiangyue2024_29_BP` | | Canonical | `Zhao2024` | | Importable as | `DS005473`, `Xiangyue2024_29_BP`, `Zhao2024` | | Year | 2024 | | Authors | Zhao Xiangyue, Zhou Jingyao, Zhang Libo, Duan Haoqing, Wei Shiyu, Bi Yanzhi, Hu Li | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005473.v1.0.0](https://doi.org/10.18112/openneuro.ds005473.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005473) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) | [Source URL](https://openneuro.org/datasets/ds005473) | ### Copy-paste BibTeX ```bibtex @dataset{ds005473, title = {29 By BP}, author = {Zhao Xiangyue and Zhou Jingyao and Zhang Libo and Duan Haoqing and Wei Shiyu and Bi Yanzhi and Hu Li}, doi = {10.18112/openneuro.ds005473.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005473.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 58 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.631388888888887 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 6.2 GB - File count: 58 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005473.v1.0.0 - Source: openneuro - OpenNeuro: [ds005473](https://openneuro.org/datasets/ds005473) - NeMAR: [ds005473](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) ## API Reference Use the `DS005473` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By BP * **Study:** `ds005473` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_BP` * **Canonical:** `Zhao2024` Also importable as: `DS005473`, `Xiangyue2024_29_BP`, `Zhao2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 29; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005473](https://openneuro.org/datasets/ds005473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005473](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) DOI: [https://doi.org/10.18112/openneuro.ds005473.v1.0.0](https://doi.org/10.18112/openneuro.ds005473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005473 >>> dataset = DS005473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005473) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005486: eeg dataset, 159 subjects *PREDICT* Access recordings and metadata through EEGDash. **Citation:** Nahian S. Chowdhury, Chuan Bi, Andrew J. Furman, Alan KI Chiang, Patrick Skippen, Emily Si, Samantha K Millard, Sarah M. Margerison, Darrah Spies, Michael L. Keaser, Joyce T. Da Silva, Shuo Chen, Siobhan M. Schabrun, David A. Seminowicz (2024). *PREDICT*. 
[10.18112/openneuro.ds005486.v1.0.1](https://doi.org/10.18112/openneuro.ds005486.v1.0.1) Modality: eeg Subjects: 159 Recordings: 445 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005486 dataset = DS005486(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005486(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005486( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005486, title = {PREDICT}, author = {Nahian S. Chowdhury and Chuan Bi and Andrew J. Furman and Alan KI Chiang and Patrick Skippen and Emily Si and Samantha K Millard and Sarah M. Margerison and Darrah Spies and Michael L. Keaser and Joyce T. Da Silva and Shuo Chen and Siobhan M. Schabrun and David A. Seminowicz}, doi = {10.18112/openneuro.ds005486.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005486.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005486` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PREDICT | | Author (year) | `Chowdhury2024` | | Canonical | — | | Importable as | `DS005486`, `Chowdhury2024` | | Year | 2024 | | Authors | Nahian S. Chowdhury, Chuan Bi, Andrew J. 
Furman, Alan KI Chiang, Patrick Skippen, Emily Si, Samantha K Millard, Sarah M. Margerison, Darrah Spies, Michael L. Keaser, Joyce T. Da Silva, Shuo Chen, Siobhan M. Schabrun, David A. Seminowicz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005486.v1.0.1](https://doi.org/10.18112/openneuro.ds005486.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005486) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) | [Source URL](https://openneuro.org/datasets/ds005486) | ### Copy-paste BibTeX ```bibtex @dataset{ds005486, title = {PREDICT}, author = {Nahian S. Chowdhury and Chuan Bi and Andrew J. Furman and Alan KI Chiang and Patrick Skippen and Emily Si and Samantha K Millard and Sarah M. Margerison and Darrah Spies and Michael L. Keaser and Joyce T. Da Silva and Shuo Chen and Siobhan M. Schabrun and David A. Seminowicz}, doi = {10.18112/openneuro.ds005486.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005486.v1.0.1}, } ``` ## Technical Details - Subjects: 159 - Recordings: 445 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 5000.0 (399), 25000.0 (46) - Duration (hours): 56.76204444444444 - Pathology: Not specified - Modality: Resting State - Type: Resting-state - Size on disk: 371.0 GB - File count: 445 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005486.v1.0.1 - Source: openneuro - OpenNeuro: [ds005486](https://openneuro.org/datasets/ds005486) - NeMAR: [ds005486](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) ## API Reference Use the `DS005486` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005486(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PREDICT * **Study:** `ds005486` (OpenNeuro) * **Author (year):** `Chowdhury2024` * **Canonical:** — Also importable as: `DS005486`, `Chowdhury2024`. 
Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 159; recordings: 445; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005486](https://openneuro.org/datasets/ds005486) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005486](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) DOI: [https://doi.org/10.18112/openneuro.ds005486.v1.0.1](https://doi.org/10.18112/openneuro.ds005486.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005486 >>> dataset = DS005486(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005486) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005489: ieeg dataset, 37 subjects *Free Recall with Open-Loop Stimulation at Encoding* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Free Recall with Open-Loop Stimulation at Encoding*. [10.18112/openneuro.ds005489.v1.0.3](https://doi.org/10.18112/openneuro.ds005489.v1.0.3) Modality: ieeg Subjects: 37 Recordings: 154 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005489 dataset = DS005489(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005489(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005489( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005489, title = {Free Recall with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005489.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005489.v1.0.3}, } ``` ## About This Dataset **Free Recall with Open-Loop Stimulation at Encoding** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a delayed free recall task with open-loop stimulation at encoding. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. This dataset is an open-loop stimulation version of the [FR1](https://openneuro.org/datasets/ds004789) dataset. This study contains open-loop electrical stimulation of the brain during encoding. There is no stimulation during the distractor or retrieval phases. Stimulation is delivered to a single electrode at a time, with locations chosen in the hippocampus and entorhinal cortex. Stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. 20 of the 25 lists in a session are randomly assigned as stimulation lists. On these lists, stimulation occurs on alternating two-word blocks, meaning 6 of the 12 words are presented with stimulation. Stimulation starts 200 ms prior to the onset of the first word in the block and lasts for 4.6 seconds, ending 200-450 ms after the offset of the second word (depending on the inter-stimulus interval). Half of the stimulation lists begin with a stimulation on pair and half begin with a stimulation off pair, but the order of these conditions is random. 
A stimulation list that begins with a stimulation on pair would look as follows (with bold indicating stimulation): **1 - 2** | 3 - 4 | **5 - 6** | 7 - 8 | **9 - 10** | 11 - 12 **To Note** * The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. * Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. * Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 µV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS005489` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Free Recall with Open-Loop Stimulation at Encoding | | Author (year) | `Herrema2024_Free_Recall` | | Canonical | — | | Importable as | `DS005489`, `Herrema2024_Free_Recall` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005489.v1.0.3](https://doi.org/10.18112/openneuro.ds005489.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005489) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) | [Source URL](https://openneuro.org/datasets/ds005489) | ### Copy-paste BibTeX ```bibtex @dataset{ds005489, title = {Free Recall with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005489.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005489.v1.0.3}, } ``` ## Technical Details - Subjects: 37 - Recordings: 154 - Tasks: 1 - Channels: 100 (12), 64 (10), 141 (9), 118 (8), 96 (8), 136 (7), 109 (7), 88 (6), 68 (6), 72 (6), 87 (4), 126 (4), 75 (4), 56 (4), 110 (4), 70 (4), 93 (3), 156 (3), 76 (3), 85 (3), 74 (3), 58 (3), 120 (3), 104 (2), 124 (2), 112 (2), 134 (2), 128 (2), 80 (2), 138 (2), 99 (2), 123 (2), 108 (2), 18, 163, 14, 20, 83, 101, 97, 114, 177, 16 - Sampling rate (Hz): 500.0 (90), 1000.0 (46), 1600.0 (10), 513.0 (4), 256.0 (2), 499.7071 (2) - Duration (hours): 138.17323425954314 - Pathology: Not specified - Modality: Visual - Type: Memory - Size on disk: 64.9 GB - File count: 154 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005489.v1.0.3 - Source: openneuro - OpenNeuro: [ds005489](https://openneuro.org/datasets/ds005489) - NeMAR: [ds005489](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) ## API Reference Use the `DS005489` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005489(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005489` (OpenNeuro) * **Author (year):** `Herrema2024_Free_Recall` * **Canonical:** — Also importable as: `DS005489`, `Herrema2024_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 37; recordings: 154; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005489](https://openneuro.org/datasets/ds005489) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005489](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) DOI: [https://doi.org/10.18112/openneuro.ds005489.v1.0.3](https://doi.org/10.18112/openneuro.ds005489.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005489 >>> dataset = DS005489(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005489) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005491: ieeg dataset, 19 subjects *Categorized Free Recall with Open-Loop Stimulation at Encoding* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Categorized Free Recall with Open-Loop Stimulation at Encoding*. [10.18112/openneuro.ds005491.v1.0.0](https://doi.org/10.18112/openneuro.ds005491.v1.0.0) Modality: ieeg Subjects: 19 Recordings: 51 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005491 dataset = DS005491(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005491(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005491( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005491, title = {Categorized Free Recall with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005491.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005491.v1.0.0}, } ``` ## About This Dataset **Categorized Free Recall with Open-Loop Stimulation at Encoding** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a categorized free recall task with open-loop stimulation at encoding. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. The word lists in this paradigm follow a specific semantic construction. Each word comes from one of 25 semantic categories, and each list of 12 items contains 6 pairs of same-category words from 3 different categories. This means that each list has 4 words from 3 semantic categories, and in each half of the list there will be 1 pair of words from each category. For example, if a list contains words from categories A, B, and C, a possible list construction would be: **A1 - A2 - B1 - B2 - C1 - C2 - A3 - A4 - C3 - C4 - B3 - B4** This study contains open-loop electrical stimulation of the brain during encoding. There is no stimulation during the distractor or retrieval phases. Stimulation is delivered to a single electrode at a time, with locations chosen in the hippocampus and entorhinal cortex. Stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. 20 of the 25 lists in a session are randomly assigned as stimulation lists. On these lists, stimulation occurs on alternating two-word blocks, meaning 6 of the 12 words are presented with stimulation. 
Stimulation starts 200 ms prior to the onset of the first word in the block and lasts for 4.6 seconds, ending 200-450 ms after the offset of the second word (depending on the inter-stimulus interval). Half of the stimulation lists begin with a stimulation on pair and half begin with a stimulation off pair, but the order of these conditions is random. A stimulation list that begins with a stimulation on pair would look as follows (with bold indicating stimulation): **1 - 2** | 3 - 4 | **5 - 6** | 7 - 8 | **9 - 10** | 11 - 12 **To Note** * The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. * Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. * Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 µV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS005491` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Categorized Free Recall with Open-Loop Stimulation at Encoding | | Author (year) | `Herrema2024_Categorized` | | Canonical | `catFR_open_loop`, `RAM_catFR`, `catFR_stim` | | Importable as | `DS005491`, `Herrema2024_Categorized`, `catFR_open_loop`, `RAM_catFR`, `catFR_stim` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. 
Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005491.v1.0.0](https://doi.org/10.18112/openneuro.ds005491.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005491) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) | [Source URL](https://openneuro.org/datasets/ds005491) | ### Copy-paste BibTeX ```bibtex @dataset{ds005491, title = {Categorized Free Recall with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005491.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005491.v1.0.0}, } ``` ## Technical Details - Subjects: 19 - Recordings: 51 - Tasks: 1 - Channels: 64 (5), 126 (5), 88 (4), 85 (3), 93 (3), 113 (2), 163 (2), 104 (2), 155 (2), 133 (2), 116 (2), 119, 72, 68, 92, 96, 127, 177, 146, 78, 128, 124, 110, 70, 80, 14, 115, 130, 112, 16 - Sampling rate (Hz): 500.0 (39), 1600.0 (6), 999.0 (4), 1000.0 (2) - Duration (hours): 46.67650149705261 - Pathology: Not specified - Modality: Visual - Type: Clinical/Intervention - Size on disk: 22.5 GB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005491.v1.0.0 - Source: openneuro - OpenNeuro: [ds005491](https://openneuro.org/datasets/ds005491) - NeMAR: [ds005491](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) ## API Reference Use the `DS005491` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005491(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005491` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized` * **Canonical:** `catFR_open_loop`, `RAM_catFR`, `catFR_stim` Also importable as: `DS005491`, `Herrema2024_Categorized`, `catFR_open_loop`, `RAM_catFR`, `catFR_stim`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 19; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005491](https://openneuro.org/datasets/ds005491) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005491](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) DOI: [https://doi.org/10.18112/openneuro.ds005491.v1.0.0](https://doi.org/10.18112/openneuro.ds005491.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005491 >>> dataset = DS005491(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005491) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005494: ieeg dataset, 20 subjects *Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval*. [10.18112/openneuro.ds005494.v1.0.1](https://doi.org/10.18112/openneuro.ds005494.v1.0.1) Modality: ieeg Subjects: 20 Recordings: 51 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005494 dataset = DS005494(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005494(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005494( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005494, title = {Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005494.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005494.v1.0.1}, } ``` ## About This Dataset **Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a paired associates memory task with open-loop stimulation at encoding or retrieval. The experiment consists of participants studying pairs of visually presented words, completing simple arithmetic problems that function as a distractor, and then completing a cued recall task. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. This dataset is an open-loop stimulation version of the [PAL1](https://openneuro.org/datasets/ds005059) dataset. Each session contains 25 lists of the structure: encoding, distractor, cued recall. During encoding, 6 pairs of words are presented one pair at a time. Each pair remains on screen for 4000 ms and is followed by a 1000 ms interstimulus interval. During the cued recall, one randomly chosen word from each pair is shown, and the participant is asked to vocally recall the other word from the pair. Participants have 5000 ms for each recall, and then the next cue (i.e., a word from another pair) is shown. All 6 pairs of words are tested on each list, in random order. This study contains open-loop electrical stimulation of the brain during encoding or retrieval. There is no stimulation during the distractor phase. Stimulation is delivered to a single electrode at a time, with locations chosen in the hippocampus and entorhinal cortex. Stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count.
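Because the stimulation parameters are stored as columns of the BIDS `events.tsv` files, stimulated trials can be isolated by filtering those columns. A minimal sketch using a synthetic table (the column names here are illustrative assumptions, not this dataset's actual schema):

```python
import csv
import io

# Synthetic stand-in for a BIDS events.tsv; real column names may differ.
events_tsv = (
    "onset\tduration\ttrial_type\tstim_amplitude\n"
    "1.20\t4.0\tWORD_PAIR\t1.0\n"
    "6.20\t4.0\tWORD_PAIR\tn/a\n"
    "11.20\t4.0\tWORD_PAIR\t1.0\n"
)

reader = csv.DictReader(io.StringIO(events_tsv), delimiter="\t")
# Keep only rows carrying a stimulation amplitude ("n/a" is the BIDS
# missing-value marker, used here for non-stimulated trials).
stim_events = [row for row in reader if row["stim_amplitude"] != "n/a"]
print(len(stim_events))  # 2 stimulated events in this toy table
```

Once EEGDash has cached a recording locally, the same filtering can be applied to that recording's actual events file.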
20 of the 25 lists in a session are randomly assigned as stimulation lists, 10 of which contain stimulation at encoding and 10 of which contain stimulation at retrieval. 5 lists contain no stimulation at all, and no list contains stimulation at both encoding and retrieval. On the encoding stimulation lists, stimulation occurs on alternating word-pairs, meaning 3 of the 6 word-pairs are presented with stimulation. Stimulation starts 200 ms prior to the onset of the word-pair and lasts for 4.6 seconds, ending 400 ms after the offset of the word-pair. On the retrieval stimulation lists, stimulation occurs on alternating cues, meaning 3 of the 6 recall cues have stimulation. Stimulation starts 200 ms prior to the onset of the recall cue and lasts for 4.6 seconds, ending 400 ms after the offset of the recall cue. Half of the stimulation lists begin with a stimulation-on pair/cue and half begin with a stimulation-off pair/cue, but the order of these conditions is random. An encoding stimulation list that begins with a stimulation pair would look as follows (with bold indicating stimulation): **1A/B** | 2A/B | **3A/B** | 4A/B | **5A/B** | 6A/B A retrieval stimulation list that begins with a non-stimulation cue would look as follows (with bold indicating stimulation): 3A-? | **5B-?** | 2B-? | **6A-?** | 4B-? | **1A-?** **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. \* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 µV.
We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS005494` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval | | Author (year) | `Herrema2024_Cued` | | Canonical | `Herrema2024` | | Importable as | `DS005494`, `Herrema2024_Cued`, `Herrema2024` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. 
Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005494.v1.0.1](https://doi.org/10.18112/openneuro.ds005494.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005494) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) | [Source URL](https://openneuro.org/datasets/ds005494) | ### Copy-paste BibTeX ```bibtex @dataset{ds005494, title = {Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005494.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005494.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 51 - Tasks: 1 - Channels: 100 (4), 88 (4), 68 (3), 128 (3), 177 (3), 72 (3), 141 (2), 112 (2), 64 (2), 114 (2), 14 (2), 85 (2), 16 (2), 84, 111, 93, 122, 124, 95, 107, 102, 86, 110, 96, 146, 104, 119, 121, 138, 106 - Sampling rate (Hz): 500.0 (35), 1000.0 (16) - Duration (hours): 55.08165333333333 - Pathology: Not specified - Modality: Visual - Type: Clinical/Intervention - Size on disk: 26.3 GB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005494.v1.0.1 - Source: openneuro - OpenNeuro: [ds005494](https://openneuro.org/datasets/ds005494) - NeMAR: [ds005494](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) ## API Reference Use the `DS005494` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005494(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval * **Study:** `ds005494` (OpenNeuro) * **Author (year):** `Herrema2024_Cued` * **Canonical:** `Herrema2024` Also importable as: `DS005494`, `Herrema2024_Cued`, `Herrema2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. 
Subjects: 20; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005494](https://openneuro.org/datasets/ds005494) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005494](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) DOI: [https://doi.org/10.18112/openneuro.ds005494.v1.0.1](https://doi.org/10.18112/openneuro.ds005494.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005494 >>> dataset = DS005494(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005494) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005505: eeg dataset, 136 subjects *Healthy Brain Network (HBN) EEG - Release 1* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 1*. [10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) Modality: eeg Subjects: 136 Recordings: 1342 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005505 dataset = DS005505(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005505(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005505( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005505, title = {Healthy Brain Network (HBN) EEG - Release 1}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 1** of HBN-EEG, the EEG and (soon-released) Eye-Tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.
**Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, that each task had its necessary events, and that the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005505` | |----------------|-----------------| | Title | Healthy Brain Network (HBN) EEG - Release 1 | | Author (year) | `Shirazi2024_R1` | | Canonical | `HBN_r1` | | Importable as | `DS005505`, `Shirazi2024_R1`, `HBN_r1` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005505) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) | [Source URL](https://openneuro.org/datasets/ds005505) | ### Copy-paste BibTeX ```bibtex @dataset{ds005505, title = {Healthy Brain Network (HBN) EEG - Release 1}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## Technical Details - Subjects: 136 - Recordings: 1342 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 117.5389038888889 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 103.1 GB - File count: 1342 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005505.v1.0.1 - Source: openneuro - OpenNeuro: [ds005505](https://openneuro.org/datasets/ds005505) - NeMAR: [ds005505](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) ## API Reference Use the `DS005505` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 * **Study:** `ds005505` (OpenNeuro) * **Author (year):** `Shirazi2024_R1` * **Canonical:** `HBN_r1` Also importable as: `DS005505`, `Shirazi2024_R1`, `HBN_r1`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005505](https://openneuro.org/datasets/ds005505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005505](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005505 >>> dataset = DS005505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005505) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005506: eeg dataset, 150 subjects *Healthy Brain Network (HBN) EEG - Release 2* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 2*. [10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) Modality: eeg Subjects: 150 Recordings: 1405 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005506 dataset = DS005506(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005506(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005506( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005506, title = {Healthy Brain Network (HBN) EEG - Release 2}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 2** of HBN-EEG, the EEG and (soon-released) Eye-Tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.
**Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, each task had its necessary events, and the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005506` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 2 | | Author (year) | `Shirazi2024_R2` | | Canonical | `HBN_r2` | | Importable as | `DS005506`, `Shirazi2024_R2`, `HBN_r2` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005506) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) | [Source URL](https://openneuro.org/datasets/ds005506) | ### Copy-paste BibTeX ```bibtex @dataset{ds005506, title = {Healthy Brain Network (HBN) EEG - Release 2}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## Technical Details - Subjects: 150 - Recordings: 1405 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 127.52424388888888 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 111.9 GB - File count: 1405 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005506.v1.0.1 - Source: openneuro - OpenNeuro: [ds005506](https://openneuro.org/datasets/ds005506) - NeMAR: [ds005506](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) ## API Reference Use the `DS005506` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 * **Study:** `ds005506` (OpenNeuro) * **Author (year):** `Shirazi2024_R2` * **Canonical:** `HBN_r2` Also importable as: `DS005506`, `Shirazi2024_R2`, `HBN_r2`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005506](https://openneuro.org/datasets/ds005506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005506](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005506 >>> dataset = DS005506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005506) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005507: eeg dataset, 184 subjects *Healthy Brain Network (HBN) EEG - Release 3* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 3*. [10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) Modality: eeg Subjects: 184 Recordings: 1812 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005507 dataset = DS005507(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005507(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005507( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005507, title = {Healthy Brain Network (HBN) EEG - Release 3}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 3** of HBN-EEG, the EEG and (soon-to-be-released) Eye-Tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 years old) involved in the HBN project. The data have been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. These data were originally recorded within the behavior directory of the HBN data and are now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
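The "Advanced query" example in the Quickstart above passes a MongoDB-style filter such as `{"subject": {"$in": ["01", "02"]}}`. As a rough illustration of the matching semantics (a pure-Python sketch, not EEGDash's or MongoDB's actual implementation):

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher supporting equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}: value must be in the list.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain form, e.g. {"subject": "01"}: exact equality.
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "17"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print(len(selected))  # two of the three records match
```

In EEGDash itself, such filters apply only to fields in `ALLOWED_QUERY_FIELDS` and are ANDed with the dataset filter, as described in the API notes.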
**Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, each task had its necessary events, and the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005507` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 3 | | Author (year) | `Shirazi2024_R3` | | Canonical | `HBN_r3` | | Importable as | `DS005507`, `Shirazi2024_R3`, `HBN_r3` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005507) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) | [Source URL](https://openneuro.org/datasets/ds005507) | ### Copy-paste BibTeX ```bibtex @dataset{ds005507, title = {Healthy Brain Network (HBN) EEG - Release 3}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## Technical Details - Subjects: 184 - Recordings: 1812 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 158.8272261111111 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 139.4 GB - File count: 1812 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005507.v1.0.1 - Source: openneuro - OpenNeuro: [ds005507](https://openneuro.org/datasets/ds005507) - NeMAR: [ds005507](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) ## API Reference Use the `DS005507` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005507(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 * **Study:** `ds005507` (OpenNeuro) * **Author (year):** `Shirazi2024_R3` * **Canonical:** `HBN_r3` Also importable as: `DS005507`, `Shirazi2024_R3`, `HBN_r3`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005507](https://openneuro.org/datasets/ds005507) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005507](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005507 >>> dataset = DS005507(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005507) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005508: eeg dataset, 324 subjects *Healthy Brain Network (HBN) EEG - Release 4* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 4*. [10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) Modality: eeg Subjects: 324 Recordings: 3342 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005508 dataset = DS005508(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005508(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005508( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005508, title = {Healthy Brain Network (HBN) EEG - Release 4}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 4** of HBN-EEG, the EEG and (soon-to-be-released) Eye-Tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 years old) involved in the HBN project. The data have been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. These data were originally recorded within the behavior directory of the HBN data and are now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
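The README states that HBN-EEG covers >3000 participants across 11 releases. Summing the per-release subject counts from the release list below (as printed on this page) is consistent with that figure:

```python
# Per-release subject counts, copied from the release list in this README.
release_subjects = {
    "R1": 136, "R2": 152, "R3": 183, "R4": 324, "R5": 330, "R6": 134,
    "R7": 381, "R8": 257, "R9": 295, "R10": 533, "R11": 430,
}

# Total across the 11 public releases; the non-commercial release (NC)
# adds a further 458 subjects on top of this.
total = sum(release_subjects.values())
print(total)  # 3155, consistent with the ">3000 participants" figure
```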
**Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, each task had its necessary events, and the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005508` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 4 | | Author (year) | `Shirazi2024_R4` | | Canonical | `HBN_r4` | | Importable as | `DS005508`, `Shirazi2024_R4`, `HBN_r4` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005508) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) | [Source URL](https://openneuro.org/datasets/ds005508) | ### Copy-paste BibTeX ```bibtex @dataset{ds005508, title = {Healthy Brain Network (HBN) EEG - Release 4}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## Technical Details - Subjects: 324 - Recordings: 3342 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 261.8067727777778 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 229.8 GB - File count: 3342 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005508.v1.0.1 - Source: openneuro - OpenNeuro: [ds005508](https://openneuro.org/datasets/ds005508) - NeMAR: [ds005508](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) ## API Reference Use the `DS005508` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005508(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 * **Study:** `ds005508` (OpenNeuro) * **Author (year):** `Shirazi2024_R4` * **Canonical:** `HBN_r4` Also importable as: `DS005508`, `Shirazi2024_R4`, `HBN_r4`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 324; recordings: 3342; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005508](https://openneuro.org/datasets/ds005508) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005508](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005508 >>> dataset = DS005508(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005508) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005509: eeg dataset, 330 subjects *Healthy Brain Network (HBN) EEG - Release 5* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 5*. [10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) Modality: eeg Subjects: 330 Recordings: 3326 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005509 dataset = DS005509(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005509(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005509( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005509, title = {Healthy Brain Network (HBN) EEG - Release 5}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 5** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
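Since the behavioral responses are merged into the BIDS `events.tsv` files, they can be read back with any TSV reader. A minimal sketch, using illustrative column names rather than this dataset's exact schema:

```python
import csv
import io

# Miniature stand-in for a BIDS events.tsv file; the real HBN-EEG files
# carry additional columns (e.g. HED annotations, task-specific markers).
events_tsv = (
    "onset\tduration\ttrial_type\tresponse_time\n"
    "1.50\t0.0\ttarget\t0.432\n"
    "3.25\t0.0\tnontarget\tn/a\n"
)

rows = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))

# BIDS marks missing values as "n/a"; keep only measured reaction times.
rts = [float(r["response_time"]) for r in rows if r["response_time"] != "n/a"]
print(rts)  # -> [0.432]
```

The same pattern applies to any `events.tsv` fetched through EEGDash; only the column set differs per task.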
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005509` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 5 | | Author (year) | `Shirazi2024_R5` | | Canonical | `HBN_r5` | | Importable as | `DS005509`, `Shirazi2024_R5`, `HBN_r5` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005509) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) | [Source URL](https://openneuro.org/datasets/ds005509) | ### Copy-paste BibTeX ```bibtex @dataset{ds005509, title = {Healthy Brain Network (HBN) EEG - Release 5}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## Technical Details - Subjects: 330 - Recordings: 3326 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 255.34662 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 224.2 GB - File count: 3326 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005509.v1.0.1 - Source: openneuro - OpenNeuro: [ds005509](https://openneuro.org/datasets/ds005509) - NeMAR: [ds005509](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) ## API Reference Use the `DS005509` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 * **Study:** `ds005509` (OpenNeuro) * **Author (year):** `Shirazi2024_R5` * **Canonical:** `HBN_r5` Also importable as: `DS005509`, `Shirazi2024_R5`, `HBN_r5`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005509](https://openneuro.org/datasets/ds005509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005509](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005509 >>> dataset = DS005509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005509) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005510: eeg dataset, 135 subjects *Healthy Brain Network (HBN) EEG - Release 6* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 6*. [10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) Modality: eeg Subjects: 135 Recordings: 1227 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005510 dataset = DS005510(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005510(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005510( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005510, title = {Healthy Brain Network (HBN) EEG - Release 6}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 6** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
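The HED annotations mentioned above follow a simple surface syntax: an annotation is a comma-separated list of tags, and each tag is a `/`-separated path from general to specific. A minimal parsing sketch with illustrative tags (real HED strings may also contain parenthesized tag groups, which this does not handle):

```python
def parse_hed(hed_string: str) -> list[list[str]]:
    """Split a flat HED annotation into its tags and their path components."""
    tags = [tag.strip() for tag in hed_string.split(",") if tag.strip()]
    return [tag.split("/") for tag in tags]

# Illustrative annotation, not copied from this dataset's event files.
annotation = "Sensory-event, Visual-presentation, Property/Task-property"
parsed = parse_hed(annotation)
print(parsed[2])  # -> ['Property', 'Task-property']
```

For real analyses, the HED Python tools (the `hedtools` package) validate and expand HED strings properly; this sketch only shows the tag structure.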
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005510` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 6 | | Author (year) | `Shirazi2024_R6` | | Canonical | `HBN_r6` | | Importable as | `DS005510`, `Shirazi2024_R6`, `HBN_r6` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005510) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) | [Source URL](https://openneuro.org/datasets/ds005510) | ### Copy-paste BibTeX ```bibtex @dataset{ds005510, title = {Healthy Brain Network (HBN) EEG - Release 6}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## Technical Details - Subjects: 135 - Recordings: 1227 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 103.45367166666666 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 90.8 GB - File count: 1227 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005510.v1.0.1 - Source: openneuro - OpenNeuro: [ds005510](https://openneuro.org/datasets/ds005510) - NeMAR: [ds005510](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) ## API Reference Use the `DS005510` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005510(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 * **Study:** `ds005510` (OpenNeuro) * **Author (year):** `Shirazi2024_R6` * **Canonical:** `HBN_r6` Also importable as: `DS005510`, `Shirazi2024_R6`, `HBN_r6`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 135; recordings: 1227; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005510](https://openneuro.org/datasets/ds005510) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005510](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005510 >>> dataset = DS005510(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005510) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005512: eeg dataset, 257 subjects *Healthy Brain Network (HBN) EEG - Release 8* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 8*. [10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) Modality: eeg Subjects: 257 Recordings: 2320 License: CC-BY-SA 4.0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005512 dataset = DS005512(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005512(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005512( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005512, title = {Healthy Brain Network (HBN) EEG - Release 8}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 8** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005512` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 8 | | Author (year) | `Shirazi2024_R8` | | Canonical | `HBN_r8` | | Importable as | `DS005512`, `Shirazi2024_R8`, `HBN_r8` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005512) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) | [Source URL](https://openneuro.org/datasets/ds005512) | ### Copy-paste BibTeX ```bibtex @dataset{ds005512, title = {Healthy Brain Network (HBN) EEG - Release 8}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## Technical Details - Subjects: 257 - Recordings: 2320 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 179.12 - Pathology: Development - Modality: Multisensory - Type: Clinical/Intervention - Size on disk: 157.2 GB - File count: 2320 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005512.v1.0.1 - Source: openneuro - OpenNeuro: [ds005512](https://openneuro.org/datasets/ds005512) - NeMAR: [ds005512](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) ## API Reference Use the `DS005512` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005512(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 * **Study:** `ds005512` (OpenNeuro) * **Author (year):** `Shirazi2024_R8` * **Canonical:** `HBN_r8` Also importable as: `DS005512`, `Shirazi2024_R8`, `HBN_r8`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005512](https://openneuro.org/datasets/ds005512) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005512](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005512 >>> dataset = DS005512(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005512) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005514: eeg dataset, 295 subjects *Healthy Brain Network (HBN) EEG - Release 9* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 9*. [10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) Modality: eeg Subjects: 295 Recordings: 2885 License: CC-BY-SA 4.0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005514 dataset = DS005514(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005514(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005514( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005514, title = {Healthy Brain Network (HBN) EEG - Release 9}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 9** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 9** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005514` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 9 | | Author (year) | `Shirazi2024_R9` | | Canonical | `HBN_r9` | | Importable as | `DS005514`, `Shirazi2024_R9`, `HBN_r9` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005514) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) | [Source URL](https://openneuro.org/datasets/ds005514) | ### Copy-paste BibTeX ```bibtex @dataset{ds005514, title = {Healthy Brain Network (HBN) EEG - Release 9}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## Technical Details - Subjects: 295 - Recordings: 2885 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 210.79 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 185.0 GB - File count: 2885 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005514.v1.0.1 - Source: openneuro - OpenNeuro: [ds005514](https://openneuro.org/datasets/ds005514) - NeMAR: [ds005514](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) ## API Reference Use the `DS005514` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 * **Study:** `ds005514` (OpenNeuro) * **Author (year):** `Shirazi2024_R9` * **Canonical:** `HBN_r9` Also importable as: `DS005514`, `Shirazi2024_R9`, `HBN_r9`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005514](https://openneuro.org/datasets/ds005514) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005514](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005514 >>> dataset = DS005514(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005514) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005515: eeg dataset, 533 subjects *Healthy Brain Network (HBN) EEG - Release 10* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 10*. [10.18112/openneuro.ds005515.v1.0.1](https://doi.org/10.18112/openneuro.ds005515.v1.0.1) Modality: eeg Subjects: 533 Recordings: 2516 License: CC-BY-SA 4.0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005515 dataset = DS005515(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005515(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005515( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005515, title = {Healthy Brain Network (HBN) EEG - Release 10}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005515.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005515.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. 
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005515` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 10 | | Author (year) | `Shirazi2024_R10` | | Canonical | `HBN_r10` | | Importable as | `DS005515`, `Shirazi2024_R10`, `HBN_r10` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005515.v1.0.1](https://doi.org/10.18112/openneuro.ds005515.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005515) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) | [Source URL](https://openneuro.org/datasets/ds005515) | ### Copy-paste BibTeX ```bibtex @dataset{ds005515, title = {Healthy Brain Network (HBN) EEG - Release 10}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005515.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005515.v1.0.1}, } ``` ## Technical Details - Subjects: 533 - Recordings: 2516 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 183.12360166666667 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 160.5 GB - File count: 2516 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005515.v1.0.1 - Source: openneuro - OpenNeuro: [ds005515](https://openneuro.org/datasets/ds005515) - NeMAR: [ds005515](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) ## API Reference Use the `DS005515` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 * **Study:** `ds005515` (OpenNeuro) * **Author (year):** `Shirazi2024_R10` * **Canonical:** `HBN_r10` Also importable as: `DS005515`, `Shirazi2024_R10`, `HBN_r10`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005515](https://openneuro.org/datasets/ds005515) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005515](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) DOI: [https://doi.org/10.18112/openneuro.ds005515.v1.0.1](https://doi.org/10.18112/openneuro.ds005515.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005515 >>> dataset = DS005515(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005515) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005516: eeg dataset, 430 subjects *Healthy Brain Network (HBN) EEG - Release 11* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 11*. [10.18112/openneuro.ds005516.v1.0.1](https://doi.org/10.18112/openneuro.ds005516.v1.0.1) Modality: eeg Subjects: 430 Recordings: 3397 License: CC-BY-SA 4.0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005516 dataset = DS005516(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005516(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005516( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005516, title = {Healthy Brain Network (HBN) EEG - Release 11}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005516.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005516.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 11** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute's Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations and proper HED annotations, suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.
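Because the behavioral responses are merged into the per-recording BIDS `events.tsv` files, they can be inspected with ordinary tabular tools. A minimal stdlib sketch; the sample rows are made up for illustration, and the column names follow the general BIDS events convention rather than the exact HBN-EEG columns:

```python
# Parse a BIDS-style events.tsv and pull out behavioral response rows.
# The sample rows below are illustrative only, not real HBN-EEG data.
import csv
import io

sample_tsv = (
    "onset\tduration\ttrial_type\tresponse_time\n"
    "1.250\t0.0\tstimulus\tn/a\n"
    "2.480\t0.0\tresponse\t0.612\n"
    "5.100\t0.0\tstimulus\tn/a\n"
)

def load_events(tsv_text: str) -> list[dict]:
    """Return one dict per event row, keyed by column name."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

events = load_events(sample_tsv)
responses = [e for e in events if e["trial_type"] == "response"]
print(len(events), len(responses))  # 3 1
```

In practice one would pass the contents of a real `events.tsv` (opened from disk) instead of the inline sample string.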
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, that each task had its necessary events, and that the data was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `DS005516` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 11 | | Author (year) | `Shirazi2024_R11` | | Canonical | `HBN_r11` | | Importable as | `DS005516`, `Shirazi2024_R11`, `HBN_r11` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [doi:10.18112/openneuro.ds005516.v1.0.1](https://doi.org/10.18112/openneuro.ds005516.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005516) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) | [Source URL](https://openneuro.org/datasets/ds005516) | ### Copy-paste BibTeX ```bibtex @dataset{ds005516, title = {Healthy Brain Network (HBN) EEG - Release 11}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005516.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005516.v1.0.1}, } ``` ## Technical Details - Subjects: 430 - Recordings: 3397 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 250.4518483333333 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 219.2 GB - File count: 3397 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: doi:10.18112/openneuro.ds005516.v1.0.1 - Source: openneuro - OpenNeuro: [ds005516](https://openneuro.org/datasets/ds005516) - NeMAR: [ds005516](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) ## API Reference Use the `DS005516` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 * **Study:** `ds005516` (OpenNeuro) * **Author (year):** `Shirazi2024_R11` * **Canonical:** `HBN_r11` Also importable as: `DS005516`, `Shirazi2024_R11`, `HBN_r11`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 430; recordings: 3397; tasks: 8. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005516](https://openneuro.org/datasets/ds005516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005516](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) DOI: [https://doi.org/10.18112/openneuro.ds005516.v1.0.1](https://doi.org/10.18112/openneuro.ds005516.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005516 >>> dataset = DS005516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005516) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005520: eeg dataset, 23 subjects *Research data supporting ‘EEG recording during playing MOBA game’* Access recordings and metadata through EEGDash. **Citation:** Hong-Zhi Li, Jia-Jia Yang, Zhen Lv, Li-Yang Wan, Wo Wang, Da-Qi Li, Dong-Dong Zhou, Li Kuang (2024). *Research data supporting ‘EEG recording during playing MOBA game’*. [10.18112/openneuro.ds005520.v1.0.1](https://doi.org/10.18112/openneuro.ds005520.v1.0.1) Modality: eeg Subjects: 23 Recordings: 69 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005520 dataset = DS005520(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005520(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005520( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005520, title = {Research data supporting 'EEG recording during playing MOBA game'}, author = {Hong-Zhi Li and Jia-Jia Yang and Zhen Lv and Li-Yang Wan and Wo Wang and Da-Qi Li and Dong-Dong Zhou and Li Kuang}, doi = {10.18112/openneuro.ds005520.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005520.v1.0.1}, } ``` ## About This Dataset **General information** This dataset contains resting-state (eyes closed, eyes open) EEG and EEG recorded while 23 participants played a real MOBA game. **Dataset** **Presentation** > The data collection was initiated in April 2023 and terminated in July 2023. A detailed description of the dataset is being prepared by Hong-Zhi Li and Dong-Dong Zhou and will be submitted to Scientific Data for publication. **EEG acquisition** > * EEG system (Neuroscan, 64 electrodes) > * Sampling frequency: 1000 Hz **Event types** > * 13 indicates a kill during the game > * 14 indicates a death during the game > * 66 indicates game start > * 444 indicates game failure > * 666 indicates game victory **Contact** > * If you have any questions or comments, please contact: > * Dong-Dong Zhou: [zhoudongdong@cqmu.edu.cn](mailto:zhoudongdong@cqmu.edu.cn) ## Dataset Information | Dataset ID | `DS005520` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Research data supporting ‘EEG recording during playing MOBA game’ | | Author (year) | `Li2024_Research_supporting_playing` | | Canonical | — | | Importable as | `DS005520`, `Li2024_Research_supporting_playing` | | Year | 2024 | | Authors | Hong-Zhi Li, Jia-Jia Yang, Zhen Lv, Li-Yang Wan, Wo Wang, Da-Qi Li, Dong-Dong Zhou, Li Kuang | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005520.v1.0.1](https://doi.org/10.18112/openneuro.ds005520.v1.0.1) | | Source links |
[OpenNeuro](https://openneuro.org/datasets/ds005520) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) | [Source URL](https://openneuro.org/datasets/ds005520) | ### Copy-paste BibTeX ```bibtex @dataset{ds005520, title = {Research data supporting 'EEG recording during playing MOBA game'}, author = {Hong-Zhi Li and Jia-Jia Yang and Zhen Lv and Li-Yang Wan and Wo Wang and Da-Qi Li and Dong-Dong Zhou and Li Kuang}, doi = {10.18112/openneuro.ds005520.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005520.v1.0.1}, } ``` ## Technical Details - Subjects: 23 - Recordings: 69 - Tasks: 3 - Channels: 67 - Sampling rate (Hz): 1000.0 - Duration (hours): 48.870593611111104 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 43.9 GB - File count: 69 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005520.v1.0.1 - Source: openneuro - OpenNeuro: [ds005520](https://openneuro.org/datasets/ds005520) - NeMAR: [ds005520](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) ## API Reference Use the `DS005520` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Research data supporting ‘EEG recording during playing MOBA game’ * **Study:** `ds005520` (OpenNeuro) * **Author (year):** `Li2024_Research_supporting_playing` * **Canonical:** — Also importable as: `DS005520`, `Li2024_Research_supporting_playing`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 69; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005520](https://openneuro.org/datasets/ds005520) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005520](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) DOI: [https://doi.org/10.18112/openneuro.ds005520.v1.0.1](https://doi.org/10.18112/openneuro.ds005520.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005520 >>> dataset = DS005520(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005520) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005522: ieeg dataset, 55 subjects *Spatial Navigation Memory of Object Locations* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Spatial Navigation Memory of Object Locations*. [10.18112/openneuro.ds005522.v1.0.0](https://doi.org/10.18112/openneuro.ds005522.v1.0.0) Modality: ieeg Subjects: 55 Recordings: 176 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005522 dataset = DS005522(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005522(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005522( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005522, title = {Spatial Navigation Memory of Object Locations}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005522.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005522.v1.0.0}, } ``` ## About This Dataset **Spatial Navigation Memory of Object Locations** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a spatial navigation memory task. The experiment consists of participants encoding object locations during a guided navigation learning phase and then recalling the object locations during a self-navigation test phase. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. Each session contains 50 trials (2 practice and 48 experimental), and each overall “trial” contains 2 learning trials followed by 1 test trial with the same object at the same location. For learning trial 1, participants are placed at a random location at a given radius from the object. They are smoothly turned to face the object (1 s), automatically driven to the object location (3 s), and then paused at the object (1 s). 5 seconds later, participants are placed at a new random location and the process repeats for learning trial 2. On test trials, participants are placed at a random location and orientation, with the object invisible. They navigate to where they believe the object was located and press a button to record their response. The environment for all sessions and trials is 64.8 x 36, with coordinates: x = (-32.4, 32.4), y = (-18.0, 18.0). The trials are blocked by a counterbalanced scheme, so for every trial there is another trial with reflected object position, starting position, and orientation. 
Each block contains 2 trials (i.e., 2 x (2 learning, 1 test)), with object (X, Y) and starting locations (x, y): - **(X1, Y1)**: (x1’, y1’), (x1’’, y1’’), (x1’’’, y1’’’) - **(X2, Y2)**: (x2’, y2’), (x2’’, y2’’), (x2’’’, y2’’’) The paired block contains 2 trials in the opposite order with object and starting locations: - **(-X2, -Y2)**: (-x2’, -y2’), (-x2’’, -y2’’), (-x2’’’, -y2’’’) - **(-X1, -Y1)**: (-x1’, -y1’), (-x1’’, -y1’’), (-x1’’’, -y1’’’) **To Note** - The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. - Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. - Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). ## Dataset Information | Dataset ID | `DS005522` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Spatial Navigation Memory of Object Locations | | Author (year) | `Herrema2024_Spatial` | | Canonical | — | | Importable as | `DS005522`, `Herrema2024_Spatial` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J.
Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005522.v1.0.0](https://doi.org/10.18112/openneuro.ds005522.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005522) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) | [Source URL](https://openneuro.org/datasets/ds005522) | ### Copy-paste BibTeX ```bibtex @dataset{ds005522, title = {Spatial Navigation Memory of Object Locations}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005522.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005522.v1.0.0}, } ``` ## Technical Details - Subjects: 55 - Recordings: 176 - Tasks: 1 - Channels: 133 (8), 110 (8), 88 (7), 120 (7), 72 (6), 188 (6), 173 (6), 126 (6), 108 (5), 56 (5), 46 (4), 128 (4), 127 (4), 68 (4), 112 (4), 64 (4), 144 (3), 146 (3), 92 (3), 123 (3), 186 (3), 124 (3), 50 (3), 104 (3), 182 (3), 86 (3), 160 (2), 59 (2), 180 (2), 138 (2), 163 (2), 85 (2), 75 (2), 140 (2), 111 (2), 70 (2), 130 (2), 63 (2), 170 (2), 96 (2), 166 (2), 158 (2), 118 (2), 100 (2), 90, 54, 151, 105, 109, 94, 149, 172, 122, 174, 76, 78, 178, 84, 165, 125, 177, 169, 136, 80, 60, 116 - Sampling rate (Hz): 1000.0 (70), 500.0 (61), 1600.0 (26), 999.0 (13), 2000.0 (4), 1999.0 (2) - Duration (hours): 145.237675247194 - Pathology: Not specified - Modality: Visual - Type: Memory - Size on disk: 107.5 GB - File count: 176 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005522.v1.0.0 - Source: openneuro - OpenNeuro: [ds005522](https://openneuro.org/datasets/ds005522) - NeMAR: [ds005522](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) ## API Reference Use the `DS005522` class to access this dataset programmatically. 
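The recordings in this dataset span several native sampling rates (500, 999, 1000, 1600, and 2000 Hz), so pooled analyses usually bring everything to a common rate first. A small planning helper, purely illustrative and not part of EEGDash, that reports which native rates permit clean integer decimation to a target rate and which require fractional resampling:

```python
# For each native rate, decide between integer decimation and fractional
# resampling relative to a chosen target rate. Hypothetical helper, not
# an EEGDash API.
def decimation_plan(rates_hz, target_hz=500.0):
    plan = {}
    for rate in rates_hz:
        factor = rate / target_hz
        if factor.is_integer():
            plan[rate] = ("decimate", int(factor))
        else:
            plan[rate] = ("resample", factor)
    return plan

print(decimation_plan([500.0, 999.0, 1000.0, 1600.0, 2000.0]))
```

Here 500, 1000, and 2000 Hz decimate cleanly (factors 1, 2, 4), while 999 and 1600 Hz would need proper resampling (e.g. with MNE's `Raw.resample`).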
### *class* eegdash.dataset.DS005522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Navigation Memory of Object Locations * **Study:** `ds005522` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial` * **Canonical:** — Also importable as: `DS005522`, `Herrema2024_Spatial`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 55; recordings: 176; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
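The notes above state that user filters are AND-ed with the dataset selector and must not contain the key `dataset`. A minimal sketch of that merge contract, using a hypothetical `merge_query` helper that mimics the documented behaviour (it is not the actual EEGDash implementation):

```python
def merge_query(dataset_id, user_query=None):
    """Mimic the documented merge: AND user filters with the dataset selector."""
    user_query = user_query or {}
    if "dataset" in user_query:
        # The class docs forbid overriding the dataset selector.
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

# MongoDB-style user filter combined with the fixed dataset filter.
merged = merge_query("ds005522", {"subject": {"$in": ["01", "02"]}})
print(merged)
```

The real class performs this merge internally, which is why passing `query={"dataset": ...}` is rejected rather than silently overridden.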
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005522](https://openneuro.org/datasets/ds005522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005522](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) DOI: [https://doi.org/10.18112/openneuro.ds005522.v1.0.0](https://doi.org/10.18112/openneuro.ds005522.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005522 >>> dataset = DS005522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005522) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005523: ieeg dataset, 21 subjects *Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding*. 
[10.18112/openneuro.ds005523.v1.0.1](https://doi.org/10.18112/openneuro.ds005523.v1.0.1) Modality: ieeg Subjects: 21 Recordings: 102 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005523 dataset = DS005523(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005523(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005523( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005523, title = {Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005523.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005523.v1.0.1}, } ``` ## About This Dataset **Spatial Navigation Memory of Object Locations with Open-Loop Stimulation at Encoding** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a spatial navigation memory task with open-loop stimulation at encoding. The experiment consists of participants encoding object locations during a guided navigation learning phase and then recalling the object locations during a self-navigation test phase. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. This dataset is an open-loop stimulation version of the [YC1](https://openneuro.org/datasets/ds005522) dataset. 
Each session contains 50 trials (2 practice and 48 experimental), and each overall “trial” contains 2 learning trials followed by 1 test trial with the same object at the same location. For learning trial 1, participants are placed at a random location at a given radius from the object. They are smoothly turned to face the object (1 s), automatically driven to the object location (3 s), and then paused at the object (1 s). 5 seconds later, participants are placed at a new random location and the process repeats for learning trial 2. On test trials, participants are placed at a random location and orientation, with the object invisible. They navigate to where they believe the object was located and press a button to record their response. The environment for all sessions and trials is 64.8 x 36, with coordinates: x = (-32.4, 32.4), y = (-18.0, 18.0). This study contains open-loop electrical stimulation of the brain during encoding. There is no stimulation during the retrieval phase. Stimulation is delivered to a single electrode at a time, with locations chosen in the hippocampus and entorhinal cortex. Stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. Half of the (experimental) trials are assigned to the stimulation condition, and stimulation and no stimulation trials are alternated. On stimulation trials, stimulation occurs during both of the associated learning trials. Stimulation begins at the onset of turning towards the object’s location and lasts for the 5 seconds of the learning trial (1s turn + 3s drive + 1s pause). ### View full README **Spatial Navigation Memory of Object Locations with Open-Loop Stimulation at Encoding** **Description** This dataset contains behavioral events and intracranial electrophysiological recordings from a spatial navigation memory task with open-loop stimulation at encoding. 
The experiment consists of participants encoding object locations during a guided navigation learning phase and then recalling the object locations during a self-navigation test phase. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. This dataset is an open-loop stimulation version of the [YC1](https://openneuro.org/datasets/ds005522) dataset. Each session contains 50 trials (2 practice and 48 experimental), and each overall “trial” contains 2 learning trials followed by 1 test trial with the same object at the same location. For learning trial 1, participants are placed at a random location at a given radius from the object. They are smoothly turned to face the object (1 s), automatically driven to the object location (3 s), and then paused at the object (1 s). 5 seconds later, participants are placed at a new random location and the process repeats for learning trial 2. On test trials, participants are placed at a random location and orientation, with the object invisible. They navigate to where they believe the object was located and press a button to record their response. The environment for all sessions and trials is 64.8 x 36, with coordinates: x = (-32.4, 32.4), y = (-18.0, 18.0). This study contains open-loop electrical stimulation of the brain during encoding. There is no stimulation during the retrieval phase. Stimulation is delivered to a single electrode at a time, with locations chosen in the hippocampus and entorhinal cortex. Stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. Half of the (experimental) trials are assigned to the stimulation condition, and stimulation and no stimulation trials are alternated. On stimulation trials, stimulation occurs during both of the associated learning trials. 
Stimulation begins at the onset of turning towards the object’s location and lasts for the 5 seconds of the learning trial (1s turn + 3s drive + 1s pause). The trials are blocked by a counterbalanced scheme, so for every stimulated trial there is another non-stimulated trial with reflected object position, starting position, and orientation. This counterbalancing ensures stimulated and non-stimulated trials are difficulty matched. Each block contains 2 trials (i.e., 2 x (2 learning, 1 test)), with object (X, Y) and starting locations (x, y). Bold represents stimulation: - **(X1, Y1)** > - **(x1’, y1’)** > - **(x1’’, y1’’)** > - (x1’’’, y1’’’) - (X2, Y2) : - (x2’, y2’) - (x2’’, y2’’) - (x2’’’, y2’’’) The paired block contains 2 trials in the opposite order with object and starting locations: - **(-X2, -Y2)** > - **(-x2’, -y2’)** > - **(-x2’’, -y2’’)** > - (-x2’’’, -y2’’’) - (-X1, -Y1) : - (-x1’, -y1’) - (-x1’’, -y1’’) - (-x1’’’, -y1’’’) **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. \* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). 
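The counterbalancing scheme described above can be sketched in code: the paired block reverses the trial order and negates every (x, y) coordinate, so each stimulated trial has a reflected, difficulty-matched non-stimulated counterpart. The coordinate values below are hypothetical points inside the 64.8 x 36 environment (x in (-32.4, 32.4), y in (-18.0, 18.0)):

```python
def paired_block(block):
    """Reflect a counterbalanced block: reverse trial order, negate (x, y)."""
    return [(-x, -y) for (x, y) in reversed(block)]

# Object locations (X1, Y1) and (X2, Y2) for the two trials of one block.
block = [(10.5, -3.2), (-20.0, 7.5)]
print(paired_block(block))  # [(20.0, -7.5), (-10.5, 3.2)]
```

Note that the reflection keeps every location inside the symmetric environment bounds, which is what makes the stimulated and non-stimulated trials geometrically matched.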
## Dataset Information | Dataset ID | `DS005523` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding | | Author (year) | `Herrema2024_Spatial_Memory` | | Canonical | — | | Importable as | `DS005523`, `Herrema2024_Spatial_Memory` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005523.v1.0.1](https://doi.org/10.18112/openneuro.ds005523.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005523) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) | [Source URL](https://openneuro.org/datasets/ds005523) | ### Copy-paste BibTeX ```bibtex @dataset{ds005523, title = {Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005523.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005523.v1.0.1}, } ``` ## Technical Details - Subjects: 21 - Recordings: 102 - Tasks: 1 - Channels: 166 (7), 144 (7), 180 (7), 182 (7), 126 (6), 156 (5), 56 (5), 50 (5), 118 (4), 109 (4), 141 (4), 68 (3), 64 (3), 133 (3), 138 (2), 87 (2), 104 (2), 120 (2), 108 (2), 123 (2), 76 (2), 85 (2), 174 (2), 94 (2), 100 (2), 163, 110, 173, 124, 188, 88, 143, 92, 80, 112 - Sampling rate (Hz): 1000.0 (58), 1600.0 (22), 500.0 (18), 999.0 (2), 499.7071 (2) - Duration (hours): 84.3298929568347 - Pathology: Surgery - Modality: Visual - Type: Memory - Size on disk: 69.7 GB - File count: 102 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005523.v1.0.1 - Source: openneuro - OpenNeuro: [ds005523](https://openneuro.org/datasets/ds005523) - NeMAR: [ds005523](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) ## API Reference Use the `DS005523` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding * **Study:** `ds005523` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial_Memory` * **Canonical:** — Also importable as: `DS005523`, `Herrema2024_Spatial_Memory`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 21; recordings: 102; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005523](https://openneuro.org/datasets/ds005523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005523](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) DOI: [https://doi.org/10.18112/openneuro.ds005523.v1.0.1](https://doi.org/10.18112/openneuro.ds005523.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005523 >>> dataset = DS005523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005523) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005530: eeg dataset, 17 subjects *Depotentiation of emotional reactivity using TMR during REM sleep* Access recordings and metadata through EEGDash. **Citation:** Viviana Greco, Tamas A. Foldes, Neil A. Harrison, Kevin Murphy, Marta Wawrzuta, Mahmoud E. A. Abdellahi, Penelope A. Lewis (2024). *Depotentiation of emotional reactivity using TMR during REM sleep*. [10.18112/openneuro.ds005530.v1.0.9](https://doi.org/10.18112/openneuro.ds005530.v1.0.9) Modality: eeg Subjects: 17 Recordings: 21 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005530 dataset = DS005530(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005530(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005530( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005530, title = {Depotentiation of emotional reactivity using TMR during REM sleep}, author = {Viviana Greco and Tamas A. Foldes and Neil A. 
Harrison and Kevin Murphy and Marta Wawrzuta and Mahmoud E. A. Abdellahi and Penelope A. Lewis}, doi = {10.18112/openneuro.ds005530.v1.0.9}, url = {https://doi.org/10.18112/openneuro.ds005530.v1.0.9}, } ``` ## About This Dataset **Disarming emotional memories using Targeted Memory Reactivation during Rapid Eye Movement sleep** This dataset contains fMRI and EEG data from a study investigating the effects of Targeted Memory Reactivation (TMR) during REM sleep on emotional reactivity, as well as behavioural data and ECG collected during behavioural tasks. **Study Design** Participants rated the arousal of 48 affective images paired with semantically matching sounds. Heart rate deceleration was used as a measure of their autonomic arousal. Half of these sounds were cued during REM in the subsequent overnight sleep cycle. Participants rated the images in an MRI scanner with pulse oximetry 48 hours after encoding, and they completed an online follow-up two weeks later. **Sessions** ### View full README **Disarming emotional memories using Targeted Memory Reactivation during Rapid Eye Movement sleep** This dataset contains fMRI and EEG data from a study investigating the effects of Targeted Memory Reactivation (TMR) during REM sleep on emotional reactivity, as well as behavioural data and ECG collected during behavioural tasks. **Study Design** Participants rated the arousal of 48 affective images paired with semantically matching sounds. Heart rate deceleration was used as a measure of their autonomic arousal. Half of these sounds were cued during REM in the subsequent overnight sleep cycle. Participants rated the images in an MRI scanner with pulse oximetry 48 hours after encoding, and they completed an online follow-up two weeks later. **Sessions** 1. Baseline: Initial arousal ratings as well as overnight sleep with TMR 2. Session 48-H: fMRI scanning, pulse oximetry and arousal ratings (48 hours after baseline) 3. 
Session 2-Wk: Online follow-up (2 weeks after baseline) **Data Acquisition** - **fMRI**: Acquired using a Siemens Magnetom Prisma 3T scanner with a 32-channel head coil - **Heart Rate**: Recorded using BrainVision BrainAmp ExG with ExG AUX box and multitrodes during the behavioural session and pulse oximetry during the fMRI session - **Polysomnography**: Recorded using ten electrodes including 6 EEG channels (F3, F4, C3, C4, O1 and O2), 2 EMG channels and 2 EOG channels. All channels were live referenced to the average of left and right mastoids. **Dataset Contents** This initial upload contains: - T1-weighted structural images - Functional MRI data from Session 48-H - B0 field maps - Behavioural data from all sessions **Preprocessing** fMRI data were preprocessed using fMRIPrep 20.2.7. Details of the preprocessing pipeline can be found in the methods section of the associated publication. T1-weighted structural scans were defaced using pydeface version 2.0.2 to ensure participant anonymity. Within the behavioral data, the baseline ratings were centered within each participant. This was achieved by subtracting each participant’s mean baseline rating from the item-specific ratings they gave to the stimuli. **Additional Information** For more detailed information about the study design, methods, and results, please refer to the associated publication (citation to be added upon publication). This dataset was initially converted to BIDS format using ezBIDS ([https://brainlife.io/ezbids](https://brainlife.io/ezbids)). 
**Contact** For questions about the MRI dataset, please contact: Dr Tamas Foldes [foldesta@cardiff.ac.uk](mailto:foldesta@cardiff.ac.uk) ## Dataset Information | Dataset ID | `DS005530` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Depotentiation of emotional reactivity using TMR during REM sleep | | Author (year) | `Greco2024` | | Canonical | — | | Importable as | `DS005530`, `Greco2024` | | Year | 2024 | | Authors | Viviana Greco, Tamas A. Foldes, Neil A. Harrison, Kevin Murphy, Marta Wawrzuta, Mahmoud E. A. Abdellahi, Penelope A. Lewis | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005530.v1.0.9](https://doi.org/10.18112/openneuro.ds005530.v1.0.9) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005530) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) | [Source URL](https://openneuro.org/datasets/ds005530) | ### Copy-paste BibTeX ```bibtex @dataset{ds005530, title = {Depotentiation of emotional reactivity using TMR during REM sleep}, author = {Viviana Greco and Tamas A. Foldes and Neil A. Harrison and Kevin Murphy and Marta Wawrzuta and Mahmoud E. A. Abdellahi and Penelope A. Lewis}, doi = {10.18112/openneuro.ds005530.v1.0.9}, url = {https://doi.org/10.18112/openneuro.ds005530.v1.0.9}, } ``` ## Technical Details - Subjects: 17 - Recordings: 21 - Tasks: 1 - Channels: 10 - Sampling rate (Hz): 500.0 - Duration (hours): 144.7749161111111 - Pathology: Healthy - Modality: Multisensory - Type: Sleep - Size on disk: 6.5 GB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005530.v1.0.9 - Source: openneuro - OpenNeuro: [ds005530](https://openneuro.org/datasets/ds005530) - NeMAR: [ds005530](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) ## API Reference Use the `DS005530` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005530(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Depotentiation of emotional reactivity using TMR during REM sleep * **Study:** `ds005530` (OpenNeuro) * **Author (year):** `Greco2024` * **Canonical:** — Also importable as: `DS005530`, `Greco2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 17; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
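As the README above notes, the polysomnography channels in this dataset were live referenced to the average of the left and right mastoids. A minimal sketch of the equivalent offline linked-mastoid re-reference on a single channel (in practice one would use MNE-Python's `mne.set_eeg_reference` on the loaded `raw`; the sample values below are made up):

```python
def linked_mastoid_reference(channel, m1, m2):
    """Subtract the mastoid average (M1 + M2) / 2 from a channel, sample by sample."""
    return [s - (a + b) / 2.0 for s, a, b in zip(channel, m1, m2)]

# Hypothetical sample values (arbitrary units) for F3 and the two mastoids.
f3 = [1.0, 2.0, 3.0]
m1 = [0.5, 0.5, 1.0]
m2 = [1.5, 0.5, 1.0]
print(linked_mastoid_reference(f3, m1, m2))  # [0.0, 1.5, 2.0]
```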
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005530](https://openneuro.org/datasets/ds005530) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005530](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) DOI: [https://doi.org/10.18112/openneuro.ds005530.v1.0.9](https://doi.org/10.18112/openneuro.ds005530.v1.0.9) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005530 >>> dataset = DS005530(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005530) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005540: eeg dataset, 59 subjects *EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding* Access recordings and metadata through EEGDash. **Citation:** Xin XU, Xinke SHEN, Xuyang CHEN, Qingzhu ZHANG, Sitian WANG, Yihan LI, Zongsheng LI, Dan ZHANG, Mingming ZHANG, Quanying LIU (2024). *EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding*. 
[10.18112/openneuro.ds005540.v1.0.7](https://doi.org/10.18112/openneuro.ds005540.v1.0.7) Modality: eeg Subjects: 59 Recordings: 103 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005540 dataset = DS005540(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005540(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005540( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005540, title = {EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding}, author = {Xin XU and Xinke SHEN and Xuyang CHEN and Qingzhu ZHANG and Sitian WANG and Yihan LI and Zongsheng LI and Dan ZHANG and Mingming ZHANG and Quanying LIU}, doi = {10.18112/openneuro.ds005540.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds005540.v1.0.7}, } ``` ## About This Dataset **EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding** **Authors** Xin XU[^1,†], Xinke SHEN[^1,†,\*], Xuyang CHEN[^1], Qingzhu ZHANG[^1], Sitian WANG[^1], Yihan LI[^1], Zongsheng LI[^1,^2], Dan ZHANG[^3], Mingming ZHANG[^1], Quanying LIU[^1,\*] ### View full README **EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding** **Authors** Xin XU[^1,†], Xinke SHEN[^1,†,\*], Xuyang CHEN[^1], Qingzhu ZHANG[^1], Sitian WANG[^1], Yihan LI[^1], Zongsheng LI[^1,^2], Dan ZHANG[^3], Mingming ZHANG[^1], Quanying LIU[^1,\*] *Corresponding authors:* Quanying LIU 
([liuqy@sustech.edu.cn](mailto:liuqy@sustech.edu.cn)); Xinke SHEN ([shenxk@sustech.edu.cn](mailto:shenxk@sustech.edu.cn)) **† These authors contributed equally to this work.** **Abstract** Decoding emotions using electroencephalography (EEG) is gaining increasing attention due to its objectivity in measuring emotional states. However, the ability of existing EEG-based emotion decoding methods to generalize across different contexts remains underexplored, as most approaches are trained and evaluated only within a single context. Studying emotions across multiple contexts is essential for advancing our understanding of the neural mechanisms underlying emotional processing and enhancing the real-world applicability of affective computing systems. A key limitation in this field is the lack of EEG datasets designed specifically to capture emotional responses across diverse contexts. To address this gap, we present the **Multi-Context Emotional EEG (EmoEEG-MC) dataset**, featuring 64-channel EEG and peripheral physiological data from 60 participants exposed to two distinct contexts: video-induced and imagery-induced emotions. These contexts evoke seven distinct emotional categories: joy, inspiration, tenderness, fear, disgust, sadness, and neutral emotion. The emotional experience of a specific emotion category was validated through subjective reports. Using Support Vector Machines (SVMs) with L1 regularization, we achieved cross-context emotion decoding accuracies of 66.7% for binary classification (positive vs. negative emotions) and 28.9% for seven-category emotion classification, both significantly above chance levels. The **EmoEEG-MC dataset** serves as a foundational resource for advancing cross-context emotion recognition and enhancing the real-world application of emotion decoding methods. **Dataset Description** The dataset includes EEG data from 60 participants, along with peripheral physiological data (PPG and GSR) for some participants. 
Among the 60 participants, **sub01-sub54** have complete trials (21 imagery trials and 21 video trials), while **sub55-sub60** have missing trials. The details of the missing trials are as follows: - **sub55**: Missing 3 imagery trials (Trials 19-21) and 3 video trials (Trials 40-42). - **sub56**: Missing 2 imagery trials (Trials 20 and 21). - **sub57**: Missing 4 imagery trials (Trials 6, 8, 13, and 21) and 6 video trials (Trials 23, 24, 36, 37, 38, and 42). - **sub58**: Missing 3 imagery trials (Trials 9, 20, and 21). - **sub59**: Missing 6 imagery trials (Trials 2, 4, 6, 12, 19, and 21) and 4 video trials (Trials 29, 37, 39, and 42). - **sub60**: Missing 14 imagery trials (Trials 8-21) and 12 video trials (Trials 31-42). All missing values are denoted as **n/a** in the participants’ behavioral data. **Experimental Trial Reordering and Missing Trial Information** **Trial Reordering** After reordering, the sequence for both imagery and video trials is as follows: `reorder = ['sad4', 'sad5', 'sad8', 'dis4', 'dis5', 'dis8', 'fear4', 'fear5', 'fear8', 'neu4', 'neu5', 'neu8', 'joy4', 'joy5', 'joy8', 'ten4', 'ten5', 'ten8', 'ins4', 'ins5', 'ins8']` **Full Trial Participants** For participants with complete trials (**sub01-sub54**, with the same order for both imagery and video trials; detailed stimulus information can be found in `sub-xx/sub-xx_events`), the experimental sequence is as follows: 1. `['joy5', 'ins5', 'joy8', 'fear8', 'sad8', 'dis5', 'neu4', 'neu5', 'neu8', 'ten5', 'ten8', 'joy4', 'dis4', 'fear4', 'sad4', 'ins8', 'ins4', 'ten4', 'dis8', 'fear5', 'sad5']` 2. `['fear8', 'fear5', 'dis4', 'ins8', 'joy8', 'ins4', 'neu4', 'neu8', 'neu5', 'sad4', 'dis8', 'fear4', 'ten5', 'ten8', 'joy4', 'dis5', 'sad8', 'sad5', 'joy5', 'ten4', 'ins5']` 3. `['ten4', 'joy4', 'joy8', 'neu4', 'neu8', 'neu5', 'dis5', 'fear4', 'fear5', 'ten8', 'ten5', 'ins5', 'fear8', 'dis4', 'dis8', 'ins8', 'joy5', 'ins4', 'sad4', 'sad5', 'sad8']` 4. 
`['fear5', 'dis8', 'dis5', 'joy4', 'ten5', 'ins5', 'neu4', 'neu8', 'neu5', 'sad8', 'fear8', 'sad4', 'ins4', 'ins8', 'joy8', 'fear4', 'sad5', 'dis4', 'ten4', 'joy5', 'ten8']` 5. `['joy8', 'ten4', 'ins5', 'fear5', 'sad5', 'dis4', 'neu4', 'neu5', 'neu8', 'joy4', 'ten8', 'joy5', 'sad4', 'dis8', 'fear8', 'ins4', 'ten5', 'ins8', 'sad8', 'dis5', 'fear4']` 6. `['joy8', 'ins5', 'ins8', 'dis4', 'dis8', 'fear8', 'ten4', 'joy5', 'ten5', 'dis5', 'fear5', 'fear4', 'ten8', 'ins4', 'joy4', 'sad8', 'sad4', 'sad5', 'neu4', 'neu5', 'neu8']` 7. `['joy8', 'ten8', 'joy4', 'fear4', 'sad5', 'dis5', 'ins5', 'ten5', 'ten4', 'dis4', 'sad8', 'dis8', 'ins4', 'ins8', 'joy5', 'sad4', 'fear8', 'fear5', 'neu4', 'neu5', 'neu8']` 8. `['neu4', 'neu5', 'neu8', 'dis8', 'sad4', 'fear5', 'ins4', 'ins5', 'ten5', 'dis4', 'sad8', 'fear4', 'ins8', 'joy4', 'ten8', 'fear8', 'dis5', 'sad5', 'ten4', 'joy8', 'joy5']` 9. `['sad5', 'fear4', 'fear8', 'joy4', 'joy8', 'ten5', 'dis8', 'dis5', 'sad4', 'neu4', 'neu8', 'neu5', 'ins8', 'ten8', 'ins4', 'sad8', 'fear5', 'dis4', 'joy5', 'ten4', 'ins5']` 10. `['sad4', 'fear5', 'sad8', 'joy8', 'ten8', 'joy4', 'sad5', 'dis8', 'fear4', 'neu4', 'neu8', 'neu5', 'ten4', 'ten5', 'ins4', 'dis4', 'fear8', 'dis5', 'joy5', 'ins5', 'ins8']` 11. `['joy4', 'ins4', 'joy5', 'fear8', 'dis8', 'sad4', 'ten8', 'ins5', 'ten5', 'sad5', 'sad8', 'fear5', 'ins8', 'ten4', 'joy8', 'neu8', 'neu4', 'neu5', 'fear4', 'dis4', 'dis5']` 12. `['sad8', 'fear5', 'fear8', 'ten8', 'ten5', 'joy8', 'fear4', 'sad4', 'sad5', 'neu4', 'neu8', 'neu5', 'ins8', 'ins4', 'ten4', 'dis5', 'dis8', 'dis4', 'joy5', 'joy4', 'ins5']` 13. `['sad8', 'dis8', 'sad4', 'ten4', 'ten8', 'ins4', 'dis5', 'fear8', 'sad5', 'ten5', 'ins5', 'joy8', 'neu4', 'neu8', 'neu5', 'fear5', 'fear4', 'dis4', 'joy4', 'joy5', 'ins8']` 14. `['ins8', 'ten4', 'ins5', 'neu4', 'neu8', 'neu5', 'sad5', 'dis4', 'sad4', 'ins4', 'ten8', 'ten5', 'dis8', 'sad8', 'fear8', 'joy5', 'joy4', 'joy8', 'fear4', 'fear5', 'dis5']` 15. 
`['ins8', 'ten5', 'ten8', 'sad8', 'sad4', 'sad5', 'joy4', 'ins4', 'ins5', 'fear8', 'fear5', 'fear4', 'ten4', 'joy5', 'joy8', 'neu5', 'neu4', 'neu8', 'dis4', 'dis5', 'dis8']` 16. `['fear4', 'dis4', 'fear8', 'ins8', 'joy8', 'ten8', 'dis5', 'sad4', 'dis8', 'ins5', 'ins4', 'joy4', 'neu8', 'neu4', 'neu5', 'fear5', 'sad8', 'sad5', 'joy5', 'ten5', 'ten4']` 17. `['ten5', 'ins4', 'ins8', 'dis8', 'fear4', 'sad5', 'ins5', 'joy8', 'ten4', 'sad8', 'fear8', 'fear5', 'ten8', 'joy5', 'joy4', 'sad4', 'dis5', 'dis4', 'neu5', 'neu4', 'neu8']` 18. `['neu4', 'neu5', 'neu8', 'sad4', 'dis8', 'dis5', 'joy4', 'ten4', 'ten5', 'sad5', 'fear5', 'fear4', 'ins5', 'ins4', 'ten8', 'dis4', 'fear8', 'sad8', 'joy8', 'ins8', 'joy5']` 19. `['joy5', 'ten8', 'ins4', 'fear4', 'dis8', 'sad4', 'ten5', 'joy8', 'joy4', 'sad8', 'dis5', 'fear8', 'neu8', 'neu4', 'neu5', 'ins5', 'ten4', 'ins8', 'fear5', 'dis4', 'sad5']` 20. `['joy5', 'ins8', 'joy4', 'neu4', 'neu5', 'neu8', 'fear4', 'sad4', 'fear8', 'ins5', 'ten4', 'ten5', 'dis4', 'sad8', 'sad5', 'ten8', 'ins4', 'joy8', 'dis5', 'fear5', 'dis8']` 21. `['ten8', 'joy4', 'ins5', 'sad4', 'dis4', 'fear8', 'ins8', 'joy8', 'ins4', 'neu8', 'neu4', 'neu5', 'sad5', 'sad8', 'fear5', 'ten5', 'joy5', 'ten4', 'fear4', 'dis5', 'dis8']` 22. `['joy5', 'ten8', 'ten4', 'dis4', 'fear4', 'fear5', 'joy8', 'ten5', 'joy4', 'sad5', 'sad8', 'dis8', 'neu5', 'neu8', 'neu4', 'ins4', 'ins5', 'ins8', 'fear8', 'sad4', 'dis5']` 23. `['neu4', 'neu5', 'neu8', 'dis4', 'fear4', 'sad8', 'ins8', 'joy4', 'ten8', 'fear8', 'fear5', 'sad5', 'ten4', 'ins5', 'joy8', 'dis8', 'sad4', 'dis5', 'ten5', 'joy5', 'ins4']` 24. `['joy5', 'ten5', 'ins4', 'fear4', 'sad8', 'sad4', 'ins5', 'ten4', 'ten8', 'sad5', 'fear5', 'fear8', 'ins8', 'joy8', 'joy4', 'dis8', 'dis5', 'dis4', 'neu8', 'neu4', 'neu5']` 25. `['dis8', 'dis5', 'sad4', 'ins8', 'ten4', 'joy8', 'sad8', 'fear4', 'fear8', 'joy5', 'ins4', 'ten8', 'dis4', 'fear5', 'sad5', 'neu8', 'neu5', 'neu4', 'joy4', 'ins5', 'ten5']` 26. 
`['fear4', 'sad5', 'fear8', 'ten4', 'ins5', 'joy8', 'dis4', 'dis8', 'sad8', 'ins4', 'joy5', 'joy4', 'dis5', 'sad4', 'fear5', 'ins8', 'ten8', 'ten5', 'neu5', 'neu8', 'neu4']` 27. `['dis4', 'dis5', 'fear4', 'ins8', 'ins4', 'joy5', 'sad8', 'fear8', 'sad5', 'ins5', 'joy4', 'ten8', 'neu4', 'neu8', 'neu5', 'fear5', 'sad4', 'dis8', 'ten4', 'ten5', 'joy8']` 28. `['ten4', 'ins5', 'joy4', 'dis5', 'sad5', 'fear4', 'ins8', 'joy8', 'ins4', 'fear5', 'fear8', 'dis8', 'neu5', 'neu8', 'neu4', 'ten8', 'joy5', 'ten5', 'sad4', 'dis4', 'sad8']` 29. `['joy5', 'ten5', 'ins5', 'neu8', 'neu4', 'neu5', 'fear5', 'sad8', 'sad5', 'joy8', 'ten8', 'joy4', 'fear8', 'fear4', 'dis4', 'ten4', 'ins8', 'ins4', 'dis8', 'dis5', 'sad4']` 30. `['sad8', 'dis8', 'dis5', 'joy5', 'ten4', 'joy4', 'sad5', 'fear5', 'fear8', 'ten8', 'ins8', 'ins4', 'sad4', 'fear4', 'dis4', 'joy8', 'ins5', 'ten5', 'neu5', 'neu8', 'neu4']` 31. `['dis4', 'dis8', 'sad4', 'neu5', 'neu4', 'neu8', 'joy5', 'ins8', 'ins4', 'fear4', 'fear8', 'sad8', 'ins5', 'ten8', 'joy4', 'sad5', 'dis5', 'fear5', 'ten4', 'joy8', 'ten5']` 32. `['joy5', 'joy4', 'ten4', 'sad5', 'fear5', 'fear4', 'ins5', 'ten8', 'ins8', 'dis8', 'dis5', 'sad8', 'ten5', 'ins4', 'joy8', 'sad4', 'fear8', 'dis4', 'neu5', 'neu8', 'neu4']` 33. `['sad5', 'dis8', 'dis5', 'ins5', 'ten5', 'ten4', 'dis4', 'fear4', 'fear5', 'ten8', 'ins8', 'joy4', 'neu5', 'neu4', 'neu8', 'fear8', 'sad4', 'sad8', 'joy5', 'joy8', 'ins4']` 34. `['ten5', 'ins5', 'joy4', 'sad4', 'fear5', 'fear4', 'ten8', 'joy8', 'ins8', 'dis8', 'sad5', 'dis5', 'joy5', 'ten4', 'ins4', 'dis4', 'fear8', 'sad8', 'neu4', 'neu8', 'neu5']` 35. `['sad4', 'fear8', 'dis4', 'ins4', 'ins8', 'joy4', 'neu8', 'neu5', 'neu4', 'sad8', 'fear4', 'dis5', 'ten4', 'ten5', 'ten8', 'sad5', 'dis8', 'fear5', 'joy8', 'ins5', 'joy5']` 36. `['joy5', 'joy4', 'joy8', 'dis4', 'dis8', 'fear5', 'neu5', 'neu8', 'neu4', 'ins4', 'ten5', 'ten4', 'dis5', 'sad5', 'fear4', 'ten8', 'ins8', 'ins5', 'sad4', 'sad8', 'fear8']` 37. 
`['fear4', 'dis5', 'sad5', 'neu5', 'neu4', 'neu8', 'ins8', 'joy8', 'ten5', 'fear5', 'sad4', 'fear8', 'ins4', 'joy4', 'ten8', 'dis4', 'dis8', 'sad8', 'joy5', 'ins5', 'ten4']` 38. `['joy8', 'ten8', 'ins8', 'fear8', 'sad4', 'fear5', 'ten4', 'ten5', 'joy5', 'sad8', 'dis4', 'fear4', 'neu4', 'neu5', 'neu8', 'ins5', 'ins4', 'joy4', 'sad5', 'dis8', 'dis5']` 39. `['ins4', 'ten8', 'joy4', 'neu5', 'neu8', 'neu4', 'dis8', 'fear4', 'sad8', 'ins5', 'joy8', 'ten4', 'dis5', 'dis4', 'fear5', 'ins8', 'ten5', 'joy5', 'fear8', 'sad5', 'sad4']` 40. `['ins4', 'ten4', 'ins5', 'sad5', 'dis5', 'fear4', 'neu5', 'neu8', 'neu4', 'ten5', 'ins8', 'joy4', 'sad8', 'fear5', 'sad4', 'ten8', 'joy5', 'joy8', 'dis8', 'dis4', 'fear8']` 41. `['ins5', 'ten8', 'ins4', 'dis8', 'sad4', 'dis5', 'joy8', 'ten5', 'ins8', 'neu8', 'neu4', 'neu5', 'fear8', 'dis4', 'fear5', 'joy4', 'joy5', 'ten4', 'sad5', 'sad8', 'fear4']` 42. `['ten8', 'ten4', 'joy8', 'dis8', 'sad5', 'sad4', 'joy5', 'ins8', 'ins4', 'neu4', 'neu5', 'neu8', 'fear4', 'dis4', 'fear5', 'ins5', 'ten5', 'joy4', 'dis5', 'fear8', 'sad8']` 43. `['ins5', 'ten5', 'ins4', 'neu5', 'neu8', 'neu4', 'sad4', 'dis4', 'sad5', 'ins8', 'joy8', 'joy4', 'fear8', 'fear4', 'dis8', 'ten8', 'ten4', 'joy5', 'dis5', 'sad8', 'fear5']` 44. `['sad8', 'dis5', 'dis4', 'joy5', 'ins5', 'joy8', 'sad5', 'sad4', 'fear5', 'ten4', 'ten8', 'ins4', 'neu8', 'neu5', 'neu4', 'dis8', 'fear8', 'fear4', 'joy4', 'ten5', 'ins8']` 45. `['ins5', 'joy8', 'ins8', 'fear8', 'fear5', 'sad5', 'joy5', 'ten8', 'ten5', 'neu5', 'neu4', 'neu8', 'dis5', 'dis8', 'sad4', 'ins4', 'ten4', 'joy4', 'sad8', 'dis4', 'fear4']` 46. `['fear5', 'dis5', 'dis8', 'ins5', 'ten5', 'ten8', 'neu8', 'neu4', 'neu5', 'fear8', 'dis4', 'sad4', 'ten4', 'ins8', 'ins4', 'sad5', 'sad8', 'fear4', 'joy8', 'joy4', 'joy5']` 47. `['ins4', 'joy5', 'joy8', 'sad5', 'fear5', 'dis8', 'neu8', 'neu4', 'neu5', 'ins8', 'ten4', 'joy4', 'fear8', 'dis5', 'sad8', 'ins5', 'ten8', 'ten5', 'sad4', 'dis4', 'fear4']` 48. 
`['joy5', 'ins8', 'ins5', 'dis8', 'dis5', 'fear5', 'ten4', 'ins4', 'joy8', 'dis4', 'fear4', 'sad5', 'ten8', 'ten5', 'joy4', 'fear8', 'sad8', 'sad4', 'neu8', 'neu4', 'neu5']` 49. `['dis4', 'sad5', 'sad4', 'neu4', 'neu8', 'neu5', 'joy4', 'ten5', 'ten8', 'dis8', 'fear8', 'dis5', 'ins4', 'joy8', 'ten4', 'fear4', 'sad8', 'fear5', 'ins8', 'ins5', 'joy5']` 50. `['ten5', 'ins8', 'ins4', 'neu4', 'neu8', 'neu5', 'fear4', 'fear8', 'dis4', 'joy4', 'ten4', 'ins5', 'fear5', 'sad5', 'dis8', 'ten8', 'joy8', 'joy5', 'sad4', 'sad8', 'dis5']` 51. `['ten8', 'joy8', 'ten5', 'dis8', 'fear5', 'dis4', 'joy5', 'ten4', 'ins4', 'fear4', 'sad4', 'dis5', 'neu8', 'neu4', 'neu5', 'ins8', 'ins5', 'joy4', 'sad5', 'fear8', 'sad8']` 52. `['joy4', 'joy5', 'ins8', 'fear5', 'dis5', 'dis8', 'neu5', 'neu4', 'neu8', 'joy8', 'ins5', 'ten5', 'sad5', 'fear4', 'dis4', 'ten4', 'ten8', 'ins4', 'sad8', 'fear8', 'sad4']` 53. `['neu8', 'neu4', 'neu5', 'dis5', 'sad4', 'fear4', 'joy5', 'ins4', 'ten4', 'fear8', 'sad5', 'sad8', 'ten8', 'joy4', 'ten5', 'fear5', 'dis4', 'dis8', 'joy8', 'ins5', 'ins8']` - **sub54 Imagery sequence**: > `['ten8', 'ten5', 'ten4', 'fear8', 'fear5', 'fear4', 'dis8', 'dis5', 'dis4', 'joy8', 'joy5', 'joy4', 'sad8', 'sad5', 'sad4', 'neu8', 'neu5', 'neu4', 'ins8', 'ins5', 'ins4']` - **sub54 Video sequence**: `['joy8', 'joy5', 'joy4', 'sad8', 'sad5', 'sad4', 'dis8', 'dis5', 'dis4', 'ins8', 'ins5', 'ins4', 'fear8', 'fear5', 'fear4', 'neu8', 'neu5', 'neu4', 'ten8', 'ten5', 'ten4']` **Participants with Missing Trials** For participants with missing trials (**sub55-sub60**), the experimental sequences differ slightly: - **sub55**: The sequence for imagery and video trials is: > `['dis5', 'sad4', 'fear8', 'joy4', 'joy5', 'ten8', 'fear5', 'sad8', 'sad5', 'joy8', 'ten5', 'ins8', 'dis8', 'dis4', 'fear4', 'ins5', 'ten4', 'ins4']` - **sub56**: - Imagery sequence: > `['joy8', 'joy5', 'ins4', 'sad4', 'fear5', 'dis8', 'neu4', 'neu8', 'neu5', 'ten8', 'joy4', 'ins5', 'fear4', 'dis5', 'sad8', 'ins8', 'ten5', 
'ten4', 'sad5']`
  - Video sequence: `['joy8', 'joy5', 'ins4', 'sad4', 'fear5', 'dis8', 'neu4', 'neu8', 'neu5', 'ten8', 'joy4', 'ins5', 'fear4', 'dis5', 'sad8', 'ins8', 'ten5', 'ten4', 'sad5', 'dis4', 'fear8']`
- **sub57**:
  - Imagery sequence: `['neu8', 'neu5', 'neu4', 'ins4', 'joy5', 'sad5', 'sad8', 'ins8', 'joy4', 'ten8', 'dis5', 'fear8', 'joy8', 'ins5', 'ten5', 'fear4', 'fear5']`
  - Video sequence: `['neu8', 'ins4', 'joy5', 'ten4', 'sad5', 'sad4', 'sad8', 'ins8', 'joy4', 'ten8', 'dis8', 'dis5', 'ten5', 'fear4', 'fear5']`
- **sub58**:
  - Imagery sequence: `['sad5', 'fear5', 'sad8', 'ins8', 'joy5', 'joy4', 'sad4', 'dis8', 'neu5', 'neu8', 'neu4', 'ten8', 'joy8', 'ten4', 'fear4', 'fear8', 'dis5', 'ins5']`
  - Video sequence: `['sad5', 'fear5', 'sad8', 'ins8', 'joy5', 'joy4', 'sad4', 'dis8', 'dis4', 'neu5', 'neu8', 'neu4', 'ten8', 'joy8', 'ten4', 'fear4', 'fear8', 'dis5', 'ins5', 'ten5', 'ins4']`
- **sub59**:
  - Imagery sequence: `['dis5', 'fear4', 'ins8', 'fear8', 'dis8', 'fear5', 'neu4', 'neu8', 'joy4', 'ten8', 'ten4', 'dis4', 'sad5', 'sad4', 'joy5']`
  - Video sequence: `['dis5', 'sad8', 'fear4', 'joy8', 'ins8', 'ins5', 'fear8', 'fear5', 'neu4', 'neu8', 'neu5', 'joy4', 'ten8', 'ten4', 'sad5', 'ten5', 'joy5']`
- **sub60**:
  - Imagery sequence: `['neu5', 'neu4', 'neu8', 'dis5', 'sad4', 'dis4', 'ten4']`
  - Video sequence: `['neu5', 'neu4', 'neu8', 'dis5', 'sad4', 'dis4', 'ten4', 'ins4', 'ten5']`

**Participants' Behaviour Reports** The ten behavioral rating items for the participants are as follows:

- Joy
- Inspiration
- Tenderness
- Sadness
- Fear
- Disgust
- Arousal
- Valence
- Familiarity
- Liking

**Channels** The EEG channels follow the 10-20 system with 64 channels, in the following order:

`Fp1, Fpz, Fp2, AF7, AF3, AF4, AF8, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO3, POz, PO4, PO8, O1, Oz, O2, F9, F10, TP9, TP10`

The order of the 64 channels mentioned in subsequent files follows the same order as listed above.

**Preprocess Procedure** The EEG preprocessing procedures were as follows: First, the data were filtered to 0.1-47 Hz, downsampled to 200 Hz, and then segmented into trials. For imagery trials, we used the 30 seconds before the button press (or 30 seconds before the start of the rating if no button was pressed) for further analysis; for video trials, we selected the last 30 seconds of the video clip presentation for further analysis (Shen et al., 2023; Hu et al., 2017). Next, we inspected bad channels based on two criteria. First, channels containing more than 30% outliers were flagged, where outliers are defined as absolute values exceeding three times the trial's median absolute value (de Cheveigné & Arzounian, 2018). Second, we identified channels with abnormal variance by plotting the variance for each channel across trials to detect significant variance jumps. Suspected bad channels were further verified through visual inspection of the EEG signals and were subsequently interpolated using the average of three neighboring channels. Then we performed Independent Component Analysis (ICA) and manually removed components derived from eye movements and muscle artifacts. Finally, common average referencing and trial reordering were applied. As the order of material presentation was randomized across subjects, reordering of the trials ensured that the order of EEG data was the same for all subjects to facilitate subsequent analysis. Our dataset also provides several commonly used EEG features, including differential entropy (DE) and power spectral density (PSD) features.
DE and PSD features were extracted from the preprocessed data within each non-overlapping second at five frequency bands (delta band: 1-4 Hz, theta band: 4-8 Hz, alpha band: 8-14 Hz, beta band: 14-30 Hz, and gamma band: 30-47 Hz). The formula to calculate DE and PSD followed the practice in the SEED dataset:

$$
\begin{gathered}
PSD = E\left[x^2\right] \\
DE = \frac{1}{2} \ln\left(2 \pi e \sigma^2\right)
\end{gathered}
$$

where $x$ is the EEG signal filtered into a frequency band and $\sigma^2$ is the variance of the EEG signal.

**Guide for labels**

- **Using Preprocessed Data** If you prefer to work with preprocessed data, navigate to the following directories: `\derivatives\sub-idx\ses-ima\eeg` or `\derivatives\sub-idx\ses-vid\eeg`. Here, you will find:
  - `_task-emotion_de.npy`
  - `_task-emotion_psd.npy`
  - `_task-emotion_reorder.npy`

  These files have been preprocessed and reordered in the following sequence: **sad-dis-fear-neu-joy-ten-ins**. For example:
  - The 1st to 3rd stimuli correspond to `sad4`, `sad5`, and `sad8`.
  - The 4th to 6th stimuli correspond to `dis4`, `dis5`, and `dis8`, and so on.

  Each session (`ima` or `vid`) typically includes **21 trials**. For information on participants with missing trials, refer to the **Participants with Missing Trials** section above.
- **Preprocessing Data on Your Own** If you'd like to preprocess the data yourself, follow these steps:
  1. **Locate Raw Data**:
     - The raw EEG data is in the directory: `sub-idx\eeg\sub-idx_task-emotion_eeg.edf`.
     - Triggers are marked directly in the `.edf` file's notations.
  2.
**Map Triggers to Trial Types**:
  - Mapping information between 'TypeID' in the `.edf` file's notations and trial categories is as follows:
  > sub-01: vid-31 ima-30 fade-28 rating-29
  > sub-02: vid-33 rating-31 ima-32 fade-30
  > sub-03: ima-32 rating-31 vid-33 fade-30
  > sub-04: ima-50 rating-49 fade-48 vid-51
  > sub-05: vid-6 rating-4 ima-5 fade-3
  > sub-06: ima-23 fade-21 rating-22 vid-24
  > sub-07: vid-51 rating-49 ima-50 fade-48
  > sub-08: ima-41 rating-40 vid-42 fade-37
  > sub-09: ima-5 fade-3 rating-4 vid-6
  > sub-10: ima-23 rating-22 fade-21 vid-24
  > sub-11: ima-23 rating-22 fade-21 vid-24
  > sub-12: ima-32 fade-30 rating-31 vid-33
  > sub-13: vid-24 rating-22 ima-23 fade-21
  > sub-14: vid-24 rating-22 ima-23 fade-21
  > sub-15: ima-5 fade-3 rating-4 vid-6
  > sub-16: vid-24 rating-22 ima-23 fade-21
  > sub-17: vid-6 rating-4 ima-5 fade-3
  > sub-18: vid-6 rating-4 ima-5 fade-3
  > sub-19: vid-6 rating-4 ima-5 fade-3
  > sub-20: vid-6 rating-4 ima-5 fade-3
  > sub-21: vid-36 rating-34 ima-35 fade-33
  > sub-22: vid-56 rating-54 ima-55 fade-53
  > sub-23: vid-36 rating-34 ima-35 fade-33
  > sub-24: vid-36 rating-34 ima-35 fade-33
  > sub-25: vid-6 rating-4 ima-5 fade-3
  > sub-26: vid-6 rating-4 ima-5 fade-3
  > sub-27: vid-6 rating-4 ima-5 fade-3
  > sub-28: vid-16 rating-14 ima-15 fade-13
  > sub-29: vid-26 rating-24 ima-25 fade-23
  > sub-30: vid-26 rating-24 ima-25 fade-23
  > sub-31: vid-6 rating-4 ima-5 fade-3
  > sub-32: vid-6 rating-4 ima-5 fade-3
  > sub-33: vid-15 rating-13 ima-14 fade-12
  > sub-34: vid-6 rating-4 ima-5 fade-3
  > sub-35: vid-6 rating-4 ima-5 fade-3
  > sub-36: vid-6 rating-4 ima-5 fade-3
  > sub-37: vid-6 rating-4 ima-5 fade-3
  > sub-38: vid-16 rating-14 ima-15 fade-13
  > sub-39: vid-9 rating-7 ima-8 fade-6
  > sub-40: vid-6 rating-4 ima-5 fade-3
  > sub-41: vid-56 rating-54 ima-55 fade-53
  > sub-42: vid-26 rating-24 ima-25 fade-23
  > sub-43: vid-38 rating-36 ima-37 fade-35
  > sub-44: vid-16 rating-14 ima-15 fade-13
  > sub-45: vid-16 rating-14 ima-15 fade-13
  > sub-46: vid-10 rating-8 ima-9 fade-7
  > sub-47:
vid-10 rating-8 ima-9 fade-7
  > sub-48: vid-6 rating-4 ima-5 fade-3
  > sub-49: vid-6 rating-4 ima-5 fade-3
  > sub-50: vid-16 rating-14 ima-15 fade-13
  > sub-51: vid-26 rating-24 ima-25 fade-23
  > sub-52: vid-6 rating-4 ima-5 fade-3
  > sub-53: vid-38 rating-33 ima-36 fade-34
  > sub-54: vid-6 rating-4 ima-5 fade-3
  > sub-55: vid-30 rating-28 ima-29 fade-27
  > sub-56: vid-6 rating-4 ima-5 fade-3
  > sub-57: vid-6 rating-4 ima-5 fade-3
  > sub-58: vid-26 rating-24 ima-25 fade-23
  > sub-59: vid-26 rating-24 ima-25 fade-23
  > sub-60: vid-36 rating-34 ima-35 fade-33

  (Tip: for many participants, the triggers end with "vid-6 rating-4 ima-5 fade-3", so you can take each trigger ID modulo 10 to recover the same sequence.)
  3. **Segment Data**:
     - Based on the trigger-trial mapping, segment the data accordingly.
  4. **Reorder Trials**:
     - Use the sequence provided in the **Trial Reordering** section above to rearrange the trials in your preferred order.

This approach allows flexibility for custom analyses while ensuring alignment with the established trial order.
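As a concrete illustration of the DE and PSD definitions given earlier (PSD as the mean squared signal, DE as half the log of 2πe times the band-filtered variance, per non-overlapping one-second window), here is a minimal sketch. It is not code shipped with the dataset: the band edges and the 200 Hz rate follow the text, while the 4th-order Butterworth band-pass is an assumed filtering choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Band edges (Hz) from the dataset description; FS = 200 Hz after downsampling.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 47)}
FS = 200

def band_features(x, low, high, fs=FS):
    """Per 1-s window: PSD = E[x^2], DE = 0.5 * ln(2*pi*e*sigma^2)."""
    # Band-pass filter the signal (4th-order Butterworth; an assumed choice).
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    # Split into non-overlapping one-second windows.
    wins = xf[: len(xf) // fs * fs].reshape(-1, fs)
    psd = np.mean(wins ** 2, axis=1)                         # E[x^2]
    de = 0.5 * np.log(2 * np.pi * np.e * wins.var(axis=1))   # differential entropy
    return psd, de

# Toy example: 30 s of synthetic data -> 30 feature values per band.
x = np.random.default_rng(0).standard_normal(30 * FS)
psd_alpha, de_alpha = band_features(x, *BANDS["alpha"])
print(psd_alpha.shape, de_alpha.shape)  # (30,) (30,)
```

Applying this per channel and per band would reproduce the layout of the provided `_task-emotion_de.npy` and `_task-emotion_psd.npy` feature files, up to the authors' exact filtering choices.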
**Endtime of original EEG files**

| Subject | Date | Endtime 1 | Endtime 2 |
|---------|------|-----------|-----------|
| 01 | 2023/8/24 | 16:33 | n/a |
| 02 | 2023/8/20 | 18:34 | n/a |
| 03 | 2023/8/23 | 22:52 | n/a |
| 04 | 2023/8/27 | 18:02 | n/a |
| 05 | 2023/8/28 | 22:24 | n/a |
| 06 | 2023/9/5 | 17:51 | n/a |
| 07 | 2023/9/6 | 19:29 | 21:53 |
| 08 | 2023/9/9 | 15:41 | 18:02 |
| 09 | 2023/9/12 | 21:41 | n/a |
| 10 | 2023/9/16 | 17:11 | 18:10 |
| 11 | 2023/9/19 | 21:48 | n/a |
| 12 | 2023/9/26 | 20:52 | 22:02 |
| 13 | 2023/9/27 | 20:40 | 22:07 |
| 14 | 2023/9/28 | 17:18 | n/a |
| 15 | 2023/10/1 | 22:12 | n/a |
| 16 | 2023/10/4 | 20:35 | 22:21 |
| 17 | 2023/10/5 | 20:14 | 22:24 |
| 18 | 2023/10/13 | 21:05 | 22:25 |
| 19 | 2023/10/14 | 11:41 | 12:51 |
| 20 | 2023/10/15 | 21:04 | 22:06 |
| 21 | 2024/4/14 | 20:19 | 21:32 |
| 22 | 2024/4/15 | 16:41 | 18:47 |
| 23 | 2024/4/18 | 15:44 | 17:31 |
| 24 | 2024/4/25 | 17:15 | 18:08 |
| 25 | 2024/5/3 | 16:53 | 18:37 |
| 26 | 2024/5/9 | 15:48 | 17:49 |
| 27 | 2024/5/10 | 20:34 | 22:11 |
| 28 | 2024/5/11 | 20:22 | 22:04 |
| 29 | 2024/5/12 | 16:07 | 18:05 |
| 30 | 2024/5/16 | 15:58 | 17:39 |
| 31 | 2024/5/17 | 20:55 | 22:26 |
| 32 | 2024/5/19 | 20:07 | 21:56 |
| 33 | 2024/5/23 | 20:54 | n/a |
| 34 | 2024/5/26 | 16:01 | 17:47 |
| 35 | 2024/5/29 | 20:50 | 22:35 |
| 36 | 2024/5/31 | 20:47 | 22:00 |
| 37 | 2024/6/6 | 16:13 | 17:08 |
| 38 | 2024/6/20 | 17:52 | 19:31 |
| 39 | 2024/6/26 | 21:47 | n/a |
| 40 | 2024/7/2 | 22:17 | 23:08 |
| 41 | 2024/7/3 | 20:44 | 21:51 |
| 42 | 2024/7/4 | 20:14 | 21:43 |
| 43 | 2024/7/9 | 15:45 | 17:40 |
| 44 | 2024/7/10 | 16:27 | 17:55 |
| 45 | 2024/7/11 | 15:45 | 17:35 |
| 46 | 2024/7/12 | 15:24 | 17:20 |
| 47 | 2024/7/14 | 16:36 | 18:23 |
| 48 | 2024/7/15 | 15:55 | 17:29 |
| 49 | 2024/7/16 | 16:20 | 17:49 |
| 50 | 2024/7/17 | 16:24 | 18:20 |
| 51 | 2024/7/18 | 16:13 | 17:39 |
| 52 | 2024/7/21 | 20:20 | 21:36 |
| 53 | 2023/8/8 | 21:40 | n/a |
| 54 | 2024/7/22 | 15:27 | 16:59 |
| 55 | 2023/8/16 | 22:48 | n/a |
| 56 | 2023/8/26 | 18:01 | n/a |
| 57 | 2024/4/21 | 20:34 | 22:12 |
| 58 | 2024/5/15 | 19:49 | 22:06 |
| 59 | 2024/6/27 | 20:25 | 22:00 |
| 60 | 2024/7/5 | 20:00 | n/a |

## Dataset Information

| Dataset ID | `DS005540` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding | | Author (year) | `Xin2024` | | Canonical | — | | Importable as | `DS005540`, `Xin2024` | | Year | 2024 | | Authors | Xin XU, Xinke SHEN, Xuyang CHEN, Qingzhu ZHANG, Sitian WANG, Yihan LI,
Zongsheng LI, Dan ZHANG, Mingming ZHANG, Quanying LIU | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005540.v1.0.7](https://doi.org/10.18112/openneuro.ds005540.v1.0.7) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005540) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) | [Source URL](https://openneuro.org/datasets/ds005540) | ### Copy-paste BibTeX ```bibtex @dataset{ds005540, title = {EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding}, author = {Xin XU and Xinke SHEN and Xuyang CHEN and Qingzhu ZHANG and Sitian WANG and Yihan LI and Zongsheng LI and Dan ZHANG and Mingming ZHANG and Quanying LIU}, doi = {10.18112/openneuro.ds005540.v1.0.7}, url = {https://doi.org/10.18112/openneuro.ds005540.v1.0.7}, } ``` ## Technical Details - Subjects: 59 - Recordings: 103 - Tasks: 1 - Channels: 68 - Sampling rate (Hz): 600.0 (95), 1200.0 (8) - Duration (hours): 167.10194444444446 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 47.3 GB - File count: 103 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005540.v1.0.7 - Source: openneuro - OpenNeuro: [ds005540](https://openneuro.org/datasets/ds005540) - NeMAR: [ds005540](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) ## API Reference Use the `DS005540` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005540(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding * **Study:** `ds005540` (OpenNeuro) * **Author (year):** `Xin2024` * **Canonical:** — Also importable as: `DS005540`, `Xin2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 59; recordings: 103; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005540](https://openneuro.org/datasets/ds005540) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005540](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) DOI: [https://doi.org/10.18112/openneuro.ds005540.v1.0.7](https://doi.org/10.18112/openneuro.ds005540.v1.0.7) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005540 >>> dataset = DS005540(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005540) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005545: ieeg dataset, 106 subjects *Auditory naming* Access recordings and metadata through EEGDash. **Citation:** Aya Kanno, Ryuzaburo Kochi, Kazuki Sakakura, Yu Kitazawa, Hiroshi Uda, Riyo Ueda, Masaki Sonoda, Min-Hee Lee, Jeong-Won Jeong, Aimee F. Luat, Eishi Asano (2024). *Auditory naming*. [10.18112/openneuro.ds005545.v1.0.3](https://doi.org/10.18112/openneuro.ds005545.v1.0.3) Modality: ieeg Subjects: 106 Recordings: 336 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005545 dataset = DS005545(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005545(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005545( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005545, title = {Auditory naming}, author = {Aya Kanno and Ryuzaburo Kochi and Kazuki Sakakura and Yu Kitazawa and Hiroshi Uda and Riyo Ueda and Masaki Sonoda and Min-Hee Lee and Jeong-Won Jeong and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds005545.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005545.v1.0.3}, } ```

## About This Dataset

This dataset, used in the analysis reported by Kanno et al. (2025), contains intracranial EEG recordings from 106 individuals who performed an auditory naming task. The corresponding MATLAB analysis code is available at [https://github.com/a8k8nn0/TractographyAtlas](https://github.com/a8k8nn0/TractographyAtlas), and electrode coordinates are provided in MNI-305 space. Each EDF file is tagged for the auditory naming task with the following event codes:

- 401 – stimulus onset
- 402 – stimulus offset
- 501 – response onset

Reference: Aya Kanno, Ryuzaburo Kochi, Kazuki Sakakura, Yu Kitazawa, Hiroshi Uda, Riyo Ueda, Masaki Sonoda, Min-Hee Lee, Jeong-Won Jeong, Robert Rothermel, Aimee F. Luat, Eishi Asano. Dynamic Causal Tractography Analysis of Auditory Descriptive Naming: An Intracranial Study of 106 Patients. bioRxiv 2025.03.07.641428; doi: [https://doi.org/10.1101/2025.03.07.641428](https://doi.org/10.1101/2025.03.07.641428)

## Dataset Information | Dataset ID | `DS005545` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory naming | | Author (year) | `Kanno2024` | | Canonical | `Kanno2025` | | Importable as | `DS005545`, `Kanno2024`, `Kanno2025` | | Year | 2024 | | Authors | Aya Kanno, Ryuzaburo Kochi, Kazuki Sakakura, Yu Kitazawa, Hiroshi Uda, Riyo Ueda, Masaki Sonoda, Min-Hee Lee, Jeong-Won Jeong, Aimee F.
Luat, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005545.v1.0.3](https://doi.org/10.18112/openneuro.ds005545.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005545) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) | [Source URL](https://openneuro.org/datasets/ds005545) | ### Copy-paste BibTeX ```bibtex @dataset{ds005545, title = {Auditory naming}, author = {Aya Kanno and Ryuzaburo Kochi and Kazuki Sakakura and Yu Kitazawa and Hiroshi Uda and Riyo Ueda and Masaki Sonoda and Min-Hee Lee and Jeong-Won Jeong and Aimee F. Luat and Eishi Asano}, doi = {10.18112/openneuro.ds005545.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005545.v1.0.3}, } ``` ## Technical Details - Subjects: 106 - Recordings: 336 - Tasks: 1 - Channels: 128 (237), 138 (14), 136 (11), 134 (11), 140 (8), 112 (6), 110 (6), 156 (5), 142 (5), 150 (5), 164 (4), 132 (4), 144 (4), 148 (4), 118 (3), 116 (3), 84 (3), 96 (3) - Sampling rate (Hz): 1000.0 - Duration (hours): 117.62165555555556 - Pathology: Surgery - Modality: Auditory - Type: Other - Size on disk: 40.0 GB - File count: 336 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005545.v1.0.3 - Source: openneuro - OpenNeuro: [ds005545](https://openneuro.org/datasets/ds005545) - NeMAR: [ds005545](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) ## API Reference Use the `DS005545` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds005545` (OpenNeuro) * **Author (year):** `Kanno2024` * **Canonical:** `Kanno2025` Also importable as: `DS005545`, `Kanno2024`, `Kanno2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 106; recordings: 336; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005545](https://openneuro.org/datasets/ds005545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005545](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) DOI: [https://doi.org/10.18112/openneuro.ds005545.v1.0.3](https://doi.org/10.18112/openneuro.ds005545.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005545 >>> dataset = DS005545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005545) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005555: eeg dataset, 128 subjects *The Bitbrain Open Access Sleep (BOAS) dataset* Access recordings and metadata through EEGDash. **Citation:** Eduardo López-Larraz, María Sierra-Torralba, Sergio Clemente, Galit Fierro, David Oriol, Javier Minguez, Luis Montesano, Jens G. Klinzing (2024). *The Bitbrain Open Access Sleep (BOAS) dataset*. [10.18112/openneuro.ds005555.v1.1.1](https://doi.org/10.18112/openneuro.ds005555.v1.1.1) Modality: eeg Subjects: 128 Recordings: 256 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005555 dataset = DS005555(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005555(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005555( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005555, title = {The Bitbrain Open Access Sleep (BOAS) dataset}, author = {Eduardo López-Larraz and María Sierra-Torralba and Sergio Clemente and Galit Fierro and David Oriol and Javier Minguez and Luis Montesano and Jens G. Klinzing}, doi = {10.18112/openneuro.ds005555.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds005555.v1.1.1}, } ```

## About This Dataset

**README** The **Bitbrain Open Access Sleep (BOAS)** dataset.

**Overview** This project aimed at bridging the gap between gold-standard clinical sleep monitoring and emerging wearable EEG technologies. The dataset contains data from **128 nights** in which participants were simultaneously monitored with two technologies: a **Brain Quick Plus Evolution PSG system by Micromed** and a **wearable EEG headband by Bitbrain**. The Micromed PSG system records a comprehensive and clinically validated set of physiological sleep parameters, while the Bitbrain wearable EEG headband offers a user-friendly, self-administered alternative, limited to forehead EEG electrodes, movement sensors, and photo-plethysmography. **Data from both systems were acquired simultaneously**, allowing for direct comparison and validation of the wearable EEG device against the established PSG standard. This dual-recording approach provides a rich resource for evaluating the performance and potential of wearable EEG technology in sleep studies.

*Human sleep scoring:* To ensure robust and reliable sleep staging, we followed a rigorous labeling process. **Three expert sleep scorers independently annotated the PSG recordings** following criteria developed by the American Academy of Sleep Medicine (AASM) (Berry et al., 2015). From the resulting three scorings, a **consensus label** was derived: each epoch of sleep data received the label scored by at least two of the scorers. In cases where all three scorers had given different labels, a fourth scorer made the final decision.
This consensus labeling approach addresses the inherent variability in human-derived sleep scoring, with an estimated inter-scorer agreement of approximately 85% (Danker-Hopfe et al., 2009; Rosenberg and Van Hout, 2013). *Automatic scoring:* We used the human expert consensus labels to train a deep learning model (Esparza-Iaizzo et al., 2024). By implementing a cross-validation procedure, we trained and validated the model separately on the PSG and wearable EEG datasets. The model achieved an 87.13% match between human-consensus and network-provided labels for the PSG data, and an 86.71% match for the wearable EEG data. ### View full README **README** The **Bitbrain Open Access Sleep (BOAS)** dataset. **Overview** This project aimed at bridging the gap between gold-standard clinical sleep monitoring and emerging wearable EEG technologies. The dataset contains data from \*\*128 nights\*\*in which participants were simultaneously monitored with two technologies: a \*\* Brain Quick Plus Evolution PSG system by Micromed\*\*and a \*\*wearable EEG headband by Bitbrain\*\*. The Micromed PSG system records a comprehensive and clinically validated set of physiological sleep parameters, while the Bitbrain wearable EEG headband offers a user-friendly, self-administered alternative, limited to forehead EEG electrodes, movement sensors, and photo-plethysmography. \*\*Data from both systems were acquired simultaneously\*\*, allowing for direct comparison and validation of the wearable EEG device against the established PSG standard. This dual-recording approach provides a rich resource for evaluating the performance and potential of wearable EEG technology in sleep studies. *Human sleep scoring:* To ensure robust and reliable sleep staging, we followed a rigorous labeling process. **Three expert sleep scorers independently annotated the PSG recordings\*\*following criteria developed by the American Academy of Sleep Medicine (AASM) (Berry et al., 2015). 
From the resulting three scorings, a \*\* consensus label** was derived: each epoch of sleep data received the label scored by at least two of the scorers. In cases where all three scorers had given different labels, a fourth scorer made the final decision. This consensus labeling approach addresses the inherent variability in human-derived sleep scoring, with an estimated inter-scorer agreement of approximately 85% (Danker-Hopfe et al., 2009; Rosenberg and Van Hout, 2013). *Automatic scoring:* We used the human expert consensus labels to train a deep learning model (Esparza-Iaizzo et al., 2024). By implementing a cross-validation procedure, we trained and validated the model separately on the PSG and wearable EEG datasets. The model achieved an 87.13% match between human-consensus and network-provided labels for the PSG data, and an 86.71% match for the wearable EEG data. Our dataset includes: 1. **PSG recordings\*\*from 128 nights (files ending with “\*psg_eeg.edf\*”), 2. \*\*Wearable EEG recordings\*\*from the same nights (files ending with “\*headband_eeg.edf\*”), 3. \*\*Human-consensus sleep stage labels**, obtained from the PSG recordings (”*stage_hum*” in the PSG data’s event files), 4. **AI-generated sleep stage labels**, separately obtained from PSG recordings and from wearable EEG recordings (”*stage_ai*” in both the PSG and headband data’s event files). 5. ``` ** ``` Further meta data\*\*for each recording (i.e., the participants’ age, sex, and BMI, provided in the file “*participants.tsv*”) **Participants** Participants were members of the general population, provided written informed consent, and received economic compensation of 50€ per night. In order to represent the general population, we recruited a broad spectrum of participants along the dimensions of age, sex, and body mass index. We did not recruit patients with particular health conditions but only excluded severe conditions that could have affected the feasibility or safety of the study. 
In detail, inclusion and exclusion criteria were as follows.

**Inclusion criteria**
- Age > 18 years,
- Sufficient knowledge of Spanish to understand the explanatory text, the consent form, and study-related instructions.

**Exclusion criteria**
- Current severe medical interventions or medication,
- History of severe neurological or psychiatric disorders,
- Severe health problems in the last 12 months (especially neurological or cardiac disorders),
- Current pregnancy or nursing,
- Use of psychotropic medication, benzodiazepines, gamma-hydroxybutyric acid, and similar drugs before or during the study.

**Format** The dataset is formatted according to the Brain Imaging Data Structure (BIDS). Please note that while the recordings are named from sub-1 up to sub-128, some come from the same participants. 108 unique individuals participated in the recordings, whose data can be matched using the pid (= unique participant ID) property in the file "*participants.tsv*". The folder of each recording contains the data recorded with the PSG ("*sub-xx_task-Sleep_acq-psg_eeg.edf*") and with the wearable EEG headband ("*sub-xx_task-Sleep_acq-headband_eeg.edf*").

**Channel groups** Not all recordings contain data from all available sensors. The full list of available sensors for each recording can be obtained from the "*channels.tsv*" file. Channels in this file are coded in groups:

- **PSG_EEG**: Electroencephalography recorded with the PSG system. Channels available are F3, F4, C3, C4, O1, O2 (PSG_F3, PSG_F4, PSG_C3, PSG_C4, PSG_O1, PSG_O2).
- **PSG_EOG**: Electrooculography signals recorded with the PSG system. The location of the EOG electrodes was lateral of the eyes; one slightly lower than the participant's left eye and one slightly higher than the participant's right eye (according to AASM guidelines). For recordings containing only one EOG channel (PSG_EOG), the electrodes were recorded as a bipolar derivation. If two EOG channels are present (PSG_EOGR, PSG_EOGL), both electrodes were referenced against the left mastoid.
- **PSG_EMG**: Electromyography signals recorded with the PSG system. Data contain a single EMG channel (PSG_EMG), which is the result of a bipolar derivation of two chin electrodes.
- **PSG_BELTS**: Breathing activity recorded by the PSG system using abdominal and thoracic breathing belts (PSG_ABD, PSG_THOR).
- **PSG_THER**: Respiratory airflow recorded with the PSG system using a thermistor (PSG_THER).
- **PSG_CAN**: Respiratory airflow recorded with the PSG system using a nasal cannula (PSG_CAN).
- **PSG_PPG**: Photoplethysmographic (PPG) activity recorded with the PSG system. Channels available are pulse (PSG_PULSE), heartbeat (PSG_BEAT), and oxygen saturation (PSG_SPO2).
- **HB_EEG**: Electroencephalography recorded with the wearable EEG headband. Headband channels are approximately located at AF7 and AF8 (HB_1, HB_2).
- **HB_IMU**: Movement activity recorded by an Inertial Measurement Unit (IMU) in the headband. Signals are derived from an accelerometer (HB_IMU_1, HB_IMU_2, HB_IMU_3) and gyroscope (HB_IMU_4, HB_IMU_5, HB_IMU_6), both recording signals for all three spatial dimensions.
- **HB_PULSE**: Pulse activity recorded with the wearable EEG headband using a PPG sensor (HB_PULSE).

**Sleep staging labels** The sleep stage labels for each recording are coded as events in corresponding event files (stage_hum and stage_ai; see above). Stages are coded as follows:

- 0: Wake,
- 1: NonREM sleep stage 1 (N1),
- 2: NonREM sleep stage 2 (N2),
- 3: NonREM sleep stage 3 (N3),
- 4: REM sleep,
- 8: PSG disconnections (e.g., due to bathroom breaks; human-scored only),
- -2: Artifacts and missing data (AI-scored only)

**References** Berry, R. B., Brooks, R., Gamaldo, C. E., Harding, S. M., Lloyd, R. M., Marcus, C. L., et al. (2015). The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications, Version 2.2.
Darien, Illinois. Danker-Hopfe, H., Anderer, P., Zeitlhofer, J., Boeck, M., Dorn, H., Gruber, G., et al. (2009). Interrater reliability for sleep scoring according to the Rechtschaffen & Kales and the new AASM standard. J. Sleep Res. 18, 74–84. doi: 10.1111/j.1365-2869.2008.00700.x. Esparza-Iaizzo, M., Sierra-Torralba, M., Klinzing, J. G., Minguez, J., Montesano, L., and López-Larraz, E. (2024). Automatic sleep scoring for real-time monitoring and stimulation in individuals with and without sleep apnea. bioRxiv, 2024.06.12.597764. doi: 10.1101/2024.06.12.597764. Rosenberg, R. S., and Van Hout, S. (2013). The American Academy of Sleep Medicine inter-scorer reliability program: sleep stage scoring. J. Clin. Sleep Med. 9, 81–87. doi: 10.5664/jcsm.2350. **Contact** If you have any questions or comments, please contact: Eduardo López-Larraz: [eduardo.lopez@bitbrain.com](mailto:eduardo.lopez@bitbrain.com) Jens G. Klinzing: [jens.klinzing@bitbrain.com](mailto:jens.klinzing@bitbrain.com) ## Dataset Information | Dataset ID | `DS005555` | |---|---| | Title | The Bitbrain Open Access Sleep (BOAS) dataset | | Author (year) | `LopezLarraz2024` | | Canonical | `BOAS` | | Importable as | `DS005555`, `LopezLarraz2024`, `BOAS` | | Year | 2024 | | Authors | Eduardo López-Larraz, María Sierra-Torralba, Sergio Clemente, Galit Fierro, David Oriol, Javier Minguez, Luis Montesano, Jens G.
Klinzing | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005555.v1.1.1](https://doi.org/10.18112/openneuro.ds005555.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005555) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) · [Source URL](https://openneuro.org/datasets/ds005555) | ### Copy-paste BibTeX ```bibtex @dataset{ds005555, title = {The Bitbrain Open Access Sleep (BOAS) dataset}, author = {Eduardo López-Larraz and María Sierra-Torralba and Sergio Clemente and Galit Fierro and David Oriol and Javier Minguez and Luis Montesano and Jens G. Klinzing}, doi = {10.18112/openneuro.ds005555.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds005555.v1.1.1}, } ``` ## Technical Details - Subjects: 128 - Recordings: 256 - Tasks: 1 - Channels (number of recordings per channel count): 9 (96), 11 (68), 14 (25), 16 (19), 3 (18), 2 (16), 8 (9), 15 (5) - Sampling rate (Hz): 256.0 - Duration (hours): 2002.59 - Pathology: Healthy - Modality: Sleep - Type: Sleep - Size on disk: 33.5 GB - File count: 256 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005555.v1.1.1 - Source: openneuro - OpenNeuro: [ds005555](https://openneuro.org/datasets/ds005555) - NeMAR: [ds005555](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) ## API Reference Use the `DS005555` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Bitbrain Open Access Sleep (BOAS) dataset * **Study:** `ds005555` (OpenNeuro) * **Author (year):** `LopezLarraz2024` * **Canonical:** `BOAS` Also importable as: `DS005555`, `LopezLarraz2024`, `BOAS`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 128; recordings: 256; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005555](https://openneuro.org/datasets/ds005555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005555](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) DOI: [https://doi.org/10.18112/openneuro.ds005555.v1.1.1](https://doi.org/10.18112/openneuro.ds005555.v1.1.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005555 >>> dataset = DS005555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005555) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005557: ieeg dataset, 16 subjects *Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)*. [10.18112/openneuro.ds005557.v1.0.0](https://doi.org/10.18112/openneuro.ds005557.v1.0.0) Modality: ieeg Subjects: 16 Recordings: 58 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005557 dataset = DS005557(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005557(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005557( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005557, title = {Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)}, author = {Haydn G. Herrema and Michael J. 
Kahana}, doi = {10.18112/openneuro.ds005557.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005557.v1.0.0}, } ``` ## About This Dataset **Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)** **Description** This dataset contains behavioral events and intracranial electrophysiology recordings from a delayed free recall task with closed-loop stimulation at encoding, using a classifier trained on encoding data. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. This dataset is a closed-loop stimulation version of the [FR1](https://openneuro.org/datasets/ds004789) and [FR2](https://openneuro.org/datasets/ds005489) datasets. This study contains closed-loop electrical stimulation of the brain during encoding. There is no stimulation during the distractor or retrieval phases. Stimulation is delivered to a single electrode at a time, and the stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. **Classifier Details** The L2 logistic regression classifier is trained to predict whether an encoded item will be subsequently recalled based on the neural features during encoding, using data from a participant's [FR1](https://openneuro.org/datasets/ds004789) sessions.
The bipolar recordings during the 0-1366 ms interval after word presentation are filtered with a Butterworth band stop filter (58-62 Hz, 4th order) to remove 60 Hz line noise, and then a Morlet wavelet transformation (wavenumber = 5) is applied to the signal to estimate spectral power, using 8 log-spaced wavelets between 3-180 Hz (center frequencies 3.0, 5.4, 9.7, 17.4, 31.1, 55.9, 100.3, 180 Hz) and 1365 ms mirrored buffers. The powers are log-transformed prior to removal of the buffer, and then z-transformed based on the within-session mean and standard deviation across all encoding events. These z-transformed log power values represent the feature matrix, and the label vector is the recalled status of the encoded items. The penalty parameter is chosen based on the value that leads to the highest average AUC for all prior participants with at least two [FR1](https://openneuro.org/datasets/ds004789) sessions, and is inversely weighted according to the class (i.e., recalled v. not recalled) imbalance to ensure the best-fit values of the penalty parameter are comparable across different class distributions (recall rates). Class weights are computed as: (1/Na) / ((1/Na + 1/Nb) / 2) where Na and Nb are the number of events in each class. After at least 3 training sessions with a minimum of 15 lists, each participant's classifier is tested using leave-one-session-out (LOSO) cross-validation, and the true AUC is compared to a 200-sample AUC distribution generated from classification of label-permuted data. p < 0.05 (one-sided) is used as the significance threshold for continuing to the closed-loop task. **Closed-Loop Procedure** Each session contains 26 lists (the first being a practice list) and there is no stimulation on the first 4 lists. The classifier output for each presented item on the first 4 lists is compared to the classifier output when tested on data from all previous sessions using a two-sample Kolmogorov-Smirnov test.
The null hypothesis that the current session and the training data come from the same distribution must not be rejected (p > 0.05) for the closed-loop task to continue. The remaining 22 lists are equally divided into stimulation and no stimulation lists, with conditions balanced in each half of the session. On stimulation lists, classifier output is evaluated during the 0-1366 ms interval following word presentation onset. The input values are normalized using the mean and standard deviation across encoding events on all prior no stimulation lists in the session. If the classifier output is below the median classifier output from the training sessions, stimulation occurs immediately following the 1366 ms decoding interval and lasts for 500 ms. With a 750-1000 ms inter-stimulus interval, there is enough time for stimulation artifacts to subside before the next word onset (next classifier decoding). **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. \* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). 
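The class-weight formula quoted in the classifier details above, (1/Na) / ((1/Na + 1/Nb) / 2), can be sketched in plain Python. The helper function and the event counts below are illustrative, not part of the released code or the eegdash API:

```python
def class_weights(n_recalled: int, n_not_recalled: int) -> tuple[float, float]:
    """Inverse-frequency class weights, normalized so the two weights average to 1.

    Implements (1/Na) / ((1/Na + 1/Nb) / 2) for each class, per the
    classifier description above. Hypothetical helper, not eegdash API.
    """
    inv_a, inv_b = 1.0 / n_recalled, 1.0 / n_not_recalled
    mean_inv = (inv_a + inv_b) / 2.0
    return inv_a / mean_inv, inv_b / mean_inv

# Illustrative counts: 120 recalled vs. 280 not-recalled encoding events.
w_recalled, w_not_recalled = class_weights(120, 280)
```

The minority class (here, recalled items) receives the larger weight, which offsets the class imbalance when fitting the L2 logistic regression so that penalty values remain comparable across participants with different recall rates.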
## Dataset Information

| Dataset ID | `DS005557` |
|---|---|
| Title | Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) |
| Author (year) | `Herrema2024_Classifier` |
| Canonical | — |
| Importable as | `DS005557`, `Herrema2024_Classifier` |
| Year | 2024 |
| Authors | Haydn G. Herrema, Michael J. Kahana |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds005557.v1.0.0](https://doi.org/10.18112/openneuro.ds005557.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds005557) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) · [Source URL](https://openneuro.org/datasets/ds005557) |

### Copy-paste BibTeX ```bibtex @dataset{ds005557, title = {Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005557.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005557.v1.0.0}, } ``` ## Technical Details - Subjects: 16 - Recordings: 58 - Tasks: 1 - Channels (number of recordings per channel count): 110 (7), 112 (6), 76 (5), 109 (4), 126 (4), 108 (4), 127 (3), 67 (2), 70 (2), 62 (2), 64 (2), 60 (2), 97 (2), 120 (2), 83 (2), 128 (2), 125 (1), 99 (1), 95 (1), 118 (1), 121 (1), 124 (1), 136 (1) - Sampling rate (Hz): 1000.0 - Duration (hours): 51.07 - Pathology: Other - Modality: Visual - Type: Memory - Size on disk: 34.7 GB - File count: 58 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005557.v1.0.0 - Source: openneuro - OpenNeuro: [ds005557](https://openneuro.org/datasets/ds005557) - NeMAR: [ds005557](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) ## API Reference Use the `DS005557` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005557(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005557` (OpenNeuro) * **Author (year):** `Herrema2024_Classifier` * **Canonical:** — Also importable as: `DS005557`, `Herrema2024_Classifier`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Other`. Subjects: 16; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005557](https://openneuro.org/datasets/ds005557) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005557](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) DOI: [https://doi.org/10.18112/openneuro.ds005557.v1.0.0](https://doi.org/10.18112/openneuro.ds005557.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005557 >>> dataset = DS005557(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005557) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005558: ieeg dataset, 7 subjects *Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)* Access recordings and metadata through EEGDash. **Citation:** Haydn G. Herrema, Michael J. Kahana (2024). *Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)*. 
[10.18112/openneuro.ds005558.v1.0.0](https://doi.org/10.18112/openneuro.ds005558.v1.0.0) Modality: ieeg Subjects: 7 Recordings: 22 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005558 dataset = DS005558(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005558(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005558( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005558, title = {Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005558.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005558.v1.0.0}, } ``` ## About This Dataset **Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)** **Description** This dataset contains behavioral events and intracranial electrophysiology recordings from a categorized free recall task with closed-loop stimulation at encoding, using a classifier trained on encoding data. The experiment consists of participants studying a list of words, presented visually one at a time, completing simple arithmetic problems that function as a distractor, and then freely recalling the words from the just-presented list in any order. The data was collected at clinical sites across the country as part of a collaboration with the Computational Memory Lab at the University of Pennsylvania. 
This dataset is a closed-loop stimulation version of the [catFR1](https://openneuro.org/datasets/ds004809) and [catFR2](https://openneuro.org/datasets/ds005491) datasets. The word lists in this paradigm follow a specific semantic construction. Each word comes from one of 25 semantic categories, and each list of 12 items contains 6 pairs of same-category words from 3 different categories. This means that each list has 4 words from each of 3 semantic categories, and in each half of the list there will be 1 pair of words from each category. For example, if a list contains words from categories A, B, and C, a possible list construction would be: **A1 - A2 - B1 - B2 - C1 - C2 - A3 - A4 - C3 - C4 - B3 - B4** This study contains closed-loop electrical stimulation of the brain during encoding. There is no stimulation during the distractor or retrieval phases. Stimulation is delivered to a single electrode at a time, and the stimulation parameters are included in the behavioral events tsv files, denoting the anode/cathode labels, amplitude, pulse frequency, pulse width, and pulse count. **Classifier Details** The L2 logistic regression classifier is trained to predict whether an encoded item will be subsequently recalled based on the neural features during encoding, using data from a participant's [catFR1](https://openneuro.org/datasets/ds004809) sessions. The bipolar recordings during the 0-1366 ms interval after word presentation are filtered with a Butterworth band stop filter (58-62 Hz, 4th order) to remove 60 Hz line noise, and then a Morlet wavelet transformation (wavenumber = 5) is applied to the signal to estimate spectral power, using 8 log-spaced wavelets between 3-180 Hz (center frequencies 3.0, 5.4, 9.7, 17.4, 31.1, 55.9, 100.3, 180 Hz) and 1365 ms mirrored buffers.
The powers are log-transformed and downsampled to a 50 Hz sampling frequency prior to removal of the buffer, and then z-transformed based on the within-session mean and standard deviation across all encoding events. These z-transformed log power values represent the feature matrix, and the label vector is the recalled status of the encoded items. The penalty parameter is chosen based on the value that leads to the highest average AUC for all prior participants with at least two [catFR1](https://openneuro.org/datasets/ds004809) sessions, and is inversely weighted according to the class (i.e., recalled v. not recalled) imbalance to ensure the best-fit values of the penalty parameter are comparable across different class distributions (recall rates). Class weights are computed as: (1/Na) / ((1/Na + 1/Nb) / 2) where Na and Nb are the number of events in each class. After at least 3 training sessions with a minimum of 15 lists, each participant's classifier is tested using leave-one-session-out (LOSO) cross-validation, and the true AUC is compared to a 200-sample AUC distribution generated from classification of label-permuted data. p < 0.05 (one-sided) is used as the significance threshold for continuing to the closed-loop task. **Closed-Loop Procedure** Each session contains 26 lists (the first being a practice list) and there is no stimulation on the first 4 lists. The classifier output for each presented item on the first 4 lists is compared to the classifier output when tested on data from all previous sessions using a two-sample Kolmogorov-Smirnov test. The null hypothesis that the current session and the training data come from the same distribution must not be rejected (p > 0.05) for the closed-loop task to continue. The remaining 22 lists are equally divided into stimulation and no stimulation lists, with conditions balanced in each half of the session.
On stimulation lists, classifier output is evaluated during the 0-1366 ms interval following word presentation onset. The input values are normalized using the mean and standard deviation across encoding events on all prior no stimulation lists in the session. If the classifier output is below the threshold value, stimulation occurs immediately following the 1366 ms decoding interval and lasts for 500 ms. The classifier threshold is calculated for each participant individually using Youden’s J statistic and is chosen as the maximal J value across all points on the participant’s training sessions’ ROC curve. With a 750-1000 ms inter-stimulus interval, there is enough time for stimulation artifacts to subside before the next word onset (next classifier decoding). **To Note** \* The iEEG recordings are labeled either “monopolar” or “bipolar.” The monopolar recordings are referenced (typically a mastoid reference), but should always be re-referenced before analysis. The bipolar recordings are referenced according to a paired scheme indicated by the accompanying bipolar channels tables. \* Each subject has a unique montage of electrode locations. MNI and Talairach coordinates are provided when available. \* Recordings done with the Blackrock system are in units of 250 nV, while recordings done with the Medtronic system are estimated through testing to have units of 0.1 uV. We have completed the scaling to provide values in V. **Contact** For questions or inquiries, please contact [sas-kahana-sysadmin@sas.upenn.edu](mailto:sas-kahana-sysadmin@sas.upenn.edu). 
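The per-participant threshold selection via Youden's J statistic described above can be sketched as follows. The scores and labels here are made-up stand-ins for a participant's training-session classifier outputs, and the function is an illustrative sketch rather than the lab's actual code:

```python
def youden_threshold(scores, labels):
    """Pick the decision threshold that maximizes Youden's J = TPR - FPR.

    Illustrative sketch of the ROC-based threshold choice described above;
    labels: 1 = recalled, 0 = not recalled.
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        # Treat score >= t as a "predicted recalled" decision.
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / n_pos - fp / n_neg
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Made-up classifier outputs and recall labels:
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0, 0, 1, 1, 1, 0]
threshold, j_stat = youden_threshold(scores, labels)
```

During the closed-loop task, an item whose classifier output falls below the chosen threshold would then trigger stimulation after the 1366 ms decoding interval.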
## Dataset Information | Dataset ID | `DS005558` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) | | Author (year) | `Herrema2024_Categorized_Free` | | Canonical | `catFR_closed_loop` | | Importable as | `DS005558`, `Herrema2024_Categorized_Free`, `catFR_closed_loop` | | Year | 2024 | | Authors | Haydn G. Herrema, Michael J. Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005558.v1.0.0](https://doi.org/10.18112/openneuro.ds005558.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005558) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) | [Source URL](https://openneuro.org/datasets/ds005558) | ### Copy-paste BibTeX ```bibtex @dataset{ds005558, title = {Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier)}, author = {Haydn G. Herrema and Michael J. Kahana}, doi = {10.18112/openneuro.ds005558.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005558.v1.0.0}, } ``` ## Technical Details - Subjects: 7 - Recordings: 22 - Tasks: 1 - Channels: 126 (3), 124 (2), 128 (2), 113 (2), 114 (2), 78 (2), 68 (2), 156 (2), 122, 90, 56, 110, 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 17.006120555555555 - Pathology: Surgery - Modality: Visual - Type: Memory - Size on disk: 12.2 GB - File count: 22 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005558.v1.0.0 - Source: openneuro - OpenNeuro: [ds005558](https://openneuro.org/datasets/ds005558) - NeMAR: [ds005558](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) ## API Reference Use the `DS005558` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005558` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized_Free` * **Canonical:** `catFR_closed_loop` Also importable as: `DS005558`, `Herrema2024_Categorized_Free`, `catFR_closed_loop`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 7; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005558](https://openneuro.org/datasets/ds005558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005558](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) DOI: [https://doi.org/10.18112/openneuro.ds005558.v1.0.0](https://doi.org/10.18112/openneuro.ds005558.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005558 >>> dataset = DS005558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005558) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005565: eeg dataset, 24 subjects *Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers* Access recordings and metadata through EEGDash. **Citation:** Brittany Lee, Sofia E. Ortega, Priscilla M. Martinez, Katherine J. Midgley, Phillip J. Holcomb, Karen Emmorey (2024). *Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers*. 
[10.18112/openneuro.ds005565.v1.0.3](https://doi.org/10.18112/openneuro.ds005565.v1.0.3) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005565 dataset = DS005565(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005565(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005565( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005565, title = {Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers}, author = {Brittany Lee and Sofia E. Ortega and Priscilla M. Martinez and Katherine J. Midgley and Phillip J. Holcomb and Karen Emmorey}, doi = {10.18112/openneuro.ds005565.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005565.v1.0.3}, } ``` ## About This Dataset Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California under the supervision of Dr. Phillip Holcomb. This project followed the San Diego State University’s IRB guidelines. Participants sat in a comfortable chair in a darkened sound attenuated room throughout the experiment. They were given a gamepad for button pressing. They were instructed to watch the LCD video monitor that was at a viewing distance of 150cm. Participants were presented with 300 prime-target pairs. All targets were four-letter English words. Of the 300 critical trials, 100 had English word primes, 100 had ASL sign primes, and 100 had fingerspelled word primes. 
Half of the primes in each condition were related to the targets. Related English word primes were identity primes to the English word, related fingerspelled word primes were also identity primes, and related ASL primes were ASL translations of the English word targets. The other half of the primes were unrelated to the targets. Participants were instructed to focus on the purple fixation cross that appeared on the screen for 800 ms. This fixation cross then turned white for 500 ms. Then, one of three prime conditions was presented: an English word, an ASL sign, or a fingerspelled word. English prime words were presented for 300 ms. Signed (M = 565 ms) and fingerspelled (M = 1173 ms) video primes had variable durations. All target stimuli were 4-letter English words presented for 500 ms. Related primes were either identity or translations. Participants received the following task instructions: “Press any of the 4 buttons on the right of the gamepad whenever you see an animal. It doesn’t matter if the animal is presented as a sign, a word, or fingerspelled. Press for ANY animal. You can blink whenever you see purple. A purple + means you have time for a quick blink. A purple (–) means you can blink as much as you want.” ## Dataset Information | Dataset ID | `DS005565` | |----------------|------------| | Title | Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers | | Author (year) | `Lee2024_StudyWITH` | | Canonical | — | | Importable as | `DS005565`, `Lee2024_StudyWITH` | | Year | 2024 | | Authors | Brittany Lee, Sofia E. Ortega, Priscilla M. Martinez, Katherine J. Midgley, Phillip J. 
Holcomb, Karen Emmorey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005565.v1.0.3](https://doi.org/10.18112/openneuro.ds005565.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005565) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) | [Source URL](https://openneuro.org/datasets/ds005565) | ### Copy-paste BibTeX ```bibtex @dataset{ds005565, title = {Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers}, author = {Brittany Lee and Sofia E. Ortega and Priscilla M. Martinez and Katherine J. Midgley and Phillip J. Holcomb and Karen Emmorey}, doi = {10.18112/openneuro.ds005565.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005565.v1.0.3}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 11.435694444444444 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 2.6 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005565.v1.0.3 - Source: openneuro - OpenNeuro: [ds005565](https://openneuro.org/datasets/ds005565) - NeMAR: [ds005565](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) ## API Reference Use the `DS005565` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005565(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers * **Study:** `ds005565` (OpenNeuro) * **Author (year):** `Lee2024_StudyWITH` * **Canonical:** — Also importable as: `DS005565`, `Lee2024_StudyWITH`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005565](https://openneuro.org/datasets/ds005565) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005565](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) DOI: [https://doi.org/10.18112/openneuro.ds005565.v1.0.3](https://doi.org/10.18112/openneuro.ds005565.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005565 >>> dataset = DS005565(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005565) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005571: eeg dataset, 24 subjects *Expectation of Conflict Stimuli* Access recordings and metadata through EEGDash. **Citation:** María Paz Martínez-Molina, Alejandra Figueroa-Vargas, Francisco Zamorano, Pablo Billeke (2024). *Expectation of Conflict Stimuli*. [10.18112/openneuro.ds005571.v1.0.1](https://doi.org/10.18112/openneuro.ds005571.v1.0.1) Modality: eeg Subjects: 24 Recordings: 45 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005571 dataset = DS005571(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005571(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005571( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005571, title = {Expectation of Conflict Stimuli}, author = {María Paz Martínez-Molina and Alejandra Figueroa-Vargas and Francisco Zamorano and Pablo Billeke}, doi = {10.18112/openneuro.ds005571.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005571.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005571` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Expectation of Conflict Stimuli | | Author (year) | `MartinezMolina2024` | | Canonical | — | | Importable as | `DS005571`, `MartinezMolina2024` | | Year | 2024 | | Authors | María Paz Martínez-Molina, Alejandra Figueroa-Vargas, Francisco Zamorano, Pablo Billeke | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005571.v1.0.1](https://doi.org/10.18112/openneuro.ds005571.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005571) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) | [Source URL](https://openneuro.org/datasets/ds005571) | ### Copy-paste BibTeX ```bibtex @dataset{ds005571, title = {Expectation of Conflict Stimuli}, author = {María Paz Martínez-Molina and Alejandra Figueroa-Vargas and Francisco Zamorano and Pablo Billeke}, doi = {10.18112/openneuro.ds005571.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005571.v1.0.1}, } ``` ## Technical Details - Subjects: 24 - Recordings: 45 - Tasks: 2 - Channels: 66 (41), 67 (4) - Sampling rate (Hz): 5000.0 - Duration (hours): 28.21085 - Pathology: Healthy - Modality: — - Type: Attention - Size on disk: 63.3 GB - File count: 45 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005571.v1.0.1 - Source: openneuro - OpenNeuro: [ds005571](https://openneuro.org/datasets/ds005571) - NeMAR: 
[ds005571](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) ## API Reference Use the `DS005571` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005571(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation of Conflict Stimuli * **Study:** `ds005571` (OpenNeuro) * **Author (year):** `MartinezMolina2024` * **Canonical:** — Also importable as: `DS005571`, `MartinezMolina2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 45; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005571](https://openneuro.org/datasets/ds005571) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005571](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) DOI: [https://doi.org/10.18112/openneuro.ds005571.v1.0.1](https://doi.org/10.18112/openneuro.ds005571.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005571 >>> dataset = DS005571(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005571) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005574: ieeg dataset, 9 subjects *The “Podcast” ECoG dataset* Access recordings and metadata through EEGDash. **Citation:** Zaid Zada, Samuel A. Nastase, Bobbi Aubrey, Itamar Jalon, Ariel Goldstein, Sebastian Michelmann, Haocheng Wang, Liat Hasenfratz, Werner Doyle, Daniel Friedman, Patricia Dugan, Lucia Melloni, Sasha Devore, Orrin Devinsky, Adeen Flinker, Uri Hasson (2024). *The “Podcast” ECoG dataset*. 
[10.18112/openneuro.ds005574.v1.0.2](https://doi.org/10.18112/openneuro.ds005574.v1.0.2) Modality: ieeg Subjects: 9 Recordings: 9 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005574 dataset = DS005574(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005574(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005574( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005574, title = {The "Podcast" ECoG dataset}, author = {Zaid Zada and Samuel A. Nastase and Bobbi Aubrey and Itamar Jalon and Ariel Goldstein and Sebastian Michelmann and Haocheng Wang and Liat Hasenfratz and Werner Doyle and Daniel Friedman and Patricia Dugan and Lucia Melloni and Sasha Devore and Orrin Devinsky and Adeen Flinker and Uri Hasson}, doi = {10.18112/openneuro.ds005574.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005574.v1.0.2}, } ``` ## About This Dataset The “Podcast” ECoG dataset for modeling neural activity during natural story listening. We introduce the “Podcast” electrocorticography (ECoG) dataset for modeling neural activity supporting natural narrative comprehension. This dataset combines the exceptional spatiotemporal resolution of human intracranial electrophysiology with a naturalistic experimental paradigm for language comprehension. In addition to the raw data, we provide a minimally preprocessed version in the high-gamma spectral band to showcase a simple pipeline and to make it easier to use. 
Furthermore, we include the auditory stimuli, an aligned word-level transcript, and linguistic features ranging from low-level acoustic properties to large language model (LLM) embeddings. We also include tutorials that replicate previous findings and serve as a pedagogical resource and a springboard for new research. The dataset comprises 9 participants with 1,330 electrodes, including grid, depth, and strip electrodes. The participants listened to a 30-minute story with over 5,000 words. By using a natural story with high-fidelity, invasive neural recordings, this dataset offers a unique opportunity to investigate language comprehension. ## Dataset Information | Dataset ID | `DS005574` | |----------------|------------| | Title | The “Podcast” ECoG dataset | | Author (year) | `Zada2024` | | Canonical | `Podcast` | | Importable as | `DS005574`, `Zada2024`, `Podcast` | | Year | 2024 | | Authors | Zaid Zada, Samuel A. Nastase, Bobbi Aubrey, Itamar Jalon, Ariel Goldstein, Sebastian Michelmann, Haocheng Wang, Liat Hasenfratz, Werner Doyle, Daniel Friedman, Patricia Dugan, Lucia Melloni, Sasha Devore, Orrin Devinsky, Adeen Flinker, Uri Hasson | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005574.v1.0.2](https://doi.org/10.18112/openneuro.ds005574.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005574) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) | [Source URL](https://openneuro.org/datasets/ds005574) | ### Copy-paste BibTeX ```bibtex @dataset{ds005574, title = {The "Podcast" ECoG dataset}, author = {Zaid Zada and Samuel A. 
Nastase and Bobbi Aubrey and Itamar Jalon and Ariel Goldstein and Sebastian Michelmann and Haocheng Wang and Liat Hasenfratz and Werner Doyle and Daniel Friedman and Patricia Dugan and Lucia Melloni and Sasha Devore and Orrin Devinsky and Adeen Flinker and Uri Hasson}, doi = {10.18112/openneuro.ds005574.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005574.v1.0.2}, } ``` ## Technical Details - Subjects: 9 - Recordings: 9 - Tasks: 1 - Channels: 178, 174, 114, 138, 264, 124, 167, 91, 205 - Sampling rate (Hz): 512.0 (8), 2048.0 - Duration (hours): 4.499995524088542 - Pathology: Not specified - Modality: Auditory - Type: Other - Size on disk: 3.2 GB - File count: 9 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005574.v1.0.2 - Source: openneuro - OpenNeuro: [ds005574](https://openneuro.org/datasets/ds005574) - NeMAR: [ds005574](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) ## API Reference Use the `DS005574` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The “Podcast” ECoG dataset * **Study:** `ds005574` (OpenNeuro) * **Author (year):** `Zada2024` * **Canonical:** `Podcast` Also importable as: `DS005574`, `Zada2024`, `Podcast`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005574](https://openneuro.org/datasets/ds005574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005574](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) DOI: [https://doi.org/10.18112/openneuro.ds005574.v1.0.2](https://doi.org/10.18112/openneuro.ds005574.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005574 >>> dataset = DS005574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005574) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005586: eeg dataset, 23 subjects *Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes* Access recordings and metadata through EEGDash. **Citation:** Cemre Baykan, Alexander C. Schütz (2024). *Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes*. [10.18112/openneuro.ds005586.v2.0.0](https://doi.org/10.18112/openneuro.ds005586.v2.0.0) Modality: eeg Subjects: 23 Recordings: 23 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005586 dataset = DS005586(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005586(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005586( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005586, title = {Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes}, author = {Cemre Baykan and Alexander C. 
Schütz}, doi = {10.18112/openneuro.ds005586.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005586.v2.0.0}, } ``` ## About This Dataset **Passive Viewing Task** 23 participants took part in this study in return for a monetary incentive at the University of Marburg. Participants performed a passive viewing task in a dimly lit room. The visual scene consisted of a game board, game pieces, and a mesh as an occluder. Each trial started with a fixation cross presentation for one second plus the duration of the drift correction procedure. The game board and occluder were presented for two seconds, while game pieces only appeared in the last one second of this presentation. Following the “partially occluded scene”, the occluder disappeared to uncover the hidden parts of the game board along with the visible game pieces, leading to the “uncovered scene” phase. The uncovered scene was presented for one second. The experiment consisted of eight blocks of 80 trials each. There were two conditions of initially visible game pieces: 4 or 32 pieces, each with 8 uncovered conditions: 0, 1, 2, 4, 28, 30, 31 or 32 uncovered game pieces. All 16 conditions were repeated 40 times during the experiment, summing up to 640 trials in total. Participants 9, 10 and 15 were excluded from the analyses due to excessive head movements and equipment malfunction. ## Dataset Information | Dataset ID | `DS005586` | |----------------|------------| | Title | Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes | | Author (year) | `Baykan2024` | | Canonical | — | | Importable as | `DS005586`, `Baykan2024` | | Year | 2024 | | Authors | Cemre Baykan, Alexander C. 
Schütz | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005586.v2.0.0](https://doi.org/10.18112/openneuro.ds005586.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005586) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) | [Source URL](https://openneuro.org/datasets/ds005586) | ### Copy-paste BibTeX ```bibtex @dataset{ds005586, title = {Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes}, author = {Cemre Baykan and Alexander C. Schütz}, doi = {10.18112/openneuro.ds005586.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005586.v2.0.0}, } ``` ## Technical Details - Subjects: 23 - Recordings: 23 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 33.52867694444444 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 28.3 GB - File count: 23 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005586.v2.0.0 - Source: openneuro - OpenNeuro: [ds005586](https://openneuro.org/datasets/ds005586) - NeMAR: [ds005586](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) ## API Reference Use the `DS005586` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005586(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes * **Study:** `ds005586` (OpenNeuro) * **Author (year):** `Baykan2024` * **Canonical:** — Also importable as: `DS005586`, `Baykan2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005586](https://openneuro.org/datasets/ds005586) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005586](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) DOI: [https://doi.org/10.18112/openneuro.ds005586.v2.0.0](https://doi.org/10.18112/openneuro.ds005586.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005586 >>> dataset = DS005586(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005586) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005594: eeg dataset, 16 subjects *Alphabetic Decision Task (Arial Light Font)* Access recordings and metadata through EEGDash. **Citation:** Jack E. Taylor, Rasmus Sinn, Cosimo Iaia, Christian J. Fiebach (2024). *Alphabetic Decision Task (Arial Light Font)*. [10.18112/openneuro.ds005594.v1.0.3](https://doi.org/10.18112/openneuro.ds005594.v1.0.3) Modality: eeg Subjects: 16 Recordings: 16 License: CC0 Source: openneuro Citations: 1.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005594 dataset = DS005594(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005594(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005594( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005594, title = {Alphabetic Decision Task (Arial Light Font)}, author = {Jack E. Taylor and Rasmus Sinn and Cosimo Iaia and Christian J. 
Fiebach}, doi = {10.18112/openneuro.ds005594.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005594.v1.0.3}, } ``` ## About This Dataset Generated from raw data by MNE-BIDS (Appelhoff et al., 2019) and custom code to join it to behavioural data, stimulus information, and metadata. **Notes on the Data** For full details on this dataset, see our preprint: Taylor et al. (2024) [https://doi.org/10.1101/2024.11.11.622929](https://doi.org/10.1101/2024.11.11.622929) General notes:

- An issue during recording meant that sub-05 completed the first block without data being saved. The experiment was restarted from the beginning for this participant. This participant was not included in our analyses, but the data are included in this dataset. They are also identified with the `recording_restarted` field in `participants.tsv`.
- A separate issue during recording meant that EEG data for some trials were lost for `sub-01`, though enough trials were recorded in total to meet our criteria for inclusion in the analysis. The raw data comprised two separate recordings. In this dataset, the two recordings are concatenated end-to-end into one file. The point at which the files are joined is marked with a boundary event. This participant is identified with the `recording_interrupted` field in `participants.tsv`.
- During the course of the experiment, we identified an issue with the wiring in one splitter box, which meant that voltages from channels FT7 and FC3 were swapped in the raw recorded data. We elected to keep the wiring as it was for the duration of the experiment, and then swapped the data from the two channels in the code that generated this BIDS dataset. This means that this issue has been corrected in this BIDS version of the data.
- “BAD” periods (MNE term) for key presses and break periods are included in the events files.
- Recording dates/times have been anonymised by shifting all recordings backwards in time by a constant number of days (same constant for all participants). This obscures information that may be used to identify participants, but preserves time-of-day information, and the relative times elapsed between different recordings.

**References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS005594` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Alphabetic Decision Task (Arial Light Font) | | Author (year) | `Taylor2024` | | Canonical | — | | Importable as | `DS005594`, `Taylor2024` | | Year | 2024 | | Authors | Jack E. Taylor, Rasmus Sinn, Cosimo Iaia, Christian J.
Fiebach | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005594.v1.0.3](https://doi.org/10.18112/openneuro.ds005594.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005594) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) | [Source URL](https://openneuro.org/datasets/ds005594) | ### Copy-paste BibTeX ```bibtex @dataset{ds005594, title = {Alphabetic Decision Task (Arial Light Font)}, author = {Jack E. Taylor and Rasmus Sinn and Cosimo Iaia and Christian J. Fiebach}, doi = {10.18112/openneuro.ds005594.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005594.v1.0.3}, } ``` ## Technical Details - Subjects: 16 - Recordings: 16 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 12.933862222222222 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 10.9 GB - File count: 16 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005594.v1.0.3 - Source: openneuro - OpenNeuro: [ds005594](https://openneuro.org/datasets/ds005594) - NeMAR: [ds005594](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) ## API Reference Use the `DS005594` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005594(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphabetic Decision Task (Arial Light Font) * **Study:** `ds005594` (OpenNeuro) * **Author (year):** `Taylor2024` * **Canonical:** — Also importable as: `DS005594`, `Taylor2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005594](https://openneuro.org/datasets/ds005594) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005594](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) DOI: [https://doi.org/10.18112/openneuro.ds005594.v1.0.3](https://doi.org/10.18112/openneuro.ds005594.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005594 >>> dataset = DS005594(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005594) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005620: eeg dataset, 21 subjects *A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation* Access recordings and metadata through EEGDash. **Citation:** Imad J. Bajwa, Andre S. Nilsen, René Skukies, Arnfinn Aamodt, Gernot Ernst, Johan F. Storm, Bjørn E. Juel (2024). *A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation*. [10.18112/openneuro.ds005620.v1.0.0](https://doi.org/10.18112/openneuro.ds005620.v1.0.0) Modality: eeg Subjects: 21 Recordings: 202 License: CC0 Source: openneuro Citations: 0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005620 dataset = DS005620(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005620(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005620( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors.
**BibTeX** ```bibtex @dataset{ds005620, title = {A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation}, author = {Imad J. Bajwa and Andre S. Nilsen and René Skukies and Arnfinn Aamodt and Gernot Ernst and Johan F. Storm and Bjørn E. Juel}, doi = {10.18112/openneuro.ds005620.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005620.v1.0.0}, } ``` ## About This Dataset **A Repeated Awakening Study Exploring the Capacity of Complexity Measures to Capture Dreaming During Propofol Sedation** **Description** This dataset contains EEG data from a study investigating the effects of propofol sedation on dreaming and the applicability of complexity measures in capturing this phenomenon. The study aims to understand the dynamics of consciousness during sedation and the potential for EEG complexity measures to reflect subjective experiences. **Authors** - Imad J. Bajwa - Andre S. Nilsen - René Skukies - Arnfinn Aamodt - Gernot Ernst - Johan F. Storm - Bjørn E. Juel **Ethics Statement** Approved by the Regional Committees for Medical Research Ethics South East Norway (REK), ref. 2015/1520. **License** This dataset is licensed under CC-BY-4.0. **File Structure** The dataset is organized by subject, with each subject’s EEG files stored in a dedicated directory. Below is the structure for the EEG data associated with a sample subject (sub-1016).
**Directory Structure** ```text /Volumes/IMADS SSD/Anesthesia_conciousness_paper/project_BIDS/ ```

```text
└── sub-1016/
    └── eeg/
        ├── sub-1016_task-awake_acq-EC_channels.tsv
        ├── sub-1016_task-awake_acq-EC_eeg.eeg
        ├── sub-1016_task-awake_acq-EC_eeg.json
        ├── sub-1016_task-awake_acq-EC_eeg.vhdr
        ├── sub-1016_task-awake_acq-EC_eeg.vmrk
        ├── sub-1016_task-awake_acq-EC_events.json
        ├── sub-1016_task-awake_acq-EC_events.tsv
        ├── ...
```

**File Naming Convention** EEG files are named in the format: `sub-<label>_task-<label>_acq-<label>_run-<index>.<extension>` **Example Filenames** - `sub-1016_task-awake_acq-EC_channels.tsv` - `sub-1016_task-sed_acq-rest_run-1_eeg.eeg` **Filename Components**

- **sub-**: Identifier for the subject (e.g., `sub-1016`).
- **task-**: Indicates the task condition:
  - `awake`: Wakefulness
  - `sed`: Sedation condition
  - `sed2`: One-minute resting EEG recorded just before an awakening
- **acq-**: Type of acquisition:
  - `EC`: Eyes Closed (during wakefulness)
  - `EO`: Eyes Open (during wakefulness)
  - `tms`: Session with Transcranial Magnetic Stimulation
  - `rest`: Rest condition (during sedation)
- **run-**: Specifies the run number for the data collection:
  - `run-1`, `run-2`, `run-3` (indicating different awakenings in sedation)
- **.<extension>**: File extension indicating the type of file (e.g., `.eeg`, `.vhdr`, `.vmrk`, etc.).

**File Types**

- **.eeg**: Raw EEG data.
- **.vhdr**: BrainVision header file.
- **.vmrk**: BrainVision marker file.
- **_events.json / _events.tsv**: Event markers.
- **_channels.tsv / _eeg.json**: Channel information and metadata.

**Usage Instructions** To analyze the data, you may need software such as Python with MNE-Python. Please refer to the MNE documentation for details on how to load and manipulate the datasets. **Contact Information** For questions regarding this dataset, please contact: Imad J. Bajwa Email: [imadjb@uio.no](mailto:imadjb@uio.no) Bjørn E.
Juel Email: [Bjorneju@gmail.com](mailto:Bjorneju@gmail.com) **Acknowledgements** We thank the participants and the supporting research staff for their contributions to this study. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS005620` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation | | Author (year) | `Bajwa2024` | | Canonical | — | | Importable as | `DS005620`, `Bajwa2024` | | Year | 2024 | | Authors | Imad J. Bajwa, Andre S. Nilsen, René Skukies, Arnfinn Aamodt, Gernot Ernst, Johan F. Storm, Bjørn E.
Juel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005620.v1.0.0](https://doi.org/10.18112/openneuro.ds005620.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005620) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) | [Source URL](https://openneuro.org/datasets/ds005620) | ### Copy-paste BibTeX ```bibtex @dataset{ds005620, title = {A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation}, author = {Imad J. Bajwa and Andre S. Nilsen and René Skukies and Arnfinn Aamodt and Gernot Ernst and Johan F. Storm and Bjørn E. Juel}, doi = {10.18112/openneuro.ds005620.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005620.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 202 - Tasks: 3 - Channels: 64 (132), 65 (70) - Sampling rate (Hz): 5000.0 - Duration (hours): 17.939717444444444 - Pathology: Healthy - Modality: Anesthesia - Type: Clinical/Intervention - Size on disk: 77.3 GB - File count: 202 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005620.v1.0.0 - Source: openneuro - OpenNeuro: [ds005620](https://openneuro.org/datasets/ds005620) - NeMAR: [ds005620](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) ## API Reference Use the `DS005620` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation * **Study:** `ds005620` (OpenNeuro) * **Author (year):** `Bajwa2024` * **Canonical:** — Also importable as: `DS005620`, `Bajwa2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 21; recordings: 202; tasks: 3.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005620](https://openneuro.org/datasets/ds005620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005620](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) DOI: [https://doi.org/10.18112/openneuro.ds005620.v1.0.0](https://doi.org/10.18112/openneuro.ds005620.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005620 >>> dataset = DS005620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005620) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005624: ieeg dataset, 24 subjects *Color Change Detection Task* Access recordings and metadata through EEGDash. **Citation:** [Unspecified] (2024). *Color Change Detection Task*. [10.18112/openneuro.ds005624.v1.0.0](https://doi.org/10.18112/openneuro.ds005624.v1.0.0) Modality: ieeg Subjects: 24 Recordings: 35 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005624 dataset = DS005624(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005624(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005624( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005624, title = {Color Change Detection Task}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds005624.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005624.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS005624` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Color Change Detection Task | | Author (year) | `DS5624_ColorChangeDetection` | | Canonical | — | | Importable as | `DS005624`, `DS5624_ColorChangeDetection` | | Year | 2024 | | Authors | [Unspecified] | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005624.v1.0.0](https://doi.org/10.18112/openneuro.ds005624.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005624) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) | [Source URL](https://openneuro.org/datasets/ds005624) | ### Copy-paste BibTeX ```bibtex @dataset{ds005624, title = {Color Change Detection Task}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds005624.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005624.v1.0.0}, } ``` ##
Technical Details - Subjects: 24 - Recordings: 35 - Tasks: 1 - Channels: 74 (4), 223 (3), 172 (3), 95 (2), 162 (2), 111 (2), 152 (2), 115 (2), 151 (2), 100 (2), 205, 189, 191, 228, 173, 101, 118, 127, 128, 123, 166 - Sampling rate (Hz): 512.0 (26), 1024.0 (9) - Duration (hours): 11.40025417751736 - Pathology: Not specified - Modality: Visual - Type: Memory - Size on disk: 13.8 GB - File count: 35 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005624.v1.0.0 - Source: openneuro - OpenNeuro: [ds005624](https://openneuro.org/datasets/ds005624) - NeMAR: [ds005624](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) ## API Reference Use the `DS005624` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Color Change Detection Task * **Study:** `ds005624` (OpenNeuro) * **Author (year):** `DS5624_ColorChangeDetection` * **Canonical:** — Also importable as: `DS005624`, `DS5624_ColorChangeDetection`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 24; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005624](https://openneuro.org/datasets/ds005624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005624](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) DOI: [https://doi.org/10.18112/openneuro.ds005624.v1.0.0](https://doi.org/10.18112/openneuro.ds005624.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005624 >>> dataset = DS005624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005624) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005628: eeg dataset, 102 subjects *Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site* Access recordings and metadata through EEGDash. 
**Citation:** Juan Pablo Rosado-Aíza, Fernando José Domínguez-Morales, Tania Yareni Pech-Canul, Paola Guadalupe Vázquez-Rodríguez, Gustavo Navas-Reascos, Luz María Alonso-Valerdi, David I. Ibarra Zarate (2024). *Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site*. [10.18112/openneuro.ds005628.v1.0.0](https://doi.org/10.18112/openneuro.ds005628.v1.0.0) Modality: eeg Subjects: 102 Recordings: 306 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005628 dataset = DS005628(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005628(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005628( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005628, title = {Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site}, author = {Juan Pablo Rosado-Aíza and Fernando José Domínguez-Morales and Tania Yareni Pech-Canul and Paola Guadalupe Vázquez-Rodríguez and Gustavo Navas-Reascos and Luz María Alonso-Valerdi and David I. Ibarra Zarate}, doi = {10.18112/openneuro.ds005628.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005628.v1.0.0}, } ``` ## About This Dataset **README** - Authors Juan Pablo Rosado-Aíza, Fernando José Domínguez-Morales, Tania Yareni Pech-Canul, Paola Guadalupe Vázquez-Rodríguez, Gustavo Navas-Reascos, Luz María Alonso-Valerdi, David I. 
Ibarra Zarate - Contact person Gustavo Navas-Reascos [https://orcid.org/0000-0003-0250-765X](https://orcid.org/0000-0003-0250-765X) [A01681952@tec.mx](mailto:A01681952@tec.mx) **Overview** - Project name Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site - Year that the project ran 2024 - Brief overview The purpose of this dataset is to analyze user experience in a virtual reality (VR) environment, focusing on a comparative study between visual and audiovisual stimuli based on the archaeological site of Edzna, Mexico. The immersive experience allowed participants to explore the site without needing to physically be there, and the experiment was conducted in a museum setting, offering a unique experience that goes beyond traditional visual-only exhibits. The dataset includes both Electroencephalography (EEG) recordings from eight channels (Fz, C3, Cz, C4, Pz, PO7, Oz, and PO8) and user responses to the User Experience Questionnaire (UEQ), providing necessary data for future studies on how immersive environments affect user perception. The EEG data was collected using a Unicorn Hybrid Black EEG system with a sampling rate of 250 Hz. Participants were exposed to two conditions: a visual-only stimulus and an audiovisual stimulus, both of which represented scenes from the archaeological site in VR. Prior to exposure, a baseline measurement was taken to capture the initial state of the participants. Data collection was conducted in MOSTLA, a digital innovation lab at Tecnologico de Monterrey campus, and the Museum of Contemporary Art in Monterrey.
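To make the structure described above concrete, here is a minimal sketch of how recording segments (baseline, then the visual and audiovisual stimuli) sampled at 250 Hz map onto sample indices in a continuous signal. The helper function and the example durations are hypothetical illustrations, not part of eegdash:

```python
SFREQ = 250.0  # sampling rate of the Unicorn Hybrid Black system described above

def phase_slices(durations_s, sfreq=SFREQ):
    """Map per-phase durations in seconds to (start, stop) sample indices,
    assuming the phases are stored back-to-back in one continuous recording."""
    slices, start = [], 0
    for d in durations_s:
        stop = start + int(round(d * sfreq))
        slices.append((start, stop))
        start = stop
    return slices

# Hypothetical durations: 60 s baseline, 120 s visual, 120 s audiovisual.
print(phase_slices([60, 120, 120]))
# [(0, 15000), (15000, 45000), (45000, 75000)]
```

The returned index pairs can then be used to slice the signal array of one recording into its experimental phases.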
Each EEG recording is shared in .set format and follows the BIDS structure. The recordings include eight channels of brain activity for the baseline, visual, and audiovisual conditions. The signals are provided in both raw and preprocessed form. Additionally, an .xlsx file provides basic participant metadata, such as age, gender, and a unique identifier, as well as the UEQ responses. Each EEG file contains data segmented into the three phases of the experiment: baseline, visual stimulus, and audiovisual stimulus, allowing researchers to directly compare neural responses across conditions. This dataset offers a comprehensive resource for researchers interested in investigating the effects of immersive VR environments on user engagement and attention. - Description of the contents of the dataset > sub-N - Raw data > sub-Np - Preprocessed data > Example: > sub-1 - Raw data of subject 1 > > sub-1p - Preprocessed data of subject 1 **Subjects** Data were obtained from a total of 51 participants. **Apparatus** Unicorn Hybrid Black EEG system; VR headset; headphones. **Experimental location** MOSTLA, at Tecnologico de Monterrey, located at Av. Eugenio Garza Sada 2501 Sur, Tecnologico, 64849 Monterrey, N.L., Mexico; and MARCO, a contemporary art museum located in Monterrey at Zuazua y Jardón, Centro, 64000 Monterrey, N.L., Mexico.
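The eight-channel, 250 Hz recordings described above lend themselves to simple band-power comparisons across the baseline, visual, and audiovisual phases. A minimal sketch, assuming NumPy and SciPy, with synthetic data standing in for one recording; a real analysis would instead take the array from `dataset.datasets[0].raw.get_data()`:

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for one recording with the dataset's channel count (8)
# and sampling rate (250 Hz).
rng = np.random.default_rng(0)
sfreq = 250.0
n_channels, n_seconds = 8, 60
data = rng.standard_normal((n_channels, int(n_seconds * sfreq)))

# Per-channel alpha-band (8-12 Hz) power via Welch's method -- one simple
# summary to compare across the three experimental phases.
freqs, psd = welch(data, fs=sfreq, nperseg=int(2 * sfreq))
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[:, alpha].mean(axis=1)
print(alpha_power.shape)  # one value per channel
```

The same computation applied separately to the baseline, visual, and audiovisual segments gives a direct per-channel contrast.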
**Notes** All metadata, including the UEQ answers, can be obtained from the file metadata.xlsx. The videos presented to the participants are available at: Audiovisual video: [https://youtu.be/FBWbtSFwVuo](https://youtu.be/FBWbtSFwVuo) Visual video: [https://youtu.be/aLzzl0ygBnc](https://youtu.be/aLzzl0ygBnc) ## Dataset Information | Dataset ID | `DS005628` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site | | Author (year) | `RosadoAiza2024` | | Canonical | — | | Importable as | `DS005628`, `RosadoAiza2024` | | Year | 2024 | | Authors | Juan Pablo Rosado-Aíza, Fernando José Domínguez-Morales, Tania Yareni Pech-Canul, Paola Guadalupe Vázquez-Rodríguez, Gustavo Navas-Reascos, Luz María Alonso-Valerdi, David I. Ibarra Zarate | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005628.v1.0.0](https://doi.org/10.18112/openneuro.ds005628.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005628) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) | [Source URL](https://openneuro.org/datasets/ds005628) | ### Copy-paste BibTeX ```bibtex @dataset{ds005628, title = {Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site}, author = {Juan Pablo Rosado-Aíza and Fernando José Domínguez-Morales and Tania Yareni Pech-Canul and Paola Guadalupe Vázquez-Rodríguez and Gustavo Navas-Reascos and Luz María Alonso-Valerdi and David I. 
Ibarra Zarate}, doi = {10.18112/openneuro.ds005628.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005628.v1.0.0}, } ``` ## Technical Details - Subjects: 102 - Recordings: 306 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 250.0 - Duration (hours): 20.953197777777778 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 633.7 MB - File count: 306 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005628.v1.0.0 - Source: openneuro - OpenNeuro: [ds005628](https://openneuro.org/datasets/ds005628) - NeMAR: [ds005628](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) ## API Reference Use the `DS005628` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005628(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site * **Study:** `ds005628` (OpenNeuro) * **Author (year):** `RosadoAiza2024` * **Canonical:** — Also importable as: `DS005628`, `RosadoAiza2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 102; recordings: 306; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005628](https://openneuro.org/datasets/ds005628) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005628](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) DOI: [https://doi.org/10.18112/openneuro.ds005628.v1.0.0](https://doi.org/10.18112/openneuro.ds005628.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005628 >>> dataset = DS005628(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005628) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005642: eeg dataset, 21 subjects *illusory-face-eeg* Access recordings and metadata through EEGDash. **Citation:** Amanda K Robinson, Greta Stuart, Sophia M Shatek, Adrian Herbert, Jessica Taubert (2024). *illusory-face-eeg*. 
[10.18112/openneuro.ds005642.v1.0.1](https://doi.org/10.18112/openneuro.ds005642.v1.0.1) Modality: eeg Subjects: 21 Recordings: 21 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005642 dataset = DS005642(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005642(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005642( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005642, title = {illusory-face-eeg}, author = {Amanda K Robinson and Greta Stuart and Sophia M Shatek and Adrian Herbert and Jessica Taubert}, doi = {10.18112/openneuro.ds005642.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005642.v1.0.1}, } ``` ## About This Dataset EEG data for: Robinson AK, Stuart G, Shatek SM, Herbert A, Taubert J. (2025). Neural correlates reveal separate stages of spontaneous face perception. Preprint: [https://doi.org/10.31234/osf.io/vrtbx_v1](https://doi.org/10.31234/osf.io/vrtbx_v1) 300 images of human face, illusory face and matched non-face object stimuli. Three behavioural tasks and neural measurements (EEG) using these stimuli. - Spontaneous dissimilarity: triplet odd-one-out task - Face-like ratings: participants were asked to rate how easily they could see a face in the image, on a scale of 0-10. 
- Face/object discrimination: speeded categorisation task - EEG: Stimuli were presented centrally at 3.75 Hz, while participants performed an orthogonal target detection task Stimuli, behavioural data and code available at GitHub: [https://doi.org/10.5281/zenodo.15833508](https://doi.org/10.5281/zenodo.15833508) ## Dataset Information | Dataset ID | `DS005642` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | illusory-face-eeg | | Author (year) | `Robinson2024_illusory` | | Canonical | — | | Importable as | `DS005642`, `Robinson2024_illusory` | | Year | 2024 | | Authors | Amanda K Robinson, Greta Stuart, Sophia M Shatek, Adrian Herbert, Jessica Taubert | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005642.v1.0.1](https://doi.org/10.18112/openneuro.ds005642.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005642) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) | [Source URL](https://openneuro.org/datasets/ds005642) | ### Copy-paste BibTeX ```bibtex @dataset{ds005642, title = {illusory-face-eeg}, author = {Amanda K Robinson and Greta Stuart and Sophia M Shatek and Adrian Herbert and Jessica Taubert}, doi = {10.18112/openneuro.ds005642.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005642.v1.0.1}, } ``` ## Technical Details - Subjects: 21 - Recordings: 21 - Tasks: 1 - Channels: 68 - Sampling rate (Hz): 1024.0 - Duration (hours): 18.59861111111111 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 13.8 GB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005642.v1.0.1 - Source: openneuro - OpenNeuro: [ds005642](https://openneuro.org/datasets/ds005642) - NeMAR: [ds005642](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) ## API Reference Use the `DS005642` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS005642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) illusory-face-eeg * **Study:** `ds005642` (OpenNeuro) * **Author (year):** `Robinson2024_illusory` * **Canonical:** — Also importable as: `DS005642`, `Robinson2024_illusory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
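The `query` parameter accepts MongoDB-style filter documents, such as `{"subject": {"$in": ["01", "02"]}}`. To make the selection semantics concrete, here is a small, hypothetical pure-Python matcher (not EEGDash API; the real filtering is applied to the metadata records for you) covering the plain-equality and `$in` forms:

```python
# Hypothetical helper illustrating MongoDB-style filter semantics.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$in": [...]}
            for op, arg in cond.items():
                if op == "$in" and value not in arg:
                    return False
                if op == "$eq" and value != arg:
                    return False
        elif value != cond:  # plain equality form
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01', '02']
```

Fields usable in such filters are restricted to those listed in `ALLOWED_QUERY_FIELDS`, and the dataset filter is always ANDed in.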
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005642](https://openneuro.org/datasets/ds005642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005642](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) DOI: [https://doi.org/10.18112/openneuro.ds005642.v1.0.1](https://doi.org/10.18112/openneuro.ds005642.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005642 >>> dataset = DS005642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005642) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005648: eeg dataset, 21 subjects *Mapping object space dimensions: new insights from temporal dynamics* Access recordings and metadata through EEGDash. **Citation:** Alexis Kidder(\*), Genevieve Quek, Tijl Grootswagers (2024). *Mapping object space dimensions: new insights from temporal dynamics*. 
[10.18112/openneuro.ds005648.v1.0.3](https://doi.org/10.18112/openneuro.ds005648.v1.0.3) Modality: eeg Subjects: 21 Recordings: 21 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005648 dataset = DS005648(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005648(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005648( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005648, title = {Mapping object space dimensions: new insights from temporal dynamics}, author = {Alexis Kidder(*) and Genevieve Quek and Tijl Grootswagers}, doi = {10.18112/openneuro.ds005648.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005648.v1.0.3}, } ``` ## About This Dataset **README** Experiment details for Mapping object space dimensions: new insights from temporal dynamics. The main folder contains the raw EEG data for all participants in standard BIDS format. See references. The “sourcedata” folder contains the trial behavioral data collected during the EEG session. 
The data in this folder follow this structure: > - sourcedata > - sub-[participant number]_task-targets_events.csv: contains all the events for each trial in the EEG session, detailing what was shown on the screen > - sub-[participant number]: contains BIDS-formatted raw EEG data > - sub-[participant name]_task-targets_events_short.tsv: information about the channels used and sampling rate for all trials > - sub-[participant name]_task-targets_eeg.bdf: EEG raw data ## Dataset Information | Dataset ID | `DS005648` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mapping object space dimensions: new insights from temporal dynamics | | Author (year) | `Kidder2024` | | Canonical | — | | Importable as | `DS005648`, `Kidder2024` | | Year | 2024 | | Authors | Alexis Kidder(\*), Genevieve Quek, Tijl Grootswagers | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005648.v1.0.3](https://doi.org/10.18112/openneuro.ds005648.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005648) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) | [Source URL](https://openneuro.org/datasets/ds005648) | ### Copy-paste BibTeX ```bibtex @dataset{ds005648, title = {Mapping object space dimensions: new insights from temporal dynamics}, author = {Alexis Kidder(*) and Genevieve Quek and Tijl Grootswagers}, doi = {10.18112/openneuro.ds005648.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds005648.v1.0.3}, } ``` ## Technical Details - Subjects: 21 - Recordings: 21 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 2048.0 - Duration (hours): 11.576944444444443 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 15.5 GB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005648.v1.0.3 - Source: openneuro - OpenNeuro: 
[ds005648](https://openneuro.org/datasets/ds005648) - NeMAR: [ds005648](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) ## API Reference Use the `DS005648` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mapping object space dimensions: new insights from temporal dynamics * **Study:** `ds005648` (OpenNeuro) * **Author (year):** `Kidder2024` * **Canonical:** — Also importable as: `DS005648`, `Kidder2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
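The sourcedata layout described above pairs each subject with a `*_task-targets_events.csv` file listing what was shown on each trial. A minimal, self-contained sketch of loading such a file with the standard library; the column names below are illustrative placeholders, not the dataset's actual header:

```python
import csv
import io

# Illustrative events table standing in for one sub-XX_task-targets_events.csv;
# the real files describe, per trial, what was shown on the screen.
events_csv = io.StringIO(
    "trial,stimulus\n"
    "1,object_032\n"
    "2,object_107\n"
)

events = list(csv.DictReader(events_csv))
print(len(events), events[0]["stimulus"])  # 2 object_032
```

For real use, replace the in-memory buffer with `open(path)` on the downloaded sourcedata file and inspect the header row to get the actual column names.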
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005648](https://openneuro.org/datasets/ds005648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005648](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) DOI: [https://doi.org/10.18112/openneuro.ds005648.v1.0.3](https://doi.org/10.18112/openneuro.ds005648.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS005648 >>> dataset = DS005648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005648) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005662: eeg dataset, 80 subjects *A comprehensive EEG dataset for investigating visual touch perception* Access recordings and metadata through EEGDash. **Citation:** Sophie Smit, Almudena Ramírez-Haro, Manuel Varlet, Denise Moerel, Genevieve L. Quek, Tijl Grootswagers (2024). *A comprehensive EEG dataset for investigating visual touch perception*. 
[10.18112/openneuro.ds005662.v2.0.1](https://doi.org/10.18112/openneuro.ds005662.v2.0.1) Modality: eeg Subjects: 80 Recordings: 80 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005662 dataset = DS005662(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005662(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005662( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005662, title = {A comprehensive EEG dataset for investigating visual touch perception}, author = {Sophie Smit and Almudena Ramírez-Haro and Manuel Varlet and Denise Moerel and Genevieve L. Quek and Tijl Grootswagers}, doi = {10.18112/openneuro.ds005662.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds005662.v2.0.1}, } ``` ## About This Dataset Data collection took place at The MARCS Institute for Brain, Behaviour and Development in Sydney, Australia. The study was approved by the Western Sydney University Ethics Committee. We recorded EEG data while participants viewed rapid streams of videos adapted from the Validated Touch-Video Database (Smit & Rich, 2025) depicting touch to a hand. Both the adapted videos used in this project, and original videos and validation data, are available on OSF ([https://osf.io/jvkqa/](https://osf.io/jvkqa/)). There were 32 sequences in total with a total of 2880 non-target trials (90 unique videos, each presented 8 times) alongside a variable number of target trials (showing touch to an object). 
Between trials there was an inter-trial interval of 200 ms. The experimental task lasted approximately 55 minutes including breaks. We also recorded questionnaire responses. Whole-brain 64-channel EEG data were recorded at 2048 Hz using an Active Two Biosemi system (Biosemi, Inc.) with standard 10-20 caps. Stimuli were presented using Python and PsychoPy software version 2023.3.1. ## Dataset Information | Dataset ID | `DS005662` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A comprehensive EEG dataset for investigating visual touch perception | | Author (year) | `Smit2024` | | Canonical | — | | Importable as | `DS005662`, `Smit2024` | | Year | 2024 | | Authors | Sophie Smit, Almudena Ramírez-Haro, Manuel Varlet, Denise Moerel, Genevieve L. Quek, Tijl Grootswagers | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005662.v2.0.1](https://doi.org/10.18112/openneuro.ds005662.v2.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005662) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) | [Source URL](https://openneuro.org/datasets/ds005662) | ### Copy-paste BibTeX ```bibtex @dataset{ds005662, title = {A comprehensive EEG dataset for investigating visual touch perception}, author = {Sophie Smit and Almudena Ramírez-Haro and Manuel Varlet and Denise Moerel and Genevieve L. 
Quek and Tijl Grootswagers}, doi = {10.18112/openneuro.ds005662.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds005662.v2.0.1}, } ``` ## Technical Details - Subjects: 80 - Recordings: 80 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 2048.0 - Duration (hours): 80.46722222222222 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 107.8 GB - File count: 80 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005662.v2.0.1 - Source: openneuro - OpenNeuro: [ds005662](https://openneuro.org/datasets/ds005662) - NeMAR: [ds005662](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) ## API Reference Use the `DS005662` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005662(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A comprehensive EEG dataset for investigating visual touch perception * **Study:** `ds005662` (OpenNeuro) * **Author (year):** `Smit2024` * **Canonical:** — Also importable as: `DS005662`, `Smit2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 80; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005662](https://openneuro.org/datasets/ds005662) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005662](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) DOI: [https://doi.org/10.18112/openneuro.ds005662.v2.0.1](https://doi.org/10.18112/openneuro.ds005662.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005662 >>> dataset = DS005662(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005662) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005670: ieeg dataset, 2 subjects *SEEG Resting State Recording* Access recordings and metadata through EEGDash. **Citation:** Prof. Pengfei Xu (2024). *SEEG Resting State Recording*. 
[10.18112/openneuro.ds005670.v1.0.0](https://doi.org/10.18112/openneuro.ds005670.v1.0.0) Modality: ieeg Subjects: 2 Recordings: 2 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005670 dataset = DS005670(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005670(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005670( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005670, title = {SEEG Resting State Recording}, author = {Prof. Pengfei Xu}, doi = {10.18112/openneuro.ds005670.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005670.v1.0.0}, } ``` ## About This Dataset **Dataset description** This SEEG raw dataset includes resting state recordings for two patients with epilepsy. The depth electrodes used in this dataset are Sinovation SDE-08 medical-grade stainless steel electrodes, with the following specifications: > Diameter: 0.8 mm > Contact length: 2 mm > Insulator length: 1.5 mm > Distance between the center of two contacts: 3.5 mm **Between 8 and 16 contacts on each electrode** For questions, please contact Pengfei Xu ([pxu@bnu.edu.cn](mailto:pxu@bnu.edu.cn)). 
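The electrode specifications above fully determine the contact layout along each shaft: a 2 mm contact followed by a 1.5 mm insulator gives the stated 3.5 mm center-to-center distance. A short sketch computing contact-center positions for one electrode (the 16-contact count is one end of the stated 8-16 range):

```python
# SDE-08 contact geometry from the dataset description (all values in mm).
contact_len, insulator_len = 2.0, 1.5
pitch = contact_len + insulator_len
assert pitch == 3.5  # consistent with the stated center-to-center distance

n_contacts = 16  # electrodes carry between 8 and 16 contacts
centers = [i * pitch for i in range(n_contacts)]
print(centers[:4])   # [0.0, 3.5, 7.0, 10.5]
print(centers[-1])   # span from first to last contact center: 52.5 mm
```

Such per-contact offsets are useful when reconstructing contact coordinates along a known electrode trajectory.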
## Dataset Information | Dataset ID | `DS005670` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SEEG Resting State Recording | | Author (year) | `Xu2024_SEEG_Resting_State` | | Canonical | — | | Importable as | `DS005670`, `Xu2024_SEEG_Resting_State` | | Year | 2024 | | Authors | Prof. Pengfei Xu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005670.v1.0.0](https://doi.org/10.18112/openneuro.ds005670.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005670) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) | [Source URL](https://openneuro.org/datasets/ds005670) | ### Copy-paste BibTeX ```bibtex @dataset{ds005670, title = {SEEG Resting State Recording}, author = {Prof. Pengfei Xu}, doi = {10.18112/openneuro.ds005670.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005670.v1.0.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 2 - Tasks: 1 - Channels: 186, 238 - Sampling rate (Hz): 2000.0 - Duration (hours): 0.24251 - Pathology: Epilepsy - Modality: Resting State - Type: Resting-state - Size on disk: 708.8 MB - File count: 2 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005670.v1.0.0 - Source: openneuro - OpenNeuro: [ds005670](https://openneuro.org/datasets/ds005670) - NeMAR: [ds005670](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) ## API Reference Use the `DS005670` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SEEG Resting State Recording * **Study:** `ds005670` (OpenNeuro) * **Author (year):** `Xu2024_SEEG_Resting_State` * **Canonical:** — Also importable as: `DS005670`, `Xu2024_SEEG_Resting_State`. 
Modality: `ieeg`; Experiment type: `Resting-state`; Subject type: `Epilepsy`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005670](https://openneuro.org/datasets/ds005670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005670](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) DOI: [https://doi.org/10.18112/openneuro.ds005670.v1.0.0](https://doi.org/10.18112/openneuro.ds005670.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005670 >>> dataset = DS005670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005670) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005672: eeg dataset, 3 subjects *PerceiveImagine* Access recordings and metadata through EEGDash. **Citation:** Li Zhiyuan, Zhao Jiaxin (2024). *PerceiveImagine*. [10.18112/openneuro.ds005672.v1.0.0](https://doi.org/10.18112/openneuro.ds005672.v1.0.0) Modality: eeg Subjects: 3 Recordings: 3 License: CC0 Source: openneuro Citations: 2.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005672 dataset = DS005672(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005672(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005672( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005672, title = {PerceiveImagine}, author = {Li Zhiyuan and Zhao Jiaxin}, doi = {10.18112/openneuro.ds005672.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005672.v1.0.0}, } ``` ## About This Dataset Participants perceive the image for 6 seconds based on the prompt, then close their eyes and imagine the image they just saw for 6 seconds based on the prompt. 
After hearing the prompt sound, they enter the next loop ## Dataset Information | Dataset ID | `DS005672` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PerceiveImagine | | Author (year) | `Zhiyuan2024` | | Canonical | — | | Importable as | `DS005672`, `Zhiyuan2024` | | Year | 2024 | | Authors | Li Zhiyuan, Zhao Jiaxin | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005672.v1.0.0](https://doi.org/10.18112/openneuro.ds005672.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005672) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) | [Source URL](https://openneuro.org/datasets/ds005672) | ### Copy-paste BibTeX ```bibtex @dataset{ds005672, title = {PerceiveImagine}, author = {Li Zhiyuan and Zhao Jiaxin}, doi = {10.18112/openneuro.ds005672.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005672.v1.0.0}, } ``` ## Technical Details - Subjects: 3 - Recordings: 3 - Tasks: 1 - Channels: 69 (2), 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 4.5855 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 4.2 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005672.v1.0.0 - Source: openneuro - OpenNeuro: [ds005672](https://openneuro.org/datasets/ds005672) - NeMAR: [ds005672](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) ## API Reference Use the `DS005672` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005672(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005672` (OpenNeuro) * **Author (year):** `Zhiyuan2024` * **Canonical:** — Also importable as: `DS005672`, `Zhiyuan2024`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005672](https://openneuro.org/datasets/ds005672) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005672](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) DOI: [https://doi.org/10.18112/openneuro.ds005672.v1.0.0](https://doi.org/10.18112/openneuro.ds005672.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005672 >>> dataset = DS005672(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005672) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005688: eeg dataset, 20 subjects *visStim* Access recordings and metadata through EEGDash. **Citation:** Henry Tan, Devon Griggs, Lucas Chen, Kahte Culevski, Kat Floerchinger, Alissa Phutirat, Gabe Koh, Nels Schimek, Pierre Mourad (2024). *visStim*. [10.18112/openneuro.ds005688.v1.0.1](https://doi.org/10.18112/openneuro.ds005688.v1.0.1) Modality: eeg Subjects: 20 Recordings: 89 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005688 dataset = DS005688(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005688(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005688( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005688, title = {visStim}, author = {Henry Tan and Devon Griggs and Lucas Chen and Kahte Culevski and Kat Floerchinger and Alissa Phutirat and Gabe Koh and Nels Schimek and Pierre Mourad}, doi = {10.18112/openneuro.ds005688.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005688.v1.0.1}, } ``` ## About This Dataset **Dataset description** This dataset was collected for the study “Diagnostic ultrasound enhances, then reduces, exogenously induced brain activity of mice” by Tan et al. (2024). The research demonstrates how transcranially delivered diagnostic ultrasound (tDUS) modulates the brain’s receptivity to external stimuli, using a blinking light stimulus in a mouse model. The study findings highlight the potential for diagnostic ultrasound to intentionally modulate brain function, paving the way for possible future clinical and therapeutic applications. Please cite the following paper when using this dataset: Tan H, Griggs DJ, Chen L, et al. Diagnostic ultrasound enhances, then reduces, exogenously induced brain activity of mice. Frontiers in Neuroscience. 2024. DOI: [in peer review]. **License** ### View full README **Dataset description** This dataset was collected for the study “Diagnostic ultrasound enhances, then reduces, exogenously induced brain activity of mice” by Tan et al. (2024). The research demonstrates how transcranially delivered diagnostic ultrasound (tDUS) modulates the brain’s receptivity to external stimuli, using a blinking light stimulus in a mouse model. The study findings highlight the potential for diagnostic ultrasound to intentionally modulate brain function, paving the way for possible future clinical and therapeutic applications. Please cite the following paper when using this dataset: Tan H, Griggs DJ, Chen L, et al. Diagnostic ultrasound enhances, then reduces, exogenously induced brain activity of mice. Frontiers in Neuroscience. 2024. DOI: [in peer review]. 
**License** This dataset is proprietary to the Department of Neurological Surgery, University of Washington, Seattle, WA, USA. Usage is restricted to academic and non-commercial research. Redistribution, modification, or commercial use is prohibited without prior permission. For inquiries, contact: Pierre D. Mourad ([doumitt@uw.edu](mailto:doumitt@uw.edu)). **Acknowledgements** We thank the Department of Neurological Surgery, University of Washington, for internal funding. This research was supported by R01 NS119395 and P51 OD010425 (DJG) and the Mary Gates Research Scholarship (HT). **Dataset Overview** This dataset comprises electrocorticography (ECoG) recordings from three cohorts of C57BL/6 mice exposed to combinations of diagnostic ultrasound (tDUS) and blinking light stimuli. The study investigates how tDUS influences the visual cortex’s response to external visual stimulation. Key Features: - Subjects: 20 C57BL/6 mice divided into three experimental cohorts: > 1. Light-only cohort: Exposed to blinking light only. > 2. US-only cohort: Exposed to tDUS without light. > 3. US+Light cohort: Exposed to blinking light combined with tDUS. - Electrode Placement: ECoG electrodes targeted visual and somatosensory cortices. - Recording Conditions: Data recorded at 20 kS/s, filtered (5–55 Hz), and analyzed with MATLAB. **Data Structure** Data Files: - Raw ECoG Data: Continuous brain activity recordings. - Event-Triggered Data: Segment-specific RMS values for blinking light events. - Processed Data: Filtered and normalized brain activity traces. Metadata: - Experimental conditions, cohort allocation, and baseline brain activity. **Methodology** - Stimulation Protocols: Mice were exposed to blinking light stimuli (10 seconds per event) and tDUS delivered through a P21x5-1 scan head (Sonosite MicroMaxx system). - Data Collection: Baseline and event-triggered ECoG signals were recorded using LabChart software. 
- Analysis: RMS values normalized to baseline activity for comparative statistical analysis. Outcomes Key Findings: 1. Enhanced Brain Activity: Simultaneous tDUS and blinking light increased cortical activity compared to light alone. 2. Persistent Effects: tDUS effects on brain activity persisted after stimulation ceased. 3. No Effect of tDUS Alone: tDUS without light did not activate cortical activity but reduced subsequent activity. References For a complete list of references, please consult the manuscript. ## Dataset Information | Dataset ID | `DS005688` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | visStim | | Author (year) | `Tan2024` | | Canonical | — | | Importable as | `DS005688`, `Tan2024` | | Year | 2024 | | Authors | Henry Tan, Devon Griggs, Lucas Chen, Kahte Culevski, Kat Floerchinger, Alissa Phutirat, Gabe Koh, Nels Schimek, Pierre Mourad | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005688.v1.0.1](https://doi.org/10.18112/openneuro.ds005688.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005688) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) | [Source URL](https://openneuro.org/datasets/ds005688) | ### Copy-paste BibTeX ```bibtex @dataset{ds005688, title = {visStim}, author = {Henry Tan and Devon Griggs and Lucas Chen and Kahte Culevski and Kat Floerchinger and Alissa Phutirat and Gabe Koh and Nels Schimek and Pierre Mourad}, doi = {10.18112/openneuro.ds005688.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005688.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 89 - Tasks: 5 - Channels: 5 (86), 1 (3) - Sampling rate (Hz): 10000.0 (74), 20000.0 (15) - Duration (hours): 1.731666666666667 - Pathology: Healthy - Modality: Visual - Type: Clinical/Intervention - Size on disk: 8.4 GB - File count: 89 - Format: 
BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005688.v1.0.1 - Source: openneuro - OpenNeuro: [ds005688](https://openneuro.org/datasets/ds005688) - NeMAR: [ds005688](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) ## API Reference Use the `DS005688` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) visStim * **Study:** `ds005688` (OpenNeuro) * **Author (year):** `Tan2024` * **Canonical:** — Also importable as: `DS005688`, `Tan2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 20; recordings: 89; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005688](https://openneuro.org/datasets/ds005688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005688](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) DOI: [https://doi.org/10.18112/openneuro.ds005688.v1.0.1](https://doi.org/10.18112/openneuro.ds005688.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005688 >>> dataset = DS005688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005688) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005691: ieeg dataset, 8 subjects *SpinalExpect_Invasive* Access recordings and metadata through EEGDash. **Citation:** Max-Philipp Stenner, Cindy Marquez Nossa, Tino Zaehle, Elena Azanon, Hans-Jochen Heinze, Matthias Deliano, Lars Buentjen (2024). *SpinalExpect_Invasive*. 
[10.18112/openneuro.ds005691.v1.0.0](https://doi.org/10.18112/openneuro.ds005691.v1.0.0) Modality: ieeg Subjects: 8 Recordings: 8 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005691 dataset = DS005691(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005691(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005691( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005691, title = {SpinalExpect_Invasive}, author = {Max-Philipp Stenner and Cindy Marquez Nossa and Tino Zaehle and Elena Azanon and Hans-Jochen Heinze and Matthias Deliano and Lars Buentjen}, doi = {10.18112/openneuro.ds005691.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005691.v1.0.0}, } ``` ## About This Dataset Contact person: Max-Philipp Stenner, email: [max-philipp.stenner@med.ovgu.de](mailto:max-philipp.stenner@med.ovgu.de) or [max-philipp.stenner@lin-magdeburg.de](mailto:max-philipp.stenner@lin-magdeburg.de), ORCID: 0000-0002-3694-1887 Brief overview of the tasks in the experiment: The goal was to reveal whether temporal expectation influences initial sensory processing in the human spinal cord. In each trial, subjects (included n=8, patients with neuropathic pain following epidural electrode implantation into the cervical spinal canal, dorsal to the spinal cord) received at least one electric median nerve stimulation on the left wrist. 
Their task was to silently count in how many (rare) trials in each block (~5 min) there were two median nerve stimuli in rapid succession (80 ms ISI), and to report that number at the end of each block. In each trial, the median nerve stimulation was preceded by an auditory cue. In alternating blocks, the time interval between the auditory cue and the (first) median nerve stimulation was either fixed at 1100 ms, or varied randomly (uniform distribution) between 100 and 2100 ms. Montage: Lead placement alongside the cervical spinal cord varied across patients. In each patient, we recorded from eight epidural contacts (3 mm length, 1 mm spacing) placed alongside the dorsal cervical spinal cord. The fourth contact (counted from the most caudal contact) served as a reference. The ground was an extra wire twisted around the wire for the sixth contact (counted from the most caudal contact). Events: There are three events of interest. “S1” is a keypress of the experimenter, to advance the task upon breaks. “S1” can therefore be used to identify the start of each new block. “S70” corresponds to the auditory cue. “S130” corresponds to median nerve stimulation. “S99” is a “placeholder” that corresponds to no stimulus at all. The placeholder was used to enable “double stimuli”, i.e., a rapid succession of two median nerve stimulations with an ISI of 80 ms. Most trials follow a sequence of “S70” - “S130” - “S99”. Deviant trials have “S70” - “S130” - “S130”. 
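The event scheme above lends itself to a simple programmatic check. A minimal sketch (pure Python; it assumes the chronological event labels have already been extracted from a recording, e.g. via `raw.annotations.description` after loading the data through EEGDash):

```python
def count_deviant_trials(labels):
    """Count deviant trials, i.e. occurrences of two consecutive "S130"
    (median nerve stimulation) events with no "S99" placeholder between them.

    `labels` is the chronological sequence of event names for one block.
    """
    deviants = 0
    for prev, curr in zip(labels, labels[1:]):
        if prev == "S130" and curr == "S130":
            deviants += 1
    return deviants

# Standard trial: S70 - S130 - S99; deviant trial: S70 - S130 - S130
labels = ["S70", "S130", "S99", "S70", "S130", "S130", "S70", "S130", "S99"]
print(count_deviant_trials(labels))  # → 1
```

The count returned per block can then be compared against the number each patient reported at the end of that block.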
## Dataset Information | Dataset ID | `DS005691` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SpinalExpect_Invasive | | Author (year) | `Stenner2024_SpinalExpect` | | Canonical | — | | Importable as | `DS005691`, `Stenner2024_SpinalExpect` | | Year | 2024 | | Authors | Max-Philipp Stenner, Cindy Marquez Nossa, Tino Zaehle, Elena Azanon, Hans-Jochen Heinze, Matthias Deliano, Lars Buentjen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005691.v1.0.0](https://doi.org/10.18112/openneuro.ds005691.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005691) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) | [Source URL](https://openneuro.org/datasets/ds005691) | ### Copy-paste BibTeX ```bibtex @dataset{ds005691, title = {SpinalExpect_Invasive}, author = {Max-Philipp Stenner and Cindy Marquez Nossa and Tino Zaehle and Elena Azanon and Hans-Jochen Heinze and Matthias Deliano and Lars Buentjen}, doi = {10.18112/openneuro.ds005691.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005691.v1.0.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 8 - Tasks: 1 - Channels: 7 (7), 8 - Sampling rate (Hz): 2500.0 - Duration (hours): 5.919116666666667 - Pathology: Other - Modality: Multisensory - Type: Attention - Size on disk: 723.0 MB - File count: 8 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005691.v1.0.0 - Source: openneuro - OpenNeuro: [ds005691](https://openneuro.org/datasets/ds005691) - NeMAR: [ds005691](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) ## API Reference Use the `DS005691` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_Invasive * **Study:** `ds005691` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect` * **Canonical:** — Also importable as: `DS005691`, `Stenner2024_SpinalExpect`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005691](https://openneuro.org/datasets/ds005691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005691](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) DOI: [https://doi.org/10.18112/openneuro.ds005691.v1.0.0](https://doi.org/10.18112/openneuro.ds005691.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005691 >>> dataset = DS005691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005691) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005692: eeg dataset, 30 subjects *SpinalExpect_NonInvasive* Access recordings and metadata through EEGDash. **Citation:** Max-Philipp Stenner, Cindy Marquez Nossa, Tino Zaehle, Elena Azanon, Hans-Jochen Heinze, Matthias Deliano, Lars Buentjen (2024). *SpinalExpect_NonInvasive*. 
[10.18112/openneuro.ds005692.v1.0.0](https://doi.org/10.18112/openneuro.ds005692.v1.0.0) Modality: eeg Subjects: 30 Recordings: 59 License: CC0 Source: openneuro Citations: 0.0 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005692 dataset = DS005692(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005692(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005692( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005692, title = {SpinalExpect_NonInvasive}, author = {Max-Philipp Stenner and Cindy Marquez Nossa and Tino Zaehle and Elena Azanon and Hans-Jochen Heinze and Matthias Deliano and Lars Buentjen}, doi = {10.18112/openneuro.ds005692.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005692.v1.0.0}, } ``` ## About This Dataset Contact person: Max-Philipp Stenner, email: [max-philipp.stenner@med.ovgu.de](mailto:max-philipp.stenner@med.ovgu.de) or [max-philipp.stenner@lin-magdeburg.de](mailto:max-philipp.stenner@lin-magdeburg.de), ORCID: 0000-0002-3694-1887 Brief overview of the tasks in the experiment: The goal was to reveal whether temporal expectation influences initial sensory processing in the human spinal cord. In each trial, subjects (n=30, healthy, young) received at least one electric median nerve stimulation on the left wrist. Their task was to silently count in how many (rare) trials in each block (~5 min) there were two median nerve stimuli in rapid succession (80 ms ISI), and to report that number at the end of each block. 
In each trial, the median nerve stimulation was preceded by an auditory cue. In alternating blocks, the time interval between the auditory cue and the (first) median nerve stimulation was either fixed at 1100 ms, or varied randomly (uniform distribution) between 100 and 2100 ms. Each subject (except for sub-24) completed two days of testing. Whether a session began with a constant-interval block or variable-interval block was counterbalanced across subjects. Montage: Adapted from/extended beyond Chander, B. S., Deliano, M., Azañón, E., Büntjen, L., & Stenner, M. P. (2022). Non-invasive recording of high-frequency signals from the human spinal cord. NeuroImage, 253, 119050, as follows. Electrodes were placed in three rings around the neck. “Sr” in the channel names stands for “spinal ring”. There was a caudal, middle, and cranial ring of electrodes. The middle ring connected a point above the spinous process of the sixth cervical vertebra (SrC6) and a point above the thyroid cartilage (“SrF”). Between these two electrodes, the remaining six electrodes for the middle ring were evenly spaced around the neck. “L” in channel names means left, “R” means right, “B” means “back”, “F” means front, and “M” means middle. For the caudal and cranial rings, electrodes were placed approximately 2 cm below (“_b”) and 2 cm above (“_a”) the corresponding electrodes of the middle ring, respectively. Events: There are three events of interest. “S1” is a keypress of the experimenter, to advance the task upon breaks. “S1” can therefore be used to identify the start of each new block. “S70” corresponds to the auditory cue. “S130” corresponds to median nerve stimulation. “S99” is a “placeholder” that corresponds to no stimulus at all. The placeholder was used to enable “double stimuli”, i.e., a rapid succession of two median nerve stimulations with an ISI of 80 ms. Most trials follow a sequence of “S70” - “S130” - “S99”. Deviant trials have “S70” - “S130” - “S130”. 
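Because each “S1” keypress marks the start of a new block, the per-block deviant count described above can be recovered directly from the chronological event labels. A minimal sketch (pure Python; it assumes the labels have already been read from the recording, e.g. via `raw.annotations.description`):

```python
def deviants_per_block(labels):
    """Split a chronological label sequence into blocks at each "S1"
    (experimenter keypress marking a block start) and count deviant
    trials (two consecutive "S130" events) within each block."""
    blocks, current = [], []
    for lab in labels:
        if lab == "S1":
            if current:
                blocks.append(current)
            current = []
        else:
            current.append(lab)
    if current:
        blocks.append(current)
    return [
        sum(a == "S130" and b == "S130" for a, b in zip(blk, blk[1:]))
        for blk in blocks
    ]

labels = ["S1", "S70", "S130", "S99", "S70", "S130", "S130",
          "S1", "S70", "S130", "S99"]
print(deviants_per_block(labels))  # → [1, 0]
```

Each entry of the result corresponds to one ~5 min block and can be compared against the subject’s reported count for that block.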
## Dataset Information | Dataset ID | `DS005692` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SpinalExpect_NonInvasive | | Author (year) | `Stenner2024_SpinalExpect_NonInvasive` | | Canonical | — | | Importable as | `DS005692`, `Stenner2024_SpinalExpect_NonInvasive` | | Year | 2024 | | Authors | Max-Philipp Stenner, Cindy Marquez Nossa, Tino Zaehle, Elena Azanon, Hans-Jochen Heinze, Matthias Deliano, Lars Buentjen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005692.v1.0.0](https://doi.org/10.18112/openneuro.ds005692.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005692) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) | [Source URL](https://openneuro.org/datasets/ds005692) | ### Copy-paste BibTeX ```bibtex @dataset{ds005692, title = {SpinalExpect_NonInvasive}, author = {Max-Philipp Stenner and Cindy Marquez Nossa and Tino Zaehle and Elena Azanon and Hans-Jochen Heinze and Matthias Deliano and Lars Buentjen}, doi = {10.18112/openneuro.ds005692.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005692.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 59 - Tasks: 1 - Channels: 25 - Sampling rate (Hz): 5000.0 - Duration (hours): 112.20623327777776 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 92.8 GB - File count: 59 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005692.v1.0.0 - Source: openneuro - OpenNeuro: [ds005692](https://openneuro.org/datasets/ds005692) - NeMAR: [ds005692](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) ## API Reference Use the `DS005692` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005692(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_NonInvasive * **Study:** `ds005692` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect_NonInvasive` * **Canonical:** — Also importable as: `DS005692`, `Stenner2024_SpinalExpect_NonInvasive`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005692](https://openneuro.org/datasets/ds005692) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005692](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) DOI: [https://doi.org/10.18112/openneuro.ds005692.v1.0.0](https://doi.org/10.18112/openneuro.ds005692.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005692 >>> dataset = DS005692(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005692) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005697: eeg dataset, 51 subjects *PerceiveImagine* Access recordings and metadata through EEGDash. **Citation:** Weilong Li, Hua Fan (2024). *PerceiveImagine*. 
[10.18112/openneuro.ds005697.v1.0.2](https://doi.org/10.18112/openneuro.ds005697.v1.0.2) Modality: eeg Subjects: 51 Recordings: 51 License: CC0 Source: openneuro Citations: 3 Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005697 dataset = DS005697(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005697(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005697( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005697, title = {PerceiveImagine}, author = {Weilong Li and Hua Fan}, doi = {10.18112/openneuro.ds005697.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005697.v1.0.2}, } ``` ## About This Dataset This dataset consists of electroencephalogram (EEG) signals collected with the 64-channel SynAmps2 EEG system. ### Participants This experiment included 54 participants. Two participants withdrew midway for health reasons, and six participants had poor signal quality during their first session and were recorded a second time. All participants met the experimental requirements. ### Task Participants viewed an image for 6 seconds and then imagined the image they had seen for 6 seconds.
The total dataset contains 340 images. ### Dataset version The provided dataset consists of the original recordings. ### Contact If you have any questions, please contact: yingxmbio@foxmail.com ## Dataset Information | Dataset ID | `DS005697` | |----------------|-----------------| | Title | PerceiveImagine | | Author (year) | `Li2024_PerceiveImagine` | | Canonical | `PerceiveImagine` | | Importable as | `DS005697`, `Li2024_PerceiveImagine`, `PerceiveImagine` | | Year | 2024 | | Authors | Weilong Li, Hua Fan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005697.v1.0.2](https://doi.org/10.18112/openneuro.ds005697.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005697) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) | [Source URL](https://openneuro.org/datasets/ds005697) | ### Copy-paste BibTeX ```bibtex @dataset{ds005697, title = {PerceiveImagine}, author = {Weilong Li and Hua Fan}, doi = {10.18112/openneuro.ds005697.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds005697.v1.0.2}, } ``` ## Technical Details - Subjects: 51 - Recordings: 51 - Tasks: 1 - Channels: 65 (45), 69 (6) - Sampling rate (Hz): 1000.0 - Duration (hours): 77.68925 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 66.6 GB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005697.v1.0.2 - Source: openneuro - OpenNeuro: [ds005697](https://openneuro.org/datasets/ds005697) - NeMAR: [ds005697](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) ## API Reference Use the `DS005697` class to access this dataset programmatically.
### *class* eegdash.dataset.DS005697(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005697` (OpenNeuro) * **Author (year):** `Li2024_PerceiveImagine` * **Canonical:** `PerceiveImagine` Also importable as: `DS005697`, `Li2024_PerceiveImagine`, `PerceiveImagine`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
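Given the task structure described above (6 s of image viewing followed by 6 s of imagery, sampled at 1000 Hz), the two windows can be cut from a continuous recording once the stimulus onsets are known. A minimal NumPy sketch on synthetic data; the onset samples here are hypothetical, and in practice would come from the BIDS `*_events.tsv` files shipped with each recording:

```python
import numpy as np

sfreq = 1000                     # Hz, per the technical details above
n_channels = 65                  # channel count of most recordings here
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, 60 * sfreq))  # stand-in for EEG

# Hypothetical trial onsets in samples; read real onsets from the
# BIDS events files.
onsets = np.array([0, 12, 24]) * sfreq
win = 6 * sfreq                  # each phase lasts 6 s

perceive = np.stack([data[:, o:o + win] for o in onsets])
imagine = np.stack([data[:, o + win:o + 2 * win] for o in onsets])
print(perceive.shape, imagine.shape)  # (3, 65, 6000) (3, 65, 6000)
```

The resulting `(trials, channels, samples)` arrays are the shape expected by most braindecode models.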
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005697](https://openneuro.org/datasets/ds005697) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005697](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) DOI: [https://doi.org/10.18112/openneuro.ds005697.v1.0.2](https://doi.org/10.18112/openneuro.ds005697.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS005697 >>> dataset = DS005697(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005697) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005752: meg dataset, 123 subjects *The NIMH Healthy Research Volunteer Dataset* Access recordings and metadata through EEGDash. **Citation:** Allison C. Nugent, Adam G Thomas, Margaret Mahoney, Alison Gibbons, Jarrod Smith, Antoinette Charles, Jacob S Shaw, Jeffrey D Stout, Anna M Namyst, Arshitha Basavaraj, Eric Earl, Dustin Moraczewski, Emily Guinee, Michael Liu, Travis Riddle, Joseph Snow, Shruti Japee, Morgan Andrews, Adriana Pavletic, Stephen Sinclair, Vinai Roopchansingh, Peter A Bandettini, Joyce Chung (2024). *The NIMH Healthy Research Volunteer Dataset*. 
[10.18112/openneuro.ds005752.v2.1.0](https://doi.org/10.18112/openneuro.ds005752.v2.1.0) Modality: meg Subjects: 123 Recordings: 1055 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005752 dataset = DS005752(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005752(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005752( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005752, title = {The NIMH Healthy Research Volunteer Dataset}, author = {Allison C. Nugent and Adam G Thomas and Margaret Mahoney and Alison Gibbons and Jarrod Smith and Antoinette Charles and Jacob S Shaw and Jeffrey D Stout and Anna M Namyst and Arshitha Basavaraj and Eric Earl and Dustin Moraczewski and Emily Guinee and Michael Liu and Travis Riddle and Joseph Snow and Shruti Japee and Morgan Andrews and Adriana Pavletic and Stephen Sinclair and Vinai Roopchansingh and Peter A Bandettini and Joyce Chung}, doi = {10.18112/openneuro.ds005752.v2.1.0}, url = {https://doi.org/10.18112/openneuro.ds005752.v2.1.0}, } ``` ## About This Dataset **The National Institute of Mental Health (NIMH) Research Volunteer (RV) Data Set** A comprehensive dataset characterizing healthy research volunteers in terms of clinical assessments, mood-related psychometrics, cognitive function neuropsychological tests, structural and functional magnetic resonance imaging (MRI), along with diffusion tensor imaging (DTI), and a comprehensive magnetoencephalography battery (MEG). 
In addition, blood samples are currently banked for future genetic analysis. All data collected in this protocol are broadly shared in the OpenNeuro repository, in the Brain Imaging Data Structure (BIDS) format. In addition, task paradigms and basic pre-processing scripts are shared on GitHub. This dataset is unprecedented in its depth of characterization of a healthy population and will allow a wide array of investigations into normal cognition and mood regulation. This dataset is licensed under the [Creative Commons Zero (CC0) v1.0 License](https://creativecommons.org/publicdomain/zero/1.0/). **Release Notes** **Release v2.0.0** This release includes data collected between 2020-06-03 (the cut-off date for v1.0.0) and 2024-04-01. Notable changes in this release: 1. 769 new participants have been added, along with re-evaluation data for 15 participants. The total count of unique participants is now 1859. 2.
`visit` and `age_at_visit` columns added to phenotype files to distinguish between visits and intervals between them. 3. Follow-up online survey data included. 4. Replaced Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) with General Anxiety Disorder-7 (GAD7) and Patient Health Questionnaire 9 (PHQ9) surveys, respectively. 5. Discontinued the Perceived Health rating survey. 6. Added Brief Trauma Questionnaire (BTQ) and Big Five personality survey to online screening questionnaires. 7. MRI:
> - Replaced ADNI-3 resting state sequence with a multi-echo sequence with higher spatial resolution. > - Replaced field map scans with a shorter reversed-blipped EPI scan. 8. MEG:
> - Some participants have 6-minute empty room data instead of the shorter duration empty room acquisition. See the [CHANGES](./CHANGES) file for complete version-wise changelog. **Participant Eligibility** To be eligible for the study, participants need to be medically healthy adults over 18 years of age with the ability to read, speak and understand English. All participants provided electronic informed consent for online pre-screening, and written informed consent for all other procedures. Participants with a history of mental illness or suicidal or self-injury thoughts or behavior are excluded. Additional exclusion criteria include current illicit drug use, abnormal medical exam, and less than an 8th grade education or IQ below 70. Current NIMH employees, or first degree relatives of NIMH employees are prohibited from participating. Study participants are recruited through direct mailings, bulletin boards and listservs, outreach exhibits, print advertisements, and electronic media. **Clinical Measures** All potential volunteers visit [the study website](https://nimhresearchvolunteer.ctss.nih.gov), check a box indicating consent, and fill out preliminary screening questionnaires. The questionnaires include basic demographics, the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure, the DSM-5 Level 2 Cross-Cutting Symptom Measure - Substance Use, the Alcohol Use Disorders Identification Test (AUDIT), the Edinburgh Handedness Inventory, and a brief clinical history checklist. The WHODAS 2.0 is a 15 item questionnaire that assesses overall general health and disability, with 14 items distributed over 6 domains: cognition, mobility, self-care, “getting along”, life activities, and participation. The DSM-5 Level 1 cross-cutting measure uses 23 items to assess symptoms across diagnoses, although an item regarding self-injurious behavior was removed from the online self-report version. 
The DSM-5 Level 2 cross-cutting measure is adapted from the NIDA ASSIST measure, and contains 15 items to assess use of both illicit drugs and prescription drugs without a doctor’s prescription. The AUDIT is a 10 item screening assessment used to detect harmful levels of alcohol consumption, and the Edinburgh Handedness Inventory is a systematic assessment of handedness. These online results do not contain any personally identifiable information (PII). At the conclusion of the questionnaires, participants are prompted to send an email to the study team. These results are reviewed by the study team, who determines if the participant is appropriate for an in-person interview. Participants who meet all inclusion criteria are scheduled for an in-person screening visit to determine if there are any further exclusions to participation. At this visit, participants receive a History and Physical exam, Structured Clinical Interview for DSM-5 Disorders (SCID-5), the Beck Depression Inventory-II (BDI-II), Beck Anxiety Inventory (BAI), and the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). The purpose of these cognitive and psychometric tests is two-fold. First, these measures are designed to provide a sensitive test of psychopathology. Second, they provide a comprehensive picture of cognitive functioning, including mood regulation. The SCID-5 is a structured interview, administered by a clinician, that establishes the absence of any DSM-5 axis I disorder. The KBIT-2 is a brief (20 minute) assessment of intellectual functioning administered by a trained examiner. There are three subtests, including verbal knowledge, riddles, and matrices. **Biological and physiological measures** Biological and physiological measures are acquired, including blood pressure, pulse, weight, height, and BMI. 
Blood and urine samples are taken and a complete blood count, acute care panel, hepatic panel, thyroid stimulating hormone, viral markers (HCV, HBV, HIV), c-reactive protein, creatine kinase, urine drug screen and urine pregnancy tests are performed. In addition, three tubes of blood are collected and banked for future analysis, including genetic testing. **Imaging Studies** Participants were given the option to enroll in optional magnetic resonance imaging (MRI) and magnetoencephalography (MEG) studies. **MRI** On the same visit as the MRI scan, participants are administered a subset of tasks from the NIH Toolbox Cognition Battery. The four tasks assess attention and executive functioning (Flanker Inhibitory Control and Attention Task), executive functioning (Dimensional Change Card Sort Task), episodic memory (Picture Sequence Memory Task), and working memory (List Sorting Working Memory Task). The MRI protocol used was initially based on the ADNI-3 basic protocol, but was later modified to include portions of the ABCD protocol in the following manner: 1. The T1 scan from ADNI3 was replaced by the T1 scan from the ABCD protocol. 2. The Axial T2 2D FLAIR acquisition from ADNI2 was added, and fat saturation turned on. 3. Fat saturation was turned on for the pCASL acquisition. 4. The high-resolution in-plane hippocampal 2D T2 scan was removed, and replaced with the whole brain 3D T2 scan from the ABCD protocol (which is resolution and bandwidth matched to the T1 scan). 5. The slice-select gradient reversal method was turned on for DTI acquisition, and reconstruction interpolation turned off. 6. Scans for distortion correction were added (reversed-blip scans for DTI and resting state scans). 7. The 3D FLAIR sequence was made optional, and replaced by one where the prescription and other acquisition parameters provide resolution and geometric correspondence between the T1 and T2 scans.
**MEG** The optional MEG studies were added to the protocol approximately one year after the study was initiated; thus, there are relatively fewer MEG recordings in comparison to the MRI dataset. MEG studies are performed on a 275-channel CTF MEG system. The position of the head was localized at the beginning and end of the recording using three fiducial coils. These coils were placed 1.5 cm above the nasion, and at each ear, 1.5 cm from the tragus on a line between the tragus and the outer canthus of the eye. For some participants, photographs were taken of the three coils and used to mark the points on the T1-weighted structural MRI scan for co-registration. For the remainder of the participants, a BrainSight neuro-navigation unit was used to coregister the MRI, anatomical fiducials, and localizer coils directly prior to MEG data acquisition. **Specific Survey and Test Data within Data Set** *NOTE:* In release 2.0 of the dataset, two measures, the Brief Trauma Questionnaire (BTQ) and the Big Five personality survey, were added to the online screening questionnaires. Also, for the in-person screening visit, the Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) were replaced with the General Anxiety Disorder-7 (GAD7) and Patient Health Questionnaire 9 (PHQ9) surveys, respectively. The Perceived Health rating survey was discontinued. **1.
Preliminary Online Screening Questionnaires** ```text | Survey or Test | BIDS TSV Name | | --------------------------------------------------------------------------- | ------------------------------ | | Alcohol Use Disorders Identification Test (AUDIT) | audit.tsv | | Brief Trauma Questionnaire (BTQ) | btq.tsv | | Big-Five Personality | big_five_personality.tsv | | Demographics | demographics.tsv | | Drug Use Questionnaire | drug_use.tsv | | Edinburgh Handedness Inventory (EHI) | ehi.tsv | | Health History Questions | health_history_questions.tsv | | Health Rating | health_rating.tsv | | Mental Health Questions | mental_health_questions.tsv | | World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0) | whodas.tsv | ``` **2. On-Campus In-Person Screening Visit** ```text | Survey | BIDS TSV Name | | -------------------------------------------------------------------------------------------- | ----------------------------- | | Adverse Childhood Experiences (ACEs) | ace.tsv | | Beck Anxiety Inventory (BAI) | bai.tsv | | Beck Depression Inventory-II (BDI-II) | bdi.tsv | | Clinical Variable Form | clinical_variable_form.tsv | | Family Interview for Genetic Studies (FIGS) | figs.tsv | | General Anxiety Disorder-7 (GAD7) | gad7.tsv | | Kaufman Brief Intelligence Test 2nd Edition (KBIT-2) and Vocabulary Assessment Scale (VAS) | kbit2_vas.tsv | | Patient Health Questionnaire 9 | phq9.tsv | | Perceived Health Rating | perceived_health_rating.tsv | | Satisfaction Survey | satisfaction.tsv | | Structured Clinical Interview for DSM-5 Disorders (SCID-5) | scid5.tsv | | Test | BIDS TSV Name | | ---------------------------------------- | --------------------------- | | Acute Care Panel | acute_care.tsv | | Blood Chemistry | blood_chemistry.tsv | | Complete Blood Count with Differential | cbc_with_differential.tsv | | Hematology Panel | hematology.tsv | | Hepatic Function Panel | hepatic.tsv | | Infectious Disease Panel | infectious_disease.tsv | | Lipid Panel | 
lipid.tsv | | Other Panel | other.tsv | | Urinalysis | urinalysis.tsv | | Urine Chemistry | urine_chemistry.tsv | | Vitamin Levels | vitamin_levels.tsv | ``` **3. Optional On-Campus In-Person MRI Visit** ```text | Survey | BIDS TSV Name | | ------------------------------ | ------------------- | | MRI Variables | mri_variables.tsv | | NIH Toolbox Cognition Battery | nih_toolbox.tsv | ``` **Preparation Notes** In many of the Clinical Measures data files, there exist `-999` values. `-999` means there was no response though a response was possible. The question may have been skipped over by the participant or the question flow. `-777` appears in the Edinburgh Handedness Inventory (EHI) as well. `-777` means there is no data available for a response. The question was not presented or asked to the participant. The data were prepared using the following tools and filename mappings. **Clinical Measures Data** The `ctdb_clean_up.ipynb` Jupyter Notebook contains the python functions used to clean and convert the spreadsheet downloaded from CTDB to BIDS-standard TSV files as well as their respective data dictionaries converted to BIDS-standard JSON files. **Biological and Physiological Measures Data** The `cris_clean_up.ipynb` Jupyter Notebook contains the Python functions used to clean and convert the spreadsheet with clinical measures to BIDS-standard TSV files and their data dictionaries to BIDS-standard JSON files. **BIDS-standard MEG Files** Data collected by the NIMH MEG Core was converted to BIDS-standard files using the MNE BIDS package. Associated notebooks: `1_mne_bids_extractor.ipynb` & `2_bids_editor.ipynb`. **BIDS-standard MRI** We used the `heudiconv` tool to convert MRI DICOM files to BIDS-standard files with the associated script: `heuristic_rvol.py`. 
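The `-999` and `-777` codes described in the Preparation Notes above should usually be treated as missing values when the phenotype TSVs are analyzed. A minimal pandas sketch with a made-up stand-in table (the column names here are hypothetical, not taken from the dataset):

```python
import io

import pandas as pd

# Tiny stand-in for a phenotype TSV; the column names are hypothetical.
tsv = (
    "participant_id\tehi_total\twhodas_total\n"
    "sub-ON01\t-777\t12\n"
    "sub-ON02\t80\t-999\n"
)

# -999 = no response although one was possible; -777 = question not
# presented. Mapping both to NaN keeps them out of summary statistics.
df = pd.read_csv(io.StringIO(tsv), sep="\t", na_values=[-999, -777])
print(df.isna().sum().sum())  # 2
```

The same `na_values` argument applies unchanged when reading the real `phenotype/*.tsv` files.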
A modified workflow of `pydeface` was used to deface structural scans with the associated notebook: `modified-workflow-pydeface.ipynb`. Each participant received either the ADNI3 or the ABCD protocol, not both, during their MRI/MEG visit. T1w scans with the acquisition label `fspgr` follow the ADNI3 protocol sequence, and scans with `mprage` acquisition labels follow the ABCD protocol sequence. **OpenNeuro BIDS File/Folder Tree** Below is a BIDS-compliant file/folder tree as it appears for subjects on OpenNeuro (angle brackets denote placeholders defined below).
```shell
sub-ON<subject>
```
```text
└── ses-01
    ├── anat
    │   └── sub-ON<subject>_ses-01_acq-<acq>_run-<run>_<suffix>.<ext>
    ├── asl
    │   └── sub-ON<subject>_ses-01_run-<run>_asl.<ext>
    ├── dwi
    │   └── sub-ON<subject>_ses-01_run-<run>_dwi.<ext>
    ├── fmap
    │   └── sub-ON<subject>_ses-01_acq-<acq>_dir-<dir>_run-<run>_epi.<ext>
    ├── func
    │   └── sub-ON<subject>_ses-01_task-<task>_run-<run>_<suffix>.<ext>
    ├── meg
    │   ├── sub-ON<subject>_ses-01_task-<task>_run-01_<suffix>.json
    │   ├── sub-ON<subject>_ses-01_task-<task>_run-01_<suffix>.tsv
    │   └── sub-ON<subject>_ses-01_task-<task>_run-01_meg.ds
    │       ├── BadChannels
    │       ├── bad.segments
    │       ├── ClassFile.cls
    │       ├── MarkerFile.mrk
    │       ├── params.dsc
    │       ├── processing.cfg
    │       ├── sub-ON<subject>_ses-01_task-<task>_run-01_meg.<ext>
    │       └── sub-ON<subject>_ses-01_task-<task>_run-01.xml
    └── sub-ON<subject>_ses-01_scans.<ext>
``` Definitions: - `<subject>` = subject number - `<task>` = task name: `airpuff`, `artifact`, `gonogo`, `haririhammer`, `movie`, `oddball`, `sternberg` - `<acq>` = placeholder for the acquisition label for a given suffix - `<dir>` = `flipped`, `unflipped` - `<run>` = run number/index - `<suffix>` = placeholder to indicate the scan type > - `T1w`: `<acq>` = `fspgr`, `mprage`, `fse`, `highreshippo` > - `T2w`: `<acq>` = `abcdcube`, `cube`, `frfse` > - `FLAIR`: `<acq>` = `adni2d`, `2d`, `3d`, `t2` > - `epi`: `<acq>` = `dwib1000`, `dwi`, `resting` > - `T2star` > - `bold` > - `meg` > - `asl` - `<ext>`: indicates MEG data files’ type = `acq`, `bak`, `hc`, `hist`, `infods`, `meg4`, `newds`, `res4`, `xml` ## Dataset Information | Dataset ID | `DS005752` | |----------------|-----------------| | Title | The NIMH Healthy Research Volunteer Dataset | | Author (year) | `Nugent2024` | | Canonical | — | | Importable as | `DS005752`, `Nugent2024` | | Year | 2024 | | Authors | Allison C.
Nugent, Adam G Thomas, Margaret Mahoney, Alison Gibbons, Jarrod Smith, Antoinette Charles, Jacob S Shaw, Jeffrey D Stout, Anna M Namyst, Arshitha Basavaraj, Eric Earl, Dustin Moraczewski, Emily Guinee, Michael Liu, Travis Riddle, Joseph Snow, Shruti Japee, Morgan Andrews, Adriana Pavletic, Stephen Sinclair, Vinai Roopchansingh, Peter A Bandettini, Joyce Chung | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005752.v2.1.0](https://doi.org/10.18112/openneuro.ds005752.v2.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005752) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) | [Source URL](https://openneuro.org/datasets/ds005752) | ### Copy-paste BibTeX ```bibtex @dataset{ds005752, title = {The NIMH Healthy Research Volunteer Dataset}, author = {Allison C. Nugent and Adam G Thomas and Margaret Mahoney and Alison Gibbons and Jarrod Smith and Antoinette Charles and Jacob S Shaw and Jeffrey D Stout and Anna M Namyst and Arshitha Basavaraj and Eric Earl and Dustin Moraczewski and Emily Guinee and Michael Liu and Travis Riddle and Joseph Snow and Shruti Japee and Morgan Andrews and Adriana Pavletic and Stephen Sinclair and Vinai Roopchansingh and Peter A Bandettini and Joyce Chung}, doi = {10.18112/openneuro.ds005752.v2.1.0}, url = {https://doi.org/10.18112/openneuro.ds005752.v2.1.0}, } ``` ## Technical Details - Subjects: 123 - Recordings: 1055 - Tasks: 10 - Channels: 305 (240), 306 (183), 304 (123), 302 (117), 303 (110), 301 (71), 382 (59), 300 (57), 378 (20), 377 (16), 379 (16), 381 (15), 380 (15), 299 (3), 387, 388 - Sampling rate (Hz): 1200.0 (926), 4800.0 (121) - Duration (hours): 102.62917361111111 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 662.7 GB - File count: 1055 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005752.v2.1.0 - Source: openneuro - OpenNeuro: [ds005752](https://openneuro.org/datasets/ds005752) - NeMAR: 
[ds005752](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) ## API Reference Use the `DS005752` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The NIMH Healthy Research Volunteer Dataset * **Study:** `ds005752` (OpenNeuro) * **Author (year):** `Nugent2024` * **Canonical:** — Also importable as: `DS005752`, `Nugent2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 123; recordings: 1055; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
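The technical details above list two sampling rates (1200.0 Hz and 4800.0 Hz), so recordings must be brought to a common rate before pooling. A sketch of the idea with `scipy.signal.decimate` on synthetic data (MNE users can equivalently call `raw.resample(1200)` on a loaded recording):

```python
import numpy as np
from scipy.signal import decimate

sfreq_hi, sfreq_lo = 4800, 1200   # the two rates listed above
q = sfreq_hi // sfreq_lo          # integer downsampling factor: 4

x = np.random.default_rng(1).standard_normal((2, 10 * sfreq_hi))
y = decimate(x, q, axis=-1)       # anti-alias filter, then keep every q-th sample
print(x.shape, y.shape)           # (2, 48000) (2, 12000)
```

Decimation (filter then subsample) avoids the aliasing that plain slicing such as `x[:, ::q]` would introduce.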
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005752](https://openneuro.org/datasets/ds005752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005752](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) DOI: [https://doi.org/10.18112/openneuro.ds005752.v2.1.0](https://doi.org/10.18112/openneuro.ds005752.v2.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005752 >>> dataset = DS005752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005752) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005776: fnirs dataset, 11 subjects *Electrical_Thermal_FingerTapping_2015* Access recordings and metadata through EEGDash. **Citation:** Yücel, Meryem, Selb, Juliette, Aasted, Christopher, Petkov, Mihayl, Borsook, David, Boas, David, Becerra, Lino (2025). *Electrical_Thermal_FingerTapping_2015*. 
[10.18112/openneuro.ds005776.v1.0.1](https://doi.org/10.18112/openneuro.ds005776.v1.0.1) Modality: fnirs Subjects: 11 Recordings: 46 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005776 dataset = DS005776(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005776(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005776( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005776, title = {Electrical_Thermal_FingerTapping_2015}, author = {Yücel, Meryem and Selb, Juliette and Aasted, Christopher and Petkov, Mihayl and Borsook, David and Boas, David and Becerra, Lino}, doi = {10.18112/openneuro.ds005776.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005776.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005776` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Electrical_Thermal_FingerTapping_2015 | | Author (year) | `Yucel2025_Electrical` | | Canonical | `Yucel2015` | | Importable as | `DS005776`, `Yucel2025_Electrical`, `Yucel2015` | | Year | 2025 | | Authors | Yücel, Meryem, Selb, Juliette, Aasted, Christopher, Petkov, Mihayl, Borsook, David, Boas, David, Becerra, Lino | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005776.v1.0.1](https://doi.org/10.18112/openneuro.ds005776.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005776) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) | [Source URL](https://openneuro.org/datasets/ds005776) | ### Copy-paste BibTeX ```bibtex @dataset{ds005776, title = {Electrical_Thermal_FingerTapping_2015}, author = {Yücel, Meryem and Selb, Juliette and Aasted, Christopher and Petkov, Mihayl and Borsook, David and Boas, David and Becerra, Lino}, doi = {10.18112/openneuro.ds005776.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005776.v1.0.1}, } ``` ## Technical Details - Subjects: 11 - Recordings: 46 - Tasks: 5 - Channels: 102 - Sampling rate (Hz): 50.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Tactile - Type: Motor - Size on disk: 1.2 GB - File count: 46 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005776.v1.0.1 - Source: openneuro - OpenNeuro: [ds005776](https://openneuro.org/datasets/ds005776) - NeMAR: [ds005776](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) ## API Reference Use the `DS005776` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005776(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Thermal_FingerTapping_2015 * **Study:** `ds005776` (OpenNeuro) * **Author (year):** `Yucel2025_Electrical` * **Canonical:** `Yucel2015` Also importable as: `DS005776`, `Yucel2025_Electrical`, `Yucel2015`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 11; recordings: 46; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005776](https://openneuro.org/datasets/ds005776) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005776](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) DOI: [https://doi.org/10.18112/openneuro.ds005776.v1.0.1](https://doi.org/10.18112/openneuro.ds005776.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005776 >>> dataset = DS005776(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005776) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) * [eegdash.dataset.DS005929](eegdash.dataset.DS005929.md) # DS005777: fnirs dataset, 14 subjects *Electrical_Morphine_Placebo_2018* Access recordings and metadata through EEGDash. **Citation:** Peng, Ke, Yücel, Meryem, Steele, Sarah, Bittner, Edward, Aasted, Christopher, Hoeft, Mark, Lee, Arielle, George, Edward, Boas, David, Becerra, Lino, Borsook, David (2025). *Electrical_Morphine_Placebo_2018*. 
[10.18112/openneuro.ds005777.v1.0.1](https://doi.org/10.18112/openneuro.ds005777.v1.0.1) Modality: fnirs Subjects: 14 Recordings: 113 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005777 dataset = DS005777(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005777(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005777( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005777, title = {Electrical_Morphine_Placebo_2018}, author = {Peng, Ke and Yücel, Meryem and Steele, Sarah and Bittner, Edward and Aasted, Christopher and Hoeft, Mark and Lee, Arielle and George, Edward and Boas, David and Becerra, Lino and Borsook, David}, doi = {10.18112/openneuro.ds005777.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005777.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005777` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Electrical_Morphine_Placebo_2018 | | Author (year) | `Peng2025` | | Canonical | `Peng2018` | | Importable as | `DS005777`, `Peng2025`, `Peng2018` | | Year | 2025 | | Authors | Peng, Ke, Yücel, Meryem, Steele, Sarah, Bittner, Edward, Aasted, Christopher, Hoeft, Mark, Lee, Arielle, George, Edward, Boas, David, Becerra, Lino, Borsook, David | | License | CC0 | | Citation / DOI | 
[doi:10.18112/openneuro.ds005777.v1.0.1](https://doi.org/10.18112/openneuro.ds005777.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005777) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) | [Source URL](https://openneuro.org/datasets/ds005777) | ### Copy-paste BibTeX ```bibtex @dataset{ds005777, title = {Electrical_Morphine_Placebo_2018}, author = {Peng, Ke and Yücel, Meryem and Steele, Sarah and Bittner, Edward and Aasted, Christopher and Hoeft, Mark and Lee, Arielle and George, Edward and Boas, David and Becerra, Lino and Borsook, David}, doi = {10.18112/openneuro.ds005777.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005777.v1.0.1}, } ``` ## Technical Details - Subjects: 14 - Recordings: 113 - Tasks: 2 - Channels: 66 - Sampling rate (Hz): 25.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Tactile - Type: Perception - Size on disk: 864.8 MB - File count: 113 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005777.v1.0.1 - Source: openneuro - OpenNeuro: [ds005777](https://openneuro.org/datasets/ds005777) - NeMAR: [ds005777](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) ## API Reference Use the `DS005777` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005777(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Morphine_Placebo_2018 * **Study:** `ds005777` (OpenNeuro) * **Author (year):** `Peng2025` * **Canonical:** `Peng2018` Also importable as: `DS005777`, `Peng2025`, `Peng2018`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 14; recordings: 113; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005777](https://openneuro.org/datasets/ds005777) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005777](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) DOI: [https://doi.org/10.18112/openneuro.ds005777.v1.0.1](https://doi.org/10.18112/openneuro.ds005777.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005777 >>> dataset = DS005777(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005777) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005929](eegdash.dataset.DS005929.md) # DS005779: eeg dataset, 19 subjects *Real-time personalized brain state-dependent TMS in healthy adults* Access recordings and metadata through EEGDash. **Citation:** Uttara Khatri, Sara Hussain (2025). *Real-time personalized brain state-dependent TMS in healthy adults*. [10.18112/openneuro.ds005779.v1.0.1](https://doi.org/10.18112/openneuro.ds005779.v1.0.1) Modality: eeg Subjects: 19 Recordings: 250 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005779 dataset = DS005779(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005779(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005779( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005779, title = {Real-time personalized brain state-dependent TMS in healthy adults}, author = {Uttara Khatri and Sara Hussain}, doi = {10.18112/openneuro.ds005779.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005779.v1.0.1}, } ``` ## About This Dataset This dataset contains raw data for the following publication: Khatri, U.U., Pulliam, K., Manesiya, M., Cortez, M.V., Millán, J.D.R. and Hussain, S.J., 2024. Personalized whole-brain activity patterns predict human corticospinal tract activation in real-time. Brain Stimulation, in press. Real-time and offline analysis code can be found here: [https://github.com/SMNPLab/Realtime_decoding_neurotypical.git](https://github.com/SMNPLab/Realtime_decoding_neurotypical.git) This work was funded by NINDS under award number R21NS133605. ## Dataset Information | Dataset ID | `DS005779` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Real-time personalized brain state-dependent TMS in healthy adults | | Author (year) | `Khatri2025` | | Canonical | — | | Importable as | `DS005779`, `Khatri2025` | | Year | 2025 | | Authors | Uttara Khatri, Sara Hussain | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005779.v1.0.1](https://doi.org/10.18112/openneuro.ds005779.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005779) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) | [Source URL](https://openneuro.org/datasets/ds005779) | ### Copy-paste BibTeX ```bibtex @dataset{ds005779, title = {Real-time personalized brain state-dependent TMS in healthy adults}, author = {Uttara Khatri and Sara Hussain}, doi = {10.18112/openneuro.ds005779.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005779.v1.0.1}, } ``` ## Technical Details - Subjects: 19 - Recordings: 250 - Tasks: 16 - Channels: 67 
(235), 64 (14), 70 - Sampling rate (Hz): 5000.0 - Duration (hours): 19.778788944444443 - Pathology: Healthy - Modality: Other - Type: Clinical/Intervention - Size on disk: 88.7 GB - File count: 250 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005779.v1.0.1 - Source: openneuro - OpenNeuro: [ds005779](https://openneuro.org/datasets/ds005779) - NeMAR: [ds005779](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) ## API Reference Use the `DS005779` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005779(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time personalized brain state-dependent TMS in healthy adults * **Study:** `ds005779` (OpenNeuro) * **Author (year):** `Khatri2025` * **Canonical:** — Also importable as: `DS005779`, `Khatri2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 19; recordings: 250; tasks: 16. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005779](https://openneuro.org/datasets/ds005779) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005779](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) DOI: [https://doi.org/10.18112/openneuro.ds005779.v1.0.1](https://doi.org/10.18112/openneuro.ds005779.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005779 >>> dataset = DS005779(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005779) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005795: eeg dataset, 34 subjects *MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)* Access recordings and metadata through EEGDash. **Citation:** Jörg Stadler, Torsten Stöter, Nicole Angenstein, Andreas Fügner, Marcel Lommerzheim, Artur Mathysiak, Anke Michalsky, Gabriele Schöps, Johann van der Meer, Susann Wolff, André Brechmann (2025). *MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)*. 
[10.18112/openneuro.ds005795.v1.0.0](https://doi.org/10.18112/openneuro.ds005795.v1.0.0) Modality: eeg Subjects: 34 Recordings: 39 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005795 dataset = DS005795(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005795(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005795( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005795, title = {MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)}, author = {Jörg Stadler and Torsten Stöter and Nicole Angenstein and Andreas Fügner and Marcel Lommerzheim and Artur Mathysiak and Anke Michalsky and Gabriele Schöps and Johann van der Meer and Susann Wolff and André Brechmann}, doi = {10.18112/openneuro.ds005795.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005795.v1.0.0}, } ``` ## About This Dataset **Overview** The study comprises data from a combined fMRI/EEG experiment. The EEG files contain 63 head channels, ECG, EOG, facial EMG and skin conductance data. A physio file contains respiration and finger-pulse data. In addition, a T1 weighted whole-brain anatomical MR scan and a PD weighted (UTE) scan for electrode localization are provided (defacing was performed using [https://github.com/cbinyu/pydeface](https://github.com/cbinyu/pydeface)). 
Additional data of the participants (T2 weighted images, button press dynamics, hearing threshold, hearing abilities, and personality traits (NEO-FFI, BIS/BAS, SVF, ERQ, MMG)) are available on request. The study was conducted at the Combinatorial NeuroImaging (CNI) core facility of the Leibniz Institute for Neurobiology (LIN) Magdeburg and was approved by the ethics committee of the University of Magdeburg, Germany. All participants gave written informed consent. Currently you will only find 5 datasets that include the multi-dimensional category learning experiment (cf. Wolff & Brechmann, Cerebral Cortex, 2023) because of the copyright policy of OpenNeuro (i.e. CC0). If you are interested in the remaining datasets, please contact [brechmann@lin-magdeburg.de](mailto:brechmann@lin-magdeburg.de). Collaboration is highly welcome! **Details of the learning task** The auditory category learning experiment comprised 180 trials for which 160 different frequency modulated sounds were presented in pseudo-randomized order with a jittered inter-trial interval of 6, 8, or 10 s plus 19-95 ms in steps of 19 ms in order to ensure a pseudo-random jitter of the sound onset with the onset of the acquisition of an MR volume. Each sound had five different binary features, i.e. duration (short: 400 ms, long: 800 ms), direction of the frequency modulation (rising, falling), intensity (soft: 76–81 dB, loud: 86–91 dB), speed of the frequency modulation (slow: 0.25 octaves/s, fast: 0.5 octaves/s), and frequency range (low: 500–831 Hz, high: 1630–2639 Hz with 5 different ranges each). Participants had to learn a target category defined by a combination of the features duration and direction (i.e. long/rising, long/falling, short/rising, or short/falling) by trial and error. In each trial, participants had to indicate via button press whether they thought a sound belonged to the target category (right index finger) or not (right middle finger). 
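The stimulus design just described can be made concrete with a small enumeration. The sketch below lists the five binary feature dimensions and counts how many combinations fall into one example target category; it deliberately ignores the five sub-ranges within each frequency band, so it enumerates 32 binary feature combinations rather than all 160 sounds.

```python
from itertools import product

# The five binary sound features described above (labels are illustrative)
features = {
    "duration": ["short", "long"],
    "direction": ["rising", "falling"],
    "intensity": ["soft", "loud"],
    "speed": ["slow", "fast"],
    "frequency_range": ["low", "high"],
}

# Every combination of the five binary features: 2**5 = 32
combos = [dict(zip(features, values)) for values in product(*features.values())]
print(len(combos))  # 32

# A target category fixes duration and direction, e.g. long/rising
target = {"duration": "long", "direction": "rising"}
members = [c for c in combos if all(c[k] == v for k, v in target.items())]
print(len(members))  # 8, since the three remaining features vary freely
```

This makes the learning problem explicit: only two of the five dimensions are relevant, and participants must discover which two by trial and error.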
They received feedback about the correctness of the response by a prerecorded, female voice in standard German; e.g., “ja” (yes) or “richtig” (right) following correct responses, “nein” (no) or “falsch” (wrong) following incorrect responses. In 90% of the trials the feedback immediately followed the button press, in 10% it was delayed by 1500 ms. If participants failed to respond within 2 seconds after FM tone onset, a timeout feedback (“zu spät”, too late) was presented. During the ~27 min learning experiment, participants were asked to fixate a white cross on a grey background and avoid any movements. For the 10 min rs-fMRI, they were asked to close their eyes. **Technical details** MR data were acquired with a 3 Tesla MRI scanner (Philips Achieva dStream) equipped with a 32-channel head coil. The MR scanner generates a trigger signal used to synchronize the multimodal data acquisition. The timing of stimulus events and the participants’ responses were controlled by the software Presentation (Neurobehavioral Systems) running on a Windows stimulation-PC. Auditory stimuli were presented via a Mark II+ (MR-Confon, Magdeburg, Germany) audio control unit to MR compatible electrodynamic headphones with integrated ear muffs that provide passive damping of ambient scanner noise by ~24 dB. Earplugs (Bilsom 303) further reduce the noise by ~29 dB (SNR). Button presses of the participants were recorded with the ResponseBox 2.0 by Covilex (Magdeburg, Germany) that includes a response pad with two buttons. The device delivers continuous 8-bit data at a sampling rate of 500 Hz. The device’s Teensy microcontroller converts left and right button presses that exceed a defined threshold into USB keyboard events handled by the stimulation-PC. Respiration and heart rate were recorded with Invivo MRI Sensors at a sampling rate of 100 Hz and stored on the MRI acquisition PC at 496 Hz sampling rate. 
64-channel EEG (including ECG) was recorded at 5 kHz using two 32-channel BrainAmp MRplus amplifiers (Brain Products GmbH, Gilching, Germany). The amplifier’s discriminative resolution was set to 0.5 µV/bit (range of +/-16.38 mV) and the signals were hardware-filtered in the frequency band between 0.01 Hz and 250 Hz. A bipolar 16-channel amplifier BrainAmp ExG MR was used to record 2 EOG, 4 EMG (Corrugator, Zygomaticus) channels as well as signals from 4 carbon wire loops (CWL) for correcting pulse and motion related artifacts. Another BrainAmp ExG MR amplifier with an ExG AUX box was used to record the skin conductance (GSR) at the index finger of the participant’s non-dominant hand. All signals are synchronized with the MR trigger via a Sync box and two USB2 adapters. All data were recorded and stored with the BrainVision Recorder software. Preprocessing (MR-artifact correction, bandpass filtering between 0.3 and 125 Hz, downsampling to 500 Hz with subsequent CWL correction) and export of the EEG data was performed in BrainVision Analyzer 2.3. Raw data for optimized artifact correction are available upon request. 
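The downsampling step described above (5 kHz recordings reduced to 500 Hz after band-limiting below the new Nyquist frequency) can be illustrated with a plain-NumPy FFT resampler. This is a toy sketch of the principle only, not the BrainVision Analyzer pipeline; the function name and signal are made up for illustration.

```python
import numpy as np

def fft_resample(x, sfreq_in, sfreq_out):
    """Resample a 1-D signal by truncating its spectrum above the new Nyquist."""
    n_in = x.size
    n_out = int(round(n_in * sfreq_out / sfreq_in))
    # Keep only the frequency bins below the new Nyquist (sfreq_out / 2)
    spec = np.fft.rfft(x)[: n_out // 2 + 1]
    # Rescale so sinusoid amplitudes are preserved after the shorter inverse FFT
    return np.fft.irfft(spec, n=n_out) * (n_out / n_in)

sfreq_in, sfreq_out = 5000.0, 500.0
t = np.arange(0, 1.0, 1 / sfreq_in)
x = np.sin(2 * np.pi * 10 * t)      # 10 Hz component: survives resampling
x += np.sin(2 * np.pi * 1000 * t)   # 1 kHz component: above the new Nyquist
y = fft_resample(x, sfreq_in, sfreq_out)
print(y.size)  # 500 samples: one second at 500 Hz
```

The 1 kHz component is removed because its bin lies beyond the truncated spectrum, which is exactly why a low-pass (here, the 0.3–125 Hz bandpass) must precede any decimation to 500 Hz.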
## Dataset Information | Dataset ID | `DS005795` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data) | | Author (year) | `Stadler2025` | | Canonical | — | | Importable as | `DS005795`, `Stadler2025` | | Year | 2025 | | Authors | Jörg Stadler, Torsten Stöter, Nicole Angenstein, Andreas Fügner, Marcel Lommerzheim, Artur Mathysiak, Anke Michalsky, Gabriele Schöps, Johann van der Meer, Susann Wolff, André Brechmann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005795.v1.0.0](https://doi.org/10.18112/openneuro.ds005795.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005795) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) | [Source URL](https://openneuro.org/datasets/ds005795) | ### Copy-paste BibTeX ```bibtex @dataset{ds005795, title = {MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data)}, author = {Jörg Stadler and Torsten Stöter and Nicole Angenstein and Andreas Fügner and Marcel Lommerzheim and Artur Mathysiak and Anke Michalsky and Gabriele Schöps and Johann van der Meer and Susann Wolff and André Brechmann}, doi = {10.18112/openneuro.ds005795.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005795.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 39 - Tasks: 2 - Channels: 72 - Sampling rate (Hz): 500.0 - Duration (hours): 7.933313888888889 - Pathology: Healthy - Modality: Auditory - Type: Learning - Size on disk: 6.4 GB - File count: 39 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005795.v1.0.0 - Source: openneuro - OpenNeuro: [ds005795](https://openneuro.org/datasets/ds005795) - NeMAR: [ds005795](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) ## API Reference Use the `DS005795` class to access 
this dataset programmatically. ### *class* eegdash.dataset.DS005795(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data) * **Study:** `ds005795` (OpenNeuro) * **Author (year):** `Stadler2025` * **Canonical:** — Also importable as: `DS005795`, `Stadler2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 34; recordings: 39; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005795](https://openneuro.org/datasets/ds005795) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005795](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) DOI: [https://doi.org/10.18112/openneuro.ds005795.v1.0.0](https://doi.org/10.18112/openneuro.ds005795.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005795 >>> dataset = DS005795(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005795) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005810: meg dataset, 31 subjects *NOD-MEG* Access recordings and metadata through EEGDash. **Citation:** Guohao Zhang, Ming Zhou, Shuyi Zhen, Shaohua Tang, Zheng Li, Zonglei Zhen (2025). *NOD-MEG*. 
[10.18112/openneuro.ds005810.v2.0.0](https://doi.org/10.18112/openneuro.ds005810.v2.0.0) Modality: meg Subjects: 31 Recordings: 305 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005810 dataset = DS005810(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005810(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005810( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005810, title = {NOD-MEG}, author = {Guohao Zhang and Ming Zhou and Shuyi Zhen and Shaohua Tang and Zheng Li and Zonglei Zhen}, doi = {10.18112/openneuro.ds005810.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005810.v2.0.0}, } ``` ## About This Dataset **Summary** The human brain can rapidly recognize meaningful objects from natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate these neural mechanisms of object recognition across these rich and daily natural scenes. However, most existing large-scale neuroimaging datasets with naturalistic stimuli primarily rely on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing. 
To address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolutions. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects. We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition. **The EEG data’s accession number is ds005811.** ### View full README **Summary** The human brain can rapidly recognize meaningful objects from natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate these neural mechanisms of object recognition across these rich and daily natural scenes. However, most existing large-scale neuroimaging datasets with naturalistic stimuli primarily rely on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing. To address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. 
As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolutions. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects. We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition. **The EEG data’s accession number is ds005811.** **Data Records** **Directory Structure** The raw data from each subject are stored in the `sub-subID` directory, while preprocessed data and epoch data are stored in the following directories: - *Preprocessed Data:* `derivatives/preprocessed/raw` - *Epoch Data:* `derivatives/preprocessed/epochs` **Stimulus Images** The stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory. Images within this folder are named using the `synsetID_imageID.JPEG` pattern, where: - `synsetID` is the ILSVRC category information. - `imageID` is the unique number for the image within that category. The image metadata, including category information, is available in the table files under the `stimuli/metadata` directory. **Raw Data** Raw MEG data are stored in BIDS format. Each subject’s directory contains multiple session folders, designated as `ses-sesID`. Comprehensive trial information for each subject is documented in the file `derivatives/detailed_events/sub-subID_events.csv`, where each row corresponds to a trial, and each column contains metadata for that trial, including the session and run number, category information of the stimuli, and subject response. 
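The per-trial CSV described above maps naturally onto a pandas DataFrame. The sketch below builds a tiny stand-in table rather than reading a real `sub-subID_events.csv` (the column names are illustrative assumptions; check the released files for the exact headers) and counts trials per stimulus category:

```python
import pandas as pd

# Tiny stand-in for a derivatives/detailed_events/sub-subID_events.csv table.
# Column names are illustrative assumptions based on the description above
# (session and run number, stimulus category, subject response).
events = pd.DataFrame(
    {
        "session": [1, 1, 2],
        "run": [1, 2, 1],
        "category": ["n01440764", "n01440764", "n02084071"],
        "response": [1, 0, 1],
    }
)

# One row per trial, so a groupby gives trial counts per stimulus category.
trials_per_category = events.groupby("category").size()
print(trials_per_category.to_dict())  # {'n01440764': 2, 'n02084071': 1}
```

The same pattern extends to the real files: read the CSV, then group or filter on the category and response columns before selecting the matching epochs.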
**Preprocessed Data** The full time series data of preprocessed data are archived in the `derivatives/raw` directory, named as: `sub-subID_ses-sesID_task-ImageNet_run-runID_meg_clean.fif`. The epoch data derived from preprocessed data are stored within the `derivatives/epochs` directory. In this directory, all data for each subject are concatenated into a single file, labeled as: `sub-subID_epo.fif` The trial information within each subject’s epochs data can be accessed via the metadata of the epochs data, which are aligned with the content of the subject’s `sub-subID_events.csv` file. **References** Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data, 5*, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS005810` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NOD-MEG | | Author (year) | `Zhang2025_MEG` | | Canonical | `NOD_MEG` | | Importable as | `DS005810`, `Zhang2025_MEG`, `NOD_MEG` | | Year | 2025 | | Authors | Guohao Zhang, Ming Zhou, Shuyi Zhen, Shaohua Tang, Zheng Li, Zonglei Zhen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005810.v2.0.0](https://doi.org/10.18112/openneuro.ds005810.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005810) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) | [Source URL](https://openneuro.org/datasets/ds005810) | ### Copy-paste BibTeX ```bibtex @dataset{ds005810, title = {NOD-MEG}, author = {Guohao Zhang and Ming Zhou and Shuyi Zhen and Shaohua Tang and Zheng Li and Zonglei Zhen}, 
doi = {10.18112/openneuro.ds005810.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005810.v2.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 305 - Tasks: 2 - Channels: 409 (285), 378 (20) - Sampling rate (Hz): 1200.0 - Duration (hours): 25.52 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 178.6 GB - File count: 305 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005810.v2.0.0 - Source: openneuro - OpenNeuro: [ds005810](https://openneuro.org/datasets/ds005810) - NeMAR: [ds005810](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) ## API Reference Use the `DS005810` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-MEG * **Study:** `ds005810` (OpenNeuro) * **Author (year):** `Zhang2025_MEG` * **Canonical:** `NOD_MEG` Also importable as: `DS005810`, `Zhang2025_MEG`, `NOD_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 305; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005810](https://openneuro.org/datasets/ds005810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005810](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) DOI: [https://doi.org/10.18112/openneuro.ds005810.v2.0.0](https://doi.org/10.18112/openneuro.ds005810.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005810 >>> dataset = DS005810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005810) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS005811: eeg dataset, 19 subjects *NOD-EEG* Access recordings and metadata through EEGDash. **Citation:** Guohao Zhang, Ming Zhou, Shuyi Zhen, Shaohua Tang, Zheng Li, Zonglei Zhen (2025). *NOD-EEG*. 
[10.18112/openneuro.ds005811.v1.0.9](https://doi.org/10.18112/openneuro.ds005811.v1.0.9) Modality: eeg Subjects: 19 Recordings: 448 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005811 dataset = DS005811(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005811(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005811( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005811, title = {NOD-EEG}, author = {Guohao Zhang and Ming Zhou and Shuyi Zhen and Shaohua Tang and Zheng Li and Zonglei Zhen}, doi = {10.18112/openneuro.ds005811.v1.0.9}, url = {https://doi.org/10.18112/openneuro.ds005811.v1.0.9}, } ``` ## About This Dataset **Summary** The human brain can rapidly recognize meaningful objects from natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate these neural mechanisms of object recognition across these rich and daily natural scenes. However, most existing large-scale neuroimaging datasets with naturalistic stimuli primarily rely on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing. 
To address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolutions. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects. We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition. **The MEG data’s accession number is ds005810.** ### View full README **Summary** The human brain can rapidly recognize meaningful objects from natural scenes encountered in everyday life. Neuroimaging with large-scale naturalistic stimuli is increasingly employed to elucidate these neural mechanisms of object recognition across these rich and daily natural scenes. However, most existing large-scale neuroimaging datasets with naturalistic stimuli primarily rely on functional magnetic resonance imaging (fMRI), which provides high spatial resolution to characterize spatial representation patterns but is limited in capturing the temporal dynamics inherent in visual cognitive processing. To address this limitation, we extended our previously collected Natural Object Dataset-fMRI (NOD-fMRI) by collecting both magnetoencephalography (MEG) and electroencephalography (EEG) data from the same subjects while viewing the same set of naturalistic stimuli. 
As a result, the NOD uniquely integrates three different modalities—fMRI, MEG, and EEG—thus offering promising avenues to examine brain activity induced by naturalistic stimuli with both high spatial and high temporal resolutions. Additionally, the NOD encompasses a diverse array of naturalistic stimuli and a broader subject pool, enabling researchers to explore differences in neural activation patterns across both stimuli and subjects. We anticipate that the NOD dataset will serve as a valuable resource for advancing our understanding of the cognitive and neural mechanisms underlying object recognition. **The MEG data’s accession number is ds005810.** **Data Records** **Directory Structure** The raw data from each subject are stored in the `sub-subID` directory, while preprocessed data and epoch data are stored in the following directories: - *Preprocessed Data:* `derivatives/preprocessed/raw` - *Epoch Data:* `derivatives/preprocessed/epochs` **Stimulus Images** The stimulus images used for MEG and EEG are identical and are stored in the `stimuli/ImageNet` directory. Images within this folder are named using the `synsetID_imageID.JPEG` pattern, where: - `synsetID` is the ILSVRC category information. - `imageID` is the unique number for the image within that category. The image metadata, including category information, is available in the table files under the `stimuli/metadata` directory. **Raw Data** Raw EEG data are stored in BIDS format. Each subject’s directory contains multiple session folders, designated as `ses-sesID`. Comprehensive trial information for each subject is documented in the file `derivatives/detailed_events/sub-subID_events.csv`, where each row corresponds to a trial, and each column contains metadata for that trial, including the session and run number, category information of the stimuli, and subject response. 
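A small helper can recover the category and image identifiers from a stimulus filename, following the `synsetID_imageID.JPEG` convention described above (the example filename is hypothetical, used here only for illustration):

```python
def parse_stimulus_name(filename: str) -> tuple[str, str]:
    """Split a stimuli/ImageNet filename into (synsetID, imageID).

    Follows the synsetID_imageID.JPEG convention described above:
    synsetID is the ILSVRC category, imageID is unique within it.
    """
    stem, _, _ = filename.rpartition(".")
    synset_id, _, image_id = stem.partition("_")
    return synset_id, image_id


# Hypothetical filename, for illustration only.
print(parse_stimulus_name("n01440764_10026.JPEG"))  # ('n01440764', '10026')
```

The recovered `synsetID` can then be joined against the table files under `stimuli/metadata` to look up the category information.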
**Preprocessed Data** The full time series data of preprocessed data are archived in the `derivatives/raw` directory, named as: `sub-subID_ses-sesID_task-ImageNet_run-runID_eeg_clean.fif`. The epoch data derived from preprocessed data are stored within the `derivatives/epochs` directory. In this directory, all data for each subject are concatenated into a single file, labeled as: `sub-subID_epo.fif` The trial information within each subject’s epochs data can be accessed via the metadata of the epochs data, which are aligned with the content of the subject’s `sub-subID_events.csv` file. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. *Journal of Open Source Software, 4*(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) ## Dataset Information | Dataset ID | `DS005811` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NOD-EEG | | Author (year) | `Zhang2025_EEG` | | Canonical | `NOD_EEG` | | Importable as | `DS005811`, `Zhang2025_EEG`, `NOD_EEG` | | Year | 2025 | | Authors | Guohao Zhang, Ming Zhou, Shuyi Zhen, Shaohua Tang, Zheng Li, Zonglei Zhen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005811.v1.0.9](https://doi.org/10.18112/openneuro.ds005811.v1.0.9) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005811) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) | [Source URL](https://openneuro.org/datasets/ds005811) | ### Copy-paste BibTeX ```bibtex @dataset{ds005811, title = {NOD-EEG}, author = {Guohao Zhang and Ming Zhou and Shuyi 
Zhen and Shaohua Tang and Zheng Li and Zonglei Zhen}, doi = {10.18112/openneuro.ds005811.v1.0.9}, url = {https://doi.org/10.18112/openneuro.ds005811.v1.0.9}, } ``` ## Technical Details - Subjects: 19 - Recordings: 448 - Tasks: 1 - Channels: 64 (440), 66 (8) - Sampling rate (Hz): 500.0 (288), 1000.0 (160) - Duration (hours): 23.7022 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 16.2 GB - File count: 448 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005811.v1.0.9 - Source: openneuro - OpenNeuro: [ds005811](https://openneuro.org/datasets/ds005811) - NeMAR: [ds005811](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) ## API Reference Use the `DS005811` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005811(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-EEG * **Study:** `ds005811` (OpenNeuro) * **Author (year):** `Zhang2025_EEG` * **Canonical:** `NOD_EEG` Also importable as: `DS005811`, `Zhang2025_EEG`, `NOD_EEG`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 448; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005811](https://openneuro.org/datasets/ds005811) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005811](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) DOI: [https://doi.org/10.18112/openneuro.ds005811.v1.0.9](https://doi.org/10.18112/openneuro.ds005811.v1.0.9) ### Examples ```pycon >>> from eegdash.dataset import DS005811 >>> dataset = DS005811(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005811) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005815: eeg dataset, 20 subjects *A Human EEG Dataset for Multisensory Perception and Mental Imagery* Access recordings and metadata through EEGDash. **Citation:** Yan-Han Chang, Hsi-An Chen, Min-Jiun Tsai, Chun-Lung Tseng, Ching-Huei Lo, Kuan-Chih Huang, Chun-Shu Wei (2025). 
*A Human EEG Dataset for Multisensory Perception and Mental Imagery*. [10.18112/openneuro.ds005815.v2.0.1](https://doi.org/10.18112/openneuro.ds005815.v2.0.1) Modality: eeg Subjects: 20 Recordings: 103 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005815 dataset = DS005815(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005815(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005815( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005815, title = {A Human EEG Dataset for Multisensory Perception and Mental Imagery}, author = {Yan-Han Chang and Hsi-An Chen and Min-Jiun Tsai and Chun-Lung Tseng and Ching-Huei Lo and Kuan-Chih Huang and Chun-Shu Wei}, doi = {10.18112/openneuro.ds005815.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds005815.v2.0.1}, } ``` ## About This Dataset The YOTO (You Only Think Once) dataset presents a human electroencephalography (EEG) resource for exploring multisensory perception and mental imagery. The study enrolled 20 participants who performed tasks involving both unimodal and multimodal stimuli. Researchers collected high-resolution EEG signals at a 1000 Hz sampling rate to capture high-temporal-resolution neural activity related to internal mental representations. The protocol incorporated visual, auditory, and combined cues to investigate the integration of multiple sensory modalities, and participants provided self-reported vividness ratings that indicate subjective perceptual strength. 
Technical validation involved event-related potentials (ERPs) and power spectral density (PSD) analyses, which demonstrated the reliability of the data and confirmed distinct neural responses across stimuli. This dataset aims to foster studies on neural decoding, perception, and cognitive modeling, and it is publicly accessible for researchers who seek to advance multimodal mental imagery research and related applications. ## Dataset Information | Dataset ID | `DS005815` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A Human EEG Dataset for Multisensory Perception and Mental Imagery | | Author (year) | `Chang2025` | | Canonical | — | | Importable as | `DS005815`, `Chang2025` | | Year | 2025 | | Authors | Yan-Han Chang, Hsi-An Chen, Min-Jiun Tsai, Chun-Lung Tseng, Ching-Huei Lo, Kuan-Chih Huang, Chun-Shu Wei | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005815.v2.0.1](https://doi.org/10.18112/openneuro.ds005815.v2.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005815) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) | [Source URL](https://openneuro.org/datasets/ds005815) | ### Copy-paste BibTeX ```bibtex @dataset{ds005815, title = {A Human EEG Dataset for Multisensory Perception and Mental Imagery}, author = {Yan-Han Chang and Hsi-An Chen and Min-Jiun Tsai and Chun-Lung Tseng and Ching-Huei Lo and Kuan-Chih Huang and Chun-Shu Wei}, doi = {10.18112/openneuro.ds005815.v2.0.1}, url = {https://doi.org/10.18112/openneuro.ds005815.v2.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 103 - Tasks: 3 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 3.99 - Pathology: Healthy - Modality: Multisensory - Type: Perception - Size on disk: 7.6 GB - File count: 103 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds005815.v2.0.1 - Source: openneuro - OpenNeuro: [ds005815](https://openneuro.org/datasets/ds005815) - NeMAR: [ds005815](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) ## API Reference Use the `DS005815` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005815(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Human EEG Dataset for Multisensory Perception and Mental Imagery * **Study:** `ds005815` (OpenNeuro) * **Author (year):** `Chang2025` * **Canonical:** — Also importable as: `DS005815`, `Chang2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 103; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005815](https://openneuro.org/datasets/ds005815) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005815](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) DOI: [https://doi.org/10.18112/openneuro.ds005815.v2.0.1](https://doi.org/10.18112/openneuro.ds005815.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005815 >>> dataset = DS005815(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005815) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005841: eeg dataset, 48 subjects *EEG Experiment measuring ERPs in VR* Access recordings and metadata through EEGDash. **Citation:** Elena Karakashevska, Alexis Makin, Michael Batterley (2025). *EEG Experiment measuring ERPs in VR*. 
[10.18112/openneuro.ds005841.v1.0.0](https://doi.org/10.18112/openneuro.ds005841.v1.0.0) Modality: eeg Subjects: 48 Recordings: 288 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005841 dataset = DS005841(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005841(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005841( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005841, title = {EEG Experiment measuring ERPs in VR}, author = {Elena Karakashevska and Alexis Makin and Michael Batterley}, doi = {10.18112/openneuro.ds005841.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005841.v1.0.0}, } ``` ## About This Dataset **EEG Experiment Measuring ERPs in VR** This dataset contains EEG recordings from a study investigating event-related potentials (ERPs) during different visual tasks in virtual reality. **Study Design** - **Participants**: 48 participants - **Tasks**: - Lumfront - Lumperp - Regfront - Regperp - Signalscreen - Signalvr - **Modality**: EEG (512 Hz sampling rate) **Dataset Organization** The dataset follows the BIDS specification (version 1.6.0). Each subject folder contains EEG recordings and associated metadata. **Funding and Acknowledgements** This study was supported by a doctoral studentship awarded to EK. We thank the participants for their time. 
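The `query` parameter shown in the quickstart uses MongoDB-style filters. The toy matcher below only illustrates the selection semantics of equality and `$in` conditions; it is not eegdash's actual query engine, and the task labels are taken from the study-design list above:

```python
def matches(record: dict, query: dict) -> bool:
    """Illustrative MongoDB-style matcher: equality and `$in` only.

    Not eegdash's real query engine, just a sketch of the semantics.
    """
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

records = [
    {"subject": "01", "task": "Lumfront"},
    {"subject": "02", "task": "Signalvr"},
    {"subject": "03", "task": "Lumfront"},
]
query = {"task": "Lumfront", "subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```

All conditions in a query are ANDed together, which is why only subject `01` survives both the task equality and the `$in` membership test here.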
## Dataset Information | Dataset ID | `DS005841` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Experiment measuring ERPs in VR | | Author (year) | `Karakashevska2025` | | Canonical | — | | Importable as | `DS005841`, `Karakashevska2025` | | Year | 2025 | | Authors | Elena Karakashevska, Alexis Makin, Michael Batterley | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005841.v1.0.0](https://doi.org/10.18112/openneuro.ds005841.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005841) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) | [Source URL](https://openneuro.org/datasets/ds005841) | ### Copy-paste BibTeX ```bibtex @dataset{ds005841, title = {EEG Experiment measuring ERPs in VR}, author = {Elena Karakashevska and Alexis Makin and Michael Batterley}, doi = {10.18112/openneuro.ds005841.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005841.v1.0.0}, } ``` ## Technical Details - Subjects: 48 - Recordings: 288 - Tasks: 6 - Channels: 73 - Sampling rate (Hz): 512 - Duration (hours): 21.57 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 7.3 GB - File count: 288 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005841.v1.0.0 - Source: openneuro - OpenNeuro: [ds005841](https://openneuro.org/datasets/ds005841) - NeMAR: [ds005841](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) ## API Reference Use the `DS005841` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Experiment measuring ERPs in VR * **Study:** `ds005841` (OpenNeuro) * **Author (year):** `Karakashevska2025` * **Canonical:** — Also importable as: `DS005841`, `Karakashevska2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 48; recordings: 288; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005841](https://openneuro.org/datasets/ds005841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005841](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) DOI: [https://doi.org/10.18112/openneuro.ds005841.v1.0.0](https://doi.org/10.18112/openneuro.ds005841.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005841 >>> dataset = DS005841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005841) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005857: eeg dataset, 29 subjects *ltpDelayRepFRReadOnly* Access recordings and metadata through EEGDash. **Citation:** [Adam Broitman], [Michael Kahana] (2025). *ltpDelayRepFRReadOnly*. 
[10.18112/openneuro.ds005857.v1.0.0](https://doi.org/10.18112/openneuro.ds005857.v1.0.0) Modality: eeg Subjects: 29 Recordings: 110 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005857 dataset = DS005857(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005857(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005857( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005857, title = {ltpDelayRepFRReadOnly}, author = {Adam Broitman and Michael Kahana}, doi = {10.18112/openneuro.ds005857.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005857.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS005857` | |----------------|----------------| | Title | ltpDelayRepFRReadOnly | | Author (year) | `Broitman2025` | | Canonical | `Broitman2019` | | Importable as | `DS005857`, `Broitman2025`, `Broitman2019` | | Year | 2025 | | Authors | Adam Broitman, Michael Kahana | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005857.v1.0.0](https://doi.org/10.18112/openneuro.ds005857.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005857) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) | [Source URL](https://openneuro.org/datasets/ds005857) | ### Copy-paste BibTeX ```bibtex @dataset{ds005857, title = {ltpDelayRepFRReadOnly}, author = {Adam Broitman and Michael Kahana}, doi = {10.18112/openneuro.ds005857.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005857.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 110 - Tasks: 1 - Channels: 137 - Sampling rate (Hz): 2048.0 - Duration (hours): 101.04776285807291 - Pathology: Not specified - Modality: Visual - Type: Memory - Size on disk: 284.4 GB - File count: 110 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005857.v1.0.0 - Source: openneuro - OpenNeuro: [ds005857](https://openneuro.org/datasets/ds005857) - NeMAR: [ds005857](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) ## API Reference Use the `DS005857` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005857(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ltpDelayRepFRReadOnly * **Study:** `ds005857` (OpenNeuro) * **Author (year):** `Broitman2025` * **Canonical:** `Broitman2019` Also importable as: `DS005857`, `Broitman2025`, `Broitman2019`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 29; recordings: 110; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
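The Technical Details above (137 channels at 2048 Hz over roughly 101 hours of recordings) imply a substantial in-memory footprint if everything were loaded at once. A back-of-the-envelope sketch, assuming samples are held as 4-byte float32 values; the on-disk figure (284.4 GB) is smaller because source files may use other sample formats or compression:

```python
# Rough uncompressed footprint of DS005857, computed from the summary
# metadata above. float32 storage is an assumption, not a property of
# the source files.
N_CHANNELS = 137
SFREQ_HZ = 2048.0
DURATION_HOURS = 101.04776285807291  # total across all 110 recordings
BYTES_PER_SAMPLE = 4                  # assumed float32

n_samples_per_channel = SFREQ_HZ * DURATION_HOURS * 3600
total_gb = n_samples_per_channel * N_CHANNELS * BYTES_PER_SAMPLE / 1e9
print(f"~{total_gb:.0f} GB uncompressed")  # ~408 GB
```

This is one reason to filter with `subject=` or `query=` before loading rather than materializing a whole dataset of this size.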
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005857](https://openneuro.org/datasets/ds005857) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005857](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) DOI: [https://doi.org/10.18112/openneuro.ds005857.v1.0.0](https://doi.org/10.18112/openneuro.ds005857.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005857 >>> dataset = DS005857(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005857) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005863: eeg dataset, 127 subjects *Cognitive Electrophysiology in Socioeconomic Context in Adulthood* Access recordings and metadata through EEGDash. **Citation:** Elif Isbell, Amanda N. Peters, Dylan M. Richardson, Nancy E. R. De León (2025). *Cognitive Electrophysiology in Socioeconomic Context in Adulthood*. 
[10.18112/openneuro.ds005863.v2.0.0](https://doi.org/10.18112/openneuro.ds005863.v2.0.0) Modality: eeg Subjects: 127 Recordings: 357 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005863 dataset = DS005863(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005863(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005863( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005863, title = {Cognitive Electrophysiology in Socioeconomic Context in Adulthood}, author = {Elif Isbell and Amanda N. Peters and Dylan M. Richardson and Nancy E. R. De León}, doi = {10.18112/openneuro.ds005863.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005863.v2.0.0}, } ``` ## About This Dataset **The “Cognitive Electrophysiology in Socioeconomic Context in Adulthood” Dataset** **Data Description** This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics. The EEG data were recorded with tasks directly acquired from the Event-Related Potentials Compendium of Open Resources and Experiments ERP CORE (Kappenman et al., 2021), or adapted from these tasks (Isbell et al., 2024). 
These tasks, which are publicly available, were optimized to capture neural activity manifest in perception, cognition, and action, in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist, consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. **Notes** Before the data were publicly shared, all identifiable information was removed, including date of birth, race/ethnicity, zip code, and names of the languages the participants reported speaking and understanding fluently. Date of birth was used to compute age in years, which is included in the dataset. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded and the raw EEG files were renamed to make the dataset BIDS-compatible. **Copyright and License** This dataset is licensed under CC0. **References** Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PLoS ONE, 19(8), e0307406. Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. 
## Dataset Information | Dataset ID | `DS005863` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Cognitive Electrophysiology in Socioeconomic Context in Adulthood | | Author (year) | `Isbell2025_Cognitive` | | Canonical | — | | Importable as | `DS005863`, `Isbell2025_Cognitive` | | Year | 2025 | | Authors | Elif Isbell, Amanda N. Peters, Dylan M. Richardson, Nancy E. R. De León | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005863.v2.0.0](https://doi.org/10.18112/openneuro.ds005863.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005863) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) | [Source URL](https://openneuro.org/datasets/ds005863) | ### Copy-paste BibTeX ```bibtex @dataset{ds005863, title = {Cognitive Electrophysiology in Socioeconomic Context in Adulthood}, author = {Elif Isbell and Amanda N. Peters and Dylan M. Richardson and Nancy E. R. De León}, doi = {10.18112/openneuro.ds005863.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds005863.v2.0.0}, } ``` ## Technical Details - Subjects: 127 - Recordings: 357 - Tasks: 4 - Channels: 30 - Sampling rate (Hz): 500.0 - Duration (hours): 50.88856666666666 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 10.6 GB - File count: 357 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005863.v2.0.0 - Source: openneuro - OpenNeuro: [ds005863](https://openneuro.org/datasets/ds005863) - NeMAR: [ds005863](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) ## API Reference Use the `DS005863` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005863(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood * **Study:** `ds005863` (OpenNeuro) * **Author (year):** `Isbell2025_Cognitive` * **Canonical:** — Also importable as: `DS005863`, `Isbell2025_Cognitive`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
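With 127 subjects each contributing multiple recordings, machine-learning evaluations on a dataset like this typically split at the subject level, so that no subject appears in both the training and test sides (otherwise recordings from the same person leak across the split). A minimal sketch; the subject IDs and task names below are hypothetical stand-ins, not values from the dataset:

```python
# Subject-wise train/test split, sketched over hypothetical metadata
# records. Illustrative only -- record dicts stand in for EEGDash metadata.
import random

subjects = [f"{i:03d}" for i in range(1, 128)]              # 127 subject IDs
records = [{"subject": s, "task": t}                         # 2 tasks each (hypothetical)
           for s in subjects for t in ("taskA", "taskB")]

rng = random.Random(0)                                       # fixed seed for reproducibility
held_out = set(rng.sample(subjects, k=25))                   # ~20% of subjects for testing

train = [r for r in records if r["subject"] not in held_out]
test = [r for r in records if r["subject"] in held_out]

# No subject contributes recordings to both sides
assert not {r["subject"] for r in train} & {r["subject"] for r in test}
print(len(train), len(test))  # 204 50
```

The held-out IDs could then be passed to the dataset class via the documented query mechanism, e.g. `query={"subject": {"$in": sorted(held_out)}}`.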
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005863](https://openneuro.org/datasets/ds005863) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005863](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) DOI: [https://doi.org/10.18112/openneuro.ds005863.v2.0.0](https://doi.org/10.18112/openneuro.ds005863.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005863 >>> dataset = DS005863(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005863) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005866: eeg dataset, 60 subjects *Flankers-NEAR* Access recordings and metadata through EEGDash. **Citation:** Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey (2025). *Flankers-NEAR*. 
[10.18112/openneuro.ds005866.v1.0.1](https://doi.org/10.18112/openneuro.ds005866.v1.0.1) Modality: eeg Subjects: 60 Recordings: 60 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005866 dataset = DS005866(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005866(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005866( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005866, title = {Flankers-NEAR}, author = {Brennan Terhune-Cotter and Phillip J. Holcomb and Katherine J. Midgley and Sofia E. Ortega and Emily M. Akers and Karen Emmorey}, doi = {10.18112/openneuro.ds005866.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005866.v1.0.1}, } ``` ## About This Dataset Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California, under the supervision of Dr. Phillip Holcomb and Dr. Karen Emmorey. This project followed San Diego State University’s IRB guidelines. Participants sat in a comfortable chair in a darkened, sound-attenuated room throughout the experiment. They were given a game controller for responding to stimuli. They were instructed to watch a 24-inch LCD video monitor placed at a viewing distance of 60 in (152 cm). Participants were presented with 90 four-letter real words and 90 four-letter pseudowords in white New Courier font on a black background. Each letter subtended .41 degrees of visual angle. 
The flanker words were separated from the center target word by .41 degrees of empty space on both sides. All targets and flankers were content words under a 6th grade reading level; plural words and proper nouns were excluded. All words were presented once in each of the three conditions: no flanker, identical flankers, or different flankers. There were 270 trials. Trials started with a purple fixation cross for 1000ms, followed by a white fixation cross for 500ms to prepare participants for the presentation of the stimulus. The stimulus item was then presented for 150ms, followed by a blank screen shown until participants responded via the game controller. ## Dataset Information | Dataset ID | `DS005866` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Flankers-NEAR | | Author (year) | `TerhuneCotter2025_NEAR` | | Canonical | `Flankers_NEAR` | | Importable as | `DS005866`, `TerhuneCotter2025_NEAR`, `Flankers_NEAR` | | Year | 2025 | | Authors | Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005866.v1.0.1](https://doi.org/10.18112/openneuro.ds005866.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005866) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) | [Source URL](https://openneuro.org/datasets/ds005866) | ### Copy-paste BibTeX ```bibtex @dataset{ds005866, title = {Flankers-NEAR}, author = {Brennan Terhune-Cotter and Phillip J. Holcomb and Katherine J. Midgley and Sofia E. Ortega and Emily M. 
Akers and Karen Emmorey}, doi = {10.18112/openneuro.ds005866.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005866.v1.0.1}, } ``` ## Technical Details - Subjects: 60 - Recordings: 60 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 15.976248888888888 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 3.6 GB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005866.v1.0.1 - Source: openneuro - OpenNeuro: [ds005866](https://openneuro.org/datasets/ds005866) - NeMAR: [ds005866](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) ## API Reference Use the `DS005866` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-NEAR * **Study:** `ds005866` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_NEAR` * **Canonical:** `Flankers_NEAR` Also importable as: `DS005866`, `TerhuneCotter2025_NEAR`, `Flankers_NEAR`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 60; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005866](https://openneuro.org/datasets/ds005866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005866](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) DOI: [https://doi.org/10.18112/openneuro.ds005866.v1.0.1](https://doi.org/10.18112/openneuro.ds005866.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005866 >>> dataset = DS005866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005866) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005868: eeg dataset, 48 subjects *Flankers-FAR* Access recordings and metadata through EEGDash. **Citation:** Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey (2025). *Flankers-FAR*. 
[10.18112/openneuro.ds005868.v1.0.1](https://doi.org/10.18112/openneuro.ds005868.v1.0.1) Modality: eeg Subjects: 48 Recordings: 48 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005868 dataset = DS005868(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005868(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005868( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005868, title = {Flankers-FAR}, author = {Brennan Terhune-Cotter and Phillip J. Holcomb and Katherine J. Midgley and Sofia E. Ortega and Emily M. Akers and Karen Emmorey}, doi = {10.18112/openneuro.ds005868.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005868.v1.0.1}, } ``` ## About This Dataset Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California, under the supervision of Dr. Phillip Holcomb and Dr. Karen Emmorey. This project followed San Diego State University’s IRB guidelines. Participants sat in a comfortable chair in a darkened, sound-attenuated room throughout the experiment. They were given a game controller for responding to stimuli. They were instructed to watch a 24-inch LCD video monitor placed at a viewing distance of 60 in (152 cm). Participants were presented with 90 four-letter real words and 90 four-letter pseudowords in white New Courier font on a black background. Each letter subtended .41 degrees of visual angle. 
The flanker words were separated from the center target word by 3.28 degrees of empty space on both sides. All targets and flankers were content words under a 6th grade reading level; plural words and proper nouns were excluded. All words were presented once in each of the three conditions: no flanker, identical flankers, or different flankers. There were 270 trials. Trials started with a purple fixation cross for 1000ms, followed by a white fixation cross for 500ms to prepare participants for the presentation of the stimulus. The stimulus item was then presented for 150ms, followed by a blank screen shown until participants responded via the game controller. ## Dataset Information | Dataset ID | `DS005868` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Flankers-FAR | | Author (year) | `TerhuneCotter2025_FAR` | | Canonical | `Flankers_FAR` | | Importable as | `DS005868`, `TerhuneCotter2025_FAR`, `Flankers_FAR` | | Year | 2025 | | Authors | Brennan Terhune-Cotter, Phillip J. Holcomb, Katherine J. Midgley, Sofia E. Ortega, Emily M. Akers, Karen Emmorey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005868.v1.0.1](https://doi.org/10.18112/openneuro.ds005868.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005868) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) | [Source URL](https://openneuro.org/datasets/ds005868) | ### Copy-paste BibTeX ```bibtex @dataset{ds005868, title = {Flankers-FAR}, author = {Brennan Terhune-Cotter and Phillip J. Holcomb and Katherine J. Midgley and Sofia E. Ortega and Emily M. 
Akers and Karen Emmorey}, doi = {10.18112/openneuro.ds005868.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005868.v1.0.1}, } ``` ## Technical Details - Subjects: 48 - Recordings: 48 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 13.093546111111111 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 2.9 GB - File count: 48 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005868.v1.0.1 - Source: openneuro - OpenNeuro: [ds005868](https://openneuro.org/datasets/ds005868) - NeMAR: [ds005868](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) ## API Reference Use the `DS005868` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005868(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-FAR * **Study:** `ds005868` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_FAR` * **Canonical:** `Flankers_FAR` Also importable as: `DS005868`, `TerhuneCotter2025_FAR`, `Flankers_FAR`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005868](https://openneuro.org/datasets/ds005868) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005868](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) DOI: [https://doi.org/10.18112/openneuro.ds005868.v1.0.1](https://doi.org/10.18112/openneuro.ds005868.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005868 >>> dataset = DS005868(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005868) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005872: eeg dataset, 1 subject *EEGEyeNet Dataset* Access recordings and metadata through EEGDash. **Citation:** Martyna Beata Płomecka, Ard Kastrati, Nicolas Langer (2025). *EEGEyeNet Dataset*. 
[10.18112/openneuro.ds005872.v1.0.0](https://doi.org/10.18112/openneuro.ds005872.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005872 dataset = DS005872(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005872(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005872( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005872, title = {EEGEyeNet Dataset}, author = {Martyna Beata Płomecka and Ard Kastrati and Nicolas Langer}, doi = {10.18112/openneuro.ds005872.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005872.v1.0.0}, } ``` ## About This Dataset This is a BIDS standardized version of simultaneously collected EEG and eye-tracking data, taken from one subject from the [EEGEYENET](https://osf.io/ktv7m/) dataset. Acknowledgements go to Martyna Beata Płomecka, Ard Kastrati, and Nicolas Langer who designed the study, collected the data, and published the dataset to Open Science Framework. For access to the full dataset, please refer to the dataset DOI. 
## Dataset Information | Dataset ID | `DS005872` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEGEyeNet Dataset | | Author (year) | `Plomecka2025` | | Canonical | `EEGEyeNet` | | Importable as | `DS005872`, `Plomecka2025`, `EEGEyeNet` | | Year | 2025 | | Authors | Martyna Beata Płomecka, Ard Kastrati, Nicolas Langer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005872.v1.0.0](https://doi.org/10.18112/openneuro.ds005872.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005872) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) | [Source URL](https://openneuro.org/datasets/ds005872) | ### Copy-paste BibTeX ```bibtex @dataset{ds005872, title = {EEGEyeNet Dataset}, author = {Martyna Beata Płomecka and Ard Kastrati and Nicolas Langer}, doi = {10.18112/openneuro.ds005872.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005872.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 0.0898511111111111 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 39.9 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005872.v1.0.0 - Source: openneuro - OpenNeuro: [ds005872](https://openneuro.org/datasets/ds005872) - NeMAR: [ds005872](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) ## API Reference Use the `DS005872` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005872(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds005872` (OpenNeuro) * **Author (year):** `Plomecka2025` * **Canonical:** `EEGEyeNet` Also importable as: `DS005872`, `Plomecka2025`, `EEGEyeNet`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
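To make the `$in` semantics of the `query` parameter concrete, here is a toy, pure-Python matcher. This helper is not part of eegdash (the real filtering is applied by the EEGDash metadata service); it only mirrors how the Quickstart filter above selects records:

```python
def matches(record: dict, query: dict) -> bool:
    """Toy evaluator for a small subset of MongoDB-style queries.

    Supports exact-match conditions and the ``$in`` operator; top-level
    fields are combined with AND, as in the merged dataset query.
    """
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:
            return False
    return True


records = [
    {"dataset": "ds005872", "subject": "01"},
    {"dataset": "ds005872", "subject": "02"},
    {"dataset": "ds005872", "subject": "03"},
]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

Note that multiple top-level fields AND together, which matches how a user-supplied `query` is merged with the dataset filter.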
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005872](https://openneuro.org/datasets/ds005872) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005872](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) DOI: [https://doi.org/10.18112/openneuro.ds005872.v1.0.0](https://doi.org/10.18112/openneuro.ds005872.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005872 >>> dataset = DS005872(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005872) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005873: eeg, emg dataset, 125 subjects *SeizeIT2* Access recordings and metadata through EEGDash. **Citation:** Miguel Bhagubai, Christos Chatzichristos, Lauren Swinnen, Jaiver Macea, Jingwei Zhang, Lieven Lagae, Katrien Jansen, Andreas Schulze-Bonhage, Francisco Sales, Benno Mahler, Yvonne Weber, Wim Van Paesschen, Maarten De Vos (2025). *SeizeIT2*. 
[10.18112/openneuro.ds005873.v1.1.0](https://doi.org/10.18112/openneuro.ds005873.v1.1.0) Modality: eeg, emg Subjects: 125 Recordings: 5654 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005873 dataset = DS005873(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005873(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005873( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005873, title = {SeizeIT2}, author = {Miguel Bhagubai and Christos Chatzichristos and Lauren Swinnen and Jaiver Macea and Jingwei Zhang and Lieven Lagae and Katrien Jansen and Andreas Schulze-Bonhage and Francisco Sales and Benno Mahler and Yvonne Weber and Wim Van Paesschen and Maarten De Vos}, doi = {10.18112/openneuro.ds005873.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds005873.v1.1.0}, } ``` ## About This Dataset **README** This dataset is a BIDS compatible version of the SeizeIT2 dataset. It reorganizes the file structure to comply with the BIDS specification. To this effect: - Metadata was organized according to BIDS. 
- Data in the edf files contain wearable EEG, ECG, EMG and movement data recorded with the Sensor Dot device. - Annotations were formatted as BIDS-score compatible `tsv` files. **Contact person** The dataset was published by [Miguel Bhagubai](mailto:miguel.bhagubai@esat.kuleuven.be) and [Christos Chatzichristos](mailto:christos.chatzichristos@kuleuven.be). **Overview** **Project name** SeizeIT2 Dataset **Year that the project ran** 2024 **Description of the dataset** The SeizeIT2 project (clinicaltrials.gov: NCT04284072), a multicenter, prospective study, was carried out to validate the Sensor Dot device in adult and pediatric patients with epilepsy. Participants were included if they had a history of refractory epilepsy and were admitted to the Epilepsy Monitoring Unit (EMU) for long-term vEEG monitoring as a presurgical evaluation procedure. The exclusion criteria included patients with skin conditions or allergies that prevented the placement of the electrodes and adhesives, or who had implanted devices such as neurostimulators or pacemakers. All participants provided written informed consent. The data collection started on January 10, 2020, and ended on June 30, 2022. The study was approved by the UZ Leuven ethics committee (approval ID: S63631; ClinicalTrials.gov: NCT04284072); anonymization and sharing of the data were also approved by the same committee (S67350 - amendment 1). The dataset comprises 125 patients (51 female, 41%) from 5 different European EMUs: University Hospital Leuven (Belgium), Freiburg University Medical Center (Germany), RWTH University of Aachen (Germany), Karolinska University Hospital (Sweden) and Coimbra University Hospital (Portugal). The University Hospital Leuven was the only center that enrolled pediatric patients. The dataset includes only data from patients with focal epilepsy who experienced one or more seizure episodes during the monitoring period. 
**Methods** The participants were recorded with the specific center’s vEEG monitoring equipment, where the EEG electrodes were placed according to the 10-20 system or the 25-electrode array of the International Federation of Clinical Neurophysiology. The SD device was used to record wearable data simultaneously with the vEEG. The device has a size of 24.5 x 33.5 x 7.73 mm and weighs approximately 6.3 grams. The wearable device measures data at a sampling frequency of 250 Hz and has a battery life of approximately 24 hours. Two recording devices were used: one placed in the patient’s upper back using a patch and connected to electrodes attached behind the ear, on the mastoid bone (EEG SD); another placed on the left side of the chest, with two electrodes extended to the lower left rib cage and the fourth intercostal space in the left parasternal position to measure ECG, and two electrodes extended to the left deltoid muscle to measure EMG data (ECG/EMG SD). The module itself contains accelerometers (ACC) and gyroscopes (GYR), which measured movement data at a sampling rate of 25 Hz. The EEG SD electrode placement depended on the patient’s medical history and is based on the seizure type and onset. When the seizures were suspected to originate from the left hemisphere, two electrodes were placed on the left side and one on the right side, forming one left same-side channel and one cross-head channel. Analogously, if seizures were suspected to originate from the right hemisphere, the same-side channel was derived from two electrodes placed behind the right ear. The dataset includes patients who were suspected to have generalized seizures (but had focal seizures) as well, and in this case, the cross-head channel was non-existent and replaced by an additional lateral channel by using two electrodes on each ear. **Dataset contents** The complete dataset contains around 11 640 hours of wearable data. 
Four different modalities were recorded for most participants: bte-EEG, ECG, EMG and movement data. All participants’ data within the dataset contain wearable bte-EEG. In 3% of the dataset, ECG, EMG and movement data were not included due to technical failures or errors in the setup. In total, 886 focal seizures were recorded with the wearable device. The mean duration of the recorded seizures was 58 seconds, ranging between 3 seconds and 16 minutes. The majority of the seizures were focal aware (FA) and focal impaired awareness (FIA), with 317 and 393 occurrences, respectively. Of the remaining seizures, 55 were focal-to-bilateral tonic-clonic (FBTC), 12 were focal with unclear awareness status, 2 were subclinical focal seizures and 93 had unknown or unreported onset. There was a predominance of seizures with onset in the left hemisphere (44%). In 12% of the seizures, the onset was located in the right hemisphere, 1% had a bilateral onset and in 43% of the seizures the onset was unclear. Regarding localization, the seizure onsets were distributed over the central, frontal, temporal, occipital, parietal and insula lobes, with a predominance of temporal lobe seizures (30%). Several of the recorded seizures could not be paired with a clear onset lobe (26%). 
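As a quick sanity check on the figures quoted above (250 Hz wearable sampling, 25 Hz movement sampling, roughly 11 640 hours of data), a back-of-envelope sample count:

```python
# Back-of-envelope data volume for the SeizeIT2 wearable recordings,
# using the figures from the dataset description above.
EEG_FS = 250          # Hz, Sensor Dot EEG/ECG/EMG sampling rate
MOTION_FS = 25        # Hz, accelerometer/gyroscope sampling rate
TOTAL_HOURS = 11_640  # approximate hours of wearable data

total_seconds = TOTAL_HOURS * 3600
eeg_samples = total_seconds * EEG_FS        # per channel
motion_samples = total_seconds * MOTION_FS  # per axis

print(f"{eeg_samples:,} EEG samples per channel")     # 10,476,000,000
print(f"{motion_samples:,} motion samples per axis")  # 1,047,600,000
```

At over ten billion samples per EEG channel, windowed or streaming processing is advisable rather than loading the corpus into memory at once.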
## Dataset Information | Dataset ID | `DS005873` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SeizeIT2 | | Author (year) | `Bhagubai2025` | | Canonical | `SeizeIT2` | | Importable as | `DS005873`, `Bhagubai2025`, `SeizeIT2` | | Year | 2025 | | Authors | Miguel Bhagubai, Christos Chatzichristos, Lauren Swinnen, Jaiver Macea, Jingwei Zhang, Lieven Lagae, Katrien Jansen, Andreas Schulze-Bonhage, Francisco Sales, Benno Mahler, Yvonne Weber, Wim Van Paesschen, Maarten De Vos | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005873.v1.1.0](https://doi.org/10.18112/openneuro.ds005873.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005873) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) | [Source URL](https://openneuro.org/datasets/ds005873) | ### Copy-paste BibTeX ```bibtex @dataset{ds005873, title = {SeizeIT2}, author = {Miguel Bhagubai and Christos Chatzichristos and Lauren Swinnen and Jaiver Macea and Jingwei Zhang and Lieven Lagae and Katrien Jansen and Andreas Schulze-Bonhage and Francisco Sales and Benno Mahler and Yvonne Weber and Wim Van Paesschen and Maarten De Vos}, doi = {10.18112/openneuro.ds005873.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds005873.v1.1.0}, } ``` ## Technical Details - Subjects: 125 - Recordings: 5654 - Tasks: 1 - Channels: 2 (2850), 1 (2804) - Sampling rate (Hz): 256.0 - Duration (hours): 22897.171388888888 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 44.4 GB - File count: 5654 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005873.v1.1.0 - Source: openneuro - OpenNeuro: [ds005873](https://openneuro.org/datasets/ds005873) - NeMAR: [ds005873](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) ## API Reference 
Use the `DS005873` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005873(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SeizeIT2 * **Study:** `ds005873` (OpenNeuro) * **Author (year):** `Bhagubai2025` * **Canonical:** `SeizeIT2` Also importable as: `DS005873`, `Bhagubai2025`, `SeizeIT2`. Modality: `eeg, emg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 125; recordings: 5654; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
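Seizure-detection pipelines built on this dataset typically score fixed-length analysis windows against the annotated seizure intervals. A minimal sketch of that labeling step, assuming hypothetical interval times (the real onsets and offsets come from the BIDS `tsv` annotation files):

```python
def label_windows(duration, seizures, win=2.0):
    """Label consecutive windows of length ``win`` seconds as seizure (1)
    or background (0) by overlap with annotated (start, end) intervals."""
    labels = []
    t = 0.0
    while t + win <= duration:
        overlap = any(start < t + win and end > t for start, end in seizures)
        labels.append(int(overlap))
        t += win
    return labels


# Hypothetical 60 s recording with one annotated seizure from 10 s to 15 s.
labels = label_windows(60.0, [(10.0, 15.0)])
print(len(labels), sum(labels))  # 30 windows, 3 seizure-positive
```

The 2-second window length is an illustrative choice, not a value prescribed by the dataset; any window that overlaps a seizure interval by any amount is counted as positive here.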
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005873](https://openneuro.org/datasets/ds005873) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005873](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) DOI: [https://doi.org/10.18112/openneuro.ds005873.v1.1.0](https://doi.org/10.18112/openneuro.ds005873.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005873 >>> dataset = DS005873(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005873) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) # DS005876: eeg dataset, 29 subjects *Song Familiarity* Access recordings and metadata through EEGDash. **Citation:** Jared R. Girard, Aaron M. Bishop, Cameron D. Hassall (2025). *Song Familiarity*. 
[10.18112/openneuro.ds005876.v1.0.1](https://doi.org/10.18112/openneuro.ds005876.v1.0.1) Modality: eeg Subjects: 29 Recordings: 29 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005876 dataset = DS005876(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005876(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005876( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005876, title = {Song Familiarity}, author = {Jared R. Girard and Aaron M. Bishop and Cameron D. Hassall}, doi = {10.18112/openneuro.ds005876.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005876.v1.0.1}, } ``` ## About This Dataset **Song Familiarity** Twenty-nine participants listened to song melodies and responded as soon as the song felt familiar. Participants were then asked to identify the song, if possible (title, artist, or lyrics). Next, participants were shown a multiple choice display with four song titles, selected a song title, and were given visual feedback (correct: selected option turned green and a checkmark appeared next to the title; incorrect: selected option turned red and an x appeared next to the title.) Song stimuli are taken from Kostic and Cleary (2009): [https://supp.apa.org/psycarticles/supplemental/a0014584/a0014584_supp.html](https://supp.apa.org/psycarticles/supplemental/a0014584/a0014584_supp.html) An audio file with a reconstruction of what each participant heard throughout the experiment can be found in /derivatives. 
The audio file has been synchronized with the EEG recording. ## Dataset Information | Dataset ID | `DS005876` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Song Familiarity | | Author (year) | `Girard2025` | | Canonical | — | | Importable as | `DS005876`, `Girard2025` | | Year | 2025 | | Authors | Jared R. Girard, Aaron M. Bishop, Cameron D. Hassall | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005876.v1.0.1](https://doi.org/10.18112/openneuro.ds005876.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005876) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) | [Source URL](https://openneuro.org/datasets/ds005876) | ### Copy-paste BibTeX ```bibtex @dataset{ds005876, title = {Song Familiarity}, author = {Jared R. Girard and Aaron M. Bishop and Cameron D. Hassall}, doi = {10.18112/openneuro.ds005876.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005876.v1.0.1}, } ``` ## Technical Details - Subjects: 29 - Recordings: 29 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.017188611111113 - Pathology: Healthy - Modality: Auditory - Type: Memory - Size on disk: 7.1 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005876.v1.0.1 - Source: openneuro - OpenNeuro: [ds005876](https://openneuro.org/datasets/ds005876) - NeMAR: [ds005876](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) ## API Reference Use the `DS005876` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Song Familiarity * **Study:** `ds005876` (OpenNeuro) * **Author (year):** `Girard2025` * **Canonical:** — Also importable as: `DS005876`, `Girard2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005876](https://openneuro.org/datasets/ds005876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005876](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) DOI: [https://doi.org/10.18112/openneuro.ds005876.v1.0.1](https://doi.org/10.18112/openneuro.ds005876.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005876 >>> dataset = DS005876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005876) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005907: eeg dataset, 53 subjects *EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls* Access recordings and metadata through EEGDash. **Citation:** Ethan Campbell, James F Cavanagh (2025). *EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls*. 
[10.18112/openneuro.ds005907.v1.0.0](https://doi.org/10.18112/openneuro.ds005907.v1.0.0) Modality: eeg Subjects: 53 Recordings: 53 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005907 dataset = DS005907(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005907(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005907( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005907, title = {EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls}, author = {Ethan Campbell and James F Cavanagh}, doi = {10.18112/openneuro.ds005907.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005907.v1.0.0}, } ``` ## About This Dataset RL task (3-armed bandit) with alcohol vs. beverage cues in N=53 community participants. Data collected from 2019 to 2021 in the CRCL at UNM. The paper [Campbell, E., Singh, G., Claus, E. D., Witkiewitz, K., Costa, V. D., Hogeveen, J., & Cavanagh, J. F. Electrophysiological markers of aberrant cue-specific exploration in hazardous drinkers] should be coming out in print soon. Your best bet for understanding this task would be to read that paper first. For more info on triggers and outputs, see the BEH_EXPLAIN.m file in the code folder. 
- James F Cavanagh 03/06/2023 ## Dataset Information | Dataset ID | `DS005907` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls | | Author (year) | `Campbell2025` | | Canonical | — | | Importable as | `DS005907`, `Campbell2025` | | Year | 2025 | | Authors | Ethan Campbell, James F Cavanagh | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005907.v1.0.0](https://doi.org/10.18112/openneuro.ds005907.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005907) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) | [Source URL](https://openneuro.org/datasets/ds005907) | ### Copy-paste BibTeX ```bibtex @dataset{ds005907, title = {EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls}, author = {Ethan Campbell and James F Cavanagh}, doi = {10.18112/openneuro.ds005907.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005907.v1.0.0}, } ``` ## Technical Details - Subjects: 53 - Recordings: 53 - Tasks: 1 - Channels: 58 (13), 57 (13), 56 (11), 55 (6), 59 (4), 54 (2), 52, 53, 33, 61 - Sampling rate (Hz): 500.0 - Duration (hours): 14.171917222222223 - Pathology: Alcohol - Modality: Visual - Type: Learning - Size on disk: 5.6 GB - File count: 53 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005907.v1.0.0 - Source: openneuro - OpenNeuro: [ds005907](https://openneuro.org/datasets/ds005907) - NeMAR: [ds005907](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) ## API Reference Use the `DS005907` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005907(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds005907` (OpenNeuro) * **Author (year):** `Campbell2025` * **Canonical:** — Also importable as: `DS005907`, `Campbell2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Alcohol`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
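ERP-style analyses of this cue-based task segment the continuous signal around event markers. A minimal NumPy sketch of cue-locked epoch extraction follows; the event sample indices and window are hypothetical (in practice, use MNE-Python's `mne.Epochs` with the events shipped in the BIDS `events.tsv` files), and the 500 Hz rate and 58-channel count mirror this dataset's technical details:

```python
import numpy as np

SFREQ = 500  # Hz, matching this dataset's sampling rate
data = np.random.randn(58, 60 * SFREQ)  # fake 58-channel, 60 s recording

# Hypothetical cue-onset sample indices (real ones come from events.tsv).
events = [5_000, 12_500, 21_000]
tmin, tmax = -0.2, 0.8  # window around each cue, in seconds

start = int(tmin * SFREQ)
stop = int(tmax * SFREQ)
epochs = np.stack([data[:, s + start : s + stop] for s in events])
print(epochs.shape)  # (n_events, n_channels, n_samples) = (3, 58, 500)
```

From here, a baseline correction over the pre-cue interval and an average across epochs per condition would yield conventional cue-locked ERPs.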
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005907](https://openneuro.org/datasets/ds005907) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005907](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) DOI: [https://doi.org/10.18112/openneuro.ds005907.v1.0.0](https://doi.org/10.18112/openneuro.ds005907.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005907 >>> dataset = DS005907(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005907) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005929: fnirs dataset, 7 subjects *Motion-Yucel2014* Access recordings and metadata through EEGDash. **Citation:** Yücel, Meryem, Selb, Juliette, Boas, David, Cash, Sydney, Cooper, Robert (2025). *Motion-Yucel2014*. 
[10.18112/openneuro.ds005929.v1.0.1](https://doi.org/10.18112/openneuro.ds005929.v1.0.1) Modality: fnirs Subjects: 7 Recordings: 7 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005929 dataset = DS005929(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005929(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005929( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005929, title = {Motion-Yucel2014}, author = {Yücel, Meryem and Selb, Juliette and Boas, David and Cash, Sydney and Cooper, Robert}, doi = {10.18112/openneuro.ds005929.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005929.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS005929` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Motion-Yucel2014 | | Author (year) | `MotionYucel2014` | | Canonical | `Yucel2014`, `Motion_Yucel2014` | | Importable as | `DS005929`, `MotionYucel2014`, `Yucel2014`, `Motion_Yucel2014` | | Year | 2025 | | Authors | Yücel, Meryem, Selb, Juliette, Boas, David, Cash, Sydney, Cooper, Robert | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005929.v1.0.1](https://doi.org/10.18112/openneuro.ds005929.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005929) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) | [Source URL](https://openneuro.org/datasets/ds005929) | ### Copy-paste BibTeX ```bibtex @dataset{ds005929, title = {Motion-Yucel2014}, author = {Yücel, Meryem and Selb, Juliette and Boas, David and Cash, Sydney and Cooper, Robert}, doi = {10.18112/openneuro.ds005929.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005929.v1.0.1}, } ``` ## Technical Details - Subjects: 7 - Recordings: 7 - Tasks: 1 - Channels: 28 - Sampling rate (Hz): 50.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 68.5 MB - File count: 7 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005929.v1.0.1 - Source: openneuro - OpenNeuro: [ds005929](https://openneuro.org/datasets/ds005929) - NeMAR: [ds005929](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) ## API Reference Use the `DS005929` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motion-Yucel2014 * **Study:** `ds005929` (OpenNeuro) * **Author (year):** `MotionYucel2014` * **Canonical:** `Yucel2014`, `Motion_Yucel2014` Also importable as: `DS005929`, `MotionYucel2014`, `Yucel2014`, `Motion_Yucel2014`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005929](https://openneuro.org/datasets/ds005929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005929](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) DOI: [https://doi.org/10.18112/openneuro.ds005929.v1.0.1](https://doi.org/10.18112/openneuro.ds005929.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005929 >>> dataset = DS005929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005929) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS005930: fnirs dataset, 12 subjects *BallSqueezingHD_Gao2023* Access recordings and metadata through EEGDash. **Citation:** Yuanyuan Gao, De’Ja Rogers, Alexander von Lühmann, Antonio Ortega-Martinez, David A. Boas, Meryem A. Yücel (2025). *BallSqueezingHD_Gao2023*. 
[10.18112/openneuro.ds005930.v1.0.1](https://doi.org/10.18112/openneuro.ds005930.v1.0.1) Modality: fnirs Subjects: 12 Recordings: 36 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005930 dataset = DS005930(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005930(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005930( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005930, title = {BallSqueezingHD_Gao2023}, author = {Yuanyuan Gao and De'Ja Rogers and Alexander von Lühmann and Antonio Ortega-Martinez and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds005930.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005930.v1.0.1}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005930` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BallSqueezingHD_Gao2023 | | Author (year) | `Gao2023` | | Canonical | — | | Importable as | `DS005930`, `Gao2023` | | Year | 2025 | | Authors | Yuanyuan Gao, De’Ja Rogers, Alexander von Lühmann, Antonio Ortega-Martinez, David A. Boas, Meryem A. 
Yücel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005930.v1.0.1](https://doi.org/10.18112/openneuro.ds005930.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005930) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) | [Source URL](https://openneuro.org/datasets/ds005930) | ### Copy-paste BibTeX ```bibtex @dataset{ds005930, title = {BallSqueezingHD_Gao2023}, author = {Yuanyuan Gao and De'Ja Rogers and Alexander von Lühmann and Antonio Ortega-Martinez and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds005930.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005930.v1.0.1}, } ``` ## Technical Details - Subjects: 12 - Recordings: 36 - Tasks: 1 - Channels: 200 - Sampling rate (Hz): 8.719308035714286 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Motor - Type: Motor - Size on disk: 304.2 MB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005930.v1.0.1 - Source: openneuro - OpenNeuro: [ds005930](https://openneuro.org/datasets/ds005930) - NeMAR: [ds005930](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) ## API Reference Use the `DS005930` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005930(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD_Gao2023 * **Study:** `ds005930` (OpenNeuro) * **Author (year):** `Gao2023` * **Canonical:** — Also importable as: `DS005930`, `Gao2023`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005930](https://openneuro.org/datasets/ds005930) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005930](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) DOI: [https://doi.org/10.18112/openneuro.ds005930.v1.0.1](https://doi.org/10.18112/openneuro.ds005930.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005930 >>> dataset = DS005930(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005930) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS005931: ieeg dataset, 8 subjects *Visuomotor_task* Access recordings and metadata through EEGDash. **Citation:** Riyo Ueda, Eishi Asano (2025). *Visuomotor_task*. [10.18112/openneuro.ds005931.v1.0.0](https://doi.org/10.18112/openneuro.ds005931.v1.0.0) Modality: ieeg Subjects: 8 Recordings: 16 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005931 dataset = DS005931(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005931(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005931( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005931, title = {Visuomotor_task}, author = {Riyo Ueda and Eishi Asano}, doi = {10.18112/openneuro.ds005931.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005931.v1.0.0}, } ``` ## About This Dataset Dataset of intracranial EEG from human epilepsy patients performing a visuomotor task Description: We present an electrophysiological dataset recorded from ten subjects during a visuomotor task. Subjects were epilepsy patients undergoing intracranial monitoring for localization of epileptic seizures. Subjects completed five sessions of Speed Match - a visuomotor task on the Lumosity platform ([https://www.lumosity.com/](https://www.lumosity.com/); Lumos Labs, Inc, San Francisco, CA) - during interictal EEG recording. Repository structure: Main directory (interictal EEG from children during gameplay): Contains interictal EEG files of each participant in the study. Folders are explained below. Subfolders: `sub-<label>/`: Contains folders for each subject, named `sub-<label>`. `sub-<label>/ses-<session>/`: Contains folders for the visuomotor task. `sub-<label>/ses-<session>/ieeg/`: Contains the raw iEEG data in .edf format for each subject. Each subject performed the visuomotor task (ses-task). Each \*ieeg.edf file contains continuous iEEG data recorded during the visuomotor task. Details about the channels are given in the corresponding .tsv file. We also provide timing information for the finger tapping in the ieeg .edf file by specifying the start and end sample of each trial (event code 101 marks finger tapping). 
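The channel metadata mentioned above live in BIDS `*_channels.tsv` sidecars next to each `*_ieeg.edf` file. A minimal sketch of reading one with the standard library — the sample rows below are hypothetical, not taken from this dataset:

```python
# Sketch: parse a BIDS-style channels.tsv sidecar. The rows here are
# made-up placeholders; real files list one row per recorded channel.
import csv
import io

sample = (
    "name\ttype\tunits\tlow_cutoff\thigh_cutoff\n"
    "LPT1\tECOG\tuV\t0.016\t300\n"
    "LPT2\tECOG\tuV\t0.016\t300\n"
    "EKG1\tECG\tuV\t0.016\t300\n"
)

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

# Keep only the intracranial (ECOG) channels:
ecog = [r["name"] for r in rows if r["type"] == "ECOG"]
print(ecog)  # ['LPT1', 'LPT2']
```

For a real recording, the same pattern applies with `open(path)` on the `*_channels.tsv` that sits beside the `.edf` file.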
## Dataset Information | Dataset ID | `DS005931` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visuomotor_task | | Author (year) | `Ueda2025` | | Canonical | — | | Importable as | `DS005931`, `Ueda2025` | | Year | 2025 | | Authors | Riyo Ueda, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005931.v1.0.0](https://doi.org/10.18112/openneuro.ds005931.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005931) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) | [Source URL](https://openneuro.org/datasets/ds005931) | ### Copy-paste BibTeX ```bibtex @dataset{ds005931, title = {Visuomotor_task}, author = {Riyo Ueda and Eishi Asano}, doi = {10.18112/openneuro.ds005931.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005931.v1.0.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 16 - Tasks: 1 - Channels: 128 (12 recordings), 112 (2 recordings), 110 (2 recordings) - Sampling rate (Hz): 1000.0 - Duration (hours): 1.89 - Pathology: Epilepsy - Modality: Visual - Type: Motor - Size on disk: 817.7 MB - File count: 16 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005931.v1.0.0 - Source: openneuro - OpenNeuro: [ds005931](https://openneuro.org/datasets/ds005931) - NeMAR: [ds005931](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) ## API Reference Use the `DS005931` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005931(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visuomotor_task * **Study:** `ds005931` (OpenNeuro) * **Author (year):** `Ueda2025` * **Canonical:** — Also importable as: `DS005931`, `Ueda2025`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. 
Subjects: 8; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005931](https://openneuro.org/datasets/ds005931) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005931](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) DOI: [https://doi.org/10.18112/openneuro.ds005931.v1.0.0](https://doi.org/10.18112/openneuro.ds005931.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005931 >>> dataset = DS005931(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005931) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005932: eeg dataset, 29 subjects *PWIe* Access recordings and metadata through EEGDash. **Citation:** Phillip J. Holcomb, Jacklyn Jardel, Katherine J. Midgley, and Karen Emmorey (2025). *PWIe*. [10.18112/openneuro.ds005932.v1.0.0](https://doi.org/10.18112/openneuro.ds005932.v1.0.0) Modality: eeg Subjects: 29 Recordings: 29 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005932 dataset = DS005932(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005932(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005932( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005932, title = {PWIe}, author = {Phillip J. Holcomb and Jacklyn Jardel and Katherine J. 
Midgley and Karen Emmorey}, doi = {10.18112/openneuro.ds005932.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005932.v1.0.0}, } ``` ## About This Dataset Data collection took place at the NeuroCognition Laboratory (NCL) in San Diego, California, under the supervision of Dr. Phillip Holcomb. This project followed San Diego State University’s IRB guidelines. Participants sat in a comfortable chair in a darkened, sound-attenuated room throughout the experiment and wore 32 head and face electrodes (left mastoid reference). They were given a gamepad for button pressing and wore a lightweight headset to record their verbal responses. They were instructed to watch an LCD video monitor at a viewing distance of 150 cm. All stimuli subtended less than 2° of horizontal and vertical visual angle. Participants were presented with 100 unique, simple, black-on-white line drawings to be named, with 50 pictures in the Semantic category and 50 in the Identity category. Each picture was presented twice, once preceded by an unrelated English distractor word and once by a related English distractor word (2000 ms duration). Prime “distractor” words were presented before the picture for 200 ms and were either semantically related to the picture, identical to the picture’s name, or unrelated to the picture. Participants were told to name each picture as quickly as possible in English. Their voice response was digitized online. The experiment was self-paced, and participants pressed a button after each trial when ready to go on. EEG was sampled continuously at 500 Hz with a bandpass of DC to 200 Hz. Event markers were stored with the EEG data for later ERP averaging. The raw EEG data were imported into EEGLAB and saved as .set files. A key to the event code structure is contained in the PWIe bdf files for each subject. 
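The ERP-averaging step the README alludes to can be sketched with NumPy: cut fixed-length epochs around event-marker samples and average across trials. The data, marker positions, and epoch window below are made-up stand-ins for the real 32-channel, 500 Hz recordings:

```python
# Sketch of ERP averaging around event markers (synthetic data).
import numpy as np

sfreq = 500                                    # Hz, as stated in the README
rng = np.random.default_rng(0)
data = rng.standard_normal((32, 60 * sfreq))   # 32 channels, 60 s of signal
events = [2500, 7500, 12500]                   # hypothetical marker samples

pre, post = int(0.1 * sfreq), int(0.9 * sfreq)  # window: -100 ms .. +900 ms
epochs = np.stack([data[:, s - pre : s + post] for s in events])
erp = epochs.mean(axis=0)                      # average across trials -> ERP

print(epochs.shape, erp.shape)                 # (3, 32, 500) (32, 500)
```

With real data, the marker samples would come from the event codes stored alongside the EEG rather than a hard-coded list.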
## Dataset Information | Dataset ID | `DS005932` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PWIe | | Author (year) | `Holcomb2025` | | Canonical | `PWIe` | | Importable as | `DS005932`, `Holcomb2025`, `PWIe` | | Year | 2025 | | Authors | Phillip J. Holcomb, Jacklyn Jardel, Katherine J. Midgley, Karen Emmorey | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005932.v1.0.0](https://doi.org/10.18112/openneuro.ds005932.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005932) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) | [Source URL](https://openneuro.org/datasets/ds005932) | ### Copy-paste BibTeX ```bibtex @dataset{ds005932, title = {PWIe}, author = {Phillip J. Holcomb and Jacklyn Jardel and Katherine J. Midgley and Karen Emmorey}, doi = {10.18112/openneuro.ds005932.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005932.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 29 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 9.95 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 2.3 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005932.v1.0.0 - Source: openneuro - OpenNeuro: [ds005932](https://openneuro.org/datasets/ds005932) - NeMAR: [ds005932](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) ## API Reference Use the `DS005932` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS005932(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PWIe * **Study:** `ds005932` (OpenNeuro) * **Author (year):** `Holcomb2025` * **Canonical:** `PWIe` Also importable as: `DS005932`, `Holcomb2025`, `PWIe`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005932](https://openneuro.org/datasets/ds005932) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005932](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) DOI: [https://doi.org/10.18112/openneuro.ds005932.v1.0.0](https://doi.org/10.18112/openneuro.ds005932.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005932 >>> dataset = DS005932(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005932) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005935: fnirs dataset, 21 subjects *Mirror Neuron Study* Access recordings and metadata through EEGDash. **Citation:** Xinge Li, Manon A. Krol, Sahar Jahani, David A. Boas, Helen Tager-Flusberg, Meryem A. Yücel (2025). *Mirror Neuron Study*. 
[10.18112/openneuro.ds005935.v1.0.0](https://doi.org/10.18112/openneuro.ds005935.v1.0.0) Modality: fnirs Subjects: 21 Recordings: 64 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005935 dataset = DS005935(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005935(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005935( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005935, title = {Mirror Neuron Study}, author = {Xinge Li and Manon A. Krol and Sahar Jahani and David A. Boas and Helen Tager-Flusberg and Meryem A. Yücel}, doi = {10.18112/openneuro.ds005935.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005935.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS005935` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mirror Neuron Study | | Author (year) | `Li2025` | | Canonical | — | | Importable as | `DS005935`, `Li2025` | | Year | 2025 | | Authors | Xinge Li, Manon A. Krol, Sahar Jahani, David A. Boas, Helen Tager-Flusberg, Meryem A. 
Yücel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005935.v1.0.0](https://doi.org/10.18112/openneuro.ds005935.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005935) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) | [Source URL](https://openneuro.org/datasets/ds005935) | ### Copy-paste BibTeX ```bibtex @dataset{ds005935, title = {Mirror Neuron Study}, author = {Xinge Li and Manon A. Krol and Sahar Jahani and David A. Boas and Helen Tager-Flusberg and Meryem A. Yücel}, doi = {10.18112/openneuro.ds005935.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005935.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 64 - Tasks: 1 - Channels: 120 - Sampling rate (Hz): 25.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Visual - Type: Motor - Size on disk: 738.6 MB - File count: 64 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005935.v1.0.0 - Source: openneuro - OpenNeuro: [ds005935](https://openneuro.org/datasets/ds005935) - NeMAR: [ds005935](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) ## API Reference Use the `DS005935` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005935(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mirror Neuron Study * **Study:** `ds005935` (OpenNeuro) * **Author (year):** `Li2025` * **Canonical:** — Also importable as: `DS005935`, `Li2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 21; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005935](https://openneuro.org/datasets/ds005935) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005935](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) DOI: [https://doi.org/10.18112/openneuro.ds005935.v1.0.0](https://doi.org/10.18112/openneuro.ds005935.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005935 >>> dataset = DS005935(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005935) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS005946: eeg dataset, 39 subjects *ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery)* Access recordings and metadata through EEGDash. **Citation:** Federico Frau, Paolo Canal, Maddalena Bressler, Chiara Pompei, Valentina Bambini (2025). *ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery)*. [10.18112/openneuro.ds005946.v1.0.1](https://doi.org/10.18112/openneuro.ds005946.v1.0.1) Modality: eeg Subjects: 39 Recordings: 39 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005946 dataset = DS005946(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005946(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005946( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds005946, title = {ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery)}, author = {Federico Frau and Paolo Canal and Maddalena Bressler and Chiara Pompei and Valentina Bambini}, doi = {10.18112/openneuro.ds005946.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005946.v1.0.1}, } ``` ## About This Dataset The following is the README for the “ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery)” dataset =================== ## License details This dataset is proprietary to the University School for Advanced Studies IUSS Pavia, Italy. Data and script usage is restricted to academic and non-commercial research upon appropriate attribution. All data provided (behavioral and EEG) are licensed under CC-BY-NC-SA, while code scripts are licensed under CC-BY. Data collection was conducted by the research team of the Laboratory of Neurolinguistics and Experimental Pragmatics (NEPLab). For inquiries, contact: Dr. Federico Frau ([federico.frau@iusspavia.it](mailto:federico.frau@iusspavia.it)) and Dr. Paolo Canal ([paolo.canal@iusspavia.it](mailto:paolo.canal@iusspavia.it)). =================== ## Overview of the dataset The dataset was collected within the project “PROcessing MEtaphors: Neurochronometry, Acquisition and DEcay (PROMENADE)”, ERC CONSOLIDATOR GRANT: Grant agreement ID: 101045733 (principal investigator: Prof. Valentina Bambini; email: [valentina.bambini@iusspavia.it](mailto:valentina.bambini@iusspavia.it)) Data acquisition was conducted in 2023 (acquisition time is provided for each subject in the individual scans.tsv files). - Description of the contents of the dataset: The dataset includes 238 Files (14.8 GiB) from 39 Subjects acquired in 1 session. EEG data are provided in a long epoch format, from 1.87 seconds before the onset of the target to 3.17 seconds following its presentation (epoch length: 5.04 seconds). Data was high-pass (0.10 Hz) and low-pass filtered (45 Hz) offline with a 4th order IIR Butterworth filter (DC removed), and re-referenced to the average activity of the two mastoids (TP9 and TP10). Independent component analysis (ICA) decomposition was used to identify and remove eye-related activity only. - Brief overview of the tasks in the experiment: A picture-matching task was used. 
Target pictures could match (matching condition) or not match (mismatching condition) the information provided in the preceding cue, which could be of four different types producing the four different tasks used in the experiment (Physical, Imagery, Literal, and Metaphorical cues): i) in the Physical, the cue could be the same picture or a different one; ii) in the Imagery, the cue was a single word (an adjective, e.g., “uncombed”) and participants were requested to produce a mental representation of a human referent with the characteristic denoted by the prompted word; iii) in the Literal, participants were cued with a four-word sentence that was a literal description of the target picture (e.g., “some hairstyles are uncombed”); iv) in the Metaphorical, participants were cued with a four-word sentence that was the metaphorical description of the target picture (e.g., “some hairstyles are bushes”). Participants were asked to judge whether the target picture was compatible with the preceding information. Our aim was to test whether the mental representation generated by four types of cues can differently influence the processing of the target picture, with a focus on the potential difference between verbal cues (i.e., literal and metaphorical sentences). - Independent and dependent variables: Independent variables were Condition (Matching, Mismatching) and Type of cue (Physical, Imagery, Literal, and Metaphorical). These variables were manipulated within subjects. Moreover, a set of variables linked to participants’ vocabulary (i.e., lexical-semantic skills) and mental imagery abilities were assessed via questionnaires and offline behavioral tasks. The EEG amplitude was the main dependent variable. We also coded the accuracy of the response to the task. 
The mean Accuracy values across conditions and types of cue, as well as the scores obtained in vocabulary and mental imagery tasks, are available for each subject in the participants.tsv file (all variables are described in the participants.json file). - Additional control variables: Pictures were selected and adapted to have similar framing and comparable perceptual characteristics, such as relative luminance, contrast, self-similarity, complexity, and symmetry (as measured via the “imhistR” R package). - Subjects: The participants were all right-handed Italian-speaking students from the University of Pavia, Italy. They had different backgrounds and were recruited through leafleting. They were paid €20 for their participation. Exclusion criteria included: 1) being a non-native speaker of Italian, 2) being bilingual from birth, 3) being diagnosed with a learning disability, and 4) left-handedness, as evaluated using the Edinburgh Handedness Inventory (Oldfield, 1971). =================== ## Apparatus and setup - Apparatus: Brain Vision actiCHamp, 64 electrodes. No shielded room was used; subjects were seated at a distance of 85 cm from an EIZO V2490 monitor. Responses were collected using a Cedrus RB-530 response box. - Location and setup: Data were collected in the “spazio EEG” at the Laboratory of Neurolinguistics and Experimental Pragmatics (NepLab) of the University School for Advanced Studies IUSS, located at Palazzo del Broletto, piazza della Vittoria 15, Pavia, Italy. Upon arrival, participants read the information sheet describing the general aims of the study, how the EEG would be measured, and how their personal data would be kept confidential. After signing the consent form, they moved to the acquisition room where the cap was mounted (roughly 35 minutes for each participant).
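The per-subject covariates mentioned above are distributed in the BIDS `participants.tsv` file. A minimal stdlib sketch of parsing such a file follows; the column names here are illustrative only (the dataset's actual headers are documented in its `participants.json`):

```python
import csv
import io

# Illustrative TSV in the BIDS participants.tsv layout; the real file ships
# with the downloaded dataset and its columns are described in participants.json.
tsv = """participant_id\tage\tvocabulary_score\tmean_accuracy
sub-01\t24\t31\t0.94
sub-02\t22\t28\t0.91
"""

with io.StringIO(tsv) as fh:
    rows = list(csv.DictReader(fh, delimiter="\t"))

accuracies = [float(r["mean_accuracy"]) for r in rows]
print(len(rows), sum(accuracies) / len(accuracies))  # 2 0.925
```

For the real dataset, replace the inline string with the path to the cached `participants.tsv` under your EEGDash cache directory.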
At the end of the experiment they could wash their hair and go back to the welcome room where they would complete the assessment using questionnaires. =================== ## Task organization - Paradigm and procedure: The experiment consisted of completing the same task (picture matching) with four different types of cues. We used a block design in which each kind of cue was presented separately and preceded by its own instructions. The order of tasks varied across participants: half of the participants performed the linguistic blocks (Literal and Metaphorical) at the beginning of the experiment and the Imagery as the last one, while the other half started the experiment with the Imagery and performed the linguistic blocks at the end of the experiment. The order of the Literal and Metaphorical blocks was also counterbalanced across participants, so four versions of the experiment were created: i) Literal-Metaphorical-Physical-Imagery, ii) Metaphorical-Literal-Physical-Imagery, iii) Imagery-Physical-Literal-Metaphorical, and iv) Imagery-Physical-Metaphorical-Literal. The order of items was pseudorandomized to counterbalance the presentation order of matching and mismatching stimuli for each item, to ensure that a picture stimulus presented first in the matching condition in one task was then presented first in the mismatching condition in the following task (and vice versa). The pseudorandomized order also ensured that matching and mismatching stimuli associated with the same item were separated by at least 1/4 of the trials (i.e., 21 trials). - Task details and event coding: The procedure consisted of presenting a cue and a target picture, with different inter-stimulus intervals (700 ms for Physical, Literal, and Metaphorical, and 3000 ms for Imagery). Event codes were sent at the onset of the target picture and during the presentation of the cue. The numerical part of event codes identified each target picture (1 to 42).
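In code, the trigger arithmetic used by this dataset (targets 1-42 for the match condition, 101-142 for mismatch, cue codes 221-223) reduces to a comparison and a modulo. A minimal sketch, with helper names of our own invention (the dataset itself provides a FieldTrip trial function for the same purpose):

```python
# Cue triggers as documented in the README.
CUE_CODES = {221: "Physical", 222: "Literal", 223: "Metaphorical"}


def decode_target(code: int) -> tuple[str, int]:
    """Return (condition, item) for a target-onset trigger (1-42 or 101-142)."""
    if not (1 <= code <= 42 or 101 <= code <= 142):
        raise ValueError(f"not a target trigger: {code}")
    condition = "match" if code < 100 else "mismatch"
    return condition, code % 100  # item identifier is the code modulo 100


print(decode_target(7))    # ('match', 7)
print(decode_target(107))  # ('mismatch', 7)
print(CUE_CODES[223])      # Metaphorical
```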
The same numbers were used to code the match condition, while for the mismatch condition we added 100 to the identifier (therefore 101 to 142). The cue type was coded upon presentation of the cue: 221 for Physical, 222 for Literal, 223 for Metaphorical. Therefore, to identify the individual target picture, cue type, and condition, two triggers were needed in each trial [the trigger at the cue determines the cue type, while the trigger at the target determines the condition (match < 100; mismatch > 100) and the specific item (modulo 100)]. A FieldTrip trial function is provided in the parent folder (code/trialFunIMEEG.m), since conditional trigger selection must be carried out to retrieve all information. The data are then suitable for single-trial analysis. ## Dataset Information | Dataset ID | `DS005946` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery) | | Author (year) | `Frau2025` | | Canonical | `PROMENADE` | | Importable as | `DS005946`, `Frau2025`, `PROMENADE` | | Year | 2025 | | Authors | Federico Frau, Paolo Canal, Maddalena Bressler, Chiara Pompei, Valentina Bambini | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005946.v1.0.1](https://doi.org/10.18112/openneuro.ds005946.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005946) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) | [Source URL](https://openneuro.org/datasets/ds005946) | ### Copy-paste BibTeX ```bibtex @dataset{ds005946, title = {ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery)}, author = {Federico Frau and Paolo Canal and Maddalena Bressler and Chiara Pompei and Valentina Bambini}, doi = {10.18112/openneuro.ds005946.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005946.v1.0.1}, } ``` ##
Technical Details - Subjects: 39 - Recordings: 39 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 1000.0 - Duration (hours): 18.3456 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 14.8 GB - File count: 39 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005946.v1.0.1 - Source: openneuro - OpenNeuro: [ds005946](https://openneuro.org/datasets/ds005946) - NeMAR: [ds005946](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) ## API Reference Use the `DS005946` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005946(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery) * **Study:** `ds005946` (OpenNeuro) * **Author (year):** `Frau2025` * **Canonical:** `PROMENADE` Also importable as: `DS005946`, `Frau2025`, `PROMENADE`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 39; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005946](https://openneuro.org/datasets/ds005946) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005946](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) DOI: [https://doi.org/10.18112/openneuro.ds005946.v1.0.1](https://doi.org/10.18112/openneuro.ds005946.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005946 >>> dataset = DS005946(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005946) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005953: ieeg dataset, 2 subjects *iEEG_visual* Access recordings and metadata through EEGDash. **Citation:** Jonathan Winawer, Dora Hermes (2025). *iEEG_visual*. 
[10.18112/openneuro.ds005953.v1.0.0](https://doi.org/10.18112/openneuro.ds005953.v1.0.0) Modality: ieeg Subjects: 2 Recordings: 3 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005953 dataset = DS005953(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005953(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005953( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005953, title = {iEEG_visual}, author = {Jonathan Winawer and Dora Hermes}, doi = {10.18112/openneuro.ds005953.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005953.v1.0.0}, } ``` ## About This Dataset **Information** This folder contains the ECoG data from 2 subjects performing a visual task used in the publications of Hermes et al., 2015, Cerebral Cortex “Stimulus Dependence of Gamma Oscillations in Human Visual Cortex” and Hermes et al., 2017, PLOS Biology “Neuronal synchrony and the relation between the blood-oxygen-level dependent response and the local field potential”. Contact: Dora Hermes ([dorahermes@gmail.com](mailto:dorahermes@gmail.com)) **Citing this dataset** If you use this data as a part of any publications, please use the following citation: [1] Hermes D, Miller KJ, Wandell BA, Winawer J (2015). Stimulus dependence of gamma oscillations in human visual cortex. Cerebral Cortex 25(9):2951-9. 
[https://doi.org/10.1093/cercor/bhu091](https://doi.org/10.1093/cercor/bhu091) ### View full README **Information** This folder contains the ECoG data from 2 subjects performing a visual task used in the publications of Hermes et al., 2015, Cerebral Cortex “Stimulus Dependence of Gamma Oscillations in Human Visual Cortex” and Hermes et al., 2017, PLOS Biology “Neuronal synchrony and the relation between the blood-oxygen-level dependent response and the local field potential”. Contact: Dora Hermes ([dorahermes@gmail.com](mailto:dorahermes@gmail.com)) **Citing this dataset** If you use this data as a part of any publications, please use the following citation: [1] Hermes D, Miller KJ, Wandell BA, Winawer J (2015). Stimulus dependence of gamma oscillations in human visual cortex. Cerebral Cortex 25(9):2951-9. [https://doi.org/10.1093/cercor/bhu091](https://doi.org/10.1093/cercor/bhu091) [2] Hermes D, Nguyen M, Winawer J. (2017). Neuronal synchrony and the relation between the BOLD response and the local field potential. PLOS Biology 15(7). [https://doi.org/10.1371/journal.pbio.2001461](https://doi.org/10.1371/journal.pbio.2001461) This dataset was made available with the support of the Netherlands Organization for Scientific Research (www.nwo.nl) under award number 016.VENI.178.048 to Dora Hermes and the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH111417 to Natalia Petridou and Jonathan Winawer. The ECoG data collection was facilitated by the Stanford Human Intracranial Cognitive Electrophysiology Program (SHICEP). **License** This dataset is made available under the Public Domain Dedication and License v1.0, whose full text can be found at [http://www.opendatacommons.org/licenses/pddl/1.0/](http://www.opendatacommons.org/licenses/pddl/1.0/). **Task Description** Subjects were presented with images on a computer screen. The images spanned about 25x25 degrees of visual angle.
Subjects fixated on a dot in the center of the screen that alternated between red and green, changing colors at random times. Subject 1 pressed a button when the fixation dot changed color. Subject 2 fixated on the dot but did not make manual responses, because these responses were found to interfere with visual fixation. **Dataset and Stimuli** This data is organized according to the Brain Imaging Data Structure (BIDS) specification, a community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see [https://bids-specification.readthedocs.io/en/stable/](https://bids-specification.readthedocs.io/en/stable/) Each subject has their own folder (e.g., `sub-01`) which contains the raw ECoG data for that subject, as well as the metadata needed to understand the raw data and event timing. In addition, the `stimuli/` folder contains the .png files of the presented images. **Stimuli** Stimuli included high-contrast vertical gratings (0.16, 0.33, 0.65, or 1.3 duty cycles per degree square wave) and noise patterns (spectral power distributions of k/f^4, k/f^2, and k/f^0). **Raw data** Raw data are stored in the BrainVision data format.
This can be read into memory with the following tools: \* Python: The `pybv` package ([https://github.com/bids-standard/pybv](https://github.com/bids-standard/pybv)) \* Matlab: BrainVision Analyzer ([https://www.mathworks.com/products/connections/product_detail/brainvision-analyzer.html](https://www.mathworks.com/products/connections/product_detail/brainvision-analyzer.html)) ## Dataset Information | Dataset ID | `DS005953` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG_visual | | Author (year) | `Winawer2025` | | Canonical | — | | Importable as | `DS005953`, `Winawer2025` | | Year | 2025 | | Authors | Jonathan Winawer, Dora Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005953.v1.0.0](https://doi.org/10.18112/openneuro.ds005953.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005953) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) | [Source URL](https://openneuro.org/datasets/ds005953) | ### Copy-paste BibTeX ```bibtex @dataset{ds005953, title = {iEEG_visual}, author = {Jonathan Winawer and Dora Hermes}, doi = {10.18112/openneuro.ds005953.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005953.v1.0.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 3 - Tasks: 1 - Channels: 96 (2), 118 - Sampling rate (Hz): 1525.9 (2), 3051.76 - Duration (hours): 0.1946988794917 - Pathology: Surgery - Modality: Visual - Type: Perception - Size on disk: 577.3 MB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005953.v1.0.0 - Source: openneuro - OpenNeuro: [ds005953](https://openneuro.org/datasets/ds005953) - NeMAR: [ds005953](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) ## API Reference Use the `DS005953` class to access this dataset programmatically.
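The class's `query` parameter accepts MongoDB-style filters such as `{"$in": [...]}` (see the Notes below). As an illustration of the matching semantics only (this is not eegdash's actual implementation), a tiny pure-Python matcher:

```python
# Pure-Python sketch of MongoDB-style `$in` matching, to show what a
# query like {"subject": {"$in": ["01", "02"]}} selects. Hypothetical
# helper; eegdash evaluates queries server-side against its metadata.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"subject": "01"}.
            return False
    return True


records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```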
### *class* eegdash.dataset.DS005953(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_visual * **Study:** `ds005953` (OpenNeuro) * **Author (year):** `Winawer2025` * **Canonical:** — Also importable as: `DS005953`, `Winawer2025`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Surgery`. Subjects: 2; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005953](https://openneuro.org/datasets/ds005953) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005953](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) DOI: [https://doi.org/10.18112/openneuro.ds005953.v1.0.0](https://doi.org/10.18112/openneuro.ds005953.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005953 >>> dataset = DS005953(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005953) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS005960: eeg dataset, 41 subjects *General Info: inst-comp-eeg* Access recordings and metadata through EEGDash. **Citation:** Pena, P., Palenciano, A.F., González-García, C., Ruz, M. (2025). *General Info: inst-comp-eeg*. 
[10.18112/openneuro.ds005960.v1.0.0](https://doi.org/10.18112/openneuro.ds005960.v1.0.0) Modality: eeg Subjects: 41 Recordings: 41 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005960 dataset = DS005960(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005960(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005960( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005960, title = {General Info: inst-comp-eeg}, author = {Pena, P. and Palenciano, A.F. and González-García, C. and Ruz, M.}, doi = {10.18112/openneuro.ds005960.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005960.v1.0.0}, } ``` ## About This Dataset The experiment consisted of two tasks: the main instruction-following task and an additional localizer task. The data of each participant were recorded in one session. For the main instruction-following task, participants saw four sequential screens (each displayed for 200 ms, with an 800 ms inter-screen interval) that contained the full instruction. After a pre-target interval, they were presented with the target images (two images framed by a colored shape, displayed for 200 ms). They had to respond whether or not the instruction was fulfilled by the targets. The first two screens of the instruction indicated whether the participant had to pay attention to both images (integration) or to just one (selection), and which specific images were set to appear (animate or inanimate images per trial).
The third instruction referred to the relevant feature they had to pay attention to, either the color or the shape surrounding the image. The last instruction indicated the key to press if the instruction was fulfilled by the target images (either “A” or “L”). Each trial consisted of a novel combination of the instruction components. Additional catch trials were added to ensure that participants were maintaining all information. If any of the target images was different from the ones previously instructed, the participant had to indicate it by pressing both “A” and “L” simultaneously. The localizer task was a 1-back task. Participants saw one target image per trial, and they had to indicate with a keypress (“A” and “L”) whether the image was from the same subcategory as the image from the previous trial. Each block of the main instruction-following task consisted of 32 trials, with a total of 16 blocks. All the conditions were fully counterbalanced to ensure no statistical dependencies within the blocks. Each of the 8 localizer blocks consisted of 40 trials. To counterbalance the presentation of the blocks for the whole experiment session, the blocks of the main task were further divided according to the features (blocks of features 1 and blocks of features 2), and then the sequence of main task and localizer blocks was counterbalanced. ## Dataset Information | Dataset ID | `DS005960` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | General Info: inst-comp-eeg | | Author (year) | `Pena2025` | | Canonical | — | | Importable as | `DS005960`, `Pena2025` | | Year | 2025 | | Authors | Pena, P., Palenciano, A.F., González-García, C., Ruz, M.
| | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005960.v1.0.0](https://doi.org/10.18112/openneuro.ds005960.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005960) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) | [Source URL](https://openneuro.org/datasets/ds005960) | ### Copy-paste BibTeX ```bibtex @dataset{ds005960, title = {General Info: inst-comp-eeg}, author = {Pena, P. and Palenciano, A.F. and González-García, C. and Ruz, M.}, doi = {10.18112/openneuro.ds005960.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005960.v1.0.0}, } ``` ## Technical Details - Subjects: 41 - Recordings: 41 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 66.90505555555556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 57.7 GB - File count: 41 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005960.v1.0.0 - Source: openneuro - OpenNeuro: [ds005960](https://openneuro.org/datasets/ds005960) - NeMAR: [ds005960](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) ## API Reference Use the `DS005960` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005960(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) General Info: inst-comp-eeg * **Study:** `ds005960` (OpenNeuro) * **Author (year):** `Pena2025` * **Canonical:** — Also importable as: `DS005960`, `Pena2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005960](https://openneuro.org/datasets/ds005960) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005960](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) DOI: [https://doi.org/10.18112/openneuro.ds005960.v1.0.0](https://doi.org/10.18112/openneuro.ds005960.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005960 >>> dataset = DS005960(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005960) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS005963: fnirs dataset, 10 subjects *FRESH Motor Dataset* Access recordings and metadata through EEGDash. **Citation:** Rickson C. Mesquita (2025). *FRESH Motor Dataset*. [10.18112/openneuro.ds005963.v1.0.0](https://doi.org/10.18112/openneuro.ds005963.v1.0.0) Modality: fnirs Subjects: 10 Recordings: 40 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005963 dataset = DS005963(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005963(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005963( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005963, title = {FRESH Motor Dataset}, author = {Rickson C. 
Mesquita}, doi = {10.18112/openneuro.ds005963.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005963.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) In preparation. ## Dataset Information | Dataset ID | `DS005963` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FRESH Motor Dataset | | Author (year) | `Mesquita2025` | | Canonical | `Mesquita2019` | | Importable as | `DS005963`, `Mesquita2025`, `Mesquita2019` | | Year | 2025 | | Authors | Rickson C. Mesquita | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005963.v1.0.0](https://doi.org/10.18112/openneuro.ds005963.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005963) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) | [Source URL](https://openneuro.org/datasets/ds005963) | ### Copy-paste BibTeX ```bibtex @dataset{ds005963, title = {FRESH Motor Dataset}, author = {Rickson C.
Mesquita}, doi = {10.18112/openneuro.ds005963.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005963.v1.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 40 - Tasks: 1 - Channels: 136 - Sampling rate (Hz): 8.928571428571429 - Duration (hours): 6.489715555555555 - Pathology: Not specified - Modality: — - Type: Motor - Size on disk: 233.4 MB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005963.v1.0.0 - Source: openneuro - OpenNeuro: [ds005963](https://openneuro.org/datasets/ds005963) - NeMAR: [ds005963](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) ## API Reference Use the `DS005963` class to access this dataset programmatically. ### *class* eegdash.dataset.DS005963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Motor Dataset * **Study:** `ds005963` (OpenNeuro) * **Author (year):** `Mesquita2025` * **Canonical:** `Mesquita2019` Also importable as: `DS005963`, `Mesquita2025`, `Mesquita2019`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 10; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005963](https://openneuro.org/datasets/ds005963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005963](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) DOI: [https://doi.org/10.18112/openneuro.ds005963.v1.0.0](https://doi.org/10.18112/openneuro.ds005963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005963 >>> dataset = DS005963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005963) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS005964: fnirs dataset, 17 subjects *FRESH Audio Dataset* Access recordings and metadata through EEGDash. **Citation:** Robert Luke, Maureen Shader, David McAlpine (2025). *FRESH Audio Dataset*. 
[10.18112/openneuro.ds005964.v1.0.0](https://doi.org/10.18112/openneuro.ds005964.v1.0.0) Modality: fnirs Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS005964 dataset = DS005964(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS005964(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS005964( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds005964, title = {FRESH Audio Dataset}, author = {Robert Luke and Maureen Shader and David McAlpine}, doi = {10.18112/openneuro.ds005964.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005964.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896.
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) In preparation. ## Dataset Information | Dataset ID | `DS005964` | |----------------|-----------------| | Title | FRESH Audio Dataset | | Author (year) | `Luke2025` | | Canonical | `Luke2019` | | Importable as | `DS005964`, `Luke2025`, `Luke2019` | | Year | 2025 | | Authors | Robert Luke, Maureen Shader, David McAlpine | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds005964.v1.0.0](https://doi.org/10.18112/openneuro.ds005964.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds005964) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) | [Source URL](https://openneuro.org/datasets/ds005964) | ### Copy-paste BibTeX ```bibtex @dataset{ds005964, title = {FRESH Audio Dataset}, author = {Robert Luke and Maureen Shader and David McAlpine}, doi = {10.18112/openneuro.ds005964.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds005964.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 5.208333333333333 - Duration (hours): 6.14368 - Pathology: Not specified - Modality: Auditory - Type: Perception - Size on disk: 62.4 MB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds005964.v1.0.0 - Source: openneuro - OpenNeuro: [ds005964](https://openneuro.org/datasets/ds005964) - NeMAR: [ds005964](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) ## API Reference Use the `DS005964` class to access this dataset programmatically.
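The "Importable as" row above lists three names for the same class. As an illustration only (this is not EEGDash's actual implementation), plain module-level aliasing is enough to make one dataset class importable under several names:

```python
# Illustrative sketch: one class, several import names, as described by
# the "Importable as" row. The class here is a stand-in, not the real one.
class DS005964:
    """Stand-in for the real EEGDash dataset class."""
    dataset_id = "ds005964"

# Author-year and canonical aliases simply point at the same class object.
Luke2025 = DS005964
Luke2019 = DS005964

assert Luke2025 is DS005964 and Luke2019 is DS005964
print(Luke2025().dataset_id)  # ds005964
```

Whichever alias is imported, the resulting objects are instances of the same class, so behavior and caching are identical.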
### *class* eegdash.dataset.DS005964(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Audio Dataset * **Study:** `ds005964` (OpenNeuro) * **Author (year):** `Luke2025` * **Canonical:** `Luke2019` Also importable as: `DS005964`, `Luke2025`, `Luke2019`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
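The Notes above state that a user-supplied `query` is AND-ed with the fixed dataset filter and must not contain the key `dataset`. A minimal sketch of that merging rule, assuming a plain dictionary merge (the real `EEGDashDataset` logic may differ):

```python
# Illustrative sketch (not EEGDash internals): AND-ing a user-supplied
# MongoDB-style query with the fixed dataset selector.
def merge_query(dataset_id, user_query=None):
    """Combine the dataset selector with optional user filters."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        # The class reserves this key for itself.
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query)
    return merged

q = merge_query("ds005964", {"subject": {"$in": ["01", "02"]}})
print(q)  # {'dataset': 'ds005964', 'subject': {'$in': ['01', '02']}}
```

Because both conditions live in one flat document, the backend matches records that satisfy the dataset selector and every user filter simultaneously.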
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005964](https://openneuro.org/datasets/ds005964) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005964](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) DOI: [https://doi.org/10.18112/openneuro.ds005964.v1.0.0](https://doi.org/10.18112/openneuro.ds005964.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005964 >>> dataset = DS005964(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds005964) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006012: meg dataset, 21 subjects *A geometric shape regularity effect in the human brain: MEG dataset* Access recordings and metadata through EEGDash. **Citation:** Mathias Sablé-Meyer, Lucas Benjamin, Cassandra Potier Watkins, Chenxi He, Maxence Pajot, Théo Morfoisse, Fosca Al Roumi, Stanislas Dehaene (2025). *A geometric shape regularity effect in the human brain: MEG dataset*. 
[10.18112/openneuro.ds006012.v1.0.1](https://doi.org/10.18112/openneuro.ds006012.v1.0.1) Modality: meg Subjects: 21 Recordings: 193 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006012 dataset = DS006012(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006012(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006012( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006012, title = {A geometric shape regularity effect in the human brain: MEG dataset}, author = {Mathias Sablé-Meyer and Lucas Benjamin and Cassandra Potier Watkins and Chenxi He and Maxence Pajot and Théo Morfoisse and Fosca Al Roumi and Stanislas Dehaene}, doi = {10.18112/openneuro.ds006012.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006012.v1.0.1}, } ``` ## About This Dataset **A geometric shape regularity effect in the human brain: MEG dataset** Authors: Mathias Sablé-Meyer\*, Lucas Benjamin, Cassandra Potier Watkins, Chenxi He, Maxence Pajot, Théo Morfoisse, Fosca Al Roumi, Stanislas Dehaene (\*Corresponding author: [mathias.sable-meyer@ucl.ac.uk](mailto:mathias.sable-meyer@ucl.ac.uk)) **Abstract** The perception and production of regular geometric shapes is a characteristic trait of human cultures since prehistory, whose
neural mechanisms are unknown. Behavioral studies suggest that humans are attuned to discrete regularities such as symmetries and parallelism, and rely on their combinations to encode regular geometric shapes in a compressed form. To identify the relevant brain systems and their dynamics, we collected functional MRI and magnetoencephalography data in both adults and six-year-olds during the perception of simple shapes such as hexagons, triangles and quadrilaterals. The results revealed that geometric shapes, relative to other visual categories, induce a hypoactivation of ventral visual areas and an overactivation of the intraparietal and inferior temporal regions also involved in mathematical processing, whose activation is modulated by geometric regularity. While convolutional neural networks captured the early visual activity evoked by geometric shapes, they failed to account for subsequent dorsal parietal and prefrontal signals, which could only be captured by discrete geometric features or by more advanced transformer models of vision. We propose that the perception of abstract geometric regularities engages an additional symbolic mode of visual perception. **Notes about this dataset** We separately share the fMRI dataset at [https://openneuro.org/datasets/ds006010](https://openneuro.org/datasets/ds006010). Below are some notes about the MEG dataset of N=20 participants: - The code for the analyses associated with [https://doi.org/10.1101/2024.03.13.584141](https://doi.org/10.1101/2024.03.13.584141) is provided at [https://github.com/mathias-sm/AGeometricShapeRegularityEffectHumanBrain](https://github.com/mathias-sm/AGeometricShapeRegularityEffectHumanBrain). However, these analyses were performed on pre-processed data *without* the defacing step.
I am not publishing this raw data, but should there be discrepancies or problems coming from the defacing, I have a copy of the following information, which I may ask for permission to share in specific cases: 1. the original data; 2. the seed used for the anonymization procedure; 3. the shuffling information. - Anonymization (including defacing of the `anat` folder) was performed using the following command: `python -c 'import mne_bids; mne_bids.anonymize_dataset("", "", random_state=, daysback=)'`. This shuffled the participant order, changed the dates, defaced the anatomy, and stripped gender information from the dataset. - The data were pre-processed with the configuration file provided at [https://github.com/mathias-sm/AGeometricShapeRegularityEffectHumanBrain/blob/main/MEG/POGS_MEG_config.py](https://github.com/mathias-sm/AGeometricShapeRegularityEffectHumanBrain/blob/main/MEG/POGS_MEG_config.py) for `mne-bids-pipeline`, using the development version at the time, `bce60a79241731bdd03fccffa6cf315a35b33ab2`, on [https://github.com/mne-tools/mne-bids-pipeline/](https://github.com/mne-tools/mne-bids-pipeline/) ## Dataset Information | Dataset ID | `DS006012` | |----------------|-----------------| | Title | A geometric shape regularity effect in the human brain: MEG dataset | | Author (year) | `SableMeyer2025` | | Canonical | — | | Importable as | `DS006012`, `SableMeyer2025` | | Year | 2025 | | Authors | Mathias Sablé-Meyer, Lucas Benjamin, Cassandra Potier Watkins, Chenxi He, Maxence Pajot, Théo Morfoisse, Fosca Al Roumi, Stanislas Dehaene | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006012.v1.0.1](https://doi.org/10.18112/openneuro.ds006012.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006012) |
[NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) | [Source URL](https://openneuro.org/datasets/ds006012) | ### Copy-paste BibTeX ```bibtex @dataset{ds006012, title = {A geometric shape regularity effect in the human brain: MEG dataset}, author = {Mathias Sablé-Meyer and Lucas Benjamin and Cassandra Potier Watkins and Chenxi He and Maxence Pajot and Théo Morfoisse and Fosca Al Roumi and Stanislas Dehaene}, doi = {10.18112/openneuro.ds006012.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006012.v1.0.1}, } ``` ## Technical Details - Subjects: 21 - Recordings: 193 - Tasks: 2 - Channels: 336 (172), 333 - Sampling rate (Hz): 1000.0 - Duration (hours): 15.586618611111112 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 71.1 GB - File count: 193 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006012.v1.0.1 - Source: openneuro - OpenNeuro: [ds006012](https://openneuro.org/datasets/ds006012) - NeMAR: [ds006012](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) ## API Reference Use the `DS006012` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A geometric shape regularity effect in the human brain: MEG dataset * **Study:** `ds006012` (OpenNeuro) * **Author (year):** `SableMeyer2025` * **Canonical:** — Also importable as: `DS006012`, `SableMeyer2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 193; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006012](https://openneuro.org/datasets/ds006012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006012](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) DOI: [https://doi.org/10.18112/openneuro.ds006012.v1.0.1](https://doi.org/10.18112/openneuro.ds006012.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006012 >>> dataset = DS006012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006012) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006018: eeg dataset, 127 subjects *Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset* Access recordings and metadata through EEGDash. **Citation:** Elif Isbell, Amanda N. Peters, Dylan M. Richardson, Nancy E. R. De León (2025). *Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset*. [10.18112/openneuro.ds006018.v1.2.2](https://doi.org/10.18112/openneuro.ds006018.v1.2.2) Modality: eeg Subjects: 127 Recordings: 357 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006018 dataset = DS006018(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006018(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006018( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006018, title = {Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset}, author = {Elif Isbell and Amanda N. Peters and Dylan M. Richardson and Nancy E. R. 
De León}, doi = {10.18112/openneuro.ds006018.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds006018.v1.2.2}, } ``` ## About This Dataset **The Cognitive Electrophysiology in Socioeconomic Context in Adulthood Dataset** **Data Description** This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics.
The EEG data were recorded with tasks directly acquired from the Event-Related Potentials Compendium of Open Resources and Experiments ERP CORE (Kappenman et al., 2021), or adapted from these tasks (Isbell et al., 2024). These tasks were optimized to capture neural activity manifest in perception, cognition, and action, in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist, consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. The detailed description of the dataset is accepted for publication in Scientific Data, with the title: “Cognitive Electrophysiology in Socioeconomic Context in Adulthood.” **EEG Recording** EEG data were recorded using the Brain Products actiCHamp Plus system, in combination with BrainVision Recorder (Version 1.25.0101). We used a 32-channel actiCAP slim active electrode system, with electrodes mounted on elastic snap caps (Brain Products GmbH, Gilching, Germany). The ground electrode was placed at FPz. From the electrode bundle, we repurposed 2 electrodes by placing them on the mastoid bones behind the left and right ears to be used for re-referencing after data collection. We also repurposed 3 additional electrodes to record electrooculogram (EOG). To capture eye artifacts, we placed the horizontal EOG (HEOG) electrodes lateral to the external canthus of each eye. We also placed one vertical EOG (VEOG) electrode below the right eye. The remaining 27 electrodes were used as scalp electrodes, which were mounted per the international 10/20 system. EEG data were recorded at a sampling rate of 500 Hz and referenced to the Cz electrode. StimTrak was used to assess stimulus presentation delays for both the monitor and headphones.
The results indicated that both the visual and auditory stimuli had a delay of approximately 20 ms. Therefore, users should shift the event codes by 20 ms when conducting stimulus-locked analyses. **Notes** Before the data were publicly shared, all identifiable information was removed, including date of birth, date of session, race/ethnicity, zip code, occupation (self and parent), and names of the languages the participants reported speaking and understanding fluently. Date of birth and date of session were used to compute age in years, which is included in the dataset. Furthermore, several variables were recoded based on re-identification risk assessments. Users who would like to establish secure access to components of the dataset we could not publicly share due to re-identification risks should contact the corresponding researcher as described below. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded and the raw EEG files were renamed to make the dataset BIDS compatible. The ERP CORE experimental tasks can be found on OSF, under Experiment Control Files: [https://osf.io/thsqg/](https://osf.io/thsqg/) Examples of EEGLAB/ERPLAB data processing scripts that can be used with the EEG data shared here can be found on OSF: osf.io/thsqg and osf.io/43H75 **Contact** If you have any questions, comments, or requests, please contact Elif Isbell: [eisbell@ucmerced.edu](mailto:eisbell@ucmerced.edu) **Copyright and License** This dataset is licensed under CC0. **References** Isbell, E., Peters, A. N., Richardson, D. M., & Rodas De León, N. E. (2025). Cognitive electrophysiology in socioeconomic context in adulthood. Scientific Data, 12(1), 1–9.
[https://doi.org/10.1038/s41597-025-05209-z](https://doi.org/10.1038/s41597-025-05209-z) Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PloS One, 19(8), e0307406. Isbell, E., De León, N. E. R. & Richardson, D. M. Childhood family socioeconomic status is linked to adult brain electrophysiology - accompanying analytic data and code. OSF [https://doi.org/10.17605/osf.io/43H75](https://doi.org/10.17605/osf.io/43H75) (2024). Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Kappenman, E. S., Farrens, J., Zhang, W., Stewart, A. X. & Luck, S. J. ERP CORE. [https://osf.io/thsqg](https://osf.io/thsqg) (2020). Kappenman, E., Farrens, J., Zhang, W., Stewart, A. & Luck, S. Experiment control files. [https://osf.io/47uf2](https://osf.io/47uf2) (2020). ## Dataset Information | Dataset ID | `DS006018` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset | | Author (year) | `Isbell2025_Adulthood` | | Canonical | — | | Importable as | `DS006018`, `Isbell2025_Adulthood` | | Year | 2025 | | Authors | Elif Isbell, Amanda N. Peters, Dylan M. Richardson, Nancy E. R. 
De León | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006018.v1.2.2](https://doi.org/10.18112/openneuro.ds006018.v1.2.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006018) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) | [Source URL](https://openneuro.org/datasets/ds006018) | ### Copy-paste BibTeX ```bibtex @dataset{ds006018, title = {Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset}, author = {Elif Isbell and Amanda N. Peters and Dylan M. Richardson and Nancy E. R. De León}, doi = {10.18112/openneuro.ds006018.v1.2.2}, url = {https://doi.org/10.18112/openneuro.ds006018.v1.2.2}, } ``` ## Technical Details - Subjects: 127 - Recordings: 357 - Tasks: 4 - Channels: 30 - Sampling rate (Hz): 500.0 - Duration (hours): 50.88856666666666 - Pathology: Healthy - Modality: Multisensory - Type: Other - Size on disk: 10.6 GB - File count: 357 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006018.v1.2.2 - Source: openneuro - OpenNeuro: [ds006018](https://openneuro.org/datasets/ds006018) - NeMAR: [ds006018](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) ## API Reference Use the `DS006018` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset * **Study:** `ds006018` (OpenNeuro) * **Author (year):** `Isbell2025_Adulthood` * **Canonical:** — Also importable as: `DS006018`, `Isbell2025_Adulthood`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006018](https://openneuro.org/datasets/ds006018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006018](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) DOI: [https://doi.org/10.18112/openneuro.ds006018.v1.2.2](https://doi.org/10.18112/openneuro.ds006018.v1.2.2) ### Examples ```pycon >>> from eegdash.dataset import DS006018 >>> dataset = DS006018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006018) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006033: eeg dataset, 3 subjects *Synchronous EEG and fMRI dataset on inner speech* Access recordings and metadata through EEGDash. **Citation:** Foteini Simistira Liwicki (2025). *Synchronous EEG and fMRI dataset on inner speech*. [10.18112/openneuro.ds006033.v1.0.1](https://doi.org/10.18112/openneuro.ds006033.v1.0.1) Modality: eeg Subjects: 3 Recordings: 5 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006033 dataset = DS006033(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006033(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006033( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006033, title = {Synchronous EEG and fMRI dataset on inner speech}, author = {Foteini Simistira Liwicki}, doi = {10.18112/openneuro.ds006033.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006033.v1.0.1}, } ``` ## About This Dataset **Inner Speech EEG-fMRI Dataset** **Description** This dataset contains simultaneous EEG-fMRI recordings for inner speech experiments. Data were collected using a 3T MRI scanner and 64-channel BrainProducts EEG system. The EEG data have undergone preprocessing, including pulse artifact removal, using the BrainVision Analyzer software. No further data transformations have been applied to ensure the dataset remains BIDS-compliant as “raw”. **Subjects** - Number of subjects: 3 - Sessions per subject: 2 - Tasks: Inner speech **Experimental Protocol** - Each trial includes a fixation period (2s), stimulus display (2s), and rest (12s). - 8 words were presented in random order, each repeated 40 times. - EEG sampled at 5000 Hz, fMRI acquired with TR=2s. **Data Organization** - Functional MRI data: `sub-xx/ses-xx/func/` - EEG data: `sub-xx/ses-xx/eeg/` - Event markers: `events.tsv` - BIDS-compatible metadata included in JSON sidecars. 
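The trial timing above implies a regular event grid. The sketch below (pure Python, illustrative only — it assumes back-to-back trials starting at t = 0, whereas the BIDS `events.tsv` sidecars are the authoritative source) computes where the 2 s stimulus windows would fall at the 5000 Hz sampling rate.

```python
# Illustrative sketch: expected stimulus-onset times and sample indices
# for the protocol above, ASSUMING back-to-back trials starting at t = 0.
SFREQ = 5000.0                            # Hz, per the Technical Details
FIXATION, STIMULUS, REST = 2.0, 2.0, 12.0  # seconds, per the protocol
TRIAL_LEN = FIXATION + STIMULUS + REST     # 16 s per trial
N_TRIALS = 8 * 40                          # 8 words x 40 repetitions = 320

def stimulus_onsets(n_trials=N_TRIALS, sfreq=SFREQ):
    """Stimulus onset (seconds, sample index) per trial on the assumed grid."""
    onsets = []
    for i in range(n_trials):
        t = i * TRIAL_LEN + FIXATION       # stimulus follows the 2 s fixation
        onsets.append((t, int(round(t * sfreq))))
    return onsets

onsets = stimulus_onsets()
print(len(onsets), onsets[0], onsets[-1][0])  # 320 (2.0, 10000) 5106.0
```

With real data, the equivalent information comes from the `events.tsv` files, e.g. via `mne.events_from_annotations` on a loaded `raw` object.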
**Contact** For inquiries, contact: Foteini Simistira Liwicki ([Foteini.liwicki@ltu.se](mailto:Foteini.liwicki@ltu.se)) ## Dataset Information | Dataset ID | `DS006033` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Synchronous EEG and fMRI dataset on inner speech | | Author (year) | `Liwicki2025` | | Canonical | — | | Importable as | `DS006033`, `Liwicki2025` | | Year | 2025 | | Authors | Foteini Simistira Liwicki | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006033.v1.0.1](https://doi.org/10.18112/openneuro.ds006033.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006033) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) | [Source URL](https://openneuro.org/datasets/ds006033) | ### Copy-paste BibTeX ```bibtex @dataset{ds006033, title = {Synchronous EEG and fMRI dataset on inner speech}, author = {Foteini Simistira Liwicki}, doi = {10.18112/openneuro.ds006033.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006033.v1.0.1}, } ``` ## Technical Details - Subjects: 3 - Recordings: 5 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 5000.0 - Duration (hours): 2.1883706111111114 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 15.3 GB - File count: 5 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006033.v1.0.1 - Source: openneuro - OpenNeuro: [ds006033](https://openneuro.org/datasets/ds006033) - NeMAR: [ds006033](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) ## API Reference Use the `DS006033` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Synchronous EEG and fMRI dataset on inner speech * **Study:** `ds006033` (OpenNeuro) * **Author (year):** `Liwicki2025` * **Canonical:** — Also importable as: `DS006033`, `Liwicki2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 3; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006033](https://openneuro.org/datasets/ds006033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006033](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) DOI: [https://doi.org/10.18112/openneuro.ds006033.v1.0.1](https://doi.org/10.18112/openneuro.ds006033.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006033 >>> dataset = DS006033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006033) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006035: meg dataset, 5 subjects *somatomotor* Access recordings and metadata through EEGDash. **Citation:** Fa-Hsuan Lin, Deirdre Foxe von Pechmann, Kaisu Lankinen, Seppo Ahlfors, Bruce Rosen, Jyrki Ahveninen, Matti Hämäläinen, Tommi Raij (2025). *somatomotor*. 
[10.18112/openneuro.ds006035.v1.0.0](https://doi.org/10.18112/openneuro.ds006035.v1.0.0) Modality: meg Subjects: 5 Recordings: 15 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006035 dataset = DS006035(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006035(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006035( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006035, title = {somatomotor}, author = {Fa-Hsuan Lin and Deirdre Foxe von Pechmann and Kaisu Lankinen and Seppo Ahlfors and Bruce Rosen and Jyrki Ahveninen and Matti Hämäläinen and Tommi Raij}, doi = {10.18112/openneuro.ds006035.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006035.v1.0.0}, } ``` ## About This Dataset Accession #: ds006035 Description: Multi-subject, multi-modal (sMRI+MEG+EEG) neuroimaging dataset for median nerve stimulation and motor responses. This dataset contains MRI (T1w), MEG, and EEG data for median nerve electrical stimuli delivered at the right wrist. The task was to respond by lifting the left-hand index finger as quickly as possible after each right median nerve stimulus. **meg/** Three anatomical fiducials were digitized for aligning the MEG with the MRI: 1. nasion (lowest depression between the eyes); 2. left pre-auricular point; 3. right pre-auricular point.
The following triggers are included in the .fif files and are also used in the “value” column of the MEG events .tsv files:

| Trigger | Label | Simplified Label |
|---------|-------|------------------|
| 32 | somatosensory stimulus | somatosensory |
| 16 | finger lift response | finger |

If you wish to publish any of these data, please acknowledge the authors. **BIDS References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography.
Scientific Data, 5, 180110.https://doi.org/10.1038/sdata.2018.110 ## Dataset Information | Dataset ID | `DS006035` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | somatomotor | | Author (year) | `Lin2025` | | Canonical | `Lin2019` | | Importable as | `DS006035`, `Lin2025`, `Lin2019` | | Year | 2025 | | Authors | Fa-Hsuan Lin, Deirdre Foxe von Pechmann, Kaisu Lankinen, Seppo Ahlfors, Bruce Rosen, Jyrki Ahveninen, Matti Hämäläinen, Tommi Raij | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006035.v1.0.0](https://doi.org/10.18112/openneuro.ds006035.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006035) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) | [Source URL](https://openneuro.org/datasets/ds006035) | ### Copy-paste BibTeX ```bibtex @dataset{ds006035, title = {somatomotor}, author = {Fa-Hsuan Lin and Deirdre Foxe von Pechmann and Kaisu Lankinen and Seppo Ahlfors and Bruce Rosen and Jyrki Ahveninen and Matti Hämäläinen and Tommi Raij}, doi = {10.18112/openneuro.ds006035.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006035.v1.0.0}, } ``` ## Technical Details - Subjects: 5 - Recordings: 15 - Tasks: 1 - Channels: 388 (12), 387 (3) - Sampling rate (Hz): 1004.01611328125 - Duration (hours): 1.1343288512795158 - Pathology: Healthy - Modality: Tactile - Type: Motor - Size on disk: 3.1 GB - File count: 15 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006035.v1.0.0 - Source: openneuro - OpenNeuro: [ds006035](https://openneuro.org/datasets/ds006035) - NeMAR: [ds006035](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) ## API Reference Use the `DS006035` class to access this dataset programmatically. 
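The trigger codes described earlier map to event labels. The sketch below (pure Python, fabricated demo rows) shows how such a mapping could be applied to event rows in MNE's `(sample, previous, code)` layout; with real data the rows would typically come from `mne.find_events(raw)` on the `.fif` recordings.

```python
# Trigger mapping from the dataset description above.
TRIGGERS = {32: "somatosensory", 16: "finger"}

def split_by_trigger(events):
    """Group event sample indices by trigger code, keyed by the labels above."""
    out = {label: [] for label in TRIGGERS.values()}
    for sample, _prev, code in events:
        if code in TRIGGERS:
            out[TRIGGERS[code]].append(sample)
    return out

# Demo rows in MNE's (sample, previous, code) layout -- fabricated values.
demo = [(1000, 0, 32), (1350, 0, 16), (3000, 0, 32), (3310, 0, 16)]
print(split_by_trigger(demo))
# {'somatosensory': [1000, 3000], 'finger': [1350, 3310]}
```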
### *class* eegdash.dataset.DS006035(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) somatomotor * **Study:** `ds006035` (OpenNeuro) * **Author (year):** `Lin2025` * **Canonical:** `Lin2019` Also importable as: `DS006035`, `Lin2025`, `Lin2019`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006035](https://openneuro.org/datasets/ds006035) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006035](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) DOI: [https://doi.org/10.18112/openneuro.ds006035.v1.0.0](https://doi.org/10.18112/openneuro.ds006035.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006035 >>> dataset = DS006035(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006035) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006036: eeg dataset, 88 subjects *A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects* Access recordings and metadata through EEGDash. **Citation:** Aimilia Ntetska, Andreas Miltiadous, Alexandros T. Tzallas, Katerina D. Tzimourta, Theodora Afrantou, Panagiotis Ioannidis, Dimitrios G. Tsalikakis, Nikolaos Grigoriadis, Pantelis Angelidis, Konstantinos Sakkas, Emmanouil D. Oikonomou, Nikolaos Giannakeas, Markos G. Tsipouras (2025). 
*A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects*. [10.18112/openneuro.ds006036.v1.0.6](https://doi.org/10.18112/openneuro.ds006036.v1.0.6) Modality: eeg Subjects: 88 Recordings: 88 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006036 dataset = DS006036(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006036(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006036( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006036, title = {A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer's disease, Frontotemporal dementia and Healthy subjects}, author = {Aimilia Ntetska and Andreas Miltiadous and Alexandros T. Tzallas and Katerina D. Tzimourta and Theodora Afrantou and Panagiotis Ioannidis and Dimitrios G. Tsalikakis and Nikolaos Grigoriadis and Pantelis Angelidis and Konstantinos Sakkas and Emmanouil D. Oikonomou and Nikolaos Giannakeas and Markos G. Tsipouras}, doi = {10.18112/openneuro.ds006036.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds006036.v1.0.6}, } ``` ## About This Dataset This dataset provides complementary material to the previously published dataset named “A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects” with doi:10.18112/openneuro.ds004504.v1.0.8. 
It consists of eyes-open EEG recordings in multiple photic stimulation settings, according to the clinical protocol of the 2nd Department of Neurology, AHEPA University of Thessaloniki, Greece. The participant numbers match the respective participant numbers of the aforementioned dataset. In the clinical protocol, the first dataset's recordings came first, followed by the recordings of this dataset. The dataset is designed to complement a previously published dataset in which the same cohort underwent EEG recordings with their eyes closed. During the recordings, participants were seated with their eyes open while being exposed to photic stimulation. The stimulation was administered at incremental frequencies, beginning at 5 Hz, progressing to 10 Hz, 15 Hz, and in some cases, extending up to 30 Hz, with increments of 5 Hz at each level. This study compared cognitive function in 36 individuals with Alzheimer’s disease (AD), 23 with Frontotemporal Dementia (FTD), and 29 healthy controls (CN). Cognitive function was measured using the Mini-Mental State Examination (MMSE), where lower scores indicate greater cognitive impairment. The AD group had an average MMSE score of 17.75 (standard deviation of 4.5), the FTD group averaged 22.17 (standard deviation of 8.22), and the CN group scored 30. The average age was 66.4 (standard deviation of 7.9) for the AD group, 63.6 (standard deviation of 8.2) for the FTD group, and 67.9 (standard deviation of 5.4) for the CN group. The median disease duration was 25 months, with an interquartile range of 24 to 28.5 months. Notably, the AD group had no reported dementia-related comorbidities. Recordings: Recordings were acquired from the 2nd Department of Neurology of AHEPA General Hospital of Thessaloniki by an experienced team of neurologists.
For recording, a Nihon Kohden EEG 2100 clinical device was used, with 19 scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2) according to the 10-20 international system and 2 additional electrodes (A1 and A2) placed on the mastoids for impedance check, according to the manual of the device. Each recording was performed according to the clinical protocol, with participants in a sitting position with their eyes closed. Before the initialization of each recording, the skin impedance value was ensured to be below 5 kΩ. The sampling rate was 500 Hz with 10 µV/mm resolution. The recording montages were anterior-posterior bipolar and a referential montage using Cz as the common reference. The referential montage was included in this dataset. The recordings were received under the following amplifier parameters: sensitivity: 10 µV/mm, time constant: 0.3 s, and high-frequency filter at 70 Hz. Each recording lasted approximately 4.86 minutes for the AD group (min = 1.30 minutes, max = 8.77 minutes), 4.42 minutes for the FTD group (min = 1.25 minutes, max = 10.05 minutes) and 6.43 minutes for the CN group (min = 3.17 minutes, max = 9.17 minutes). In total, 174.94 minutes of AD, 101.56 minutes of FTD and 186.50 minutes of CN recordings were collected and are included in the dataset. Preprocessing: The EEG recordings were exported in .eeg format and converted to the BIDS-accepted .set format for inclusion in the dataset. Automatic annotations of the Nihon Kohden EEG device marking artifacts (muscle activity, blinking, swallowing) have not been included, for language compatibility purposes (if this is an issue, please use the preprocessed dataset in the folder: derivatives). The unprocessed EEG recordings are included in folders named sub-0XX. Folders named sub-0XX in the subfolder derivatives contain the preprocessed and denoised EEG recordings. The preprocessing pipeline of the EEG signals is as follows.
First, a Butterworth band-pass filter (0.5–45 Hz) was applied and the signals were re-referenced to A1–A2. Then, the Artifact Subspace Reconstruction (ASR) routine, an EEG artifact correction method included in the EEGLAB MATLAB software, was applied to the signals, removing bad data periods that exceeded the maximum acceptable 0.5-second-window standard deviation of 15, which is considered a conservative threshold. Next, the Independent Component Analysis (ICA) method (RunICA algorithm) was performed, transforming the 19 EEG signals into 19 ICA components. ICA components that were classified as “eye artifacts” or “jaw artifacts” by the automatic classification routine “ICLabel” in the EEGLAB platform were automatically rejected. It should be noted that, even though the recording was performed in a resting-state, eyes-closed condition, eye-movement artifacts were still found in some EEG recordings. ## Dataset Information | Dataset ID | `DS006036` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects | | Author (year) | `Ntetska2025` | | Canonical | — | | Importable as | `DS006036`, `Ntetska2025` | | Year | 2025 | | Authors | Aimilia Ntetska, Andreas Miltiadous, Alexandros T. Tzallas, Katerina D. Tzimourta, Theodora Afrantou, Panagiotis Ioannidis, Dimitrios G. Tsalikakis, Nikolaos Grigoriadis, Pantelis Angelidis, Konstantinos Sakkas, Emmanouil D. Oikonomou, Nikolaos Giannakeas, Markos G.
Tsipouras | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006036.v1.0.6](https://doi.org/10.18112/openneuro.ds006036.v1.0.6) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006036) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) | [Source URL](https://openneuro.org/datasets/ds006036) | ### Copy-paste BibTeX ```bibtex @dataset{ds006036, title = {A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer's disease, Frontotemporal dementia and Healthy subjects}, author = {Aimilia Ntetska and Andreas Miltiadous and Alexandros T. Tzallas and Katerina D. Tzimourta and Theodora Afrantou and Panagiotis Ioannidis and Dimitrios G. Tsalikakis and Nikolaos Grigoriadis and Pantelis Angelidis and Konstantinos Sakkas and Emmanouil D. Oikonomou and Nikolaos Giannakeas and Markos G. Tsipouras}, doi = {10.18112/openneuro.ds006036.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds006036.v1.0.6}, } ``` ## Technical Details - Subjects: 88 - Recordings: 88 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 500.0 - Duration (hours): 7.716666666666667 - Pathology: Dementia - Modality: Visual - Type: Clinical/Intervention - Size on disk: 1.0 GB - File count: 88 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006036.v1.0.6 - Source: openneuro - OpenNeuro: [ds006036](https://openneuro.org/datasets/ds006036) - NeMAR: [ds006036](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) ## API Reference Use the `DS006036` class to access this dataset programmatically. 
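The A1–A2 re-referencing step in the preprocessing pipeline above subtracts the mastoid average from every channel at each time point. The sketch below (pure Python, hypothetical channel names and values) illustrates the arithmetic; in MNE-Python the equivalent is `raw.set_eeg_reference(["A1", "A2"])`.

```python
# Illustrative sketch of linked-mastoid (A1-A2) re-referencing.
# Channel names and data values are hypothetical.
def rereference_linked_mastoids(samples, ch_names, refs=("A1", "A2")):
    """Subtract the mean of the mastoid channels from every channel."""
    i1, i2 = (ch_names.index(r) for r in refs)
    out = []
    for row in samples:                       # one row per time sample
        ref = (row[i1] + row[i2]) / 2.0       # mastoid average
        out.append([v - ref for v in row])
    return out

names = ["Fp1", "Cz", "A1", "A2"]
data = [[10.0, 5.0, 2.0, 4.0]]                # one time sample, microvolts
print(rereference_linked_mastoids(data, names))
# [[7.0, 2.0, -1.0, 1.0]]
```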
### *class* eegdash.dataset.DS006036(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects * **Study:** `ds006036` (OpenNeuro) * **Author (year):** `Ntetska2025` * **Canonical:** — Also importable as: `DS006036`, `Ntetska2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 88; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006036](https://openneuro.org/datasets/ds006036) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006036](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) DOI: [https://doi.org/10.18112/openneuro.ds006036.v1.0.6](https://doi.org/10.18112/openneuro.ds006036.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS006036 >>> dataset = DS006036(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006036) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006040: eeg dataset, 28 subjects *Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI* Access recordings and metadata through EEGDash. **Citation:** Younghwa Cha, Yeji Lee, Eunhee Ji, SoHyun Han, Sunhyun Min, Hyoungkyu Kim, Minseo Cho, Hae Seong Lee, Youngjai Park, Joon-Young Moon (2025). *Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI*. 
[10.18112/openneuro.ds006040.v1.0.2](https://doi.org/10.18112/openneuro.ds006040.v1.0.2) Modality: eeg Subjects: 28 Recordings: 392 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006040 dataset = DS006040(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006040(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006040( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006040, title = {Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI}, author = {Younghwa Cha and Yeji Lee and Eunhee Ji and SoHyun Han and Sunhyun Min and Hyoungkyu Kim and Minseo Cho and Hae Seong Lee and Youngjai Park and Joon-Young Moon}, doi = {10.18112/openneuro.ds006040.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006040.v1.0.2}, } ``` ## About This Dataset This dataset includes simultaneous recordings of electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and diffusion-weighted imaging (DWI) from 28 participants aged 19 to 42 years. The fMRI and DWI data were acquired using a 3T MRI scanner (Siemens Magnetom Prisma), and the EEG was recorded using 64 channels (Brain Products BrainCap MR with Multirodes). The following tasks were performed: resting state (eyes open and closed), checkerboard (15 Hz), gradCPT, and imagery task. Raw files can be found in the subfolders, while preprocessed files are available in the derivatives folder.
For more detailed information about the file structure, please refer to the readme files. ## Dataset Information | Dataset ID | `DS006040` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI | | Author (year) | `Cha2025` | | Canonical | — | | Importable as | `DS006040`, `Cha2025` | | Year | 2025 | | Authors | Younghwa Cha, Yeji Lee, Eunhee Ji, SoHyun Han, Sunhyun Min, Hyoungkyu Kim, Minseo Cho, Hae Seong Lee, Youngjai Park, Joon-Young Moon | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006040.v1.0.2](https://doi.org/10.18112/openneuro.ds006040.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006040) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) | [Source URL](https://openneuro.org/datasets/ds006040) | ### Copy-paste BibTeX ```bibtex @dataset{ds006040, title = {Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI}, author = {Younghwa Cha and Yeji Lee and Eunhee Ji and SoHyun Han and Sunhyun Min and Hyoungkyu Kim and Minseo Cho and Hae Seong Lee and Youngjai Park and Joon-Young Moon}, doi = {10.18112/openneuro.ds006040.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006040.v1.0.2}, } ``` ## Technical Details - Subjects: 28 - Recordings: 392 - Tasks: 10 - Channels: 64 - Sampling rate (Hz): 5000.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 172.5 GB - File count: 392 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006040.v1.0.2 - Source: openneuro - OpenNeuro: [ds006040](https://openneuro.org/datasets/ds006040) - NeMAR: [ds006040](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) ## API Reference Use the `DS006040` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS006040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI * **Study:** `ds006040` (OpenNeuro) * **Author (year):** `Cha2025` * **Canonical:** — Also importable as: `DS006040`, `Cha2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 28; recordings: 392; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006040](https://openneuro.org/datasets/ds006040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006040](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) DOI: [https://doi.org/10.18112/openneuro.ds006040.v1.0.2](https://doi.org/10.18112/openneuro.ds006040.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006040 >>> dataset = DS006040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006040) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006065: ieeg dataset, 7 subjects *TSS_iEEG* Access recordings and metadata through EEGDash. **Citation:** James Kragel, Joel Voss (2025). *TSS_iEEG*. 
[10.18112/openneuro.ds006065.v1.0.0](https://doi.org/10.18112/openneuro.ds006065.v1.0.0) Modality: ieeg Subjects: 7 Recordings: 45 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006065 dataset = DS006065(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006065(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006065( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006065, title = {TSS_iEEG}, author = {James Kragel and Joel Voss}, doi = {10.18112/openneuro.ds006065.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006065.v1.0.0}, } ``` ## About This Dataset **iEEG Dataset: Theta-synchronized Stimulation of Human Hippocampal Networks** **Information** This folder contains intracranial EEG (iEEG) data from **7 participants** undergoing closed-loop stimulation as part of a study on hippocampal network connectivity, as used in the following publication: *Kragel et al., 2025, Nature Communications:* *“Closed-loop control of theta
oscillations enhances human hippocampal network connectivity”* For questions or further information, contact: - *James Kragel:* [jkragel@uchicago.edu](mailto:jkragel@uchicago.edu) - *Joel Voss:* [joelvoss@uchicago.edu](mailto:joelvoss@uchicago.edu) **License** This dataset is made available under the **Public Domain Dedication and License v1.0**. Full text: [http://www.opendatacommons.org/licenses/pddl/1.0](http://www.opendatacommons.org/licenses/pddl/1.0) **Dataset and Protocol** The data are organized according to the **Brain Imaging Data Structure (BIDS)** iEEG specification, a community-driven standard for organizing neurophysiology data along with its metadata. **Structure** Each subject folder contains the raw iEEG data for that subject, segmented into different periods of the stimulation protocol: - **Pre-stimulation evoked potentials** - **Post-stimulation evoked potentials** - **Pre-stimulation rest** - **Post-stimulation rest** - **Closed-loop stimulation** - **Control stimulation** **Raw Data** The raw data are stored in **BrainVision format** (`vhdr`, `vmrk`, and `eeg` files). You can read these files into memory using the following tools: - *MATLAB:* [FieldTrip toolbox](https://www.fieldtriptoolbox.org/getting_started/eeg/brainvision/) - *Python:* [`pybv` package](https://github.com/bids-standard/pybv) **Electrode Coordinates** Electrode coordinates are provided in **MNI space**, registered to the **MNI152 2009c asymmetrical template**.
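A BrainVision recording is a triplet of files: an INI-style text header (`vhdr`), a marker file (`vmrk`), and the binary signal data (`eeg`). The minimal sketch below shows what such a header looks like and how to peek at it with only the standard library; the header content is a simplified mock, and for real analysis you would use MNE-Python or the `pybv` package mentioned above. The channel count and sampling rate in the mock match this dataset's technical details (168 channels, 500 Hz).

```python
# Minimal mock of a BrainVision .vhdr header, parsed with the standard library.
# The file name and fields are illustrative; real headers carry more sections.
import configparser

VHDR = """\
Brain Vision Data Exchange Header File Version 1.0

[Common Infos]
DataFile=sub-01_ieeg.eeg
MarkerFile=sub-01_ieeg.vmrk
NumberOfChannels=168
SamplingInterval=2000
"""

parser = configparser.ConfigParser()
# The first "magic" line is not INI syntax, so skip it before parsing
parser.read_string("\n".join(VHDR.splitlines()[1:]))

n_chan = int(parser["Common Infos"]["NumberOfChannels"])
# SamplingInterval is in microseconds per sample
sfreq = 1e6 / int(parser["Common Infos"]["SamplingInterval"])
print(n_chan, sfreq)
# 168 500.0
```

In practice you never hand-parse these files; the point is only that the `vhdr` header is human-readable and tells you where the binary data and markers live.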
## Dataset Information | Dataset ID | `DS006065` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TSS_iEEG | | Author (year) | `Kragel2025` | | Canonical | — | | Importable as | `DS006065`, `Kragel2025` | | Year | 2025 | | Authors | James Kragel, Joel Voss | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006065.v1.0.0](https://doi.org/10.18112/openneuro.ds006065.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006065) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) | [Source URL](https://openneuro.org/datasets/ds006065) | ### Copy-paste BibTeX ```bibtex @dataset{ds006065, title = {TSS_iEEG}, author = {James Kragel and Joel Voss}, doi = {10.18112/openneuro.ds006065.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006065.v1.0.0}, } ``` ## Technical Details - Subjects: 7 - Recordings: 45 - Tasks: 10 - Channels: 168 (15), 175 (10), 82 (5), 68 (5), 181 (5), 43 (5) - Sampling rate (Hz): 500.0 - Duration (hours): 10.699191111111112 - Pathology: Surgery - Modality: Other - Type: Clinical/Intervention - Size on disk: 9.6 GB - File count: 45 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006065.v1.0.0 - Source: openneuro - OpenNeuro: [ds006065](https://openneuro.org/datasets/ds006065) - NeMAR: [ds006065](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) ## API Reference Use the `DS006065` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TSS_iEEG * **Study:** `ds006065` (OpenNeuro) * **Author (year):** `Kragel2025` * **Canonical:** — Also importable as: `DS006065`, `Kragel2025`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 7; recordings: 45; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006065](https://openneuro.org/datasets/ds006065) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006065](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) DOI: [https://doi.org/10.18112/openneuro.ds006065.v1.0.0](https://doi.org/10.18112/openneuro.ds006065.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006065 >>> dataset = DS006065(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006065) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006095: eeg dataset, 71 subjects *Mind in Motion Older Adults Walking Over Uneven Terrain* Access recordings and metadata through EEGDash. **Citation:** Chang Liu, Erika M. Pliner, Jacob S. Salminen, Ryan Downey, Jungyun Hwang, Akraprava Roy, Ryland Swearinger, Natalie Richer, Chris J. Hass, David J. Clark, Todd M. Manini, Yenisel Cruz-Almeida, Rachael D. Seidler, Daniel P. Ferris (2025). *Mind in Motion Older Adults Walking Over Uneven Terrain*. [10.18112/openneuro.ds006095.v1.0.0](https://doi.org/10.18112/openneuro.ds006095.v1.0.0) Modality: eeg Subjects: 71 Recordings: 1182 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006095 dataset = DS006095(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006095(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006095( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006095, title = {Mind in Motion Older Adults Walking Over Uneven Terrain}, author = {Chang Liu and Erika M. Pliner and Jacob S. Salminen and Ryan Downey and Jungyun Hwang and Akraprava Roy and Ryland Swearinger and Natalie Richer and Chris J. Hass and David J. Clark and Todd M. Manini and Yenisel Cruz-Almeida and Rachael D. Seidler and Daniel P. Ferris}, doi = {10.18112/openneuro.ds006095.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006095.v1.0.0}, } ``` ## About This Dataset Our dataset contains high-density, dual-layer electroencephalography (EEG), neck electromyography (EMG), inertial measurement unit (IMU) acceleration, ground reaction force from all participants walking over uneven terrain and at different speeds. Participants completed two trials for each condition for three minutes and a seated rest trial for three minutes. Please refer to our publication for more detail. Digitized electrode locations (txt) are included in each subject folder. ## Dataset Information | Dataset ID | `DS006095` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mind in Motion Older Adults Walking Over Uneven Terrain | | Author (year) | `Liu2025_Mind_Motion_Older` | | Canonical | — | | Importable as | `DS006095`, `Liu2025_Mind_Motion_Older` | | Year | 2025 | | Authors | Chang Liu, Erika M. Pliner, Jacob S. Salminen, Ryan Downey, Jungyun Hwang, Akraprava Roy, Ryland Swearinger, Natalie Richer, Chris J. Hass, David J. Clark, Todd M. Manini, Yenisel Cruz-Almeida, Rachael D. Seidler, Daniel P. 
Ferris | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006095.v1.0.0](https://doi.org/10.18112/openneuro.ds006095.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006095) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) | [Source URL](https://openneuro.org/datasets/ds006095) | ### Copy-paste BibTeX ```bibtex @dataset{ds006095, title = {Mind in Motion Older Adults Walking Over Uneven Terrain}, author = {Chang Liu and Erika M. Pliner and Jacob S. Salminen and Ryan Downey and Jungyun Hwang and Akraprava Roy and Ryland Swearinger and Natalie Richer and Chris J. Hass and David J. Clark and Todd M. Manini and Yenisel Cruz-Almeida and Rachael D. Seidler and Daniel P. Ferris}, doi = {10.18112/openneuro.ds006095.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006095.v1.0.0}, } ``` ## Technical Details - Subjects: 71 - Recordings: 1182 - Tasks: 9 - Channels: 284 (1053), 310 (115), 336 (14) - Sampling rate (Hz): 500.0 - Duration (hours): 61.09685555555556 - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 129.8 GB - File count: 1182 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006095.v1.0.0 - Source: openneuro - OpenNeuro: [ds006095](https://openneuro.org/datasets/ds006095) - NeMAR: [ds006095](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) ## API Reference Use the `DS006095` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Older Adults Walking Over Uneven Terrain * **Study:** `ds006095` (OpenNeuro) * **Author (year):** `Liu2025_Mind_Motion_Older` * **Canonical:** — Also importable as: `DS006095`, `Liu2025_Mind_Motion_Older`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 71; recordings: 1182; tasks: 9. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006095](https://openneuro.org/datasets/ds006095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006095](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) DOI: [https://doi.org/10.18112/openneuro.ds006095.v1.0.0](https://doi.org/10.18112/openneuro.ds006095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006095 >>> dataset = DS006095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006095) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006104: eeg dataset, 24 subjects *EEG dataset for speech decoding* Access recordings and metadata through EEGDash. **Citation:** João Pedro Carvalho Moreira, Vinícius Rezende Carvalho, Eduardo Mazoni Andrade Marçal Mendes, Ariah Fallah, Terrence J. Sejnowski, Claudia Lainscsek, Lindy Comstock (2025). *EEG dataset for speech decoding*. [10.18112/openneuro.ds006104.v1.0.1](https://doi.org/10.18112/openneuro.ds006104.v1.0.1) Modality: eeg Subjects: 24 Recordings: 56 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006104 dataset = DS006104(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006104(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006104( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006104, title = {EEG dataset for speech decoding}, author = {João Pedro Carvalho Moreira and Vinícius Rezende Carvalho and Eduardo Mazoni Andrade Marçal Mendes and Ariah Fallah and Terrence J. 
Sejnowski and Claudia Lainscsek and Lindy Comstock}, doi = {10.18112/openneuro.ds006104.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006104.v1.0.1}, } ``` ## About This Dataset **EEG dataset for speech decoding** **Dataset Overview** This dataset contains EEG recordings from a phoneme discrimination task with TMS. The data were collected during two related studies in 2019 and 2021. Study 1 (2019, Session 01): - 8 participants (P01-P08) - Focus on CV and VC phoneme pairs - 2 blocks: CV pairs and VC pairs - TMS targeted to LipM1 (-56, -8, 46) and TongueM1 (-60, -10, 25) Study 2 (2021, Session 02): - 16 participants (S01-S16) - Expanded to include single phonemes and phoneme triplets - 4 blocks: single phonemes, CV pairs, real words, and pseudowords - Additional TMS targets included Broca’s area (BA 44: -51, 7, 23) and verbal memory region (BA 6: -46, 1, 41) **Task Description** Participants listened to speech sounds and identified stimuli with a button-press response. The stimuli included: 1. Single phonemes - Consonants (/b/, /p/, /d/, /t/, /s/, /z/) and vowels (/i/, /E/, /A/, /u/, /oU/) 2. Phoneme pairs - CV and VC combinations of the phonemes 3. Phoneme triplets - Real and pseudowords constructed of CVC sequences **TMS Methodology** Detailed information about TMS parameters can be found in the sourcedata/tms_metadata/tms_parameters.json file. TMS was applied using a Magstim Super Rapid Plus1 stimulator with a figure-of-eight 40 mm coil. Stimulation was delivered at 110% of resting motor threshold as paired pulses with 50ms interpulse interval. Detailed information about the methodology and results can be found in the associated publication: Moreira et al.
“An open-access EEG dataset for speech decoding: Exploring the role of articulation and coarticulation” **Directory Structure** The dataset follows BIDS convention with the following structure: /sub-[subject]/ses-[session]/eeg/ Where subject is P01-P08 for Study 1 and S01-S16 for Study 2. Session is 01 for Study 1 and 02 for Study 2. **Contact Information** For questions about this dataset, please contact Lindy Comstock at [lbcomstock@ucla.edu](mailto:lbcomstock@ucla.edu) ## Dataset Information | Dataset ID | `DS006104` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG dataset for speech decoding | | Author (year) | `Moreira2025` | | Canonical | — | | Importable as | `DS006104`, `Moreira2025` | | Year | 2025 | | Authors | João Pedro Carvalho Moreira, Vinícius Rezende Carvalho, Eduardo Mazoni Andrade Marçal Mendes, Ariah Fallah, Terrence J. Sejnowski, Claudia Lainscsek, Lindy Comstock | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006104.v1.0.1](https://doi.org/10.18112/openneuro.ds006104.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006104) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) | [Source URL](https://openneuro.org/datasets/ds006104) | ### Copy-paste BibTeX ```bibtex @dataset{ds006104, title = {EEG dataset for speech decoding}, author = {João Pedro Carvalho Moreira and Vinícius Rezende Carvalho and Eduardo Mazoni Andrade Marçal Mendes and Ariah Fallah and Terrence J. 
Sejnowski and Claudia Lainscsek and Lindy Comstock}, doi = {10.18112/openneuro.ds006104.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006104.v1.0.1}, } ``` ## Technical Details - Subjects: 24 - Recordings: 56 - Tasks: 3 - Channels: 61 (53), 83 (3) - Sampling rate (Hz): 2000.0 - Duration (hours): 50.75694444444444 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 43.0 GB - File count: 56 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006104.v1.0.1 - Source: openneuro - OpenNeuro: [ds006104](https://openneuro.org/datasets/ds006104) - NeMAR: [ds006104](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) ## API Reference Use the `DS006104` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset for speech decoding * **Study:** `ds006104` (OpenNeuro) * **Author (year):** `Moreira2025` * **Canonical:** — Also importable as: `DS006104`, `Moreira2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006104](https://openneuro.org/datasets/ds006104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006104](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) DOI: [https://doi.org/10.18112/openneuro.ds006104.v1.0.1](https://doi.org/10.18112/openneuro.ds006104.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006104 >>> dataset = DS006104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006104) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006107: ieeg dataset, 166 subjects *iEEG_Neural_spatial_volatility* Access recordings and metadata through EEGDash. **Citation:** Naoto Kuroda, Eishi Asano, Nobukazu Nakasato (2025). *iEEG_Neural_spatial_volatility*. 
[10.18112/openneuro.ds006107.v1.0.0](https://doi.org/10.18112/openneuro.ds006107.v1.0.0) Modality: ieeg Subjects: 166 Recordings: 167 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006107 dataset = DS006107(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006107(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006107( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006107, title = {iEEG_Neural_spatial_volatility}, author = {Naoto Kuroda and Eishi Asano and Nobukazu Nakasato}, doi = {10.18112/openneuro.ds006107.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006107.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006107` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG_Neural_spatial_volatility | | Author (year) | `Kuroda2025` | | Canonical | `Kuroda2024` | | Importable as | `DS006107`, `Kuroda2025`, `Kuroda2024` | | Year | 2025 | | Authors | Naoto Kuroda, Eishi Asano, Nobukazu Nakasato | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006107.v1.0.0](https://doi.org/10.18112/openneuro.ds006107.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006107) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) | [Source URL](https://openneuro.org/datasets/ds006107) | ### Copy-paste BibTeX ```bibtex @dataset{ds006107, title = {iEEG_Neural_spatial_volatility}, author = {Naoto Kuroda and Eishi Asano and Nobukazu Nakasato}, doi = {10.18112/openneuro.ds006107.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006107.v1.0.0}, } ``` ## Technical Details - Subjects: 166 - Recordings: 167 - Tasks: 1 - Channels: 128 (30), 112 (19), 104 (7), 108 (6), 118 (6), 124 (5), 102 (5), 132 (5), 100 (5), 120 (5), 106 (5), 138 (4), 130 (4), 58 (4), 140 (3), 110 (3), 116 (3), 34 (2), 86 (2), 136 (2), 150 (2), 84 (2), 114 (2), 64 (2), 126 (2), 144 (2), 72 (2), 98 (2), 48, 78, 68, 94, 80, 44, 134, 73, 70, 52, 109, 156, 88, 28, 74, 69, 38, 164, 82, 56, 96, 54, 133, 90, 46, 122 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.50111111111111 - Pathology: Not specified - Modality: Sleep - Type: Sleep - Size on disk: 11.9 GB - File count: 167 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006107.v1.0.0 - Source: openneuro - OpenNeuro: [ds006107](https://openneuro.org/datasets/ds006107) - NeMAR: [ds006107](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) ## API Reference Use the `DS006107` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS006107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_Neural_spatial_volatility * **Study:** `ds006107` (OpenNeuro) * **Author (year):** `Kuroda2025` * **Canonical:** `Kuroda2024` Also importable as: `DS006107`, `Kuroda2025`, `Kuroda2024`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 166; recordings: 167; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
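To make the "MongoDB-style filters" in the Notes concrete, the sketch below evaluates an operator filter such as `{"subject": {"$in": [...]}}` against plain metadata records. This is an illustrative re-implementation of the matching semantics, not eegdash's actual (server-side) query evaluation, and the `matches` helper is a hypothetical name.

```python
# Illustrative matcher for a small subset of MongoDB query syntax:
# plain equality plus the $in operator, applied to dict records.
def matches(record, query):
    for field, cond in query.items():
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # Plain equality, e.g. {"subject": "01"}
            return False
    return True

records = [
    {"dataset": "ds006107", "subject": "01"},
    {"dataset": "ds006107", "subject": "03"},
]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(kept)
# [{'dataset': 'ds006107', 'subject': '01'}]
```

The same shape of filter is what the Quickstart's "Advanced query" example passes via the `query` argument, restricted to fields in `ALLOWED_QUERY_FIELDS`.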
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006107](https://openneuro.org/datasets/ds006107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006107](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) DOI: [https://doi.org/10.18112/openneuro.ds006107.v1.0.0](https://doi.org/10.18112/openneuro.ds006107.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006107 >>> dataset = DS006107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006107) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006126: eeg dataset, 5 subjects *TDCS Modulation of Visual Cortex in Motor Imagery* Access recordings and metadata through EEGDash. **Citation:** Anthony Mensah, Gleb Perevoznyuk, Artyom Batov, Aleksandra S. Pleskovskaya (2025). *TDCS Modulation of Visual Cortex in Motor Imagery*. 
[10.18112/openneuro.ds006126.v1.0.0](https://doi.org/10.18112/openneuro.ds006126.v1.0.0) Modality: eeg Subjects: 5 Recordings: 90 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006126 dataset = DS006126(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006126(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006126( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006126, title = {TDCS Modulation of Visual Cortex in Motor Imagery}, author = {Anthony Mensah and Gleb Perevoznyuk and Artyom Batov and Aleksandra S. Pleskovskaya}, doi = {10.18112/openneuro.ds006126.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006126.v1.0.0}, } ``` ## About This Dataset **TDCS Neuromodulated Motor Imagery TMS Dataset** **Research/Experiment Description** No research description is available for this dataset. **BIDS Report** > The TDCS Modulation of Visual Cortex in Motor Imagery dataset was created by Anthony Mensah, Gleb Perevoznyuk, Artyom Batov, and Aleksandra S. Pleskovskaya and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 5 participants (comprised of 3 male and 3 female participants; comprised of 6 right hand, 0 left hand and 0 ambidextrous; ages ranged from 18.0 to 30.0 (mean = 23.0, std = 5.1)) and 3 recording sessions: An, Ca, and Sh.
Data was recorded using an EEG system (Brain Products) sampled at 5000.0 Hz with line noise at 60.0 Hz. There were 90 scans in total. Recording durations ranged from 363.7 to 2910.98 seconds (mean = 429.21, std = 270.19), for a total of 38629.34 seconds of data recorded over all scans. For each dataset, there were on average 3.0 (std = 0.0) recording channels per scan, out of which 3.0 (std = 0.0) were used in analysis (0.0 +/- 0.0 were removed from analysis). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS006126` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TDCS Modulation of Visual Cortex in Motor Imagery | | Author (year) | `Mensah2025` | | Canonical | — | | Importable as | `DS006126`, `Mensah2025` | | Year | 2025 | | Authors | Anthony Mensah, Gleb Perevoznyuk, Artyom Batov, Aleksandra S.
Pleskovskaya | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006126.v1.0.0](https://doi.org/10.18112/openneuro.ds006126.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006126) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) | [Source URL](https://openneuro.org/datasets/ds006126) | ### Copy-paste BibTeX ```bibtex @dataset{ds006126, title = {TDCS Modulation of Visual Cortex in Motor Imagery}, author = {Anthony Mensah and Gleb Perevoznyuk and Artyom Batov and Aleksandra S. Pleskovskaya}, doi = {10.18112/openneuro.ds006126.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006126.v1.0.0}, } ``` ## Technical Details - Subjects: 5 - Recordings: 90 - Tasks: 6 - Channels: 3 - Sampling rate (Hz): 5000.0 - Duration (hours): 10.730372611111113 - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 1.1 GB - File count: 90 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006126.v1.0.0 - Source: openneuro - OpenNeuro: [ds006126](https://openneuro.org/datasets/ds006126) - NeMAR: [ds006126](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) ## API Reference Use the `DS006126` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TDCS Modulation of Visual Cortex in Motor Imagery * **Study:** `ds006126` (OpenNeuro) * **Author (year):** `Mensah2025` * **Canonical:** — Also importable as: `DS006126`, `Mensah2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 90; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006126](https://openneuro.org/datasets/ds006126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006126](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) DOI: [https://doi.org/10.18112/openneuro.ds006126.v1.0.0](https://doi.org/10.18112/openneuro.ds006126.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006126 >>> dataset = DS006126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006126) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006136: ieeg dataset, 13 subjects *OWM-Dataset* Access recordings and metadata through EEGDash. **Citation:** Vladimir Omelyusik, Tyler S. Davis, Satish S. Nair, Behrad Noudoost, Patrick Hackett, Elliot H. Smith, Shervin Rahimpour, John D. Rolston, Bornali Kundu (2025). *OWM-Dataset*. [10.18112/openneuro.ds006136.v1.0.1](https://doi.org/10.18112/openneuro.ds006136.v1.0.1) Modality: ieeg Subjects: 13 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006136 dataset = DS006136(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006136(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006136( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006136, title = {OWM-Dataset}, author = {Vladimir Omelyusik and Tyler S. Davis and Satish S. Nair and Behrad Noudoost and Patrick Hackett and Elliot H. Smith and Shervin Rahimpour and John D. 
Rolston and Bornali Kundu}, doi = {10.18112/openneuro.ds006136.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006136.v1.0.1}, } ``` ## About This Dataset **OWM-Dataset** **Description** The dataset contains processed intracranial EEG recordings from frontal (LMFG, RMFG) and temporal (LMTG, RMTG) areas of 13 subjects (epilepsy patients) while they performed a load-3 object working memory task. Please see the associated publication (Paper): [https://doi.org/10.1016/j.neuroimage.2026.121718](https://doi.org/10.1016/j.neuroimage.2026.121718) **Data structure** **Included trials** The dataset includes trials which were used for the final analyses (i.e., after artifact rejection; see the Methods section of the Paper for a full description of preprocessing procedures). Note that since some artifact rejection procedures were performed at the single-trial level, trial indexes are not matched across channels even for the same subject (i.e., trial 1 of channel 1 may not correspond to trial 1 of channel 2) and have to be read separately. The sourcedata/ folder contains per-subject trial indexes for trial matching. **Trial structure** Each trial is 6498 ms long (1000 ms of fixation, 1500 ms of encoding and 3998 ms of delay). **Storage format** To comply with the .edf format, trials for every channel were concatenated into a single one-dimensional array. Due to a different number of trials across channels, each array was padded with 0s on the right, ensuring the same data length for all channels within a subject. The total number of concatenated trials per channel and the padding length are recorded in the “_channels.tsv” file for each subject. **Performance** The sourcedata/ folder contains performance results for every subject. Rows of each table correspond to trials (the order matches LFP recordings). Columns represent whether the subject selected the presented stimuli during the search period (0 = no, 1 = yes). 
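Given the storage format described above, recovering per-trial arrays from one channel is a matter of dropping the right-hand zero padding and reshaping. A minimal sketch, assuming the trial counts and padding lengths have been read from the subject's `_channels.tsv` (the helper name is hypothetical; the dataset's official reading function lives in the Paper repository):

```python
import numpy as np

def split_trials(channel_data, n_trials, pad_len, trial_len=6498):
    """Recover a (n_trials, trial_len) array from one channel's padded 1-D array.

    trial_len is in samples: each trial is 6498 ms, which at the dataset's
    1000 Hz sampling rate works out to 6498 samples per trial.
    """
    unpadded = channel_data[:channel_data.size - pad_len]  # drop right padding
    return unpadded.reshape(n_trials, trial_len)

# Synthetic check: 3 trials of 6498 samples, padded with 10 zeros.
data = np.concatenate([np.random.randn(3 * 6498), np.zeros(10)])
trials = split_trials(data, n_trials=3, pad_len=10)
print(trials.shape)  # (3, 6498)
```

Remember that trial indexes are not matched across channels, so each channel must be split with its own `n_trials` and `pad_len` and then aligned via the per-subject indexes in `sourcedata/`.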
**Reading the data and replicating the results** The Paper repository ([https://github.com/V-Marco/FT-bursting-WM](https://github.com/V-Marco/FT-bursting-WM)) includes a Python function for reading the data, performing trial matching, appending performance information, and representing the recordings as a 2D table. The repository also includes examples on replicating the main figures. ## Dataset Information | Dataset ID | `DS006136` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | OWM-Dataset | | Author (year) | `Omelyusik2025` | | Canonical | `Omelyusik2026` | | Importable as | `DS006136`, `Omelyusik2025`, `Omelyusik2026` | | Year | 2025 | | Authors | Vladimir Omelyusik, Tyler S. Davis, Satish S. Nair, Behrad Noudoost, Patrick Hackett, Elliot H. Smith, Shervin Rahimpour, John D. Rolston, Bornali Kundu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006136.v1.0.1](https://doi.org/10.18112/openneuro.ds006136.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006136) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) | [Source URL](https://openneuro.org/datasets/ds006136) | ### Copy-paste BibTeX ```bibtex @dataset{ds006136, title = {OWM-Dataset}, author = {Vladimir Omelyusik and Tyler S. Davis and Satish S. Nair and Behrad Noudoost and Patrick Hackett and Elliot H. Smith and Shervin Rahimpour and John D. 
Rolston and Bornali Kundu}, doi = {10.18112/openneuro.ds006136.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006136.v1.0.1}, } ``` ## Technical Details - Subjects: 13 - Recordings: 14 - Tasks: 1 - Channels: 8 (2), 7 (2), 9 (2), 12 (2), 18, 6, 14, 17, 5, 11 - Sampling rate (Hz): 1000.0 - Duration (hours): 25.273888888888887 - Pathology: Epilepsy - Modality: Visual - Type: Memory - Size on disk: 285.9 MB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006136.v1.0.1 - Source: openneuro - OpenNeuro: [ds006136](https://openneuro.org/datasets/ds006136) - NeMAR: [ds006136](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) ## API Reference Use the `DS006136` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OWM-Dataset * **Study:** `ds006136` (OpenNeuro) * **Author (year):** `Omelyusik2025` * **Canonical:** `Omelyusik2026` Also importable as: `DS006136`, `Omelyusik2025`, `Omelyusik2026`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 13; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006136](https://openneuro.org/datasets/ds006136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006136](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) DOI: [https://doi.org/10.18112/openneuro.ds006136.v1.0.1](https://doi.org/10.18112/openneuro.ds006136.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006136 >>> dataset = DS006136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006136) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006142: eeg dataset, 27 subjects *Essex EEG Movie Memory dataset* Access recordings and metadata through EEGDash. **Citation:** Ana Matran-Fernandez, Sebastian Halder (2025). *Essex EEG Movie Memory dataset*. 
[10.18112/openneuro.ds006142.v1.0.2](https://doi.org/10.18112/openneuro.ds006142.v1.0.2) Modality: eeg Subjects: 27 Recordings: 27 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006142 dataset = DS006142(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006142(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006142( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006142, title = {Essex EEG Movie Memory dataset}, author = {Ana Matran-Fernandez and Sebastian Halder}, doi = {10.18112/openneuro.ds006142.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006142.v1.0.2}, } ``` ## About This Dataset **Essex EEG Movie Memory Dataset** Authors: Ana Matran-Fernandez and Sebastian Halder **Description** This dataset contains raw electroencephalography (EEG) signals recorded from 27 participants while watching 10-second long clips extracted from movies that they had previously watched.
For each clip, participants were asked whether they recognised the movie it belonged to, and if so, whether they remembered having watched it previously or not. If a participant reported recognising or remembering a clip, it was shown a second time to capture (via a mouse click) time annotations of the instants that prompted this recognition. **EEG** EEG data were acquired with a BioSemi ActiveTwo system with 64 electrodes positioned according to the international 10-20 system. The sampling rate was 2048 Hz. **Stimuli** The clips used in the study were originally annotated in terms of their memorability by Cohendet et al. (see References). This dataset can be requested from the authors. **Example code** We have prepared an example script to demonstrate how to load the EEG data into Python using the MNE and MNE-BIDS packages. This script is located in the ‘code’ directory. **References** Romain Cohendet, Karthik Yadati, Ngoc Q. K. Duong, and Claire-Hélène Demarty. 2018. Annotating, Understanding, and Predicting Long-term Video Memorability. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR ‘18). Association for Computing Machinery, New York, NY, USA, 178–186. [https://doi.org/10.1145/3206025.3206056](https://doi.org/10.1145/3206025.3206056) **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography.
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS006142` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Essex EEG Movie Memory dataset | | Author (year) | `MatranFernandez2025` | | Canonical | — | | Importable as | `DS006142`, `MatranFernandez2025` | | Year | 2025 | | Authors | Ana Matran-Fernandez, Sebastian Halder | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006142.v1.0.2](https://doi.org/10.18112/openneuro.ds006142.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006142) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) | [Source URL](https://openneuro.org/datasets/ds006142) | ### Copy-paste BibTeX ```bibtex @dataset{ds006142, title = {Essex EEG Movie Memory dataset}, author = {Ana Matran-Fernandez and Sebastian Halder}, doi = {10.18112/openneuro.ds006142.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006142.v1.0.2}, } ``` ## Technical Details - Subjects: 27 - Recordings: 27 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 2048.0 - Duration (hours): 26.295897216796877 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 24.3 GB - File count: 27 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006142.v1.0.2 - Source: openneuro - OpenNeuro: [ds006142](https://openneuro.org/datasets/ds006142) - NeMAR: [ds006142](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) ## API Reference Use the `DS006142` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Essex EEG Movie Memory dataset * **Study:** `ds006142` (OpenNeuro) * **Author (year):** `MatranFernandez2025` * **Canonical:** — Also importable as: `DS006142`, `MatranFernandez2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006142](https://openneuro.org/datasets/ds006142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006142](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) DOI: [https://doi.org/10.18112/openneuro.ds006142.v1.0.2](https://doi.org/10.18112/openneuro.ds006142.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006142 >>> dataset = DS006142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006142) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006159: eeg dataset, 61 subjects *Implicit Learning EEG (BioSemi)* Access recordings and metadata through EEGDash. **Citation:** Mateo Leganes-Fonteneau (2025). *Implicit Learning EEG (BioSemi)*. 
[10.18112/openneuro.ds006159.v1.0.0](https://doi.org/10.18112/openneuro.ds006159.v1.0.0) Modality: eeg Subjects: 61 Recordings: 61 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006159 dataset = DS006159(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006159(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006159( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006159, title = {Implicit Learning EEG (BioSemi)}, author = {Mateo Leganes-Fonteneau}, doi = {10.18112/openneuro.ds006159.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006159.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006159` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Implicit Learning EEG (BioSemi) | | Author (year) | `LeganesFonteneau2025` | | Canonical | `LeganesFonteneau2024` | | Importable as | `DS006159`, `LeganesFonteneau2025`, `LeganesFonteneau2024` | | Year | 2025 | | Authors | Mateo Leganes-Fonteneau | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006159.v1.0.0](https://doi.org/10.18112/openneuro.ds006159.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006159) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) | [Source URL](https://openneuro.org/datasets/ds006159) | ### Copy-paste BibTeX ```bibtex @dataset{ds006159, title = {Implicit Learning EEG (BioSemi)}, author = {Mateo Leganes-Fonteneau}, doi = {10.18112/openneuro.ds006159.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006159.v1.0.0}, } ``` ## Technical Details - Subjects: 61 - Recordings: 61 - Tasks: 1 - Channels: 73 - Sampling rate (Hz): 1024.0 - Duration (hours): 14.299166666666666 - Pathology: Healthy - Modality: — - Type: Learning - Size on disk: 14.3 GB - File count: 61 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006159.v1.0.0 - Source: openneuro - OpenNeuro: [ds006159](https://openneuro.org/datasets/ds006159) - NeMAR: [ds006159](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) ## API Reference Use the `DS006159` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Implicit Learning EEG (BioSemi) * **Study:** `ds006159` (OpenNeuro) * **Author (year):** `LeganesFonteneau2025` * **Canonical:** `LeganesFonteneau2024` Also importable as: `DS006159`, `LeganesFonteneau2025`, `LeganesFonteneau2024`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006159](https://openneuro.org/datasets/ds006159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006159](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) DOI: [https://doi.org/10.18112/openneuro.ds006159.v1.0.0](https://doi.org/10.18112/openneuro.ds006159.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006159 >>> dataset = DS006159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006159) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006171: eeg dataset, 36 subjects *EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)* Access recordings and metadata through EEGDash. **Citation:** María Melcón, Enrique Stern, Lydia Arana, Almudena Capilla (2025). *EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)*. 
[10.18112/openneuro.ds006171.v1.0.0](https://doi.org/10.18112/openneuro.ds006171.v1.0.0) Modality: eeg Subjects: 36 Recordings: 104 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006171 dataset = DS006171(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006171(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006171( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006171, title = {EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)}, author = {María Melcón and Enrique Stern and Lydia Arana and Almudena Capilla}, doi = {10.18112/openneuro.ds006171.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006171.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006171` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity) | | Author (year) | `Melcon2025` | | Canonical | `Melcon2024` | | Importable as | `DS006171`, `Melcon2025`, `Melcon2024` | | Year | 2025 | | Authors | María Melcón, Enrique Stern, Lydia Arana, Almudena Capilla | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006171.v1.0.0](https://doi.org/10.18112/openneuro.ds006171.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006171) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) | [Source URL](https://openneuro.org/datasets/ds006171) | ### Copy-paste BibTeX ```bibtex @dataset{ds006171, title = {EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity)}, author = {María Melcón and Enrique Stern and Lydia Arana and Almudena Capilla}, doi = {10.18112/openneuro.ds006171.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006171.v1.0.0}, } ``` ## Technical Details - Subjects: 36 - Recordings: 104 - Tasks: 3 - Channels: 144 - Sampling rate (Hz): 1024.0 - Duration (hours): 40.88305555555556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 67.8 GB - File count: 104 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006171.v1.0.0 - Source: openneuro - OpenNeuro: [ds006171](https://openneuro.org/datasets/ds006171) - NeMAR: [ds006171](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) ## API Reference Use the `DS006171` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity) * **Study:** `ds006171` (OpenNeuro) * **Author (year):** `Melcon2025` * **Canonical:** `Melcon2024` Also importable as: `DS006171`, `Melcon2025`, `Melcon2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 36; recordings: 104; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006171](https://openneuro.org/datasets/ds006171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006171](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) DOI: [https://doi.org/10.18112/openneuro.ds006171.v1.0.0](https://doi.org/10.18112/openneuro.ds006171.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006171 >>> dataset = DS006171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006171) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006222: eeg dataset, 69 subjects *MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData* Access recordings and metadata through EEGDash. **Citation:** Matthew Attokaren, Annabelle Singer (2025). *MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData*. 
[10.18112/openneuro.ds006222.v1.0.1](https://doi.org/10.18112/openneuro.ds006222.v1.0.1) Modality: eeg Subjects: 69 Recordings: 70 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006222 dataset = DS006222(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006222(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006222( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006222, title = {MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData}, author = {Matthew Attokaren and Annabelle Singer}, doi = {10.18112/openneuro.ds006222.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006222.v1.0.1}, } ``` ## About This Dataset We recorded scalp EEG activity of healthy adults during one hour of either 40 Hz audiovisual flicker, no flicker as control, or randomized flicker as sham stimulation, while subjects performed a psychomotor vigilance task. 
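For spectral analyses of the 40 Hz flicker response at this dataset's 512 Hz sampling rate, it helps to pick epoch lengths for which the stimulation frequency falls on an exact DFT bin (no spectral leakage of the driving frequency). A standard-library sketch, not part of the EEGDash API:

```python
from math import gcd


def exact_bin_epoch_lengths(stim_hz: int, sfreq: int, max_samples: int) -> list[int]:
    """Epoch lengths (in samples) for which stim_hz lands on an exact DFT bin.

    stim_hz falls on bin k when k = stim_hz * n / sfreq is an integer,
    i.e. when n is a multiple of sfreq // gcd(stim_hz, sfreq).
    """
    step = sfreq // gcd(stim_hz, sfreq)
    return list(range(step, max_samples + 1, step))


# For a 40 Hz flicker sampled at 512 Hz, the shortest such epoch is
# 64 samples (125 ms), where 40 Hz falls on bin k = 5.
lengths = exact_bin_epoch_lengths(40, 512, 512)
print(lengths[0])  # 64
```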
## Dataset Information | Dataset ID | `DS006222` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData | | Author (year) | `Attokaren2025` | | Canonical | — | | Importable as | `DS006222`, `Attokaren2025` | | Year | 2025 | | Authors | Matthew Attokaren, Annabelle Singer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006222.v1.0.1](https://doi.org/10.18112/openneuro.ds006222.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006222) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) | [Source URL](https://openneuro.org/datasets/ds006222) | ### Copy-paste BibTeX ```bibtex @dataset{ds006222, title = {MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData}, author = {Matthew Attokaren and Annabelle Singer}, doi = {10.18112/openneuro.ds006222.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006222.v1.0.1}, } ``` ## Technical Details - Subjects: 69 - Recordings: 70 - Tasks: 1 - Channels: 40 - Sampling rate (Hz): 512.0 - Duration (hours): 52.80083333333334 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 14.7 GB - File count: 70 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006222.v1.0.1 - Source: openneuro - OpenNeuro: [ds006222](https://openneuro.org/datasets/ds006222) - NeMAR: [ds006222](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) ## API Reference Use the `DS006222` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData * **Study:** `ds006222` (OpenNeuro) * **Author (year):** `Attokaren2025` * **Canonical:** — Also importable as: `DS006222`, `Attokaren2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 69; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
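When pooling recordings like these for model training, evaluation usually calls for a subject-wise split so that no subject contributes data to both train and test sets. A minimal standard-library sketch over (subject, recording) pairs; the pairs below are stand-ins for whatever you derive from `dataset.description`, not an EEGDash helper:

```python
import random


def subject_wise_split(pairs, test_frac=0.2, seed=0):
    """Split (subject, recording) pairs so no subject spans both sets."""
    subjects = sorted({s for s, _ in pairs})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, round(test_frac * len(subjects)))
    test_subjects = set(subjects[:n_test])
    train = [p for p in pairs if p[0] not in test_subjects]
    test = [p for p in pairs if p[0] in test_subjects]
    return train, test


pairs = [("01", "rec-a"), ("01", "rec-b"), ("02", "rec-c"), ("03", "rec-d")]
train, test = subject_wise_split(pairs, test_frac=0.34)
```

Splitting on subject IDs rather than on recordings is what prevents leakage when a subject has more than one recording.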
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006222](https://openneuro.org/datasets/ds006222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006222](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) DOI: [https://doi.org/10.18112/openneuro.ds006222.v1.0.1](https://doi.org/10.18112/openneuro.ds006222.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006222 >>> dataset = DS006222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006222) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006233: ieeg dataset, 108 subjects *Picture naming* Access recordings and metadata through EEGDash. **Citation:** Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano (2025). *Picture naming*. 
[10.18112/openneuro.ds006233.v1.0.0](https://doi.org/10.18112/openneuro.ds006233.v1.0.0) Modality: ieeg Subjects: 108 Recordings: 347 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006233 dataset = DS006233(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006233(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006233( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006233, title = {Picture naming}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006233.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006233.v1.0.0}, } ``` ## About This Dataset This dataset, used in the analysis reported by Kochi et al. (2025), contains intracranial EEG recordings from 108 individuals who performed a picture‑naming task. Electrode coordinates are provided in MNI‑305 space. Each EDF file is tagged for the picture‑naming task with the following event codes:

- 401 – stimulus onset
- 501 – response onset

Reference: Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano.
Naming is Shaped by Early Facilitative and Late Compensatory Neural Interactions: An Intracranial Study of 125 Patients ## Dataset Information | Dataset ID | `DS006233` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Picture naming | | Author (year) | `Kochi2025_Picture_naming` | | Canonical | — | | Importable as | `DS006233`, `Kochi2025_Picture_naming` | | Year | 2025 | | Authors | Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006233.v1.0.0](https://doi.org/10.18112/openneuro.ds006233.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006233) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) | [Source URL](https://openneuro.org/datasets/ds006233) | ### Copy-paste BibTeX ```bibtex @dataset{ds006233, title = {Picture naming}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006233.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006233.v1.0.0}, } ``` ## Technical Details - Subjects: 108 - Recordings: 347 - Tasks: 1 - Channels: 128 (245), 138 (19), 136 (19), 140 (8), 110 (6), 112 (6), 150 (5), 156 (5), 134 (4), 148 (4), 130 (4), 164 (4), 96 (3), 152 (3), 144 (3), 84 (3), 118 (3), 64 (2), 58 - Sampling rate (Hz): 1000.0 - Duration (hours): Not calculated - Pathology: Surgery - Modality: Visual - Type: Other - Size on disk: 17.3 GB - File count: 347 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006233.v1.0.0 - Source: openneuro - OpenNeuro: [ds006233](https://openneuro.org/datasets/ds006233) - NeMAR: [ds006233](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) ## API Reference Use the `DS006233` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006233(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture naming * **Study:** `ds006233` (OpenNeuro) * **Author (year):** `Kochi2025_Picture_naming` * **Canonical:** — Also importable as: `DS006233`, `Kochi2025_Picture_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 108; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006233](https://openneuro.org/datasets/ds006233) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006233](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) DOI: [https://doi.org/10.18112/openneuro.ds006233.v1.0.0](https://doi.org/10.18112/openneuro.ds006233.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006233 >>> dataset = DS006233(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006233) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006234: ieeg dataset, 119 subjects *Auditory naming* Access recordings and metadata through EEGDash. 
**Citation:** Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano (2025). *Auditory naming*. [10.18112/openneuro.ds006234.v1.0.0](https://doi.org/10.18112/openneuro.ds006234.v1.0.0) Modality: ieeg Subjects: 119 Recordings: 378 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006234 dataset = DS006234(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006234(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006234( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006234, title = {Auditory naming}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006234.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006234.v1.0.0}, } ``` ## About This Dataset This dataset, used in the analysis reported by Kochi et al. (2025), contains intracranial EEG recordings from 119 individuals who performed an auditory‑naming task. Electrode coordinates are provided in MNI‑305 space. Each EDF file is tagged for the auditory naming task with the following event codes:

- 401 – stimulus onset
- 402 – stimulus offset
- 501 – response onset

Reference: Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano.
Naming is Shaped by Early Facilitative and Late Compensatory Neural Interactions: An Intracranial Study of 125 Patients ## Dataset Information | Dataset ID | `DS006234` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory naming | | Author (year) | `Kochi2025_Auditory_naming` | | Canonical | — | | Importable as | `DS006234`, `Kochi2025_Auditory_naming` | | Year | 2025 | | Authors | Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006234.v1.0.0](https://doi.org/10.18112/openneuro.ds006234.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006234) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) | [Source URL](https://openneuro.org/datasets/ds006234) | ### Copy-paste BibTeX ```bibtex @dataset{ds006234, title = {Auditory naming}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006234.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006234.v1.0.0}, } ``` ## Technical Details - Subjects: 119 - Recordings: 378 - Tasks: 1 - Channels: 128 (269), 138 (14), 136 (11), 112 (9), 140 (8), 164 (8), 134 (7), 110 (6), 142 (5), 156 (5), 150 (5), 132 (4), 130 (4), 148 (4), 144 (4), 152 (3), 118 (3), 96 (3), 84 (3), 64 (2), 58 - Sampling rate (Hz): 1000.0 - Duration (hours): 128.09192777777778 - Pathology: Surgery - Modality: Auditory - Type: Other - Size on disk: 43.9 GB - File count: 378 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006234.v1.0.0 - Source: openneuro - OpenNeuro: [ds006234](https://openneuro.org/datasets/ds006234) - NeMAR: [ds006234](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) ## API Reference Use the `DS006234` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds006234` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_naming` * **Canonical:** — Also importable as: `DS006234`, `Kochi2025_Auditory_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 119; recordings: 378; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006234](https://openneuro.org/datasets/ds006234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006234](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) DOI: [https://doi.org/10.18112/openneuro.ds006234.v1.0.0](https://doi.org/10.18112/openneuro.ds006234.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006234 >>> dataset = DS006234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006234) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006253: ieeg dataset, 23 subjects *MetaRDK* Access recordings and metadata through EEGDash. 
**Citation:** Dorian Goueytes, Francois Stockart, Alexis Robin, Lucien Gyger, Martin Rouy, Dominique Hoffmann, Lorella Minotti, Philippe Kahane, Michael Pereira, Nathan Faivre (—). *MetaRDK*. [10.18112/openneuro.ds006253.v1.0.3](https://doi.org/10.18112/openneuro.ds006253.v1.0.3) Modality: ieeg Subjects: 23 Recordings: 201 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006253 dataset = DS006253(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006253(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006253( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006253, title = {MetaRDK}, author = {Dorian Goueytes and Francois Stockart and Alexis Robin and Lucien Gyger and Martin Rouy and Dominique Hoffmann and Lorella Minotti and Philippe Kahane and Michael Pereira and Nathan Faivre}, doi = {10.18112/openneuro.ds006253.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006253.v1.0.3}, } ``` ## About This Dataset **Evidence accumulation in the pre-supplementary motor area and insula drives confidence and changes of mind** Goueytes, D., Gyger, L., Rouy, M., Hoffmann, D., Minotti, L., Kahane, P., Pereira, M., and Faivre, N. Evidence accumulation in the pre-supplementary motor area and insula drives confidence and changes of mind.
**Overview**

**Overview of the task**

MetaRDK is a project aiming to understand the neural correlates of decision making and decision-making-related metacognition. Patients with pharmacologically intractable epilepsy performed a perceptual decision-making task combined with a confidence judgement task. The patients had to decide, within a window of 6 s, whether a cloud of dots displayed at the center of the screen was moving right or left, and provide their answer by moving a computer mouse and clicking on the corresponding right/left target. The difficulty of the task was titrated for all patients at 70% correct using an adaptive staircase. After each answer, patients were prompted to evaluate how confident they felt in their decision on a scale from 0 to 100 (0: sure to be wrong, 50: answered at random, 100: sure to be right). All scripts for the task, data processing, and analysis are available here: doi:10.17605/OSF.IO/2KT97

**Description of the contents of the dataset**

This dataset contains the high-gamma content (average power modulation in five non-overlapping frequency bands between 70 and 150 Hz) of the patients while they performed the task. The data are segmented, and each segment contains high-gamma activity from trial onset (clicking on the start button) to trial offset (following the confidence judgement response), sampled at 512 Hz. All information regarding the behavior of the patients (stimulus onset, response time, confidence judgements) is available in the derivatives/beh directory as a separate .csv table for each patient.
The data provided were screened in order to remove trials and iEEG channels with high epilepsy-related artifacts.

**Methods**

**Subjects** All participants were patients with pharmacologically intractable epilepsy.

**Apparatus** Recordings were performed at the bedside of the patients using a Micromed recording system. The implantation schema was decided by the medical team solely on the basis of the medical status of the patients.

**Initial setup** The patients sat reclined in their hospital bed, with a laptop and a mouse in front of them. The task was explained to them, and they were instructed to sample the stimuli for as long as required, within 6 s, to form their decision. The confidence judgement scale was explained, and they were explicitly instructed to use the whole confidence scale.

**Task organization**

- The patients first performed a short initial staircase session to titrate difficulty at 70% correct (the staircase procedure was maintained during the task).
- The staircase was followed by the main task, corresponding to the data shared in this dataset.

**Task details** Each trial was initiated by clicking on a ‘start’ button at the bottom of the screen. This click corresponds to the trial onset. After a fixed delay, the stimulus was presented (stimonset). The decision was recorded as soon as the participants started to move the mouse (decision time), and the response was recorded upon clicking on the target button (response/R1). The confidence scale was then displayed after a fixed delay (VAS onset), and the click on the confidence scale was also recorded (R2). After a 500 ms delay, the trial ended (trial offset). For each trial, we also recorded the outcome (correct), the presence of a change of mind (ch_mind) and its timing (rt_chmind), as well as the coherence of the stimulus (stim_int) and the maximum velocity of the computer mouse (vmax).
The identity and timing of all these elements are available in the derivatives/beh directory. ## Dataset Information | Dataset ID | `DS006253` | |----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MetaRDK | | Author (year) | `Goueytes2024` | | Canonical | `MetaRDK` | | Importable as | `DS006253`, `Goueytes2024`, `MetaRDK` | | Year | — | | Authors | Dorian Goueytes, Francois Stockart, Alexis Robin, Lucien Gyger, Martin Rouy, Dominique Hoffmann, Lorella Minotti, Philippe Kahane, Michael Pereira, Nathan Faivre | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006253.v1.0.3](https://doi.org/10.18112/openneuro.ds006253.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006253) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) | [Source URL](https://openneuro.org/datasets/ds006253/versions/1.0.3) | ### Copy-paste BibTeX ```bibtex @dataset{ds006253, title = {MetaRDK}, author = {Dorian Goueytes and Francois Stockart and Alexis Robin and Lucien Gyger and Martin Rouy and Dominique Hoffmann and Lorella Minotti and Philippe Kahane and Michael Pereira and Nathan Faivre}, doi = {10.18112/openneuro.ds006253.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006253.v1.0.3}, } ``` ## Technical Details - Subjects: 23 - Recordings: 201 - Tasks: 4 - Channels: 122 (13), 185 (2), 120, 143, 156, 186, 201, 132 - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Epilepsy - Modality: Visual - Type: Decision-making - Size on disk: 656.2 KB - File count: 201 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006253.v1.0.3 - Source: openneuro - OpenNeuro: [ds006253](https://openneuro.org/datasets/ds006253) - NeMAR: [ds006253](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) ## API Reference Use the `DS006253` class to access
this dataset programmatically. ### *class* eegdash.dataset.DS006253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MetaRDK * **Study:** `ds006253` (OpenNeuro) * **Author (year):** `Goueytes2024` * **Canonical:** `MetaRDK` Also importable as: `DS006253`, `Goueytes2024`, `MetaRDK`. Modality: `ieeg`; Experiment type: `Decision-making`; Subject type: `Epilepsy`. Subjects: 23; recordings: 201; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
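The Notes above state that the user-supplied `query` is AND-ed with the fixed dataset filter and must not contain the key `dataset`. Conceptually, the merge behaves like this simplified sketch (illustrative only; EEGDash's actual merging logic may differ):

```python
# Simplified illustration of merging a user query with the fixed dataset
# filter. This is NOT EEGDash's implementation, just the described contract.
def merge_query(dataset_id, user_query):
    if user_query and "dataset" in user_query:
        raise ValueError("user query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}   # fixed dataset selection
    merged.update(user_query or {})    # AND-ed user filters
    return merged

q = merge_query("ds006253", {"subject": {"$in": ["01", "02"]}})
print(q)  # {'dataset': 'ds006253', 'subject': {'$in': ['01', '02']}}
```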
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006253](https://openneuro.org/datasets/ds006253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006253](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) DOI: [https://doi.org/10.18112/openneuro.ds006253.v1.0.3](https://doi.org/10.18112/openneuro.ds006253.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006253 >>> dataset = DS006253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006253) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006260: eeg dataset, 76 subjects *Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology* Access recordings and metadata through EEGDash. **Citation:** César E. Corona-González, Claudia Rebeca De Stefano-Ramos, Juan Pablo Rosado-Aíza, David I. Ibarra-Zarate, Fabiola R. Gómez-Velázquez, Luz María Alonso-Valerdi (2025). *Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology*. 
[10.18112/openneuro.ds006260.v1.0.1](https://doi.org/10.18112/openneuro.ds006260.v1.0.1) Modality: eeg Subjects: 76 Recordings: 366 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006260 dataset = DS006260(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006260(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006260( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006260, title = {Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology}, author = {César E. Corona-González and Claudia Rebeca De Stefano-Ramos and Juan Pablo Rosado-Aíza and David I. Ibarra-Zarate and Fabiola R. Gómez-Velázquez and Luz María Alonso-Valerdi}, doi = {10.18112/openneuro.ds006260.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006260.v1.0.1}, } ``` ## About This Dataset **README** **Authors** César E. Corona-González, Claudia Rebeca De Stefano-Ramos, Juan Pablo Rosado-Aíza, Fabiola R. Gómez-Velázquez, David I. Ibarra-Zarate, Luz María Alonso-Valerdi **Contact person** César E.
Corona-González: [https://orcid.org/0000-0002-7680-2953](https://orcid.org/0000-0002-7680-2953) [a00833959@tec.mx](mailto:a00833959@tec.mx) **Project name** Psychophysiological data from Mexican children with learning difficulties who strengthen reading and math skills by assistive technology **Year that the project ran** 2023 **Brief overview of the tasks in the experiment** The current dataset consists of psychometric and electrophysiological data from children with reading or math learning difficulties. These data were collected to evaluate improvements in reading or math skills resulting from using an online learning method called Smartick. The psychometric evaluations for children with reading difficulties encompassed: spelling tests, where 1) orthographic and 2) phonological errors were considered, 3) reading speed, expressed in words read per minute, and 4) reading comprehension, where multiple-choice questions were given to the children. The last two parameters were determined according to the [standards from the Ministry of Public Education](https://rarchivoszona33.files.wordpress.com/2012/10/manual_fomento.pdf) (Secretaría de Educación Pública in Spanish) in Mexico. The assessments for the math difficulties group comprised: 1) an assessment of general mathematical knowledge, 2) the hit percentage, and 3) the reaction time from an arithmetical task. Additionally, selective attention and intelligence quotient (IQ) were also evaluated. Then, individuals underwent an EEG experimental paradigm where two conditions were recorded: 1) a 3-minute eyes-open resting state and 2) performing either reading or mathematical activities. EEG recordings from the reading experiment consisted of reading a text aloud and then answering questions about the text. EEG recordings from the math experiment involved solving two blocks of 20 arithmetic operations (addition and subtraction).
Subsequently, each child was randomly subcategorized into 1) the experimental group, who were asked to engage with Smartick for three months, and 2) the control group, who were not involved with the intervention. Once the 3-month period was over, every child was reassessed as described before. **Description of the contents of the dataset** The dataset contains a total of 76 *subjects* **(sub-)**, where two study groups were assessed: 1) *reading difficulties* **(R)** and 2) *math difficulties* **(M)**. Then, each individual was subcategorized as *experimental subgroup* **(e)**, where children committed to engaging with Smartick, or *control subgroup* **(c)**, where they did not get involved with any intervention. Every subject was followed up on for three months. During this period, each subject underwent two EEG sessions, representing the *PRE-intervention* **(ses-1)** and the *POST-intervention* **(ses-2)**. The EEG recordings from the reading difficulties group consisted of a *resting state condition* **(run-1)** and *active reading and reading comprehension activities* **(run-2)**. On the other hand, EEG data from the math difficulties group were collected from a *resting state condition* **(run-1)** and while *solving two blocks of 20 arithmetic operations* **(run-2 and run-3)**. All EEG files were stored in .set format.
The nomenclature and description of filenames are shown below: | Nomenclature | Description | |---|---| | sub- | Subject | | M | Math group | | R | Reading group | | c | Control subgroup | | e | Experimental subgroup | | ses-1 | PRE-intervention | | ses-2 | POST-intervention | | run-1 | EEG for baseline | | run-2 | EEG for reading activity, or the first block of math | | run-3 | EEG for the second block of math | Example: the file *sub-Rc11_ses-1_task-SmartickDataset_run-2_eeg.set* is related to: - The 11th subject from the reading difficulties group, control subgroup (sub-Rc11). - EEG recording from the PRE-intervention (ses-1) while performing the reading activity (run-2). **Independent variables** - Study groups: - Reading difficulties: - Control: children did not follow any intervention. - Experimental: children used the reading program of Smartick for 3 months. - Math difficulties: - Control: children did not follow any intervention. - Experimental: children used the math program of Smartick for 3 months. - Condition: - PRE-intervention: first psychological and electroencephalographic evaluation. - POST-intervention: second psychological and electroencephalographic evaluation. **Dependent variables** - *Psychometric data from the reading difficulties group*: - Orthographic_ERR: number of orthographic errors. - Phonological_ERR: number of phonological errors. - Selective_Attention: score from the selective attention test. - Reading_Speed: reading speed in words per minute. - Comprehension: score on a reading comprehension task. - GROUP: C for the control group, E for the experimental group. - GENDER: M for male, F for female. - AGE: age at the beginning of the study. - IQ: intelligence quotient. - *Psychometric data from the math difficulties group*: - WRAT4: score from the WRAT-4 test. - hits: hits during the EEG acquisition [%]. - RT: reaction time during the EEG acquisition [s].
- Selective_Attention: score from the selective attention test. - GROUP: C for the control group, E for the experimental group. - GENDER: M for male, F for female. - AGE: age at the beginning of the study. - IQ: intelligence quotient. **Psychometric data can be found in the *01_Psychometric_Data.xlsx* file.** - *Engagement percentage within Smartick (only for the experimental group)*: - These values represent the engagement percentage with Smartick. - Students were asked to get involved with the online learning method for 3 months, 5 days a week. - Values greater than 100% denote participants who regularly logged in more than 5 days weekly. **Engagement percentages can be found in the *05_SessionEngagement.xlsx* file.** **Methods** **Subjects** Seventy-six Mexican children between 7 and 13 years old were enrolled in this study. **Information about the recruitment procedure** The sample was recruited through non-profit foundations that support learning and foster care programs. **Apparatus** g.USBamp RESEARCH amplifier. **Initial setup** 1. Explain the task to the participant. 2. Sign informed consent. 3. Set up electrodes. **Task details** The *stimuli* nested folder contains all stimuli employed in the EEG experiments. **Level 1** - Math: images used in the math experiment. - Reading: images used in the reading experiment. **Level 2** - Math: - POST_Operations: arithmetic operations from the POST-intervention. - PRE_Operations: arithmetic operations from the PRE-intervention. - Reading: - POST_Reading1: text 1 and text-related comprehension questions from the POST-intervention. - POST_Reading2: text 2 and text-related comprehension questions from the POST-intervention. - POST_Reading3: text 3 and text-related comprehension questions from the POST-intervention. - PRE_Reading1: text 1 and text-related comprehension questions from the PRE-intervention. - PRE_Reading2: text 2 and text-related comprehension questions from the PRE-intervention.
- PRE_Reading3: text 3 and text-related comprehension questions from the PRE-intervention. **Level 3** - Math: - *Operation01.jpg* to *Operation20.jpg*: arithmetic operations solved during the first block of the math EEG experiment. - *Operation21.jpg* to *Operation40.jpg*: arithmetic operations solved during the second block of the math EEG experiment. - *Experiment_Start.jpeg*: start of the experiment. - *Experiment_End.jpeg*: end of the experiment. - *End_Block_1.jpeg*: break between blocks. - Reading: - Q1.png: first question. - Q2.png: second question. - Q3.png: third question. - Reading1, Reading2, or Reading3: texts from the reading EEG experiment. **The files *3. Reading_Tags.xlsx* and *4. Math_Tags.xlsx* provide the following information:** - Order: number for better event accommodation. - Event: tag in the EEG file. - Subject: subject identifier. - Intervention: PRE (ses-1) or POST (ses-2). - Reading/Block: task identifier tag. - “R1”, “R2”, and “R3” indicate which reading was assigned to each participant. - “1” and “2” refer to the blocks of the math experiment. - Group: control or experimental. - Description: event tag meaning. - Question shown (PRE): there are no event tags for questions in the PRE-intervention EEG; question sequences were registered manually. **Experimental location** Tecnologico de Monterrey. Av. Eugenio Garza Sada 2501 Sur, Tecnologico, 64849 Monterrey, N.L., Mexico. **Missing data** The file *2. EEG Data Descriptor.xlsx* describes the data availability for EEG recordings. Some cells are highlighted to indicate specific situations: - Yellow cells mean that the quality of the EEG signals is inconsistent; these EEG files were nevertheless included for EEG preprocessing purposes, but EEG events for these files were not exported. - Red cells imply that data are unavailable due to technical problems during the experiment’s run and saving.
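The filename nomenclature described above can be unpacked mechanically. A hypothetical helper (not part of EEGDash) for this dataset's `.set` filenames:

```python
import re

# Hypothetical parser for the sub-<group><subgroup><nn>_ses-<s>_task-..._run-<r>
# nomenclature described in the README. Not an EEGDash API.
PATTERN = re.compile(
    r"sub-(?P<group>[MR])(?P<subgroup>[ce])(?P<number>\d+)"
    r"_ses-(?P<session>\d)_task-(?P<task>[A-Za-z0-9]+)_run-(?P<run>\d)_eeg\.set"
)

def parse_name(filename):
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unrecognised filename: {filename}")
    info = m.groupdict()
    info["group"] = {"M": "math", "R": "reading"}[info["group"]]
    info["subgroup"] = {"c": "control", "e": "experimental"}[info["subgroup"]]
    info["session"] = {"1": "PRE", "2": "POST"}[info["session"]]
    return info

print(parse_name("sub-Rc11_ses-1_task-SmartickDataset_run-2_eeg.set"))
```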
**Notes** EEG signals were filtered online with a 0.1-100 Hz bandpass filter. The electrode O2 showed technical issues during the EEG experiment. The authors suggest interpolating this electrode. ## Dataset Information | Dataset ID | `DS006260` | |----------------|------------| | Title | Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology | | Author (year) | `CoronaGonzalez2025` | | Canonical | — | | Importable as | `DS006260`, `CoronaGonzalez2025` | | Year | 2025 | | Authors | César E. Corona-González, Claudia Rebeca De Stefano-Ramos, Juan Pablo Rosado-Aíza, David I. Ibarra-Zarate, Fabiola R. Gómez-Velázquez, Luz María Alonso-Valerdi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006260.v1.0.1](https://doi.org/10.18112/openneuro.ds006260.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006260) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) · [Source URL](https://openneuro.org/datasets/ds006260) | ### Copy-paste BibTeX ```bibtex @dataset{ds006260, title = {Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology}, author = {César E. Corona-González and Claudia Rebeca De Stefano-Ramos and Juan Pablo Rosado-Aíza and David I. Ibarra-Zarate and Fabiola R.
Gómez-Velázquez and Luz María Alonso-Valerdi}, doi = {10.18112/openneuro.ds006260.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006260.v1.0.1}, } ``` ## Technical Details - Subjects: 76 - Recordings: 366 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 - Duration (hours): 23.128793402777777 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 2.7 GB - File count: 366 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006260.v1.0.1 - Source: openneuro - OpenNeuro: [ds006260](https://openneuro.org/datasets/ds006260) - NeMAR: [ds006260](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) ## API Reference Use the `DS006260` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology * **Study:** `ds006260` (OpenNeuro) * **Author (year):** `CoronaGonzalez2025` * **Canonical:** — Also importable as: `DS006260`, `CoronaGonzalez2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 76; recordings: 366; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006260](https://openneuro.org/datasets/ds006260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006260](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) DOI: [https://doi.org/10.18112/openneuro.ds006260.v1.0.1](https://doi.org/10.18112/openneuro.ds006260.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006260 >>> dataset = DS006260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006260) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006269: eeg dataset, 24 subjects *Tethered EEG Recordings in Syngap1 rats* Access recordings and metadata through EEGDash. 
**Citation:** Lucy Pritchard, Ingrid Buller-Peralta, Sally M Till, Peter C Kind, Alfredo Gonzalez-Sulser (2025). *Tethered EEG Recordings in Syngap1 rats*. [10.18112/openneuro.ds006269.v1.0.0](https://doi.org/10.18112/openneuro.ds006269.v1.0.0) Modality: eeg Subjects: 24 Recordings: 40 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006269 dataset = DS006269(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006269(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006269( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006269, title = {Tethered EEG Recordings in Syngap1 rats}, author = {Lucy Pritchard and Ingrid Buller-Peralta and Sally M Till and Peter C Kind and Alfredo Gonzalez-Sulser}, doi = {10.18112/openneuro.ds006269.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006269.v1.0.0}, } ``` ## About This Dataset This dataset consists of 6-hour-long EEG recordings in wildtype (WT) and Syngap+/Δ−GAP (HET) rats (male, 12-16 weeks old), recorded from zeitgeber time (ZT) 3 to 9 (under a 12 hr light:12 hr dark schedule with lights on at 07:00 am). Two 6-hour recording files are associated with each rat, except for those that only underwent one recording session (S7020, S7025, S7030, S7031, S7032, S7039, S7040, S7041).
Recordings were acquired with an OpenEphys acquisition system (OpenEphys, Portugal) and head-mounted 32-channel EEG array probe (H32-EEG—NeuroNexus, USA) with accelerometers (NeuroNexus, USA), at a sampling rate of 1 kHz. For more detailed methods, please see our associated publication doi: 10.1016/j.celrep.2024.114733. ## Dataset Information | Dataset ID | `DS006269` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Tethered EEG Recordings in Syngap1 rats | | Author (year) | `Pritchard2025` | | Canonical | — | | Importable as | `DS006269`, `Pritchard2025` | | Year | 2025 | | Authors | Lucy Pritchard, Ingrid Buller-Peralta, Sally M Till, Peter C Kind, Alfredo Gonzalez-Sulser | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006269.v1.0.0](https://doi.org/10.18112/openneuro.ds006269.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006269) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) | [Source URL](https://openneuro.org/datasets/ds006269) | ### Copy-paste BibTeX ```bibtex @dataset{ds006269, title = {Tethered EEG Recordings in Syngap1 rats}, author = {Lucy Pritchard and Ingrid Buller-Peralta and Sally M Till and Peter C Kind and Alfredo Gonzalez-Sulser}, doi = {10.18112/openneuro.ds006269.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006269.v1.0.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 40 - Tasks: 2 - Channels: 33 - Sampling rate (Hz): 1000.0 - Duration (hours): 240.0 - Pathology: Other - Modality: Resting State - Type: Resting-state - Size on disk: 106.7 GB - File count: 40 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006269.v1.0.0 - Source: openneuro - OpenNeuro: [ds006269](https://openneuro.org/datasets/ds006269) - NeMAR: [ds006269](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) ## API 
Reference Use the `DS006269` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006269(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tethered EEG Recordings in Syngap1 rats * **Study:** `ds006269` (OpenNeuro) * **Author (year):** `Pritchard2025` * **Canonical:** — Also importable as: `DS006269`, `Pritchard2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Other`. Subjects: 24; recordings: 40; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
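The Technical Details above (40 recordings, 6 hours each, sampled at 1 kHz) can be cross-checked with back-of-envelope arithmetic; a quick sketch:

```python
# Back-of-envelope check against the Technical Details listed above.
sfreq_hz = 1000.0          # sampling rate (Hz)
n_recordings = 40
hours_per_recording = 6

total_hours = n_recordings * hours_per_recording
samples_per_recording = int(hours_per_recording * 3600 * sfreq_hz)

print(total_hours)            # 240 -> matches "Duration (hours): 240.0"
print(samples_per_recording)  # 21600000 samples per 6-hour file
```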
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006269](https://openneuro.org/datasets/ds006269) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006269](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) DOI: [https://doi.org/10.18112/openneuro.ds006269.v1.0.0](https://doi.org/10.18112/openneuro.ds006269.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006269 >>> dataset = DS006269(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006269) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006317: eeg dataset, 2 subjects *Chisco-2.0* Access recordings and metadata through EEGDash. **Citation:** Zihan Zhang, Yu Bao, Tianyi Jiang, Xiao Ding, Xia Liang, Juntong Du, Yi Zhao, Kai Xiong, Bing Qin, Ting Liu (2025). *Chisco-2.0*. 
[10.18112/openneuro.ds006317.v1.1.0](https://doi.org/10.18112/openneuro.ds006317.v1.1.0) Modality: eeg Subjects: 2 Recordings: 64 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006317 dataset = DS006317(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006317(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006317( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006317, title = {Chisco-2.0}, author = {Zihan Zhang and Yu Bao and Tianyi Jiang and Xiao Ding and Xia Liang and Juntong Du and Yi Zhao and Kai Xiong and Bing Qin and Ting Liu}, doi = {10.18112/openneuro.ds006317.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006317.v1.1.0}, } ``` ## About This Dataset This dataset is an imagined speech dataset with two participants, identified as sub-01 and sub-02. The dataset includes raw data in EDF format. More information can be found at [https://github.com/baoyudu/How-Far-to-NISN](https://github.com/baoyudu/How-Far-to-NISN). **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K.
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS006317` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Chisco-2.0 | | Author (year) | `Zhang2025_Chisco_2_0` | | Canonical | `Chisco2_0`, `Chisco20`, `CHISCO20` | | Importable as | `DS006317`, `Zhang2025_Chisco_2_0`, `Chisco2_0`, `Chisco20`, `CHISCO20` | | Year | 2025 | | Authors | Zihan Zhang, Yu Bao, Tianyi Jiang, Xiao Ding, Xia Liang, Juntong Du, Yi Zhao, Kai Xiong, Bing Qin, Ting Liu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006317.v1.1.0](https://doi.org/10.18112/openneuro.ds006317.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006317) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) | [Source URL](https://openneuro.org/datasets/ds006317) | ### Copy-paste BibTeX ```bibtex @dataset{ds006317, title = {Chisco-2.0}, author = {Zihan Zhang and Yu Bao and Tianyi Jiang and Xiao Ding and Xia Liang and Juntong Du and Yi Zhao and Kai Xiong and Bing Qin and Ting Liu}, doi = {10.18112/openneuro.ds006317.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006317.v1.1.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 64 - Tasks: 2 - Channels: 127 - Sampling rate (Hz): 1000.0 - Duration (hours): 62.12170444444445 - Pathology: Healthy - Modality: — - Type: Motor - Size on disk: 52.9 GB - File count: 64 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006317.v1.1.0 - Source: openneuro - OpenNeuro: [ds006317](https://openneuro.org/datasets/ds006317) - NeMAR: [ds006317](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) ## API Reference Use the `DS006317` class to 
access this dataset programmatically. ### *class* eegdash.dataset.DS006317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco-2.0 * **Study:** `ds006317` (OpenNeuro) * **Author (year):** `Zhang2025_Chisco_2_0` * **Canonical:** `Chisco2_0`, `Chisco20`, `CHISCO20` Also importable as: `DS006317`, `Zhang2025_Chisco_2_0`, `Chisco2_0`, `Chisco20`, `CHISCO20`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 64; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
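Per the Notes, `query` only accepts fields listed in `ALLOWED_QUERY_FIELDS`. A simplified sketch of that kind of field validation (the field set shown is illustrative, not EEGDash's actual list, and the real validation logic may differ):

```python
# Illustrative validation of query fields against an allowed set.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}  # hypothetical

def validate_query(query):
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    return query

validate_query({"task": "imagine", "subject": {"$in": ["01", "02"]}})  # ok
try:
    validate_query({"dataset": "ds006317"})
except ValueError as e:
    print(e)  # unsupported query fields: ['dataset']
```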
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006317](https://openneuro.org/datasets/ds006317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006317](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) DOI: [https://doi.org/10.18112/openneuro.ds006317.v1.1.0](https://doi.org/10.18112/openneuro.ds006317.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006317 >>> dataset = DS006317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006317) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006334: meg dataset, 30 subjects *Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories* Access recordings and metadata through EEGDash. **Citation:** Biau E, Wang D, Park H, Jensen O, Hanslmayr S (2025). *Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories*. 
[10.18112/openneuro.ds006334.v1.0.0](https://doi.org/10.18112/openneuro.ds006334.v1.0.0) Modality: meg Subjects: 30 Recordings: 128 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006334 dataset = DS006334(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006334(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006334( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006334, title = {Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories}, author = {Biau E and Wang D and Park H and Jensen O and Hanslmayr S}, doi = {10.18112/openneuro.ds006334.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006334.v1.0.0}, } ``` ## About This Dataset General information: This repository contains the raw MEG data, T1-weighted anatomical scans, and the corresponding behavioural logfiles, as well as the scripts used to perform the analyses and produce the results reported in the manuscript: Biau, E., Wang, D., Park, H., Jensen, O., & Hanslmayr, S. (2025). Neocortical and hippocampal theta oscillations track audiovisual integration and replay of speech memories. Journal of Neuroscience, 45(21). Task overview: The experimental paradigm consisted of repeated blocks, each composed of three successive tasks: encoding, distractor, and retrieval. 1) Encoding: participants were presented with a series of audiovisual speech movies and performed an audiovisual synchrony detection task.
Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by a random synchronous or asynchronous audiovisual speech movie (5 s). After the movie ended, participants indicated, as quickly and accurately as possible, whether the video and sound had been presented in synchrony or asynchrony by pressing the index-finger (synchronous) or middle-finger (asynchronous) button of the response device. The next trial started after the participant’s response. 2) Distractor: After encoding, participants performed a short distractor task. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by a random number (from 1 to 99) displayed at the center of the screen. Participants were instructed to determine, as quickly and accurately as possible, whether the number was odd or even by pressing the index-finger (odd) or middle-finger (even) button. Each distractor task contained 20 trials; its sole purpose was to clear working memory. 3) Retrieval: After the distractor task, participants performed a retrieval task to assess their memory. Each trial started with a brief fixation cross (jittered duration, 1,000–1,500 ms) followed by a static frame depicting the face of a speaker from a movie attended during the preceding encoding. During this visual cue (5 s), participants were instructed to recall as accurately as possible all auditory information previously associated with that speaker’s speech during the movie presentation. At the end of the visual cue, participants could listen to two auditory speech stimuli: one corresponded to the speaker’s auditory speech from the same movie (i.e., matching); the other was taken from another random movie with a speaker of the same gender (i.e., non-matching).
Participants listened to each stimulus sequentially by pressing the index-finger (Speech 1) or middle-finger (Speech 2) button of the response device. The order of listening was free, but on every trial participants were allowed to hear each auditory stimulus only once, to avoid speech restudy. At the end of the second auditory stimulus, participants were instructed to determine, as quickly and accurately as possible, which auditory speech stimulus corresponded to the speaker’s face frame, again by pressing the index-finger (Speech 1) or middle-finger (Speech 2) button. The next retrieval trial started after the participant’s response. After the last retrieval trial, participants took a short break before starting a new block (encoding–distractor–retrieval). Events and corresponding trigger values in the raw .fif MEG data: each participant underwent a single session; runs 1–5 are simply chunks of that continuous recording, split automatically by the acquisition software. Audiovisual movie onset [1]; Visual cue onset [2]; Speech 1 onset [4]; Speech 2 onset [8]; Probe response key press [16]; Movie Localiser onset [32]; Sound Localiser onset [64]. Some participants have associated individual T1w anatomical scans; others do not.
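The trigger values above can be collected into a lookup for event handling. The sketch below is illustrative only (the `TRIGGERS` dict and `decode_trigger` helper are not part of EEGDash or the dataset files); since the codes are powers of two, a bitwise check also handles any combined values:

```python
# Trigger codes as listed in the dataset description above.
# Illustrative helper, not part of EEGDash: maps a stim-channel
# trigger value to human-readable event labels.
TRIGGERS = {
    1: "audiovisual movie onset",
    2: "visual cue onset",
    4: "speech 1 onset",
    8: "speech 2 onset",
    16: "probe response key press",
    32: "movie localiser onset",
    64: "sound localiser onset",
}

def decode_trigger(value: int) -> list[str]:
    """Decode a trigger value into event labels (bitwise, in case codes combine)."""
    return [label for bit, label in TRIGGERS.items() if value & bit]

print(decode_trigger(16))  # ['probe response key press']
```

With MNE-Python, the same mapping can be passed (inverted to label-to-code form) as the `event_id` argument when constructing `mne.Epochs` from events found on the stimulus channel of each run.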
## Dataset Information | Dataset ID | `DS006334` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories | | Author (year) | `Biau2025` | | Canonical | — | | Importable as | `DS006334`, `Biau2025` | | Year | 2025 | | Authors | Biau E, Wang D, Park H, Jensen O, Hanslmayr S | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006334.v1.0.0](https://doi.org/10.18112/openneuro.ds006334.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006334) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) | [Source URL](https://openneuro.org/datasets/ds006334) | ### Copy-paste BibTeX ```bibtex @dataset{ds006334, title = {Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories}, author = {Biau E and Wang D and Park H and Jensen O and Hanslmayr S}, doi = {10.18112/openneuro.ds006334.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006334.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 128 - Tasks: 1 - Channels: 331 (74), 332 (54) - Sampling rate (Hz): 1000.0 - Duration (hours): 36.74722222222222 - Pathology: Healthy - Modality: Multisensory - Type: Memory - Size on disk: 166.2 GB - File count: 128 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006334.v1.0.0 - Source: openneuro - OpenNeuro: [ds006334](https://openneuro.org/datasets/ds006334) - NeMAR: [ds006334](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) ## API Reference Use the `DS006334` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006334(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories * **Study:** `ds006334` (OpenNeuro) * **Author (year):** `Biau2025` * **Canonical:** — Also importable as: `DS006334`, `Biau2025`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006334](https://openneuro.org/datasets/ds006334) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006334](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) DOI: [https://doi.org/10.18112/openneuro.ds006334.v1.0.0](https://doi.org/10.18112/openneuro.ds006334.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006334 >>> dataset = DS006334(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006334) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006366: eeg dataset, 92 subjects *Mouse Sleep Staging Validation dataset (MSSV)* Access recordings and metadata through EEGDash. **Citation:** Laura Rose, Alexander Neergaard Zahid, Javier García Ciudad, Christine Egebjerg, Louise Piilgaard, Frederikke Lynge Sørensen, Mie Andersen, Tessa Radovanovic, Anastasia Tsopanidou, Maiken Nedergaard, Sébastien Arthaud, Renato Maciel, Christelle Peyron, Chiara Berteotti, Viviana Lo Martire, Alessandro Silvani, Giovanna Zoccoli, Micaela Borsa, Antoine Adamantidis, Morten Mørup, Birgitte Rahbek Kornum (2025). *Mouse Sleep Staging Validation dataset (MSSV)*. 
[10.18112/openneuro.ds006366.v1.0.1](https://doi.org/10.18112/openneuro.ds006366.v1.0.1) Modality: eeg Subjects: 92 Recordings: 148 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006366 dataset = DS006366(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006366(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006366( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006366, title = {Mouse Sleep Staging Validation dataset (MSSV)}, author = {Laura Rose and Alexander Neergaard Zahid and Javier García Ciudad and Christine Egebjerg and Louise Piilgaard and Frederikke Lynge Sørensen and Mie Andersen and Tessa Radovanovic and Anastasia Tsopanidou and Maiken Nedergaard and Sébastien Arthaud and Renato Maciel and Christelle Peyron and Chiara Berteotti and Viviana Lo Martire and Alessandro Silvani and Giovanna Zoccoli and Micaela Borsa and Antoine Adamantidis and Morten Mørup and Birgitte Rahbek Kornum}, doi = {10.18112/openneuro.ds006366.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006366.v1.0.1}, } ``` ## About This Dataset **Mouse Sleep Staging Validation dataset (MSSV)** This dataset contains EEG recordings with sleep scores from 92 healthy mice. The recordings and sleep scores were collected from five different labs: - Department of Biomedical and Neuromotor Sciences, Università di Bologna, Italy. - Center for Translational Neuroscience, University of Copenhagen, Denmark. 
- Department of Neuroscience, University of Copenhagen, Denmark. - Zentrum für Experimentelle Neurologie, Department of Neurology, Inselspital University Hospital Bern, Bern, Switzerland. - Lyon Neuroscience Research Center, Lyon, France. **Overview** - The dataset contains EEG data from healthy mice in both dark and light phases. - All recordings contain at least one EEG and one EMG channel. The number of EEG channels available varies between labs (see labs.tsv). - The dataset is formatted according to the Brain Imaging Data Structure. See the ‘dataset_description.json’ file for the specific BIDS version used. The recordings are stored in .EDF format. - This dataset has been used and is fully described in [https://doi.org/10.1093/sleepadvances/zpaf025](https://doi.org/10.1093/sleepadvances/zpaf025). **Methods** - There are 92 healthy mice from 5 different labs; each mouse was scored by a single sleep expert. See participants.tsv for the lab each mouse belongs to. - All recordings have been downsampled to 128 Hz and are stored in SI units (volts). The scoring epoch length is 4 seconds.
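Given the 128 Hz sampling rate and 4-second scoring epochs stated above, each epoch spans 512 samples. As a minimal sketch (the `to_epochs` helper is illustrative, not part of EEGDash), a continuous signal can be segmented for staging like this:

```python
import numpy as np

SFREQ = 128                            # Hz, per the dataset description
EPOCH_SEC = 4                          # scoring epoch length in seconds
SAMPLES_PER_EPOCH = SFREQ * EPOCH_SEC  # 512 samples per epoch

def to_epochs(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D signal into non-overlapping 4 s scoring epochs,
    dropping any trailing partial epoch."""
    n = (signal.size // SAMPLES_PER_EPOCH) * SAMPLES_PER_EPOCH
    return signal[:n].reshape(-1, SAMPLES_PER_EPOCH)

one_minute = np.zeros(SFREQ * 60)   # one minute of placeholder EEG
print(to_epochs(one_minute).shape)  # (15, 512)
```

In practice the signal array would come from a loaded recording (e.g. `raw.get_data()` in MNE-Python) rather than the placeholder used here.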
**Contact** For questions regarding this dataset, contact Birgitte Rahbek Kornum, [kornum@sund.ku.dk](mailto:kornum@sund.ku.dk) ## Dataset Information | Dataset ID | `DS006366` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mouse Sleep Staging Validation dataset (MSSV) | | Author (year) | `Rose2025` | | Canonical | `MSSV` | | Importable as | `DS006366`, `Rose2025`, `MSSV` | | Year | 2025 | | Authors | Laura Rose, Alexander Neergaard Zahid, Javier García Ciudad, Christine Egebjerg, Louise Piilgaard, Frederikke Lynge Sørensen, Mie Andersen, Tessa Radovanovic, Anastasia Tsopanidou, Maiken Nedergaard, Sébastien Arthaud, Renato Maciel, Christelle Peyron, Chiara Berteotti, Viviana Lo Martire, Alessandro Silvani, Giovanna Zoccoli, Micaela Borsa, Antoine Adamantidis, Morten Mørup, Birgitte Rahbek Kornum | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006366.v1.0.1](https://doi.org/10.18112/openneuro.ds006366.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006366) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) | [Source URL](https://openneuro.org/datasets/ds006366) | ### Copy-paste BibTeX ```bibtex @dataset{ds006366, title = {Mouse Sleep Staging Validation dataset (MSSV)}, author = {Laura Rose and Alexander Neergaard Zahid and Javier García Ciudad and Christine Egebjerg and Louise Piilgaard and Frederikke Lynge Sørensen and Mie Andersen and Tessa Radovanovic and Anastasia Tsopanidou and Maiken Nedergaard and Sébastien Arthaud and Renato Maciel and Christelle Peyron and Chiara Berteotti and Viviana Lo Martire and Alessandro Silvani 
and Giovanna Zoccoli and Micaela Borsa and Antoine Adamantidis and Morten Mørup and Birgitte Rahbek Kornum}, doi = {10.18112/openneuro.ds006366.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006366.v1.0.1}, } ``` ## Technical Details - Subjects: 92 - Recordings: 148 - Tasks: 1 - Channels: 3 (71), 2 (43), 5 (34) - Sampling rate (Hz): 128.0 - Duration (hours): 2163.2877777777776 - Pathology: Healthy - Modality: Sleep - Type: Sleep - Size on disk: 6.1 GB - File count: 148 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006366.v1.0.1 - Source: openneuro - OpenNeuro: [ds006366](https://openneuro.org/datasets/ds006366) - NeMAR: [ds006366](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) ## API Reference Use the `DS006366` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006366(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mouse Sleep Staging Validation dataset (MSSV) * **Study:** `ds006366` (OpenNeuro) * **Author (year):** `Rose2025` * **Canonical:** `MSSV` Also importable as: `DS006366`, `Rose2025`, `MSSV`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 92; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006366](https://openneuro.org/datasets/ds006366) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006366](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) DOI: [https://doi.org/10.18112/openneuro.ds006366.v1.0.1](https://doi.org/10.18112/openneuro.ds006366.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006366 >>> dataset = DS006366(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006366) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006367: eeg dataset, 52 subjects *Memory Reactivation Levels Remain Unaffected by Anticipated Interference* Access recordings and metadata through EEGDash. **Citation:** xx (2025). *Memory Reactivation Levels Remain Unaffected by Anticipated Interference*. 
[10.18112/openneuro.ds006367.v1.0.1](https://doi.org/10.18112/openneuro.ds006367.v1.0.1) Modality: eeg Subjects: 52 Recordings: 52 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006367 dataset = DS006367(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006367(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006367( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006367, title = {Memory Reactivation Levels Remain Unaffected by Anticipated Interference}, author = {xx}, doi = {10.18112/openneuro.ds006367.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006367.v1.0.1}, } ``` ## About This Dataset In this memory interference task, each trial began with a space key press, followed by a fixation dot for 1000–1500 ms. Then, two lateral objects appeared: a cued target (learned) and an irrelevant novel one. Participants memorized the cued object. After a 1400 ms delay, interference objects appeared briefly on half the blocks; otherwise, only fixation. A probe then showed two objects vertically aligned, and participants selected the target using arrow keys. Feedback followed the response, showing the correct object with color-coded text for 1000 ms. 
The preprocessing steps used to produce this dataset are explained in the following preprint and the OSF repository mentioned there: xx (Experiment 1) ## Dataset Information | Dataset ID | `DS006367` | |----------------|----------------| | Title | Memory Reactivation Levels Remain Unaffected by Anticipated Interference | | Author (year) | `DS6367_Memory_Reactivation` | | Canonical | — | | Importable as | `DS006367`, `DS6367_Memory_Reactivation` | | Year | 2025 | | Authors | xx | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006367.v1.0.1](https://doi.org/10.18112/openneuro.ds006367.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006367) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) | [Source URL](https://openneuro.org/datasets/ds006367) | ### Copy-paste BibTeX ```bibtex @dataset{ds006367, title = {Memory Reactivation Levels Remain Unaffected by Anticipated Interference}, author = {xx}, doi = {10.18112/openneuro.ds006367.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006367.v1.0.1}, } ``` ## Technical Details - Subjects: 52 - Recordings: 52 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 1000.0 - Duration (hours): 68.17161055555555 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 27.8 GB - File count: 52 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006367.v1.0.1 - Source: openneuro - OpenNeuro: [ds006367](https://openneuro.org/datasets/ds006367) - NeMAR: [ds006367](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) ## API Reference Use the `DS006367` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference * **Study:** `ds006367` (OpenNeuro) * **Author (year):** `DS6367_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006367`, `DS6367_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006367](https://openneuro.org/datasets/ds006367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006367](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) DOI: [https://doi.org/10.18112/openneuro.ds006367.v1.0.1](https://doi.org/10.18112/openneuro.ds006367.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006367 >>> dataset = DS006367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006367) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006370: eeg dataset, 56 subjects *Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset* Access recordings and metadata through EEGDash. **Citation:** X (2025). *Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset*. 
[10.18112/openneuro.ds006370.v1.0.1](https://doi.org/10.18112/openneuro.ds006370.v1.0.1) Modality: eeg Subjects: 56 Recordings: 56 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006370 dataset = DS006370(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006370(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006370( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006370, title = {Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset}, author = {X}, doi = {10.18112/openneuro.ds006370.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006370.v1.0.1}, } ``` ## About This Dataset Each trial began with a space key press, followed by a fixation dot for 500–650 ms. Then, 2 or 6 lateral objects appeared for 1250 ms. A central cue indicated which object(s) to memorize. The other side had irrelevant objects for visual balance. After a 1000 ms delay, half the blocks showed 4 distractors (dual task), where participants identified a same-category object among them. The other half showed only fixation (single task). A colored dot gave feedback on dual task accuracy. Then, a probe showed 2 objects, and participants selected the cued one using arrow keys. Feedback followed, showing the correct object with colored cues. 
The preprocessing steps used to produce this dataset are explained in the following preprint and the OSF repository mentioned there: xx (Experiment 2) ## Dataset Information | Dataset ID | `DS006370` | |----------------|----------------| | Title | Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset | | Author (year) | `DS6370_Memory_Reactivation` | | Canonical | — | | Importable as | `DS006370`, `DS6370_Memory_Reactivation` | | Year | 2025 | | Authors | X | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006370.v1.0.1](https://doi.org/10.18112/openneuro.ds006370.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006370) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) | [Source URL](https://openneuro.org/datasets/ds006370) | ### Copy-paste BibTeX ```bibtex @dataset{ds006370, title = {Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset}, author = {X}, doi = {10.18112/openneuro.ds006370.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006370.v1.0.1}, } ``` ## Technical Details - Subjects: 56 - Recordings: 56 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 1000.0 - Duration (hours): 95.32295833333336 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 40.1 GB - File count: 56 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006370.v1.0.1 - Source: openneuro - OpenNeuro: [ds006370](https://openneuro.org/datasets/ds006370) - NeMAR: [ds006370](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) ## API Reference Use the `DS006370` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset * **Study:** `ds006370` (OpenNeuro) * **Author (year):** `DS6370_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006370`, `DS6370_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 56; recordings: 56; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006370](https://openneuro.org/datasets/ds006370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006370](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) DOI: [https://doi.org/10.18112/openneuro.ds006370.v1.0.1](https://doi.org/10.18112/openneuro.ds006370.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006370 >>> dataset = DS006370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006370) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006374: eeg dataset, 36 subjects *Expectation effects on repetition suppression in nociception* Access recordings and metadata through EEGDash. **Citation:** Lisa-Marie Pohle, Moritz Nickel, Birgit Nierula, Markus Ploner, Ulrike Horn, Falk Eippert (2025). *Expectation effects on repetition suppression in nociception*. 
[10.18112/openneuro.ds006374.v1.0.0](https://doi.org/10.18112/openneuro.ds006374.v1.0.0) Modality: eeg Subjects: 36 Recordings: 358 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006374 dataset = DS006374(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006374(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006374( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006374, title = {Expectation effects on repetition suppression in nociception}, author = {Lisa-Marie Pohle and Moritz Nickel and Birgit Nierula and Markus Ploner and Ulrike Horn and Falk Eippert}, doi = {10.18112/openneuro.ds006374.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006374.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS006374` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Expectation effects on repetition suppression in nociception | | Author (year) | `Pohle2025` | | Canonical | `Pohle2019` | | Importable as | `DS006374`, `Pohle2025`, `Pohle2019` | | Year | 2025 | | Authors | Lisa-Marie Pohle, Moritz Nickel, Birgit Nierula, Markus Ploner, Ulrike Horn, Falk Eippert | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006374.v1.0.0](https://doi.org/10.18112/openneuro.ds006374.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006374) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) | [Source URL](https://openneuro.org/datasets/ds006374) | ### Copy-paste BibTeX ```bibtex @dataset{ds006374, title = {Expectation effects on repetition suppression in nociception}, author = {Lisa-Marie Pohle and Moritz Nickel and Birgit Nierula and Markus Ploner and Ulrike Horn and Falk Eippert}, doi = {10.18112/openneuro.ds006374.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006374.v1.0.0}, } ``` ## Technical Details - Subjects: 36 - Recordings: 358 - Tasks: 2 - Channels: 35 - Sampling rate (Hz): 2000.0 - Duration (hours): 31.611037916666668 - Pathology: Healthy - Modality: Tactile - Type: Perception - Size on disk: 31.1 GB - File count: 358 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006374.v1.0.0 - Source: openneuro - OpenNeuro: [ds006374](https://openneuro.org/datasets/ds006374) - NeMAR: [ds006374](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) ## API Reference Use the `DS006374` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation effects on repetition suppression in nociception * **Study:** `ds006374` (OpenNeuro) * **Author (year):** `Pohle2025` * **Canonical:** `Pohle2019` Also importable as: `DS006374`, `Pohle2025`, `Pohle2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 36; recordings: 358; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
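Per the Notes, each item is a recording and recording-level metadata are exposed via `dataset.description`. A stdlib sketch of summarizing such metadata, using an invented stand-in list (subject and task values are hypothetical) instead of a live download:

```python
from collections import Counter

# Invented stand-in for recording-level metadata: one entry per recording.
# DS006374 has 358 recordings over 36 subjects and 2 tasks; three entries
# suffice to illustrate the grouping.
records = [
    {"subject": "01", "task": "taskA"},
    {"subject": "01", "task": "taskB"},
    {"subject": "02", "task": "taskA"},
]
recordings_per_subject = Counter(r["subject"] for r in records)
print(dict(recordings_per_subject))
```

With a real dataset, the same tally would come from iterating `dataset.description` rather than a hand-built list.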
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006374](https://openneuro.org/datasets/ds006374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006374](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) DOI: [https://doi.org/10.18112/openneuro.ds006374.v1.0.0](https://doi.org/10.18112/openneuro.ds006374.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006374 >>> dataset = DS006374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006374) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006377: fnirs dataset, 115 subjects *InclusionStudy* Access recordings and metadata through EEGDash. **Citation:** Meryem A. Yücel, Jessica E. Anderson, De’Ja Rogers, Parisa Hajirahimi, Parya Farzam, Yuanyuan Gao, Rini I. Kaplan, Emily J. Braun, Nishaat Mukadam, Sudan Duwadi, Laura Carlton, David Beeler, Lindsay K. Butler, Erin Carpenter, Jaimie Girnis, John Wilson, Vaibhav Tripathi, Yiwen Zhang, Bettina Sorger, Alexander von Lühmann, David C. Somers, Alice Cronin-Golomb, Swathi Kiran, Terry D. Ellis, David A. Boas (2025). *InclusionStudy*. 
[10.18112/openneuro.ds006377.v1.0.2](https://doi.org/10.18112/openneuro.ds006377.v1.0.2) Modality: fnirs Subjects: 115 Recordings: 690 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006377 dataset = DS006377(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006377(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006377( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006377, title = {InclusionStudy}, author = {Meryem A. Yücel and Jessica E. Anderson and De'Ja Rogers and Parisa Hajirahimi and Parya Farzam and Yuanyuan Gao and Rini I. Kaplan and Emily J. Braun and Nishaat Mukadam and Sudan Duwadi and Laura Carlton and David Beeler and Lindsay K. Butler and Erin Carpenter and Jaimie Girnis and John Wilson and Vaibhav Tripathi and Yiwen Zhang and Bettina Sorger and Alexander von Lühmann and David C. Somers and Alice Cronin-Golomb and Swathi Kiran and Terry D. Ellis and David A. Boas}, doi = {10.18112/openneuro.ds006377.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006377.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006377` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | InclusionStudy | | Author (year) | `Yucel2025_InclusionStudy` | | Canonical | — | | Importable as | `DS006377`, `Yucel2025_InclusionStudy` | | Year | 2025 | | Authors | Meryem A. Yücel, Jessica E. Anderson, De’Ja Rogers, Parisa Hajirahimi, Parya Farzam, Yuanyuan Gao, Rini I. Kaplan, Emily J. Braun, Nishaat Mukadam, Sudan Duwadi, Laura Carlton, David Beeler, Lindsay K. Butler, Erin Carpenter, Jaimie Girnis, John Wilson, Vaibhav Tripathi, Yiwen Zhang, Bettina Sorger, Alexander von Lühmann, David C. Somers, Alice Cronin-Golomb, Swathi Kiran, Terry D. Ellis, David A. Boas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006377.v1.0.2](https://doi.org/10.18112/openneuro.ds006377.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006377) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) | [Source URL](https://openneuro.org/datasets/ds006377) | ### Copy-paste BibTeX ```bibtex @dataset{ds006377, title = {InclusionStudy}, author = {Meryem A. Yücel and Jessica E. Anderson and De'Ja Rogers and Parisa Hajirahimi and Parya Farzam and Yuanyuan Gao and Rini I. Kaplan and Emily J. Braun and Nishaat Mukadam and Sudan Duwadi and Laura Carlton and David Beeler and Lindsay K. Butler and Erin Carpenter and Jaimie Girnis and John Wilson and Vaibhav Tripathi and Yiwen Zhang and Bettina Sorger and Alexander von Lühmann and David C. Somers and Alice Cronin-Golomb and Swathi Kiran and Terry D. Ellis and David A. 
Boas}, doi = {10.18112/openneuro.ds006377.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006377.v1.0.2}, } ``` ## Technical Details - Subjects: 115 - Recordings: 690 - Tasks: 6 - Channels: 52 - Sampling rate (Hz): 10.172526041666666 (643), 10.172526041666664 (24), 10.172526041666668 (21), 10.172526043256445 (2) - Duration (hours): Not calculated - Pathology: Not specified - Modality: Motor - Type: Motor - Size on disk: 1.4 GB - File count: 690 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006377.v1.0.2 - Source: openneuro - OpenNeuro: [ds006377](https://openneuro.org/datasets/ds006377) - NeMAR: [ds006377](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) ## API Reference Use the `DS006377` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006377(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InclusionStudy * **Study:** `ds006377` (OpenNeuro) * **Author (year):** `Yucel2025_InclusionStudy` * **Canonical:** — Also importable as: `DS006377`, `Yucel2025_InclusionStudy`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 115; recordings: 690; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006377](https://openneuro.org/datasets/ds006377) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006377](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) DOI: [https://doi.org/10.18112/openneuro.ds006377.v1.0.2](https://doi.org/10.18112/openneuro.ds006377.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006377 >>> dataset = DS006377(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006377) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006386: eeg dataset, 30 subjects *PhysioMotion_Artifact* Access recordings and metadata through EEGDash. **Citation:** Jiangwei Yu, Aonan He (2025). *PhysioMotion_Artifact*. 
[10.18112/openneuro.ds006386.v1.0.1](https://doi.org/10.18112/openneuro.ds006386.v1.0.1) Modality: eeg Subjects: 30 Recordings: 180 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006386 dataset = DS006386(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006386(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006386( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006386, title = {PhysioMotion_Artifact}, author = {Jiangwei Yu and Aonan He}, doi = {10.18112/openneuro.ds006386.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006386.v1.0.1}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS006386` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PhysioMotion_Artifact | | Author (year) | `Yu2025` | | Canonical | `Yu2019` | | Importable as | `DS006386`, `Yu2025`, `Yu2019` | | Year | 2025 | | Authors | Jiangwei Yu, Aonan He | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006386.v1.0.1](https://doi.org/10.18112/openneuro.ds006386.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006386) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) | [Source URL](https://openneuro.org/datasets/ds006386) | ### Copy-paste BibTeX ```bibtex @dataset{ds006386, title = {PhysioMotion_Artifact}, author = {Jiangwei Yu and Aonan He}, doi = {10.18112/openneuro.ds006386.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006386.v1.0.1}, } ``` ## Technical Details - Subjects: 30 - Recordings: 180 - Tasks: 1 - Channels: 59 - Sampling rate (Hz): 1000.0 - Duration (hours): 57.99939444444445 - Pathology: Healthy - Modality: Motor - Type: Other - Size on disk: 23.0 GB - File count: 180 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006386.v1.0.1 - Source: openneuro - OpenNeuro: [ds006386](https://openneuro.org/datasets/ds006386) - NeMAR: [ds006386](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) ## API Reference Use the `DS006386` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006386(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioMotion_Artifact * **Study:** `ds006386` (OpenNeuro) * **Author (year):** `Yu2025` * **Canonical:** `Yu2019` Also importable as: `DS006386`, `Yu2025`, `Yu2019`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
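The Notes mention that `query` only supports fields in `ALLOWED_QUERY_FIELDS`. A toy allow-list check sketches that behavior; the field set below is invented for illustration, and the real set is defined inside EEGDash:

```python
# Invented allow-list; EEGDash's real ALLOWED_QUERY_FIELDS may differ.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run", "modality"}

def check_query_fields(query):
    """Reject queries that reference fields outside the allow-list."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")

# Allowed fields pass silently; unknown fields raise.
check_query_fields({"subject": {"$in": ["01", "02"]}, "task": "rest"})
```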
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006386](https://openneuro.org/datasets/ds006386) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006386](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) DOI: [https://doi.org/10.18112/openneuro.ds006386.v1.0.1](https://doi.org/10.18112/openneuro.ds006386.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006386 >>> dataset = DS006386(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006386) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006392: ieeg dataset, 1 subject *HED schema library for SCORE annotations example* Access recordings and metadata through EEGDash. **Citation:** Tal Pal Attia, Kay Robbins, Dora Hermes (2025). *HED schema library for SCORE annotations example*. 
[10.18112/openneuro.ds006392.v1.0.1](https://doi.org/10.18112/openneuro.ds006392.v1.0.1) Modality: ieeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006392 dataset = DS006392(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006392(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006392( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006392, title = {HED schema library for SCORE annotations example}, author = {Tal Pal Attia and Kay Robbins and Dora Hermes}, doi = {10.18112/openneuro.ds006392.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006392.v1.0.1}, } ``` ## About This Dataset **BIDS example with HED-SCORE schema library annotations** The HED schema library for the Standardized Computer-based Organized Reporting of EEG (SCORE) can be used to add annotations for BIDS datasets. The annotations are machine readable and validated with the BIDS and HED validators. This example is related to the following preprint: Dora Hermes, Tal Pal Attia, Sándor Beniczky, Jorge Bosch-Bayard, Arnaud Delorme, Brian Nils Lundstrom, Christine Rogers, Stefan Rampp, Seyed Yahya Shirazi, Dung Truong, Pedro Valdes-Sosa, Greg Worrell, Scott Makeig, Kay Robbins. Hierarchical Event Descriptor library schema for EEG data annotation. arXiv preprint arXiv:2310.15173. 2024 Oct 27. **General information** This BIDS example dataset includes iEEG data from one subject that were measured during clinical photic stimulation. 
Intracranial EEG data were collected at Mayo Clinic Rochester, MN under IRB#: 15-006530. **Events** The events are annotated according to the HED-SCORE schema library. Data are annotated by adding a column for annotations in the \_events.tsv. The levels and annotations in this column are defined in the \_events.json sidecar as HED tags. **More information** HED: [https://www.hedtags.org/](https://www.hedtags.org/) HED schema library for SCORE: [https://github.com/hed-standard/hed-schema-library](https://github.com/hed-standard/hed-schema-library) **Contact** Dora Hermes: [hermes.dora@mayo.edu](mailto:hermes.dora@mayo.edu) ## Dataset Information | Dataset ID | `DS006392` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HED schema library for SCORE annotations example | | Author (year) | `Attia2025` | | Canonical | `Hermes2024` | | Importable as | `DS006392`, `Attia2025`, `Hermes2024` | | Year | 2025 | | Authors | Tal Pal Attia, Kay Robbins, Dora Hermes | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006392.v1.0.1](https://doi.org/10.18112/openneuro.ds006392.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006392) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) | [Source URL](https://openneuro.org/datasets/ds006392) | ### Copy-paste BibTeX ```bibtex @dataset{ds006392, title = {HED schema library for SCORE annotations example}, author = {Tal Pal Attia and Kay Robbins and Dora Hermes}, doi = {10.18112/openneuro.ds006392.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006392.v1.0.1}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 166 - Sampling rate (Hz): 512.0 - Duration (hours): 0.2403428819444444 - Pathology: Not specified - Modality: Visual - Type: Perception - Size on disk: 32.0 MB - File count: 1 - Format: 
BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006392.v1.0.1 - Source: openneuro - OpenNeuro: [ds006392](https://openneuro.org/datasets/ds006392) - NeMAR: [ds006392](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) ## API Reference Use the `DS006392` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HED schema library for SCORE annotations example * **Study:** `ds006392` (OpenNeuro) * **Author (year):** `Attia2025` * **Canonical:** `Hermes2024` Also importable as: `DS006392`, `Attia2025`, `Hermes2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
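The annotation scheme described above (a categorical `_events.tsv` column whose levels are mapped to HED tags in the `_events.json` sidecar) can be read with the standard library alone. The file contents below are invented for illustration; the real column names and tag strings ship with the dataset:

```python
import csv
import io
import json

# Invented excerpts: a categorical events column plus a JSON sidecar that
# maps its levels to HED tag strings, as described in the dataset README.
events_tsv = "onset\tduration\ttrial_type\n12.5\t10.0\tphotic\n"
events_json = '{"trial_type": {"HED": {"photic": "Sensory-event, Visual-presentation"}}}'

rows = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))
sidecar = json.loads(events_json)

# Resolve the first event's level to its HED annotation via the sidecar.
hed_for_level = sidecar["trial_type"]["HED"]
tag = hed_for_level[rows[0]["trial_type"]]
print(tag)
```

On the real dataset you would open the recording's `_events.tsv` and `_events.json` files from the BIDS tree instead of the inline strings.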
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006392](https://openneuro.org/datasets/ds006392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006392](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) DOI: [https://doi.org/10.18112/openneuro.ds006392.v1.0.1](https://doi.org/10.18112/openneuro.ds006392.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006392 >>> dataset = DS006392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006392) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006394: eeg dataset, 33 subjects *Electrophysiological markers of surprise-induced failures of visual and auditory awareness* Access recordings and metadata through EEGDash. **Citation:** En-Lin Leong, Yun Da Chua, Takashi Obana, Christopher L. Asplund (2025). *Electrophysiological markers of surprise-induced failures of visual and auditory awareness*. 
[10.18112/openneuro.ds006394.v1.0.3](https://doi.org/10.18112/openneuro.ds006394.v1.0.3) Modality: eeg Subjects: 33 Recordings: 60 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006394 dataset = DS006394(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006394(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006394( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006394, title = {Electrophysiological markers of surprise-induced failures of visual and auditory awareness}, author = {En-Lin Leong and Yun Da Chua and Takashi Obana and Christopher L. Asplund}, doi = {10.18112/openneuro.ds006394.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006394.v1.0.3}, } ``` ## About This Dataset This is the dataset for Leong et al. (in prep). 33 participants completed both a visual and auditory surprise task in counterbalanced order. Methodological details are contained in the manuscript. Certain participants were excluded at various stages of the analyses. Their data and event lists are included up to the stage of processing that their data reached. Due to incorrect settings specific to OpenBCI GUI v5.0.1, indicated EEG values are 24 times larger than what they should be. The units (also specified in the channels.tsv files) are thus in microvolts / 24. 
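The scaling caveat above can be corrected after loading. The sketch below divides a synthetic array by 24 with NumPy; with an MNE `Raw` object obtained through EEGDash (call `raw.load_data()` first), `raw.apply_function(lambda x: x / 24.0)` would apply the same correction in place. The factor 24 comes from the dataset README; everything else here is illustrative.

```python
import numpy as np

# Per the README, stored values are 24x too large due to an OpenBCI GUI
# v5.0.1 setting, so the on-disk unit is effectively microvolts / 24.
SCALE = 24.0

# Synthetic "inflated" channel data standing in for loaded EEG samples.
data = np.array([[48.0, -24.0], [72.0, 0.0]])
corrected = data / SCALE
print(corrected)
```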
## Dataset Information | Dataset ID | `DS006394` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Electrophysiological markers of surprise-induced failures of visual and auditory awareness | | Author (year) | `Leong2025` | | Canonical | — | | Importable as | `DS006394`, `Leong2025` | | Year | 2025 | | Authors | En-Lin Leong, Yun Da Chua, Takashi Obana, Christopher L. Asplund | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006394.v1.0.3](https://doi.org/10.18112/openneuro.ds006394.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006394) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) | [Source URL](https://openneuro.org/datasets/ds006394) | ### Copy-paste BibTeX ```bibtex @dataset{ds006394, title = {Electrophysiological markers of surprise-induced failures of visual and auditory awareness}, author = {En-Lin Leong and Yun Da Chua and Takashi Obana and Christopher L. Asplund}, doi = {10.18112/openneuro.ds006394.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006394.v1.0.3}, } ``` ## Technical Details - Subjects: 33 - Recordings: 60 - Tasks: 2 - Channels: 16 - Sampling rate (Hz): 125.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 534.8 MB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006394.v1.0.3 - Source: openneuro - OpenNeuro: [ds006394](https://openneuro.org/datasets/ds006394) - NeMAR: [ds006394](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) ## API Reference Use the `DS006394` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006394(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrophysiological markers of surprise-induced failures of visual and auditory awareness * **Study:** `ds006394` (OpenNeuro) * **Author (year):** `Leong2025` * **Canonical:** — Also importable as: `DS006394`, `Leong2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 33; recordings: 60; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006394](https://openneuro.org/datasets/ds006394) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006394](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) DOI: [https://doi.org/10.18112/openneuro.ds006394.v1.0.3](https://doi.org/10.18112/openneuro.ds006394.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006394 >>> dataset = DS006394(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006394) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006434: eeg dataset, 66 subjects *The auditory brainstem response to natural speech is not affected by selective attention* Access recordings and metadata through EEGDash. **Citation:** Thomas J Stoll, Nathan D Vandjelovic, Melissa J Polonenko, Nadja R S Li, Adrian K C Lee, Ross K Maddox (2025). *The auditory brainstem response to natural speech is not affected by selective attention*. 
[10.18112/openneuro.ds006434.v1.2.0](https://doi.org/10.18112/openneuro.ds006434.v1.2.0) Modality: eeg Subjects: 66 Recordings: 118 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006434 dataset = DS006434(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006434(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006434( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006434, title = {The auditory brainstem response to natural speech is not affected by selective attention}, author = {Thomas J Stoll and Nathan D Vandjelovic and Melissa J Polonenko and Nadja R S Li and Adrian K C Lee and Ross K Maddox}, doi = {10.18112/openneuro.ds006434.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds006434.v1.2.0}, } ``` ## About This Dataset **Overview** This is the dataset for our study investigating the effects of selective attention to speech stimuli in the subcortex and cortex, entitled “The auditory brainstem response to natural speech is not affected by selective attention” by Stoll et al. (2025). Please cite our paper if you use our dataset. It contains EEG data for three experiments, detailed in the paper and briefly summarized below. 
### View full README **Overview** This is the dataset for our study investigating the effects of selective attention to speech stimuli in the subcortex and cortex, entitled “The auditory brainstem response to natural speech is not affected by selective attention” by Stoll et al. (2025). Please cite our paper if you use our dataset. It contains EEG data for three experiments, detailed in the paper and briefly summarized below. Code and stimuli to derive the responses are provided in the Dataset folder and on our lab’s github: [https://github.com/maddoxlab/stoll_et_al_selective_attention](https://github.com/maddoxlab/stoll_et_al_selective_attention). Experiment 1 - diotic stimuli (exp1Diotic) This “task” includes EEG data for 28 subjects who listened to 120 trials each (64 s each; total 128 minutes) of two audiobooks - A Wrinkle in Time (female narrator) and The Alchemyst (male narrator). Stimuli were set to 65 dB SPL then summed together to be presented diotically. Subjects sat at a computer desk in a soundproof room. They were instructed to attend to only one narrator on each trial, with cues given before they started the trial and through a fixation dot which remained for the duration of the trial. For details, see the `Details about the experiment` section and refer to our paper. EEG was recorded simultaneously from a 32-channel active montage (to examine cortical responses) and a 2-channel passive bipolar montage (FCz to earlobes, to examine subcortical responses). On a subset of the subjects (1, 3, 4, 7, 8, 9, 10, 11, 12, 13, 16, 18) an additional electrode was placed on the eardrum. Data are split into cortical (active) electrodes and subcortical (passive) electrodes. Since data were collected simultaneously, all electrodes were sampled at 25 kHz. To reduce file size and computation time, the cortical electrodes were downsampled to 1 kHz and the subcortical electrodes were downsampled to 10 kHz. 
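The downsampling described above (a shared 25 kHz acquisition rate reduced to 1 kHz for cortical channels) is integer-factor decimation. A crude NumPy-only sketch of the idea, where a boxcar smoother stands in for the proper anti-alias low-pass filter a real pipeline would use (see the authors' linked repository for their actual code):

```python
import numpy as np

def downsample(x: np.ndarray, factor: int) -> np.ndarray:
    """Crude integer-factor decimation: boxcar smoothing as a stand-in
    anti-alias filter, then keep every `factor`-th sample."""
    kernel = np.ones(factor) / factor
    smoothed = np.convolve(x, kernel, mode="same")
    return smoothed[::factor]

fs = 25_000                                      # shared acquisition rate (Hz)
x = np.sin(2 * np.pi * 10 * np.arange(fs) / fs)  # 1 s of a 10 Hz test signal

cortical = downsample(x, 25)                     # 25 kHz -> 1 kHz
print(len(cortical))  # 1000
```

In practice, `scipy.signal.decimate` or MNE-Python's `raw.resample` perform this with proper filtering; the sketch only illustrates the rate arithmetic.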
Experiment 2 - dichotic stimuli (exp2Dichotic) This “task” contains EEG data for 25 subjects who listened to 60 trials each (64 s each; total 64 minutes) of two audiobooks - A Wrinkle in Time (female narrator) and The Alchemyst (male narrator). Stimuli were set to 65 dB SPL and presented dichotically. Subjects sat at a computer desk in a soundproof room. They were instructed to attend to only one narrator on each trial (indicated by the story name, talker sex, and direction) with cues given before they started the trial and through a fixation dot with an arrow which remained for the duration of the trial. For details, see the `Details about the experiment` section and refer to our paper. The records of individual participant age and sex no longer exist, but overall statistics are reported in the paper. EEG was recorded simultaneously from a 32-channel active montage (to examine cortical responses) and passive electrodes using a bipolar montage, with the noninverting electrode placed on FCz and the inverting electrode on the earlobe, with ground on the forehead. The side the electrode was placed on was counterbalanced across subjects. Experiment 3 - passive listening to stimuli from Forte et al. (exp3Passive) This “task” contains EEG data for 14 subjects who listened to 32 trials each (~117 s each; total ~62 minutes) of four audiobooks - Tales of Troy: Ulysses the Sacker of Cities and The Green Forest Fairy Book narrated by James K. White for the male speech and The Children of Odin and The Adventures of Odysseus and the Tale of Troy narrated by Elizabeth Klett for the female speech. These audiobooks were selected to match the study by Forte et al. (2017), who provided us with the audio files. Stimuli were set to 73 dB SPL then summed together to be presented diotically (i.e., at 76 dB SPL). The stories were paired in the same manner as in Forte et al. (2017). Subjects sat at a computer desk in a soundproof room. 
They were instructed to ignore the audio as best they could and distract themselves by watching silent captioned videos of their choosing or by reading. For details, see the `Details about the experiment` section and refer to our paper. EEG was recorded with passive electrodes using a bipolar montage, with the noninverting electrode placed on FCz and the inverting electrode on the earlobe, with ground on the forehead. **Format** The dataset is formatted according to the EEG Brain Imaging Data Structure. See the `dataset_description.json` file for the specific version used. Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format. **Details about the experiment** For a detailed description of the task, see Stoll et al. (2025) as well as the supplied JSON files. Trigger onset times have already been corrected for the tubing delay of the insert earphones. Trial numbers and more metadata of the events are in each of the ‘\*_eeg_events.tsv’ files, which is sufficient to know which trial corresponded to which chapter and which narrator the subjects were instructed to attend to. As chapters were organized to allow subjects to follow the stories, all subjects had the same trial order in experiments 1 and 2. Story order was randomized in experiment 3, with that information stored in the ‘\*_eeg_events.tsv’ files. 
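Since the trial-to-chapter and trial-to-narrator mappings live in the events `.tsv` files, they can be read with any TSV parser. A stdlib-only sketch; the column names here are hypothetical, so check the dataset's events sidecar JSONs for the real schema:

```python
import csv
import io

# Hypothetical events.tsv excerpt -- the actual columns may differ.
tsv = (
    "onset\tduration\ttrial\tattended_narrator\n"
    "0.0\t64.0\t1\tfemale\n"
    "64.5\t64.0\t2\tmale\n"
)
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# select trials where the subject was cued to attend the female narrator
attended_female = [r["trial"] for r in rows if r["attended_narrator"] == "female"]
print(attended_female)  # ['1']
```

For real files, replace the in-memory string with an open file handle (or use `pandas.read_csv(path, sep="\t")`).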
## Dataset Information | Dataset ID | `DS006434` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The auditory brainstem response to natural speech is not affected by selective attention | | Author (year) | `Stoll2025` | | Canonical | — | | Importable as | `DS006434`, `Stoll2025` | | Year | 2025 | | Authors | Thomas J Stoll, Nathan D Vandjelovic, Melissa J Polonenko, Nadja R S Li, Adrian K C Lee, Ross K Maddox | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006434.v1.2.0](https://doi.org/10.18112/openneuro.ds006434.v1.2.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006434) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) | [Source URL](https://openneuro.org/datasets/ds006434) | ### Copy-paste BibTeX ```bibtex @dataset{ds006434, title = {The auditory brainstem response to natural speech is not affected by selective attention}, author = {Thomas J Stoll and Nathan D Vandjelovic and Melissa J Polonenko and Nadja R S Li and Adrian K C Lee and Ross K Maddox}, doi = {10.18112/openneuro.ds006434.v1.2.0}, url = {https://doi.org/10.18112/openneuro.ds006434.v1.2.0}, } ``` ## Technical Details - Subjects: 66 - Recordings: 118 - Tasks: 5 - Channels: 32 (52), 2 (28), 1 (24), 3 (14) - Sampling rate (Hz): 10000.0 (52), 1000.0 (28), 500.0 (24), 25000.0 (14) - Duration (hours): 263.13554426111114 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 103.0 GB - File count: 118 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006434.v1.2.0 - Source: openneuro - OpenNeuro: [ds006434](https://openneuro.org/datasets/ds006434) - NeMAR: [ds006434](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) ## API Reference Use the `DS006434` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006434(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The auditory brainstem response to natural speech is not affected by selective attention * **Study:** `ds006434` (OpenNeuro) * **Author (year):** `Stoll2025` * **Canonical:** — Also importable as: `DS006434`, `Stoll2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 118; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006434](https://openneuro.org/datasets/ds006434) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006434](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) DOI: [https://doi.org/10.18112/openneuro.ds006434.v1.2.0](https://doi.org/10.18112/openneuro.ds006434.v1.2.0) ### Examples ```pycon >>> from eegdash.dataset import DS006434 >>> dataset = DS006434(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006434) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006437: eeg dataset, 9 subjects *LIGHT Hypnotherapy* Access recordings and metadata through EEGDash. **Citation:** anonymous (2025). *LIGHT Hypnotherapy*. 
[10.18112/openneuro.ds006437.v1.1.0](https://doi.org/10.18112/openneuro.ds006437.v1.1.0) Modality: eeg Subjects: 9 Recordings: 63 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006437 dataset = DS006437(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006437(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006437( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006437, title = {LIGHT Hypnotherapy}, author = {anonymous}, doi = {10.18112/openneuro.ds006437.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006437.v1.1.0}, } ``` ## About This Dataset LIGHT Dataset During these guided imagery hypnotherapy sessions, a hypnotherapist integrates storytelling and deep visualization techniques to encourage personal well-being. This method allows individuals to explore and modify their subconscious narratives, in order to foster changes that resonate through their lives. ### View full README LIGHT Dataset During these guided imagery hypnotherapy sessions, a hypnotherapist integrates storytelling and deep visualization techniques to encourage personal well-being. This method allows individuals to explore and modify their subconscious narratives, in order to foster changes that resonate through their lives. 
This differs from traditional hypnotherapy by tapping into eudaemonic aspects of well-being, emphasizing the pursuit of meaning and self-realization. Two weeks before the first session, participants came to the lab to record baseline resting EEG (session 0). With the exception of sessions 1, 4, and 8, all LIGHT sessions were performed virtually. For sessions 1, 4, and 8, EEG and ECG data were collected. The first phase of each session was guided relaxation or induction followed by prompts to visualize a perfect place of comfort and safety in one's creative imagination. Next, they were prompted to visualize a path or a set of stairs, and asked to walk down that path or stairway for ten steps, counting down from 10 to 1 out loud. They were guided to visualize and describe a chair or seat, followed by a crown. After taking a seat in their imagined chair and putting on the crown, they were encouraged to pause and observe the world they had conjured. Following this brief period of reflection, awareness was drawn to a light or color that emerged from their creative mind. They identified the color of the light and were instructed to imagine that light entering and traveling through their body, filling up each cell as it passed through. The participant was instructed to leave a mental bookmark in this place so that they could come back to it at any time, before removing and storing their crown. At the end of the LIGHT sessions, the participant was asked to rise from their imagined chair or throne and count their way up from one to ten, with the session ending as the participant returned to an awake and alert state at the end of the ten count. Baseline recordings were 5 minutes in length. Hypnotherapy recordings are variable in length, with multiple events indicating hypnotherapist phase transitions. 
## Dataset Information | Dataset ID | `DS006437` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | LIGHT Hypnotherapy | | Author (year) | `DS6437_LIGHT_Hypnotherapy` | | Canonical | — | | Importable as | `DS006437`, `DS6437_LIGHT_Hypnotherapy` | | Year | 2025 | | Authors | anonymous | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006437.v1.1.0](https://doi.org/10.18112/openneuro.ds006437.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006437) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) | [Source URL](https://openneuro.org/datasets/ds006437) | ### Copy-paste BibTeX ```bibtex @dataset{ds006437, title = {LIGHT Hypnotherapy}, author = {anonymous}, doi = {10.18112/openneuro.ds006437.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006437.v1.1.0}, } ``` ## Technical Details - Subjects: 9 - Recordings: 63 - Tasks: 5 - Channels: 64 - Sampling rate (Hz): 256.0 - Duration (hours): 16.798037109375 - Pathology: Healthy - Modality: Auditory - Type: Clinical/Intervention - Size on disk: 4.3 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006437.v1.1.0 - Source: openneuro - OpenNeuro: [ds006437](https://openneuro.org/datasets/ds006437) - NeMAR: [ds006437](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) ## API Reference Use the `DS006437` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006437(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LIGHT Hypnotherapy * **Study:** `ds006437` (OpenNeuro) * **Author (year):** `DS6437_LIGHT_Hypnotherapy` * **Canonical:** — Also importable as: `DS006437`, `DS6437_LIGHT_Hypnotherapy`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 9; recordings: 63; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006437](https://openneuro.org/datasets/ds006437) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006437](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) DOI: [https://doi.org/10.18112/openneuro.ds006437.v1.1.0](https://doi.org/10.18112/openneuro.ds006437.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006437 >>> dataset = DS006437(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006437) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006446: eeg dataset, 29 subjects *Cueing the future to reduce temporal discounting* Access recordings and metadata through EEGDash. **Citation:** Isaac Kinley, Sue Becker (2025). *Cueing the future to reduce temporal discounting*. [10.18112/openneuro.ds006446.v1.0.0](https://doi.org/10.18112/openneuro.ds006446.v1.0.0) Modality: eeg Subjects: 29 Recordings: 29 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006446 dataset = DS006446(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006446(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006446( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006446, title = {Cueing the future to reduce temporal discounting}, author = {Isaac Kinley and Sue Becker}, doi = {10.18112/openneuro.ds006446.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006446.v1.0.0}, } ``` ## About This Dataset EEG study of episodic future thinking and delay discounting, to be described in a forthcoming paper. Briefly, participants described a series of future events and were then cued to think about these events as they made intertemporal choices. They were also asked how vivid their mental imagery of these events was. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS006446` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Cueing the future to reduce temporal discounting | | Author (year) | `Kinley2025` | | Canonical | `Kinley2019` | | Importable as | `DS006446`, `Kinley2025`, `Kinley2019` | | Year | 2025 | | Authors | Isaac Kinley, Sue Becker | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006446.v1.0.0](https://doi.org/10.18112/openneuro.ds006446.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006446) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) | [Source URL](https://openneuro.org/datasets/ds006446) | ### Copy-paste BibTeX ```bibtex @dataset{ds006446, title = {Cueing the future to reduce temporal discounting}, author = {Isaac Kinley and Sue Becker}, doi = {10.18112/openneuro.ds006446.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006446.v1.0.0}, } ``` ## Technical Details - Subjects: 29 - Recordings: 29 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 2048.0 - Duration (hours): 18.029718288845487 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 16.1 GB - File count: 29 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006446.v1.0.0 - Source: openneuro - OpenNeuro: [ds006446](https://openneuro.org/datasets/ds006446) - NeMAR: [ds006446](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) ## API Reference Use the `DS006446` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cueing the future to reduce temporal discounting * **Study:** `ds006446` (OpenNeuro) * **Author (year):** `Kinley2025` * **Canonical:** `Kinley2019` Also importable as: `DS006446`, `Kinley2025`, `Kinley2019`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006446](https://openneuro.org/datasets/ds006446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006446](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) DOI: [https://doi.org/10.18112/openneuro.ds006446.v1.0.0](https://doi.org/10.18112/openneuro.ds006446.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006446 >>> dataset = DS006446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006446) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006459: fnirs dataset, 17 subjects *High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025* Access recordings and metadata through EEGDash. **Citation:** Jessica E. Anderson, Laura Carlton, Sreekanth Kura, Walker J. O’Brien, De’Ja Rogers, Parisa Rahimi, Parya Y. Farzam, Muhammad H. Zaman, David A. Boas, Meryem A. Yücel (2025). *High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025*. 
[10.18112/openneuro.ds006459.v1.0.0](https://doi.org/10.18112/openneuro.ds006459.v1.0.0) Modality: fnirs Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006459 dataset = DS006459(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006459(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006459( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006459, title = {High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025}, author = {Jessica E. Anderson and Laura Carlton and Sreekanth Kura and Walker J. O'Brien and De'Ja Rogers and Parisa Rahimi and Parya Y. Farzam and Muhammad H. Zaman and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds006459.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006459.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS006459` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025 | | Author (year) | `Anderson2025_Sparse` | | Canonical | — | | Importable as | `DS006459`, `Anderson2025_Sparse` | | Year | 2025 | | Authors | Jessica E. Anderson, Laura Carlton, Sreekanth Kura, Walker J. O’Brien, De’Ja Rogers, Parisa Rahimi, Parya Y. Farzam, Muhammad H. Zaman, David A. 
Boas, Meryem A. Yücel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006459.v1.0.0](https://doi.org/10.18112/openneuro.ds006459.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006459) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) | [Source URL](https://openneuro.org/datasets/ds006459) | ### Copy-paste BibTeX ```bibtex @dataset{ds006459, title = {High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025}, author = {Jessica E. Anderson and Laura Carlton and Sreekanth Kura and Walker J. O'Brien and De'Ja Rogers and Parisa Rahimi and Parya Y. Farzam and Muhammad H. Zaman and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds006459.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006459.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - Channels: 120 - Sampling rate (Hz): 24.4140625 (the per-recording values differ only by floating-point rounding) - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 168.8 MB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006459.v1.0.0 - Source: openneuro - OpenNeuro: [ds006459](https://openneuro.org/datasets/ds006459) - NeMAR: [ds006459](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) ## API Reference Use the `DS006459` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006459(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025 * **Study:** `ds006459` (OpenNeuro) * **Author (year):** `Anderson2025_Sparse` * **Canonical:** — Also importable as: `DS006459`, `Anderson2025_Sparse`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006459](https://openneuro.org/datasets/ds006459) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006459](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) DOI: [https://doi.org/10.18112/openneuro.ds006459.v1.0.0](https://doi.org/10.18112/openneuro.ds006459.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006459 >>> dataset = DS006459(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006459) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006460: fnirs dataset, 17 subjects *High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025* Access recordings and metadata through EEGDash. **Citation:** Jessica E. Anderson, Laura Carlton, Sreekanth Kura, Walker J. O’Brien, De’Ja Rogers, Parisa Rahimi, Parya Y. Farzam, Muhammad H. Zaman, David A. Boas, Meryem A. Yücel (2025). *High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025*. [10.18112/openneuro.ds006460.v1.0.0](https://doi.org/10.18112/openneuro.ds006460.v1.0.0) Modality: fnirs Subjects: 17 Recordings: 17 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006460 dataset = DS006460(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006460(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006460( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006460, title = {High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025}, author = {Jessica E. 
Anderson and Laura Carlton and Sreekanth Kura and Walker J. O'Brien and De'Ja Rogers and Parisa Rahimi and Parya Y. Farzam and Muhammad H. Zaman and David A. Boas and Meryem A. Yücel}, doi = {10.18112/openneuro.ds006460.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006460.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS006460` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025 | | Author (year) | `Anderson2025_HD` | | Canonical | — | | Importable as | `DS006460`, `Anderson2025_HD` | | Year | 2025 | | Authors | Jessica E. Anderson, Laura Carlton, Sreekanth Kura, Walker J. O’Brien, De’Ja Rogers, Parisa Rahimi, Parya Y. Farzam, Muhammad H. Zaman, David A. Boas, Meryem A. Yücel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006460.v1.0.0](https://doi.org/10.18112/openneuro.ds006460.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006460) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) | [Source URL](https://openneuro.org/datasets/ds006460) | ### Copy-paste BibTeX ```bibtex @dataset{ds006460, title = {High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025}, author = {Jessica E. Anderson and Laura Carlton and Sreekanth Kura and Walker J. O'Brien and De'Ja Rogers and Parisa Rahimi and Parya Y. Farzam and Muhammad H. Zaman and David A. Boas and Meryem A. 
Yücel}, doi = {10.18112/openneuro.ds006460.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006460.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 17 - Tasks: 1 - Channels: 428 - Sampling rate (Hz): 17.4386 (approximate; the per-recording values differ only by floating-point rounding) - Duration (hours): Not calculated - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 459.7 MB - File count: 17 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006460.v1.0.0 - Source: openneuro - OpenNeuro: [ds006460](https://openneuro.org/datasets/ds006460) - NeMAR: [ds006460](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) ## API Reference Use the `DS006460` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025 * **Study:** `ds006460` (OpenNeuro) * **Author (year):** `Anderson2025_HD` * **Canonical:** — Also importable as: `DS006460`, `Anderson2025_HD`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006460](https://openneuro.org/datasets/ds006460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006460](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) DOI: [https://doi.org/10.18112/openneuro.ds006460.v1.0.0](https://doi.org/10.18112/openneuro.ds006460.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006460 >>> dataset = DS006460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006460) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006465: eeg dataset, 20 subjects *3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech* Access recordings and metadata through EEGDash. **Citation:** Xinyu Ma, Jiang Yi, Ning Jiang (2025). 
*3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech*. [10.18112/openneuro.ds006465.v2.0.0](https://doi.org/10.18112/openneuro.ds006465.v2.0.0) Modality: eeg Subjects: 20 Recordings: 80 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006465 dataset = DS006465(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006465(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006465( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006465, title = {3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech}, author = {Xinyu Ma and Jiang Yi and Ning Jiang}, doi = {10.18112/openneuro.ds006465.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds006465.v2.0.0}, } ``` ## About This Dataset Overview This dataset, named 3M-CPSEED, consists of electroencephalogram (EEG) recordings obtained from 20 participants engaged in imagined speech tasks. 3M-CPSEED holds significant implications for speech neurophysiology research, not only facilitating exploration of neural activity differences across pinyin articulations but also enabling robust transfer learning studies for other alphabetic languages. Data Collection Participants: 20 healthy, right-handed individuals (average age: 24.55 years, standard deviation: 2.58 years; 11 females, 9 males) who are native Chinese speakers. 
Materials: To strike a balance between comprehensively capturing the articulatory features of the Chinese phonological system and maintaining a concise, controllable set of stimuli, we selected this set of Pinyin sounds: Finals: “a, i, u, ü”; Initials: “m, f, j, l, k, ch”. Procedure: Participants produced the Pinyin displayed on a screen during three phases: overt (‘speak’), silently articulated, and imagined. Each participant completed 4 blocks, for 1600 trials in total. Data Structure The dataset is organized according to the BIDS standard: Main Folder: dataset_description.json: Description of the dataset. participants.tsv: Participant information. participants.json: Details of columns in participants.tsv. README: General information about the dataset. data_all.mat: Labeled EEG data of all subjects in MAT format. Derivative Data: preproc/: Preprocessed data, including subfolders for each subject (sub-01, etc.), with data in .mat format. Acknowledgments This work was supported by a 1.3.5 project for disciplines of excellence from West China Hospital (#ZYYC22001).
## Dataset Information | Dataset ID | `DS006465` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech | | Author (year) | `Ma2025` | | Canonical | `CPSEED_3M`, `CPSEED` | | Importable as | `DS006465`, `Ma2025`, `CPSEED_3M`, `CPSEED` | | Year | 2025 | | Authors | Xinyu Ma, Jiang Yi, Ning Jiang | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006465.v2.0.0](https://doi.org/10.18112/openneuro.ds006465.v2.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006465) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) | [Source URL](https://openneuro.org/datasets/ds006465) | ### Copy-paste BibTeX ```bibtex @dataset{ds006465, title = {3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech}, author = {Xinyu Ma and Jiang Yi and Ning Jiang}, doi = {10.18112/openneuro.ds006465.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds006465.v2.0.0}, } ``` ## Technical Details - Subjects: 20 - Recordings: 80 - Tasks: 1 - Channels: 32 (58), 126 (19), 33 (3) - Sampling rate (Hz): 500.0 - Duration (hours): 36.19995555555556 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 8.2 GB - File count: 80 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006465.v2.0.0 - Source: openneuro - OpenNeuro: [ds006465](https://openneuro.org/datasets/ds006465) - NeMAR: [ds006465](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) ## API Reference Use the `DS006465` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006465(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech * **Study:** `ds006465` (OpenNeuro) * **Author (year):** `Ma2025` * **Canonical:** `CPSEED_3M`, `CPSEED` Also importable as: `DS006465`, `Ma2025`, `CPSEED_3M`, `CPSEED`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006465](https://openneuro.org/datasets/ds006465) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006465](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) DOI: [https://doi.org/10.18112/openneuro.ds006465.v2.0.0](https://doi.org/10.18112/openneuro.ds006465.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006465 >>> dataset = DS006465(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006465) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006466: eeg dataset, 66 subjects *HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data* Access recordings and metadata through EEGDash. **Citation:** Andy Jeesu Kim, Santiago Morales, Joshua Senior, Mara Mather (2025). *HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data*. 
[10.18112/openneuro.ds006466.v1.0.1](https://doi.org/10.18112/openneuro.ds006466.v1.0.1) Modality: eeg Subjects: 66 Recordings: 1257 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006466 dataset = DS006466(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006466(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006466( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006466, title = {HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data}, author = {Andy Jeesu Kim and Santiago Morales and Joshua Senior and Mara Mather}, doi = {10.18112/openneuro.ds006466.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006466.v1.0.1}, } ``` ## About This Dataset **Older Adult Resting State and Auditory Oddball Task EEG Data** **What is included** - This dataset includes resting state and auditory oddball task EEG data for two conditions: control and arousal (under threat of unpredictable shock).
**Event labels**

- 100: 5 minutes eyes open resting, control condition, begin
- 101: 5 minutes eyes open resting, control condition, end
- 102: 5 minutes eyes closed resting, control condition, begin
- 103: 5 minutes eyes closed resting, control condition, end
- 104: passive auditory oddball task, control condition, run begin
- 105: passive auditory oddball task, control condition, trial start
- 106: passive auditory oddball task, control condition, standard tone
- 107: passive auditory oddball task, control condition, target tone
- 108: passive auditory oddball task, control condition, distractor tone
- 109: passive auditory oddball task, control condition, trial end
- 110: passive auditory oddball task, control condition, run end
- 111: active auditory oddball task, control condition, run begin
- 112: active auditory oddball task, control condition, trial start
- 113: active auditory oddball task, control condition, standard tone
- 114: active auditory oddball task, control condition, target tone
- 115: active auditory oddball task, control condition, distractor tone
- 116: active auditory oddball task, control condition, start of response period
- 117: active auditory oddball task, control condition, manual button response recorded
- 118: active auditory oddball task, control condition, run end
- 119: 5 minutes eyes open resting, shock condition, begin
- 120: 5 minutes eyes open resting, shock condition, end
- 121: 5 minutes eyes closed resting, shock condition, begin
- 122: 5 minutes eyes closed resting, shock condition, end
- 123: passive auditory oddball task, shock condition, run begin
- 124: passive auditory oddball task, shock condition, trial start
- 125: passive auditory oddball task, shock condition, standard tone
- 126: passive auditory oddball task, shock condition, target tone
- 127: passive auditory oddball task, shock condition, distractor tone
- 128: passive auditory oddball task, shock condition, trial end
- 129: passive auditory oddball task, shock condition, run end
- 130: active auditory oddball task, shock condition, run begin
- 131: active auditory oddball task, shock condition, trial start
- 132: active auditory oddball task, shock condition, standard tone
- 133: active auditory oddball task, shock condition, target tone
- 134: active auditory oddball task, shock condition, distractor tone
- 135: active auditory oddball task, shock condition, start of response period
- 136: active auditory oddball task, shock condition, manual button response recorded
- 137: active auditory oddball task, shock condition, run end

**Citations** Nashiro, K., Yoo, H. J., Cho, C., Kim, A. J., Nasseri, P., Min, J., … & Mather, M. (2024). Heart rate and breathing effects on attention and memory (HeartBEAM): Study protocol for a randomized controlled trial in older adults. Trials, 25(1), 190. Kim, A. J., Morales, S., Senior, J., & Mather, M. (2025). Electroencephalography, pupillometry, and behavioral evidence for locus coeruleus-noradrenaline system related tonic hyperactivity in older adults.
Preprint: doi.org/10.1101/2025.10.02.680040 ## Dataset Information | Dataset ID | `DS006466` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data | | Author (year) | `Kim2025_HeartBEAM_Older_Adult` | | Canonical | `HeartBEAM` | | Importable as | `DS006466`, `Kim2025_HeartBEAM_Older_Adult`, `HeartBEAM` | | Year | 2025 | | Authors | Andy Jeesu Kim, Santiago Morales, Joshua Senior, Mara Mather | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006466.v1.0.1](https://doi.org/10.18112/openneuro.ds006466.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006466) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) | [Source URL](https://openneuro.org/datasets/ds006466) | ### Copy-paste BibTeX ```bibtex @dataset{ds006466, title = {HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data}, author = {Andy Jeesu Kim and Santiago Morales and Joshua Senior and Mara Mather}, doi = {10.18112/openneuro.ds006466.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006466.v1.0.1}, } ``` ## Technical Details - Subjects: 66 - Recordings: 1257 - Tasks: 6 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 131.76201944444446 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 117.5 GB - File count: 1257 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006466.v1.0.1 - Source: openneuro - OpenNeuro: [ds006466](https://openneuro.org/datasets/ds006466) - NeMAR: [ds006466](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) ## API Reference Use the `DS006466` class to access this dataset programmatically. 
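As a programmatic companion to the event labels in the README above, a small lookup can tag the oddball target tones by task and condition, which is handy when epoching. This is an illustrative sketch only; `TARGET_TONES` and `is_target` are names invented for this example and are not part of the EEGDash API:

```python
# Target-tone event codes from the HeartBEAM README, mapped to
# (task, condition). Hypothetical helper, not part of EEGDash.
TARGET_TONES = {
    107: ("passive", "control"),
    114: ("active", "control"),
    126: ("passive", "shock"),
    133: ("active", "shock"),
}

def is_target(code: int) -> bool:
    """True if `code` marks a target tone in any task/condition."""
    return code in TARGET_TONES

# Example: keep only active-task target events from a run of codes.
codes = [111, 112, 113, 114, 113, 114, 118]
active_targets = [c for c in codes if TARGET_TONES.get(c, ("", ""))[0] == "active"]
```

The same pattern extends to standard (106/113/125/132) and distractor (108/115/127/134) tones if a full decoder is needed.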
### *class* eegdash.dataset.DS006466(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006466` (OpenNeuro) * **Author (year):** `Kim2025_HeartBEAM_Older_Adult` * **Canonical:** `HeartBEAM` Also importable as: `DS006466`, `Kim2025_HeartBEAM_Older_Adult`, `HeartBEAM`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 1257; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006466](https://openneuro.org/datasets/ds006466) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006466](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) DOI: [https://doi.org/10.18112/openneuro.ds006466.v1.0.1](https://doi.org/10.18112/openneuro.ds006466.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006466 >>> dataset = DS006466(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006466) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006468: meg dataset, 24 subjects *MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.* Access recordings and metadata through EEGDash. **Citation:** Till Habersetzer, Bernd T. Meyer (2025). *MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.*. 
[10.18112/openneuro.ds006468.v1.1.2](https://doi.org/10.18112/openneuro.ds006468.v1.1.2) Modality: meg Subjects: 24 Recordings: 189 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006468 dataset = DS006468(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006468(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006468( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006468, title = {MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.}, author = {Till Habersetzer and Bernd T. Meyer}, doi = {10.18112/openneuro.ds006468.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds006468.v1.1.2}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography.
Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110 **Description** The MEG-SCANS (Stories, Chirps, And Noisy Sentences) dataset provides raw and MaxFiltered magnetoencephalography (MEG) recordings from 24 German-speaking participants, collected over three months. Each participant engaged in an auditory experiment, listening to approximately one hour of stimuli, including two audiobooks (approx. 20 minutes each), 120 sentences from the Oldenburger Matrix Sentence Test (OLSA) presented at varying speech intelligibility levels (20% to 95%) for Speech Reception Threshold (SRT) assessment, and short up-chirps used for MEG signal quality assessment. For each participant, the dataset comprises raw MEG data, corresponding MaxFiltered data, two empty-room MEG recordings (pre- and post-session), a structural MRI scan of the head, behavioral audiogram and SRT results from hearing screenings, and the corresponding audio stimulus material (audiobooks, envelopes, and chirp stimuli). Auxiliary channels recorded include the left audio channel (MISC001), right audio channel (MISC002), and the instructor’s microphone (MISC007), all sampled at 1000 Hz. Organized according to the Brain Imaging Data Structure (BIDS), this dataset offers a robust benchmark for large-scale encoding/decoding analyses of temporally-resolved brain responses to speech. Note that sub-01 served as a pilot, so its data follow a slightly different experimental design, specifically lacking chirp stimuli and featuring different audiobooks; this variation is accounted for in the provided analysis pipelines. Comprehensive MATLAB and Python code is included alongside the entire analysis pipeline [[https://doi.org/10.5281/zenodo.17397581](https://doi.org/10.5281/zenodo.17397581)] to replicate key data validations, ensuring transparency and reproducibility.
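The auxiliary channels described above follow a fixed naming scheme; a minimal sketch (pure Python; the channel names and their meanings are taken from the dataset description, and in practice the list would come from `raw.ch_names` on the loaded MNE Raw object):

```python
# Auxiliary channel mapping as documented for this dataset:
# MISC001 = left audio, MISC002 = right audio, MISC007 = instructor mic.
AUX_CHANNELS = {
    "MISC001": "left audio channel",
    "MISC002": "right audio channel",
    "MISC007": "instructor microphone",
}

def find_aux_channels(ch_names):
    """Return the documented auxiliary channels present in a recording."""
    return {ch: AUX_CHANNELS[ch] for ch in ch_names if ch in AUX_CHANNELS}

# Example with a mock channel list; in practice pass raw.ch_names.
print(find_aux_channels(["MEG0111", "MISC001", "MISC007"]))
# {'MISC001': 'left audio channel', 'MISC007': 'instructor microphone'}
```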
The dataset is described in an accompanying data descriptor paper [[https://doi.org/10.1038/s41597-025-06397-4](https://doi.org/10.1038/s41597-025-06397-4)]. ## Dataset Information | Dataset ID | `DS006468` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences. | | Author (year) | `Habersetzer2025` | | Canonical | `MEG_SCANS` | | Importable as | `DS006468`, `Habersetzer2025`, `MEG_SCANS` | | Year | 2025 | | Authors | Till Habersetzer, Bernd T. Meyer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006468.v1.1.2](https://doi.org/10.18112/openneuro.ds006468.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006468) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) | [Source URL](https://openneuro.org/datasets/ds006468) | ### Copy-paste BibTeX ```bibtex @dataset{ds006468, title = {MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences.}, author = {Till Habersetzer and Bernd T. Meyer}, doi = {10.18112/openneuro.ds006468.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds006468.v1.1.2}, } ``` ## Technical Details - Subjects: 24 - Recordings: 189 - Tasks: 4 - Channels: 341 (153), 347 (7), 372 (5) - Sampling rate (Hz): 1000.0 - Duration (hours): 21.73856527777778 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 101.2 GB - File count: 189 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006468.v1.1.2 - Source: openneuro - OpenNeuro: [ds006468](https://openneuro.org/datasets/ds006468) - NeMAR: [ds006468](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) ## API Reference Use the `DS006468` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006468(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences. * **Study:** `ds006468` (OpenNeuro) * **Author (year):** `Habersetzer2025` * **Canonical:** `MEG_SCANS` Also importable as: `DS006468`, `Habersetzer2025`, `MEG_SCANS`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 189; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
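The `query` parameter accepts MongoDB-style operators, as noted above; a sketch of composing such a filter before construction (the task label below is a hypothetical placeholder, not a verified value for this dataset; inspect `dataset.description` for the real task names):

```python
def build_query(subjects, task=None):
    """Compose a MongoDB-style filter for an EEGDashDataset subclass."""
    query = {"subject": {"$in": list(subjects)}}
    if task is not None:
        query["task"] = task  # field assumed to be in ALLOWED_QUERY_FIELDS
    return query

query = build_query(["01", "02"], task="audiobook")  # task value is illustrative
print(query)
# {'subject': {'$in': ['01', '02']}, 'task': 'audiobook'}

# Usage (downloads metadata/data, requires network):
# from eegdash.dataset import DS006468
# dataset = DS006468(cache_dir="./data", query=query)
```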
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006468](https://openneuro.org/datasets/ds006468) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006468](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) DOI: [https://doi.org/10.18112/openneuro.ds006468.v1.1.2](https://doi.org/10.18112/openneuro.ds006468.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS006468 >>> dataset = DS006468(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006468) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006480: eeg dataset, 68 subjects *Young Adult Resting State and Auditory Oddball Task EEG Data* Access recordings and metadata through EEGDash. **Citation:** Andy Jeesu Kim, Mara Mather, Santiago Morales, Joshua Senior (2025). *Young Adult Resting State and Auditory Oddball Task EEG Data*. 
[10.18112/openneuro.ds006480.v1.0.1](https://doi.org/10.18112/openneuro.ds006480.v1.0.1) Modality: eeg Subjects: 68 Recordings: 68 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006480 dataset = DS006480(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006480(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006480( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006480, title = {Young Adult Resting State and Auditory Oddball Task EEG Data}, author = {Andy Jeesu Kim and Mara Mather and Santiago Morales and Joshua Senior}, doi = {10.18112/openneuro.ds006480.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006480.v1.0.1}, } ``` ## About This Dataset **Young Adult Resting State and Auditory Oddball Task EEG Data** **What is included** - This dataset includes resting state and auditory oddball task EEG data for two conditions: control and arousal (under threat of unpredictable shock).
**Event labels**

| Code | Event |
|------|-------|
| 100 | 5 minutes eyes open resting, control condition, begin |
| 101 | 5 minutes eyes open resting, control condition, end |
| 102 | 5 minutes eyes closed resting, control condition, begin |
| 103 | 5 minutes eyes closed resting, control condition, end |
| 104 | passive auditory oddball task, control condition, run begin |
| 105 | passive auditory oddball task, control condition, trial start |
| 106 | passive auditory oddball task, control condition, standard tone |
| 107 | passive auditory oddball task, control condition, target tone |
| 108 | passive auditory oddball task, control condition, distractor tone |
| 109 | passive auditory oddball task, control condition, trial end |
| 110 | passive auditory oddball task, control condition, run end |
| 111 | active auditory oddball task, control condition, run begin |
| 112 | active auditory oddball task, control condition, trial start |
| 113 | active auditory oddball task, control condition, standard tone |
| 114 | active auditory oddball task, control condition, target tone |
| 115 | active auditory oddball task, control condition, distractor tone |
| 116 | active auditory oddball task, control condition, start of response period |
| 117 | active auditory oddball task, control condition, manual button response recorded |
| 118 | active auditory oddball task, control condition, run end |
| 119 | 5 minutes eyes open resting, shock condition, begin |
| 120 | 5 minutes eyes open resting, shock condition, end |
| 121 | 5 minutes eyes closed resting, shock condition, begin |
| 122 | 5 minutes eyes closed resting, shock condition, end |
| 123 | passive auditory oddball task, shock condition, run begin |
| 124 | passive auditory oddball task, shock condition, trial start |
| 125 | passive auditory oddball task, shock condition, standard tone |
| 126 | passive auditory oddball task, shock condition, target tone |
| 127 | passive auditory oddball task, shock condition, distractor tone |
| 128 | passive auditory oddball task, shock condition, trial end |
| 129 | passive auditory oddball task, shock condition, run end |
| 130 | active auditory oddball task, shock condition, run begin |
| 131 | active auditory oddball task, shock condition, trial start |
| 132 | active auditory oddball task, shock condition, standard tone |
| 133 | active auditory oddball task, shock condition, target tone |
| 134 | active auditory oddball task, shock condition, distractor tone |
| 135 | active auditory oddball task, shock condition, start of response period |
| 136 | active auditory oddball task, shock condition, manual button response recorded |
| 137 | active auditory oddball task, shock condition, run end |

**Citations**

Nashiro, K., Yoo, H. J., Cho, C., Kim, A. J., Nasseri, P., Min, J., … & Mather, M. (2024). Heart rate and breathing effects on attention and memory (HeartBEAM): Study protocol for a randomized controlled trial in older adults. Trials, 25(1), 190.

Kim, A. J., Morales, S., Senior, J., & Mather, M. (2025). Electroencephalography, pupillometry, and behavioral evidence for locus coeruleus-noradrenaline system related tonic hyperactivity in older adults.
Preprint: doi.org/10.1101/2025.10.02.680040 ## Dataset Information | Dataset ID | `DS006480` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Young Adult Resting State and Auditory Oddball Task EEG Data | | Author (year) | `Kim2025_Young_Adult_Resting` | | Canonical | — | | Importable as | `DS006480`, `Kim2025_Young_Adult_Resting` | | Year | 2025 | | Authors | Andy Jeesu Kim, Mara Mather, Santiago Morales, Joshua Senior | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006480.v1.0.1](https://doi.org/10.18112/openneuro.ds006480.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006480) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) | [Source URL](https://openneuro.org/datasets/ds006480) | ### Copy-paste BibTeX ```bibtex @dataset{ds006480, title = {Young Adult Resting State and Auditory Oddball Task EEG Data}, author = {Andy Jeesu Kim and Mara Mather and Santiago Morales and Joshua Senior}, doi = {10.18112/openneuro.ds006480.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006480.v1.0.1}, } ``` ## Technical Details - Subjects: 68 - Recordings: 68 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 71.66592555555556 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 64.1 GB - File count: 68 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006480.v1.0.1 - Source: openneuro - OpenNeuro: [ds006480](https://openneuro.org/datasets/ds006480) - NeMAR: [ds006480](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) ## API Reference Use the `DS006480` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006480(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Young Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006480` (OpenNeuro) * **Author (year):** `Kim2025_Young_Adult_Resting` * **Canonical:** — Also importable as: `DS006480`, `Kim2025_Young_Adult_Resting`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 68; recordings: 68; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
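The numeric event codes listed in this dataset's README are mirrored between conditions (control codes 100-118, shock codes 119-137, offset by 19); a small illustrative helper (not part of the eegdash API) decoding the tone events, which can be handy when building epoching dictionaries:

```python
# Tone event codes for the control condition, per the dataset README.
# The shock-condition counterpart of each event is the same code + 19.
CONTROL_EVENTS = {
    106: "passive oddball: standard tone",
    107: "passive oddball: target tone",
    108: "passive oddball: distractor tone",
    113: "active oddball: standard tone",
    114: "active oddball: target tone",
    115: "active oddball: distractor tone",
}

def decode_event(code):
    """Return (condition, label) for a tone event code, else None."""
    if code in CONTROL_EVENTS:
        return ("control", CONTROL_EVENTS[code])
    if code - 19 in CONTROL_EVENTS:
        return ("shock", CONTROL_EVENTS[code - 19])
    return None

print(decode_event(126))
# ('shock', 'passive oddball: target tone')
```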
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006480](https://openneuro.org/datasets/ds006480) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006480](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) DOI: [https://doi.org/10.18112/openneuro.ds006480.v1.0.1](https://doi.org/10.18112/openneuro.ds006480.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006480 >>> dataset = DS006480(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006480) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006502: meg dataset, 31 subjects *Skill learning and consolidation in healthy humans* Access recordings and metadata through EEGDash. **Citation:** Bönstrup, M, Buch, ER, Cohen, LG (2025). *Skill learning and consolidation in healthy humans*. 
[10.18112/openneuro.ds006502.v1.0.0](https://doi.org/10.18112/openneuro.ds006502.v1.0.0) Modality: meg Subjects: 31 Recordings: 380 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006502 dataset = DS006502(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006502(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006502( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006502, title = {Skill learning and consolidation in healthy humans}, author = {Bönstrup, M and Buch, ER and Cohen, LG}, doi = {10.18112/openneuro.ds006502.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006502.v1.0.0}, } ``` ## About This Dataset **README** **Contact** For additional information, please contact: Ethan R. Buch ([ethan.buch@nih.gov](mailto:ethan.buch@nih.gov); ORCID: 0000-0002-5443-8222) **Overview** This study was carried out by the Human Cortical Physiology & Neurorehabilitation Section (HCPS) in the NINDS Intramural Research Program. The primary aim of the study was to investigate changes in brain activity associated with early skill learning during an initial training session, overnight skill consolidation and longer-term skill retention. A longitudinal design was used. Skill learning was assessed using the sequential finger tapping task (SFTT).
Participants completed an initial session in which anatomical MRI data were acquired, followed by up to three separate MEG sessions. During the first MEG training session, participants repeatedly typed a 5-item skill sequence (i.e., 4-1-3-2-4) with their non-dominant left hand over 36 practice trials (lasting 10 s each) interleaved with short 10 s rest breaks. Their instructed goal was to type the sequence as fast and as accurately as possible. This first training session was typically followed by up to two separate MEG retest sessions occurring approximately 24 (mean +/- SD: 23.44 +/- 1.32) hours and 30 (29.45 +/- 6.77) days later. During these later MEG sessions, participants were retested on the trained skill (over 9 trials) and 9 different untrained control skill sequences (one trial each). Resting state MEG data were acquired before and after practice blocks during all three MEG sessions. Skill performance, eye gaze and pupillometry, and left wrist flexor/extensor EMG data were also acquired and synchronized with MEG recordings. **Methods** **Ethics Review** All study procedures were approved by the Combined Neuroscience Institutional Review Board of the National Institutes of Health (NIH). **Subjects** Study data were acquired from a total of 31 right-handed, healthy adults (21 females; mean +/- SD age = 26.14 +/- 4.17). All participants provided written informed consent. Verification of clinical status was based upon a comprehensive health history assessment, a physical and neurological examination, and an unremarkable clinical brain MRI scan prior to study data collection. Inclusion criteria: healthy right-handed adults. Exclusion criteria: active musicians were excluded from the study. **Apparatus** T1-weighted high-resolution (1 mm³ isotropic MPRAGE sequence) anatomical brain MRI volumes were acquired for each participant on 3T MRI scanners (GE Excite HDxt and Siemens Skyra) with a standard 32-channel head coil.
Continuous MEG and EMG data were acquired on a CTF-275 system (CTF Systems, Inc.) at a sampling frequency of 600 Hz (60 Hz power line frequency). All recordings were performed in a seated position inside a magnetically shielded room. Head position was determined before and after each scan run using three head localization coils attached to the right and left preauricular and nasion landmarks with adhesive tape. The coil locations were digitized and mapped to the individual participant’s anatomical MRI volume using BrainSight (Rogue Research Inc.). Behavioral stimuli were presented and response data acquired using E-Prime 2 (Psychology Software Tools, Inc.) and the Cedrus LS-Line (Cedrus Corp.) four-key response pad, respectively. Eye gaze and pupillometry data were acquired using the EyeLink 1000 Plus (SR Research Ltd.) eye-tracker and recorded via ADC channel inputs to the CTF-275 system. **Task details** Participants typed a 5-item numerical sequence displayed on a computer screen (41324) as quickly and as accurately as possible, with their non-dominant left hand. No explicit feedback related to performance accuracy or speed was provided. Small asterisks appeared above each sequence item as keypresses were recorded, providing location information to participants during practice. Individual practice trials lasted 10 s each. All practice trials were interleaved with short 10 s rest breaks. The displayed numerical target sequence was replaced with “XXXXX” during the rest breaks.
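The task description above implies a simple behavioral performance measure: the number of correctly completed target sequences per 10 s practice trial. A hedged sketch of that scoring idea (an illustration only, not the authors' analysis code):

```python
def count_correct_sequences(keypresses, target=(4, 1, 3, 2, 4)):
    """Count non-overlapping completions of the target sequence
    (4-1-3-2-4, per the task description) in a keypress stream."""
    n, count, i = len(target), 0, 0
    while i + n <= len(keypresses):
        if tuple(keypresses[i:i + n]) == target:
            count += 1
            i += n  # advance past the completed sequence
        else:
            i += 1
    return count

# Two complete sequences followed by a stray keypress.
print(count_correct_sequences([4, 1, 3, 2, 4, 4, 1, 3, 2, 4, 2]))
# 2
```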
**MEG session design**

MEG1:

1. 6-minute rest scan
2. 12-minute “training” scan (4-1-3-2-4)
3. 6-minute rest scan

MEG2 (approximately 24 hours after the MEG1 session on average; precise inter-session intervals can be found in the participants.tsv file):

1. 6-minute rest scan
2. 3-minute trained sequence “retest” scan (4-1-3-2-4; 36 practice trials lasting 10 s each with 10 s interleaved rest breaks)
3. 6-minute rest scan
4. 3-minute “control” sequence scan (one 10 s trial each of 9 different untrained sequences [2-1-3-4-2, 4-2-4-3-1, 3-4-2-3-1, 1-4-3-4-2, 3-2-4-3-1, 1-4-2-3-1, 3-2-4-2-1, 2-3-1-4-2, 4-2-3-1-4] with 10 s interleaved rest breaks)
5. 6-minute rest scan

MEG3 (approximately 30 days after the MEG1 session on average; precise inter-session intervals can be found in the participants.tsv file):

1. 6-minute rest scan
2. 3-minute trained sequence “retest” scan (4-1-3-2-4; 9 practice trials lasting 10 s each with 10 s interleaved rest breaks)
3. 6-minute rest scan
4. 3-minute “control” sequence scan (one 10 s trial each of 9 different untrained sequences [2-1-3-4-2, 3-1-2-1-4, 1-2-4-3-4, 4-1-3-2-1, 2-3-2-4-1, 3-1-4-3-2, 2-3-1-3-4, 1-2-1-3-4, 4-3-2-4-1] with 10 s interleaved rest breaks)
5. 6-minute rest scan

**Experimental location** All study data were acquired in the Nuclear Magnetic Resonance Facility (NMRF) at the NIH Clinical Center in Bethesda, MD.

**Missing data** Three of the gradiometers were malfunctioning and were not used, resulting in 272 total channels of MEG data. Some participants did not complete the 2nd and 3rd MEG sessions. No keypress responses were recorded for participant “sub-01” on trial 1 of MEG1 training. The MEG recording for participant “sub-10” MEG1 training terminated during the rest break after practice trial 34; no MEG data were recorded for practice trials 35 and 36. No keypress responses were recorded for participant “sub-23” on trial 1 of MEG1 training and trials 1 and 2 of MEG2 retest.
## Dataset Information | Dataset ID | `DS006502` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Skill learning and consolidation in healthy humans | | Author (year) | `Bonstrup2025` | | Canonical | — | | Importable as | `DS006502`, `Bonstrup2025` | | Year | 2025 | | Authors | Bönstrup, M, Buch, ER, Cohen, LG | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006502.v1.0.0](https://doi.org/10.18112/openneuro.ds006502.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006502) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) | [Source URL](https://openneuro.org/datasets/ds006502) | ### Copy-paste BibTeX ```bibtex @dataset{ds006502, title = {Skill learning and consolidation in healthy humans}, author = {Bönstrup, M and Buch, ER and Cohen, LG}, doi = {10.18112/openneuro.ds006502.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006502.v1.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 380 - Tasks: 4 - Channels: 307 (204), 308 (101), 310 (51), 306 (24) - Sampling rate (Hz): 600.0 - Duration (hours): 37.88333333333333 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 95.8 GB - File count: 380 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006502.v1.0.0 - Source: openneuro - OpenNeuro: [ds006502](https://openneuro.org/datasets/ds006502) - NeMAR: [ds006502](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) ## API Reference Use the `DS006502` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Skill learning and consolidation in healthy humans * **Study:** `ds006502` (OpenNeuro) * **Author (year):** `Bonstrup2025` * **Canonical:** — Also importable as: `DS006502`, `Bonstrup2025`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 31; recordings: 380; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006502](https://openneuro.org/datasets/ds006502) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006502](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) DOI: [https://doi.org/10.18112/openneuro.ds006502.v1.0.0](https://doi.org/10.18112/openneuro.ds006502.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006502 >>> dataset = DS006502(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006502) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006519: ieeg dataset, 21 subjects *Dataset of intracranial EEG during cortical stimulations evoking negative motor responses* Access recordings and metadata through EEGDash. **Citation:** Andrei Barborica, Cristina Ghita, Laurentiu Tofan, Irina Oane, Ioana Mindruta (2025). *Dataset of intracranial EEG during cortical stimulations evoking negative motor responses*. 
[10.18112/openneuro.ds006519.v1.0.0](https://doi.org/10.18112/openneuro.ds006519.v1.0.0) Modality: ieeg Subjects: 21 Recordings: 35 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006519 dataset = DS006519(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006519(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006519( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006519, title = {Dataset of intracranial EEG during cortical stimulations evoking negative motor responses}, author = {Andrei Barborica and Cristina Ghita and Laurentiu Tofan and Irina Oane and Ioana Mindruta}, doi = {10.18112/openneuro.ds006519.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006519.v1.0.0}, } ``` ## About This Dataset In this dataset we included iEEG recordings of responses to 41 intracranial high frequency stimulations evoking negative motor responses, in 23 patients undergoing stereo-EEG presurgical evaluation for drug-resistant epilepsy. The dataset contains 24 seconds of iEEG data around each stimulation, 9-10 seconds before the start of the stimulation, up to 5 seconds of intracranial stimulation and 9-10 seconds after the end of the stimulation. Each recording contains two 5 second epochs, pre-stimulation (used as baseline in the connectivity analysis) and post-stimulation. 
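The 24-second recording layout described above (roughly 9-10 s pre-stimulation, up to 5 s of stimulation, 9-10 s post-stimulation) determines the two 5 s analysis windows; a minimal sketch deriving them, assuming stimulation onset/offset times in seconds from the recording start (the authoritative epoch boundaries come from the dataset's events files):

```python
def stim_epochs(stim_onset, stim_offset, epoch_len=5.0):
    """Return the (start, stop) times of the pre-stimulation baseline
    epoch and the post-stimulation epoch, each epoch_len seconds long."""
    pre = (stim_onset - epoch_len, stim_onset)
    post = (stim_offset, stim_offset + epoch_len)
    return pre, post

# Hypothetical timings: stimulation from 9.5 s to 14.5 s into the recording.
pre, post = stim_epochs(stim_onset=9.5, stim_offset=14.5)
print(pre, post)
# (4.5, 9.5) (14.5, 19.5)
```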
We used high-frequency bipolar stimulation of different brain areas, with biphasic pulses of 1 ms duration at a frequency of 43.2 Hz (alternating polarity) or 50 Hz (non-alternating), current intensities between 0.25 and 3 mA, for up to 5 s. The contact pair on which stimulation was applied, the current intensity level, and the evoked effect are specified in the events .tsv files. Not all patients in whom stimulations evoked negative motor responses met the inclusion criteria for network analysis, which requires running the FreeSurfer pipeline (for instance, patients with prior resections); such subjects therefore lack anatomy data and are not included in the dataset. However, they are included in the patient numbering to match the table in the companion manuscript. Contact: [andrei.barborica@fizica.unibuc.ro](mailto:andrei.barborica@fizica.unibuc.ro) ## Dataset Information | Dataset ID | `DS006519` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of intracranial EEG during cortical stimulations evoking negative motor responses | | Author (year) | `Barborica2025` | | Canonical | — | | Importable as | `DS006519`, `Barborica2025` | | Year | 2025 | | Authors | Andrei Barborica, Cristina Ghita, Laurentiu Tofan, Irina Oane, Ioana Mindruta | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006519.v1.0.0](https://doi.org/10.18112/openneuro.ds006519.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006519) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) | [Source URL](https://openneuro.org/datasets/ds006519) | ### Copy-paste BibTeX ```bibtex @dataset{ds006519, title = {Dataset of intracranial EEG during cortical stimulations evoking negative motor responses}, author = {Andrei Barborica and Cristina Ghita and Laurentiu
Tofan and Irina Oane and Ioana Mindruta}, doi = {10.18112/openneuro.ds006519.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006519.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 35 - Tasks: 1 - Channels: 35 (5), 33 (5), 37 (4), 31 (2), 32 (2), 41 (2), 52 (2), 61, 176, 56, 63, 25, 150, 40, 34, 43, 89, 101, 69, 47 - Sampling rate (Hz): 4096.0 (31), 512.0 (4) - Duration (hours): 0.2686111111111111 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 1.0 GB - File count: 35 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006519.v1.0.0 - Source: openneuro - OpenNeuro: [ds006519](https://openneuro.org/datasets/ds006519) - NeMAR: [ds006519](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) ## API Reference Use the `DS006519` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulations evoking negative motor responses * **Study:** `ds006519` (OpenNeuro) * **Author (year):** `Barborica2025` * **Canonical:** — Also importable as: `DS006519`, `Barborica2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 21; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006519](https://openneuro.org/datasets/ds006519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006519](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) DOI: [https://doi.org/10.18112/openneuro.ds006519.v1.0.0](https://doi.org/10.18112/openneuro.ds006519.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006519 >>> dataset = DS006519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006519) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006525: eeg dataset, 34 subjects *Resting EEG* Access recordings and metadata through EEGDash. 
**Citation:** Computational Neuroimaging and Neuroengineering Lab at the University of Oklahoma (2025). *Resting EEG*. [10.18112/openneuro.ds006525.v1.0.0](https://doi.org/10.18112/openneuro.ds006525.v1.0.0) Modality: eeg Subjects: 34 Recordings: 34 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006525 dataset = DS006525(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006525(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006525( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006525, title = {Resting EEG}, author = {Computational Neuroimaging and Neuroengineering Lab at the University of Oklahoma}, doi = {10.18112/openneuro.ds006525.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006525.v1.0.0}, } ``` ## About This Dataset Introduction: The EEG data were recorded using the 128-channel Amps 300 amplifier (Electrical Geodesics Inc., OR, USA) at a sampling frequency of 1000 Hz. EEG acquisition was conducted during resting state. Structural MRI data for the same participants were acquired at the University of Oklahoma Health Science Center (OUHSC) MRI facility using a GE MR750 scanner. The scans were obtained with GE’s BRAVO sequence, with a field of view (FOV) of 240 mm and 180 axial slices per slab. Preprocessing in EEGLAB: After data acquisition, a band-pass filter (0.5–100 Hz) and a notch filter (58–62 Hz) were applied to remove noise.
Noisy channels and artifacts (e.g., from eye blinks, muscle movements, or heartbeats) were identified and removed. Bad channels were replaced using interpolation, and the data were re-referenced to the average of all electrodes. The data were then downsampled to 250 Hz to reduce file size while retaining sufficient detail. No data segments were removed, to preserve the continuity needed for later analysis. ## Dataset Information | Dataset ID | `DS006525` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Resting EEG | | Author (year) | `Neuroimaging2025` | | Canonical | — | | Importable as | `DS006525`, `Neuroimaging2025` | | Year | 2025 | | Authors | Computational Neuroimaging and Neuroengineering Lab at the University of Oklahoma | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006525.v1.0.0](https://doi.org/10.18112/openneuro.ds006525.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006525) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) | [Source URL](https://openneuro.org/datasets/ds006525) | ### Copy-paste BibTeX ```bibtex @dataset{ds006525, title = {Resting EEG}, author = {Computational Neuroimaging and Neuroengineering Lab at the University of Oklahoma}, doi = {10.18112/openneuro.ds006525.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006525.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 34 - Tasks: 1 - Channels: 128 (26), 129 (8) - Sampling rate (Hz): 250.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Resting State - Type: Resting-state - Size on disk: 3.0 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006525.v1.0.0 - Source: openneuro - OpenNeuro: [ds006525](https://openneuro.org/datasets/ds006525) - NeMAR:
[ds006525](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) ## API Reference Use the `DS006525` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006525(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting EEG * **Study:** `ds006525` (OpenNeuro) * **Author (year):** `Neuroimaging2025` * **Canonical:** — Also importable as: `DS006525`, `Neuroimaging2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006525](https://openneuro.org/datasets/ds006525) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006525](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) DOI: [https://doi.org/10.18112/openneuro.ds006525.v1.0.0](https://doi.org/10.18112/openneuro.ds006525.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006525 >>> dataset = DS006525(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006525) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006545: fnirs dataset, 49 subjects *Reliability-Dubois2024* Access recordings and metadata through EEGDash. **Citation:** Kernel (2025). *Reliability-Dubois2024*. 
[10.18112/openneuro.ds006545.v1.0.0](https://doi.org/10.18112/openneuro.ds006545.v1.0.0) Modality: fnirs Subjects: 49 Recordings: 98 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006545 dataset = DS006545(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006545(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006545( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006545, title = {Reliability-Dubois2024}, author = {Kernel}, doi = {10.18112/openneuro.ds006545.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006545.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006545` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Reliability-Dubois2024 | | Author (year) | `ReliabilityDubois2024` | | Canonical | `Dubois2024` | | Importable as | `DS006545`, `ReliabilityDubois2024`, `Dubois2024` | | Year | 2025 | | Authors | Kernel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006545.v1.0.0](https://doi.org/10.18112/openneuro.ds006545.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006545) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) | [Source URL](https://openneuro.org/datasets/ds006545) | ### Copy-paste BibTeX ```bibtex @dataset{ds006545, title = {Reliability-Dubois2024}, author = {Kernel}, doi = {10.18112/openneuro.ds006545.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006545.v1.0.0}, } ``` ## Technical Details - Subjects: 49 - Recordings: 98 - Tasks: 1 - Channels: 6180 (2), 6498 (2), 8340 (2), 3678 (2), 6708 (2), 12180 (2), 6990, 6696, 5400, 6282, 4614, 6432, 11682, 4170, 9558, 5640, 7890, 4752, 7794, 5076, 12300, 5676, 3552, 12738, 8730, 9012, 5280, 14520, 7524, 16266, 14592, 15288, 9966, 8874, 11094, 5568, 9276, 3630, 13014, 7932, 3570, 4278, 5256, 7464, 6060, 11142, 6126, 12468, 4194, 16086, 6768, 6744, 15354, 5190, 10224, 6930, 14820, 5862, 13494, 8250, 4866, 5130, 4986, 7332, 4626, 3792, 10458, 4530, 6522, 14142, 8646, 4062, 4122, 8082, 4734, 7596, 16122, 7044, 16464, 5766, 8832, 4116, 4098, 8592, 3900, 4764, 5082, 7800, 4308, 9180, 10254, 9426 - Sampling rate (Hz): ≈3.7594 (nominal; the 98 per-recording values range from ~3.75933 to ~3.75939 and differ only from the fifth decimal place onward) - Duration (hours): Not calculated - Pathology: Not specified - Modality: Auditory - Type: — - Size on disk: 46.7 GB - File count: 98 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006545.v1.0.0 - Source: openneuro - OpenNeuro: [ds006545](https://openneuro.org/datasets/ds006545) - NeMAR: [ds006545](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) ## API
Reference Use the `DS006545` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reliability-Dubois2024 * **Study:** `ds006545` (OpenNeuro) * **Author (year):** `ReliabilityDubois2024` * **Canonical:** `Dubois2024` Also importable as: `DS006545`, `ReliabilityDubois2024`, `Dubois2024`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 49; recordings: 98; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
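The per-recording sampling rates of this fNIRS dataset cluster tightly around ≈3.7594 Hz, differing only in the far decimals (hardware clock jitter). When grouping recordings by rate, rounding before comparing floats avoids spurious one-member groups; a small self-contained sketch using a few of the listed values:

```python
from collections import Counter

# Synthetic sample of the slightly jittered per-recording rates listed above.
rates = [
    3.7593757537230927,
    3.759380549030849,
    3.7593369412842033,
    3.759384668748248,
]

# Group by rate rounded to 3 decimals instead of comparing raw floats.
groups = Counter(round(r, 3) for r in rates)
print(groups)  # Counter({3.759: 4})
```

Exact float comparison would report four distinct rates here; rounding recovers the single nominal clock rate.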
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006545](https://openneuro.org/datasets/ds006545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006545](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) DOI: [https://doi.org/10.18112/openneuro.ds006545.v1.0.0](https://doi.org/10.18112/openneuro.ds006545.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006545 >>> dataset = DS006545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006545) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006547: eeg dataset, 31 subjects *Visual EEG Study (BrainVision → BIDS)* Access recordings and metadata through EEGDash. **Citation:** Sanaz Ghaffari, Arian Yavari, Sara Bonyadian, Arsalan Ghofrani, Russell Butler (2025). *Visual EEG Study (BrainVision → BIDS)*. 
[10.18112/openneuro.ds006547.v1.0.0](https://doi.org/10.18112/openneuro.ds006547.v1.0.0) Modality: eeg Subjects: 31 Recordings: 31 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006547 dataset = DS006547(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006547(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006547( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006547, title = {Visual EEG Study (BrainVision → BIDS)}, author = {Sanaz Ghaffari and Arian Yavari and Sara Bonyadian and Arsalan Ghofrani and Russell Butler}, doi = {10.18112/openneuro.ds006547.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006547.v1.0.0}, } ``` ## About This Dataset This dataset contains high-density EEG recordings collected during a visual stimulation task. Files are organized according to the EEG-BIDS specification. Raw data are BrainVision (.vhdr/.vmrk/.eeg). Task: visual Session: ses-01 Provenance: Converted from c:/shared/raw_eeg with this helper script. No acquisition-time filters applied (offline preprocessing not included here). 
## Dataset Information | Dataset ID | `DS006547` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual EEG Study (BrainVision → BIDS) | | Author (year) | `Ghaffari2025` | | Canonical | `Ghaffari2024` | | Importable as | `DS006547`, `Ghaffari2025`, `Ghaffari2024` | | Year | 2025 | | Authors | Sanaz Ghaffari, Arian Yavari, Sara Bonyadian, Arsalan Ghofrani, Russell Butler | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006547.v1.0.0](https://doi.org/10.18112/openneuro.ds006547.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006547) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) | [Source URL](https://openneuro.org/datasets/ds006547) | ### Copy-paste BibTeX ```bibtex @dataset{ds006547, title = {Visual EEG Study (BrainVision → BIDS)}, author = {Sanaz Ghaffari and Arian Yavari and Sara Bonyadian and Arsalan Ghofrani and Russell Butler}, doi = {10.18112/openneuro.ds006547.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006547.v1.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 31 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 39.21989444444444 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 17.6 GB - File count: 31 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006547.v1.0.0 - Source: openneuro - OpenNeuro: [ds006547](https://openneuro.org/datasets/ds006547) - NeMAR: [ds006547](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) ## API Reference Use the `DS006547` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006547(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual EEG Study (BrainVision → BIDS) * **Study:** `ds006547` (OpenNeuro) * **Author (year):** `Ghaffari2025` * **Canonical:** `Ghaffari2024` Also importable as: `DS006547`, `Ghaffari2025`, `Ghaffari2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
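For model training on a continuous 64-channel, 500 Hz recording like this one, data are typically cut into fixed-length windows (braindecode automates this step). Below is a plain-NumPy sketch; the 2-s window and 1-s stride are hypothetical choices, not values prescribed by the dataset:

```python
import numpy as np

sfreq, n_channels = 500.0, 64  # values from the Technical Details above
data = np.random.randn(n_channels, int(60 * sfreq))  # 60 s synthetic recording

def sliding_windows(x: np.ndarray, win_s: float, stride_s: float,
                    sfreq: float) -> np.ndarray:
    """Cut a (channels, time) array into overlapping fixed-length windows."""
    win, stride = int(win_s * sfreq), int(stride_s * sfreq)
    starts = range(0, x.shape[1] - win + 1, stride)
    return np.stack([x[:, s:s + win] for s in starts])

# 2-s windows with 1-s stride -> (n_windows, channels, samples_per_window).
windows = sliding_windows(data, win_s=2.0, stride_s=1.0, sfreq=sfreq)
print(windows.shape)  # (59, 64, 1000)
```

The resulting array maps directly onto a PyTorch tensor of shape `(batch, channels, time)` for a braindecode-style model.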
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006547](https://openneuro.org/datasets/ds006547) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006547](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) DOI: [https://doi.org/10.18112/openneuro.ds006547.v1.0.0](https://doi.org/10.18112/openneuro.ds006547.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006547 >>> dataset = DS006547(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006547) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006554: eeg dataset, 47 subjects *Social Observation EEG raw data* Access recordings and metadata through EEGDash. **Citation:** Yaner Su (2025). *Social Observation EEG raw data*. 
[10.18112/openneuro.ds006554.v1.0.0](https://doi.org/10.18112/openneuro.ds006554.v1.0.0) Modality: eeg Subjects: 47 Recordings: 47 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006554 dataset = DS006554(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006554(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006554( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006554, title = {Social Observation EEG raw data}, author = {Yaner Su}, doi = {10.18112/openneuro.ds006554.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006554.v1.0.0}, } ``` ## About This Dataset **README** **WARNING** Below is a template to write a README file for this BIDS dataset. If this message is still present, it means that the person exporting the file has decided not to update the template. If you are the researcher editing this README file, please remove this warning section. The README is usually the starting point for researchers using your data and serves as a guidepost for users of your data. A clear and informative README makes your data much more usable. In general you can include information in the README that is not captured by some other files in the BIDS dataset (dataset_description.json, events.tsv, …). It can also be useful to include information that might already be present in another file of the dataset but might be important for users to be aware of before preprocessing or analysing the data.
If the README gets too long you have the possibility to create a `/doc` folder and add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator. More info here: [https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3](https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3) **Details related to access to the data** - Data user agreement If the dataset requires a data user agreement, link to the relevant information. - Contact person Indicate the name and contact details (email and ORCID) of the person responsible for additional information.
- Practical information to access the data If there is any special information related to access rights or how to download the data, make sure to include it. For example, if the dataset was curated using datalad, make sure to include the relevant section from the datalad handbook: http://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset **Overview** - Project name (if relevant) - Year(s) that the project ran If no `scans.tsv` is included, this could at least cover when the data acquisition started and ended. Local time of day is particularly relevant to subject state. - Brief overview of the tasks in the experiment A paragraph giving an overview of the experiment. This should include the goals or purpose and a discussion about how the experiment tries to achieve these goals. - Description of the contents of the dataset An easy thing to add is the output of the bids-validator that describes what type of data and the number of subjects one can expect to find in the dataset. - Independent variables A brief discussion of condition variables (sometimes called contrasts or independent variables) that were varied across the experiment. - Dependent variables A brief discussion of the response variables (sometimes called the dependent variables) that were measured and/or calculated to assess the effects of varying the condition variables. This might also include questionnaires administered to assess behavioral aspects of the experiment. - Control variables A brief discussion of the control variables, that is, what aspects were explicitly controlled in this experiment. The control variables might include subject pool, environmental conditions, setup, or other things that were explicitly controlled. - Quality assessment of the data Provide a short summary of the quality of the data, ideally with descriptive statistics if relevant, and with a link to a more comprehensive description (like with MRIQC) if possible.
**Methods** **Subjects** A brief sentence about the subject pool in this experiment. Remember that `Control` or `Patient` status should be defined in `participants.tsv` using a group column. - Information about the recruitment procedure - [ ] Subject inclusion criteria (if relevant) - [ ] Subject exclusion criteria (if relevant) **Apparatus** A summary of the equipment and environment setup for the experiment. For example, was the experiment performed in a shielded room with the subject seated in a fixed position? **Initial setup** A summary of what setup was performed when a subject arrived. **Task organization** How the tasks were organized for a session. This is particularly important because BIDS datasets usually have task data separated into different files. - Was task order counter-balanced? - [ ] What other activities were interspersed between tasks? - In what order were the tasks and other activities performed? **Task details** As much detail as possible about the task and the events that were recorded. **Additional data acquired** A brief indication of data other than the imaging data that was acquired as part of this experiment. In addition to data from other modalities and behavioral data, this might include questionnaires and surveys, swabs, and clinical information. Indicate the availability of this data. This is especially relevant if the data are not included in a `phenotype` folder. See https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data **Experimental location** This should include any additional information regarding the geographical location and facility that cannot be included in the relevant json files. **Missing data** Mention something if some participants are missing some aspects of the data. This can take the form of a processing log and/or abnormalities about the dataset.
Some examples: - A brain lesion or defect only present in one participant - Some experimental conditions missing on a given run for a participant because of a technical issue - Any noticeable feature of the data for certain participants - Differences (even slight) in protocol for certain participants **Notes** Any additional information or pointers to information that might be helpful to users of the dataset. Include qualitative information related to how the data acquisition went. ## Dataset Information | Dataset ID | `DS006554` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Social Observation EEG raw data | | Author (year) | `Su2025` | | Canonical | — | | Importable as | `DS006554`, `Su2025` | | Year | 2025 | | Authors | Yaner Su | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006554.v1.0.0](https://doi.org/10.18112/openneuro.ds006554.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006554) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) | [Source URL](https://openneuro.org/datasets/ds006554) | ### Copy-paste BibTeX ```bibtex @dataset{ds006554, title = {Social Observation EEG raw data}, author = {Yaner Su}, doi = {10.18112/openneuro.ds006554.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006554.v1.0.0}, } ``` ## Technical Details - Subjects: 47 - Recordings: 47 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 28.228528333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 12.1 GB - File count: 47 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006554.v1.0.0 - Source: openneuro - OpenNeuro: [ds006554](https://openneuro.org/datasets/ds006554) - NeMAR: [ds006554](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) ## API Reference Use the `DS006554` class to access this 
dataset programmatically. ### *class* eegdash.dataset.DS006554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Observation EEG raw data * **Study:** `ds006554` (OpenNeuro) * **Author (year):** `Su2025` * **Canonical:** — Also importable as: `DS006554`, `Su2025`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006554](https://openneuro.org/datasets/ds006554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006554](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) DOI: [https://doi.org/10.18112/openneuro.ds006554.v1.0.0](https://doi.org/10.18112/openneuro.ds006554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006554 >>> dataset = DS006554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006554) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006563: eeg dataset, 12 subjects *Dimension-based attention modulates early visual processing* Access recordings and metadata through EEGDash. **Citation:** Klaus Gramann, Thomas Töllner, Hermann J. Müller (2025). *Dimension-based attention modulates early visual processing*. 
[10.18112/openneuro.ds006563.v1.0.0](https://doi.org/10.18112/openneuro.ds006563.v1.0.0) Modality: eeg Subjects: 12 Recordings: 12 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006563 dataset = DS006563(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006563(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006563( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006563, title = {Dimension-based attention modulates early visual processing}, author = {Klaus Gramann and Thomas Töllner and Hermann J. Müller}, doi = {10.18112/openneuro.ds006563.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006563.v1.0.0}, } ``` ## About This Dataset **Overview** This dataset was originally published in Gramann, K., Töllner, T. and Müller, H.J. (2010), Dimension-based attention modulates early visual processing. Psychophysiology, 47: 968-978. [https://doi.org/10.1111/j.1469-8986.2010.00998.x](https://doi.org/10.1111/j.1469-8986.2010.00998.x) It was subsequently used to investigate automatic labeling of independent components in ICA and is referred to as the “Cue” dataset: Frølich, L., Andersen, T.S. and Mørup, M. (2015), Multi-class classification of ICs of EEG. Psychophysiology, 52: 32-45. [https://doi.org/10.1111/psyp.12290](https://doi.org/10.1111/psyp.12290) “64 scalp channels “referenced to Cz and re-referenced off-line to linked mastoids” from 12 subjects during a visual task (Gramann et al., 2010). 
ICA was performed with the implementation of the ICA infomax algorithm in the Brain Vision Analyzer software from Brain Products GmbH. The data sets we had access to were between 56 and 66 min long for the different subjects.” After contacting the above authors, Laura Frølich provided a copy of the data. With Klaus Gramann’s permission, this was converted to BIDS format by Austin J. Brockmeier and Carlos H. Mendoza-Cardenas. Continuous EEG from 12 subjects with ICA weights. **Subjects** “Twelve observers took part in the Experiment (2 female; age range 21–25 years). All were right-handed, had normal or corrected-to-normal vision, and reported no history of neurological disorder. Observers were either paid or received course credit for participating. All observers provided written informed consent, and the experimental procedure was approved by the ethics committee of the Department of Psychology, University of Munich, in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).” **Expert-annotated Independent Components (ICs)** The expert-annotated labels of the ICs can be found in the field `expert_ica_labels`, and the class names in `ica_classes`. `expert_ica_labels(i)` is the Matlab index into `ica_classes` for the i-th IC. ICs can be computed using the fields `data`, `icasphere`, and `icaweights` (e.g., `icaact = icaweights * icasphere * data`). 
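The reconstruction `icaact = icaweights * icasphere * data` is a chain of matrix products. A small NumPy sketch with illustrative shapes (the real matrices come from the dataset's ICA fields; the sizes and identity sphering matrix here are assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_comp, n_chan, n_samples = 5, 64, 1000       # illustrative sizes only

icaweights = rng.standard_normal((n_comp, n_chan))  # unmixing weights
icasphere = np.eye(n_chan)                          # sphering matrix (often ~identity)
data = rng.standard_normal((n_chan, n_samples))     # channels x samples

# IC activations: one time course per independent component
icaact = icaweights @ icasphere @ data
print(icaact.shape)  # (5, 1000)
```

Each row of `icaact` is the activation time course of one IC, which is what the expert labels in `expert_ica_labels` refer to.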
**Details related to access to the data** CC-BY Contact persons: [klaus.gramann@tu-berlin.de](mailto:klaus.gramann@tu-berlin.de) [https://orcid.org/0000-0003-2673-1832](https://orcid.org/0000-0003-2673-1832) [ajbrock@udel.edu](mailto:ajbrock@udel.edu) [https://orcid.org/0000-0002-7293-8140](https://orcid.org/0000-0002-7293-8140) ## Dataset Information | Dataset ID | `DS006563` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dimension-based attention modulates early visual processing | | Author (year) | `Gramann2025` | | Canonical | — | | Importable as | `DS006563`, `Gramann2025` | | Year | 2025 | | Authors | Klaus Gramann, Thomas Töllner, Hermann J. Müller | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006563.v1.0.0](https://doi.org/10.18112/openneuro.ds006563.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006563) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) | [Source URL](https://openneuro.org/datasets/ds006563) | ### Copy-paste BibTeX ```bibtex @dataset{ds006563, title = {Dimension-based attention modulates early visual processing}, author = {Klaus Gramann, Thomas Töllner, Hermann J. 
Müller}, doi = {10.18112/openneuro.ds006563.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006563.v1.0.0}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 500.0 - Duration (hours): 12.639361666666666 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 5.6 GB - File count: 12 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006563.v1.0.0 - Source: openneuro - OpenNeuro: [ds006563](https://openneuro.org/datasets/ds006563) - NeMAR: [ds006563](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) ## API Reference Use the `DS006563` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dimension-based attention modulates early visual processing * **Study:** `ds006563` (OpenNeuro) * **Author (year):** `Gramann2025` * **Canonical:** — Also importable as: `DS006563`, `Gramann2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006563](https://openneuro.org/datasets/ds006563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006563](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) DOI: [https://doi.org/10.18112/openneuro.ds006563.v1.0.0](https://doi.org/10.18112/openneuro.ds006563.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006563 >>> dataset = DS006563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006563) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006576: eeg dataset, 57 subjects *The role of REM sleep in neural differentiation of memories in the hippocampus* Access recordings and metadata through EEGDash. **Citation:** Elizabeth A. McDevitt, Ghootae Kim, Nicholas B. Turk-Browne, Kenneth A. Norman (2025). 
*The role of REM sleep in neural differentiation of memories in the hippocampus*. [10.18112/openneuro.ds006576.v1.0.3](https://doi.org/10.18112/openneuro.ds006576.v1.0.3) Modality: eeg Subjects: 57 Recordings: 57 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006576 dataset = DS006576(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006576(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006576( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006576, title = {The role of REM sleep in neural differentiation of memories in the hippocampus}, author = {Elizabeth A. McDevitt and Ghootae Kim and Nicholas B. Turk-Browne and Kenneth A. Norman}, doi = {10.18112/openneuro.ds006576.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006576.v1.0.3}, } ``` ## About This Dataset NOTE: This version contains datasets for 59 of the 69 participants. New versions will be created as more data are uploaded. This dataset contains the fMRI and EEG data for E.A. McDevitt, G. Kim, N.B. Turk-Browne, K.A. Norman (2026). The role of rapid eye movement sleep in neural differentiation of memories in the hippocampus. Journal of Cognitive Neuroscience, 10.1162/jocn.a.82 Please refer to the paper for detailed methods. The dataset includes 69 participants with three fMRI scans and one EEG session per participant. Depending on the participant’s condition, the EEG session either contains sleep data from a nap or data recorded during a quiet wake session. 
Please contact Elizabeth McDevitt ([emcdevitt@princeton.edu](mailto:emcdevitt@princeton.edu)) if you have any questions. Notes about the dataset: The following subjects/sessions do not include a T1w anatomical scan: sub-160 ses-00; sub-170 ses-00; sub-178 ses-01 - sub-107/ses-02/func: There are three runs of the decision task included instead of two. During decision_run-01, the participant did not respond to ‘B’ trials (coded in column trial_type). Therefore, there are many trials with no response_accuracy or response_times recorded in task-decision_run-01_events.tsv. Immediately following this run, the same task was re-run as decision_run-03 to collect behavioral responses; therefore the data associated with task-decision_run-03 can be considered a “repeat” of task-decision_run-01. Decision_run-02 was run as expected during the second cycle of the reward prediction task. - sub-108_ses-02_task-reward_run-01_events.tsv: Many trials have no response_accuracy or response_time recorded. The participant misunderstood instructions and did not respond on trials where they predicted a “neutral” outcome. - sub-182_ses-01_task-study_run-01_events.tsv: There was an issue with Matlab not recording “2” button presses during this run of the task. The experimenter recoded all “no response” trials as “2” and used this to code response_accuracy. However, there were no response_times recorded for these trials. ## Dataset Information | Dataset ID | `DS006576` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The role of REM sleep in neural differentiation of memories in the hippocampus | | Author (year) | `McDevitt2025` | | Canonical | — | | Importable as | `DS006576`, `McDevitt2025` | | Year | 2025 | | Authors | Elizabeth A. McDevitt, Ghootae Kim, Nicholas B. Turk-Browne, Kenneth A. 
Norman | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006576.v1.0.3](https://doi.org/10.18112/openneuro.ds006576.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006576) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) | [Source URL](https://openneuro.org/datasets/ds006576) | ### Copy-paste BibTeX ```bibtex @dataset{ds006576, title = {The role of REM sleep in neural differentiation of memories in the hippocampus}, author = {Elizabeth A. McDevitt and Ghootae Kim and Nicholas B. Turk-Browne and Kenneth A. Norman}, doi = {10.18112/openneuro.ds006576.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006576.v1.0.3}, } ``` ## Technical Details - Subjects: 57 - Recordings: 57 - Tasks: 1 - Channels: 73 - Sampling rate (Hz): 512.0 - Duration (hours): 97.67361111111111 - Pathology: Healthy - Modality: Sleep - Type: Sleep - Size on disk: 553.9 GB - File count: 57 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006576.v1.0.3 - Source: openneuro - OpenNeuro: [ds006576](https://openneuro.org/datasets/ds006576) - NeMAR: [ds006576](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) ## API Reference Use the `DS006576` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006576(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The role of REM sleep in neural differentiation of memories in the hippocampus * **Study:** `ds006576` (OpenNeuro) * **Author (year):** `McDevitt2025` * **Canonical:** — Also importable as: `DS006576`, `McDevitt2025`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006576](https://openneuro.org/datasets/ds006576) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006576](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) DOI: [https://doi.org/10.18112/openneuro.ds006576.v1.0.3](https://doi.org/10.18112/openneuro.ds006576.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006576 >>> dataset = DS006576(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006576) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006593: eeg dataset, 21 subjects *cBCI Matrix Multimodal Dataset* Access recordings and metadata through EEGDash. **Citation:** Basak Celik, Tab Memmott, Matthew Lawhead, Srikar Ananthoju, Deniz Erdogmus (2025). *cBCI Matrix Multimodal Dataset*. [10.18112/openneuro.ds006593.v1.0.0](https://doi.org/10.18112/openneuro.ds006593.v1.0.0) Modality: eeg Subjects: 21 Recordings: 21 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006593 dataset = DS006593(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006593(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006593( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006593, title = {cBCI Matrix Multimodal Dataset}, author = {Basak Celik and Tab Memmott and Matthew Lawhead and Srikar Ananthoju and Deniz Erdogmus}, doi = {10.18112/openneuro.ds006593.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006593.v1.0.0}, } ``` ## About This Dataset **Multimodal Sensor Fusion for EEG-Based BCI Typing Systems** **Dataset Overview** This dataset contains recordings of EEG and eye tracking for a BCI spelling task. The data were collected in 2023 at Northeastern University. - N=21 - Calibration tasks were proctored using BciPy [1] - The dataset is organized in accordance with the Brain Imaging Data Structure (BIDS) specification (version 1.7.0). **Methodology** Calibration data were collected from control participants (n=21, mean age 23.6 ± 3.1 years) in a quiet lab room at Northeastern University. EEG data were collected using the DSI-24 dry electrode cap (Wearable Sensing, San Diego, CA) at a sampling rate of 300 Hz. The device employs a hardware filter permitting a collection bandwidth of 0.003–150 Hz. Data were recorded from Fp1/2, Fz, F3/4, F7/8, Cz, C3/4, T7/T8, T3/T4, Pz, P3/P4, P7/P8, T5/T6, O1/2 with linked-ear reference (A1 and A2) and ground at A1. All data were collected using a Lenovo Legion 5 Pro laptop with Windows 11, an Intel Core i7-11800H @ 2.30 GHz, 16 GB DDR4 RAM, and an NVIDIA GeForce RTX 3050. Trigger fidelity on the experiment laptop was verified using the Matrix Time Test Task in BciPy and a photodiode. The results of this timing test were used to determine static offsets between hardware and to prevent experimentation with any timing violations greater than ±10 ms. The eye-tracking data were collected using a portable eye tracker (Tobii Pro Nano) at a sampling rate of 60 Hz. The matrix paradigm and the data acquisition modules are developed in BciPy [1], which is a standalone application for experimental data collection. 
This work focuses on a specific BCI paradigm called single-character-presentation (SCP) based visual presentation, which consists of symbols presented in matrix form and individually highlighted in randomized order. The calibration task presented letter characters at a rate of 4 Hz, with 100 inquiries consisting of 10 letters each (1 target, 9 non-target). In 10% of the inquiries, only non-target characters were shown. The stimuli included all 26 letters of the English alphabet, as well as the characters “_” for space and “<” for backspace. The order of target stimuli was randomly distributed among the inquiries. Between inquiries, there was a two-second blank screen. Each inquiry consisted of a one-second prompt showing the target letter, followed by a 0.5 s fixation, and then the presentation of 10 letters. The letters were displayed in the center of the screen, in white on a black background. Target prompts and stimuli were presented in white, while fixation crosses were rendered in red. The experimental protocol was approved by the Northeastern University Institutional Review Board (IRB). All participants provided written informed consent prior to participation. **Directory Structure** The dataset follows the BIDS convention with the following structure: /sub-[subject]/ses-[session]/[eeg or et]. To load the BIDS formatted data into the BciPy Simulator, please see the following directory: /sourcedata/bcipy_metadata. This directory contains the raw BciPy parameter files. It also contains the output of the matrix display (matrix.png) for eye-tracking visualization. **Contact Information** For questions or issues regarding this dataset, please contact the corresponding author [Basak Celik](mailto:celik.b@northeastern.edu) via email. [1] Memmott T, Koçanaoğulları A, Lawhead M, Klee D, Dudy S, Fried-Oken M, Oken B. BciPy: brain-computer interface software in Python. Brain-Computer Interfaces, 8(4), 137-53, 2021. 
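Putting the timing numbers above together gives a rough per-run duration. This is a back-of-envelope sketch from the stated parameters (4 Hz letters, 100 inquiries, 1 s prompt, 0.5 s fixation, 2 s inter-inquiry blank) that ignores setup time and breaks:

```python
# Rough calibration-run timing from the figures quoted in the text.
letter_s = 1 / 4                          # 4 Hz presentation -> 0.25 s/letter
inquiry_s = 1.0 + 0.5 + 10 * letter_s     # prompt + fixation + 10 letters
total_s = 100 * inquiry_s + 99 * 2.0      # 100 inquiries, blanks between them
print(f"{inquiry_s:.1f} s per inquiry, ~{total_s / 60:.1f} min per run")
# → "4.0 s per inquiry, ~10.0 min per run"
```

That order of magnitude is consistent with the dataset's reported total duration of about 5.6 hours across 21 subjects.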
## Dataset Information | Dataset ID | `DS006593` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | cBCI Matrix Multimodal Dataset | | Author (year) | `Celik2025` | | Canonical | — | | Importable as | `DS006593`, `Celik2025` | | Year | 2025 | | Authors | Basak Celik, Tab Memmott, Matthew Lawhead, Srikar Ananthoju, Deniz Erdogmus | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006593.v1.0.0](https://doi.org/10.18112/openneuro.ds006593.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006593) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) | [Source URL](https://openneuro.org/datasets/ds006593) | ### Copy-paste BibTeX ```bibtex @dataset{ds006593, title = {cBCI Matrix Multimodal Dataset}, author = {Basak Celik and Tab Memmott and Matthew Lawhead and Srikar Ananthoju and Deniz Erdogmus}, doi = {10.18112/openneuro.ds006593.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006593.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 21 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 300.0 - Duration (hours): 5.610982407407407 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 441.9 MB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006593.v1.0.0 - Source: openneuro - OpenNeuro: [ds006593](https://openneuro.org/datasets/ds006593) - NeMAR: [ds006593](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) ## API Reference Use the `DS006593` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006593(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) cBCI Matrix Multimodal Dataset * **Study:** `ds006593` (OpenNeuro) * **Author (year):** `Celik2025` * **Canonical:** — Also importable as: `DS006593`, `Celik2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006593](https://openneuro.org/datasets/ds006593) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006593](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) DOI: [https://doi.org/10.18112/openneuro.ds006593.v1.0.0](https://doi.org/10.18112/openneuro.ds006593.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006593 >>> dataset = DS006593(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006593) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006629: meg dataset, 19 subjects *SINGSING* Access recordings and metadata through EEGDash. **Citation:** Valerie Chanoine, Jean-Michel Badier, Mireille Besson, Talya Inbar (2025). *SINGSING*. 
[10.18112/openneuro.ds006629.v1.0.1](https://doi.org/10.18112/openneuro.ds006629.v1.0.1) Modality: meg Subjects: 19 Recordings: 38 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006629 dataset = DS006629(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006629(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006629( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006629, title = {SINGSING}, author = {Valerie Chanoine and Jean-Michel Badier and Mireille Besson and Talya Inbar}, doi = {10.18112/openneuro.ds006629.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006629.v1.0.1}, } ``` ## About This Dataset We presented twenty adult participants with harmonic complex sound (HCS) stimuli that varied in frequency in an auditory oddball protocol during simultaneous EEG and MEG recording (for details, see Inbar et al., 2025) **References** Inbar, T.C., Badier, JM., Bénar, C. et al. Pre-attentive Pitch Processing of Harmonic Complex Sounds at Sensor and Source Levels: Comparing Simultaneously Recorded EEG and MEG Data. Brain Topogr 38, 71 (2025). [https://doi.org/10.1007/s10548-025-01147-6](https://doi.org/10.1007/s10548-025-01147-6) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110 ## Dataset Information | Dataset ID | `DS006629` | |----------------|----------------| | Title | SINGSING | | Author (year) | `Chanoine2025` | | Canonical | `SINGSING` | | Importable as | `DS006629`, `Chanoine2025`, `SINGSING` | | Year | 2025 | | Authors | Valerie Chanoine, Jean-Michel Badier, Mireille Besson, Talya Inbar | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006629.v1.0.1](https://doi.org/10.18112/openneuro.ds006629.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006629) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) | [Source URL](https://openneuro.org/datasets/ds006629) | ### Copy-paste BibTeX ```bibtex @dataset{ds006629, title = {SINGSING}, author = {Valerie Chanoine and Jean-Michel Badier and Mireille Besson and Talya Inbar}, doi = {10.18112/openneuro.ds006629.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006629.v1.0.1}, } ``` ## Technical Details - Subjects: 19 - Recordings: 38 - Tasks: 2 - Channels: 339 - Sampling rate (Hz): 250.0 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 11.2 GB - File count: 38 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds006629.v1.0.1 - Source: openneuro - OpenNeuro: [ds006629](https://openneuro.org/datasets/ds006629) - NeMAR: [ds006629](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) ## API Reference Use the `DS006629` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006629(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SINGSING * **Study:** `ds006629` (OpenNeuro) * **Author (year):** `Chanoine2025` * **Canonical:** `SINGSING` Also importable as: `DS006629`, `Chanoine2025`, `SINGSING`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 38; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
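The Notes above say that `query` supports MongoDB-style filters combined (ANDed) with the dataset filter. As a plain-Python illustration of the filter semantics only (this is not the EEGDash implementation), a `$in` condition selects records whose field value is in the given list, while a bare value requires exact equality:

```python
def matches(record: dict, query: dict) -> bool:
    """Illustrative MongoDB-style matching: supports equality and `$in`."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            # {"subject": {"$in": ["01", "02"]}} -> value must be in the list
            if value not in cond["$in"]:
                return False
        elif value != cond:
            # {"dataset": "ds006629"} -> value must equal the condition
            return False
    return True

records = [{"subject": "01"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
print([r for r in records if matches(r, query)])  # [{'subject': '01'}]
```

The real library additionally restricts filterable fields to `ALLOWED_QUERY_FIELDS` and rejects queries containing the `dataset` key, since that key is set by the dataset class itself.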
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006629](https://openneuro.org/datasets/ds006629) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006629](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) DOI: [https://doi.org/10.18112/openneuro.ds006629.v1.0.1](https://doi.org/10.18112/openneuro.ds006629.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006629 >>> dataset = DS006629(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006629) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006647: eeg dataset, 4 subjects *Poetry Assessment EEG Dataset 2* Access recordings and metadata through EEGDash. **Citation:** Soma Chaudhuri, Joydeep Bhattacharya (2025). *Poetry Assessment EEG Dataset 2*. 
[10.18112/openneuro.ds006647.v1.0.1](https://doi.org/10.18112/openneuro.ds006647.v1.0.1) Modality: eeg Subjects: 4 Recordings: 4 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006647 dataset = DS006647(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006647(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006647( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006647, title = {Poetry Assessment EEG Dataset 2}, author = {Soma Chaudhuri and Joydeep Bhattacharya}, doi = {10.18112/openneuro.ds006647.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006647.v1.0.1}, } ``` ## About This Dataset Understanding how the brain engages with poetic language is key to advancing empirical research on aesthetic and creative cognition. This experiment involved 64-channel EEG recordings and behavioural ratings from 51 participants who read and evaluated 210 short English-language texts — 70 Haiku (nature-themed), 70 Senryu (emotion-themed), and 70 non-poetic Control texts. Each poem/text was rated on five subjective dimensions: Aesthetic Appeal, Vivid Imagery, Being Moved, Originality, and Creativity — using a 7-point scale. The full study involved 51 participants, and the data were divided into two BIDS-compliant datasets to ensure technical validation and facilitate upload to OpenNeuro. 
Poetry Assessment EEG Dataset 1 contains data from 47 participants whose continuous EEG recordings passed technical validation and were used in the primary analyses. Poetry Assessment EEG Dataset 2 (this dataset) includes the remaining 4 participants (P105, P141, P142, P146), whose EEG recordings were acquired in segments due to session interruptions and later concatenated during preprocessing. These participants were excluded from the PSD analysis to avoid potential artifacts but are included here for completeness and transparency. In this dataset, the participants.tsv file maps anonymized BIDS IDs (sub-001 to sub-004) to the original participant codes used during data collection (P105–P146), as follows: sub-001 → P105 sub-002 → P141 sub-003 → P142 sub-004 → P146 Dataset Structure and Navigation: Each subject folder contains four core EEG files: channels.tsv – EEG channel metadata eeg.json – EEG recording metadata eeg.set – Raw EEG data (EEGLAB format) events.tsv – Event markers aligned with poem presentation The /code/ directory includes: Preprocessing.m – MATLAB preprocessing script BioSemi64.loc – 64-channel coordinate file The /derivatives/ directory contains: Behavioural_Ratings/ – One .csv file per participant (e.g., P105.csv), including trial-by-trial ratings across five dimensions: Aesthetic Appeal, Vivid Imagery, Emotional Impact (labeled as ‘being moved’), Originality, and Creativity. Psychometric_Responses/ – A single .csv file with demographic and trait-level questionnaire responses per participant, including: PANAS (mood), Openness, Curiosity, VVIQ (visual imagery), AVIQ (auditory imagery), MAAS (mindfulness), and AReA (aesthetic responsiveness). Also includes questionnaires.pdf with full questionnaire texts and scoring keys The /stimuli/ directory includes: All 210 texts used in the experiment: 70 Haiku (nature-themed poetry), 70 Senryu (emotion-themed poetry), 70 Control (non-poetic matched prose). 
Block-wise trial assignments for all seven blocks Resting-state EEG was recorded at the beginning and end of each session. These segments are embedded within the raw EEG files and can be identified using the following trigger codes in events.tsv: 65285, 65286 → Resting state (before experiment); 65287, 65288 → Resting state (after experiment) Interested users are encouraged to consult Poetry Assessment EEG Dataset 1 to gain a complete understanding of the full experiment and its validated main dataset. All preprocessing steps, event markers, and metadata structures were applied identically across both datasets (Poetry Assessment EEG Dataset 1 and Poetry Assessment EEG Dataset 2), ensuring consistency. This enables users to apply their own quality control pipelines and include these data if desired. Of note, the anonymized participant IDs (e.g., PXXX) are used consistently across all data modalities, enabling reliable cross-referencing between EEG data, behavioural ratings, and psychometric responses. Data collection took place at the Department of Psychology at Goldsmiths, University of London, UK. The project was approved by the Local Ethics Committee at the Department of Psychology, Goldsmiths University of London. The experiment was conducted in accordance with the Declaration of Helsinki. All EEG, behavioural, and psychometric data were anonymized. Participant identifiers were coded (P101–P151), and no names, dates of birth, or other direct identifiers are included. 
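The resting-state segments described above can be located programmatically from the trigger codes in events.tsv. A minimal standard-library sketch, assuming the numeric code sits in a `value` column (column names vary across BIDS datasets; check the actual header before use):

```python
import csv
import io

# Resting-state trigger codes listed in the dataset description.
REST_BEFORE = {65285, 65286}  # before the experiment
REST_AFTER = {65287, 65288}   # after the experiment

def resting_onsets(events_tsv: str) -> dict:
    """Collect onsets (seconds) of resting-state markers from an events.tsv body."""
    out = {"before": [], "after": []}
    for row in csv.DictReader(io.StringIO(events_tsv), delimiter="\t"):
        code = int(row["value"])  # assumed column name
        if code in REST_BEFORE:
            out["before"].append(float(row["onset"]))
        elif code in REST_AFTER:
            out["after"].append(float(row["onset"]))
    return out

# Toy events.tsv content for illustration.
tsv = "onset\tduration\tvalue\n10.0\t0.0\t65285\n500.0\t0.0\t4101\n3200.0\t0.0\t65288\n"
print(resting_onsets(tsv))  # {'before': [10.0], 'after': [3200.0]}
```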
## Dataset Information | Dataset ID | `DS006647` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Poetry Assessment EEG Dataset 2 | | Author (year) | `Chaudhuri2025_D2` | | Canonical | — | | Importable as | `DS006647`, `Chaudhuri2025_D2` | | Year | 2025 | | Authors | Soma Chaudhuri, Joydeep Bhattacharya | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006647.v1.0.1](https://doi.org/10.18112/openneuro.ds006647.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006647) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) | [Source URL](https://openneuro.org/datasets/ds006647) | ### Copy-paste BibTeX ```bibtex @dataset{ds006647, title = {Poetry Assessment EEG Dataset 2}, author = {Soma Chaudhuri and Joydeep Bhattacharya}, doi = {10.18112/openneuro.ds006647.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006647.v1.0.1}, } ``` ## Technical Details - Subjects: 4 - Recordings: 4 - Tasks: 1 - Channels: 70 - Sampling rate (Hz): 512.0 - Duration (hours): 8.681111111111111 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 4.3 GB - File count: 4 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006647.v1.0.1 - Source: openneuro - OpenNeuro: [ds006647](https://openneuro.org/datasets/ds006647) - NeMAR: [ds006647](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) ## API Reference Use the `DS006647` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006647(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 2 * **Study:** `ds006647` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D2` * **Canonical:** — Also importable as: `DS006647`, `Chaudhuri2025_D2`. 
Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006647](https://openneuro.org/datasets/ds006647) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006647](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) DOI: [https://doi.org/10.18112/openneuro.ds006647.v1.0.1](https://doi.org/10.18112/openneuro.ds006647.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006647 >>> dataset = DS006647(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006647) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006648: eeg dataset, 47 subjects *Poetry Assessment EEG Dataset 1* Access recordings and metadata through EEGDash. **Citation:** Soma Chaudhuri, Joydeep Bhattacharya (2025). *Poetry Assessment EEG Dataset 1*. [10.18112/openneuro.ds006648.v1.0.0](https://doi.org/10.18112/openneuro.ds006648.v1.0.0) Modality: eeg Subjects: 47 Recordings: 47 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006648 dataset = DS006648(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006648(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006648( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006648, title = {Poetry Assessment EEG Dataset 1}, author = {Soma Chaudhuri and Joydeep Bhattacharya}, doi = {10.18112/openneuro.ds006648.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006648.v1.0.0}, } ``` ## About This Dataset Understanding how the brain engages with poetic language is key to advancing empirical research on aesthetic and creative cognition. This experiment involved 64-channel EEG recordings and behavioural ratings from 51 participants who read and evaluated 210 short English-language texts — 70 Haiku (nature-themed), 70 Senryu (emotion-themed), and 70 non-poetic Control texts. Each poem/text was rated on five subjective dimensions: Aesthetic Appeal, Vivid Imagery, Being Moved, Originality, and Creativity — using a 7-point scale. The full study involved 51 participants, and the data were divided into two BIDS-compliant datasets to ensure technical validation and facilitate upload to OpenNeuro. Poetry Assessment EEG Dataset 1 (this dataset) contains data from 47 participants whose continuous EEG recordings passed technical validation and were used in the primary analyses. In this dataset, the participants.tsv file maps anonymized BIDS IDs (sub-001 to sub-047) to the original participant codes used during data collection (P101–P151). Poetry Assessment EEG Dataset 2 includes the remaining 4 participants (P105, P141, P142, P146), whose EEG recordings were acquired in segments due to session interruptions and later concatenated during preprocessing. These participants were excluded from the PSD analysis to avoid potential artifacts but are included in Dataset 2 for completeness and transparency. 
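The participants.tsv mapping mentioned above (BIDS IDs to original collection codes) can be loaded with the standard library. A minimal sketch; the `original_id` column name is a placeholder for whatever column the dataset actually uses — check the file's header:

```python
import csv
import io

def id_map(participants_tsv: str) -> dict:
    """Map BIDS participant IDs (sub-XXX) to original collection codes (PXXX).

    `participant_id` is the standard BIDS column; `original_id` is an assumed
    name for the column holding the original codes.
    """
    reader = csv.DictReader(io.StringIO(participants_tsv), delimiter="\t")
    return {row["participant_id"]: row["original_id"] for row in reader}

# Toy participants.tsv content for illustration.
tsv = "participant_id\toriginal_id\nsub-001\tP101\nsub-002\tP102\n"
print(id_map(tsv))  # {'sub-001': 'P101', 'sub-002': 'P102'}
```

Because the original PXXX codes name the behavioural and psychometric files in `/derivatives/`, this map lets you join EEG recordings to their ratings.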
Dataset Structure and Navigation: Each subject folder contains four core EEG files: channels.tsv – EEG channel metadata eeg.json – EEG recording metadata eeg.set – Raw EEG data (EEGLAB format) events.tsv – Event markers aligned with poem presentation The /code/ directory includes: Preprocessing.m – MATLAB preprocessing script BioSemi64.loc – 64-channel coordinate file The /derivatives/ directory contains: Behavioural_Ratings/ – One .csv file per participant (e.g., P101.csv), including trial-by-trial ratings across five dimensions: Aesthetic Appeal, Vivid Imagery, Emotional Impact (labeled as ‘being moved’), Originality, and Creativity. Psychometric_Responses/ – A single .csv file with demographic and trait-level questionnaire responses per participant, including: PANAS (mood), Openness, Curiosity, VVIQ (visual imagery), AVIQ (auditory imagery), MAAS (mindfulness), and AReA (aesthetic responsiveness). Also includes questionnaires.pdf with full questionnaire texts and scoring keys The /stimuli/ directory includes: All 210 texts used in the experiment: 70 Haiku (nature-themed poetry), 70 Senryu (emotion-themed poetry), 70 Control (non-poetic matched prose). Block-wise trial assignments for all seven blocks Resting-state EEG was recorded at the beginning and end of each session. These segments are embedded within the raw EEG files and can be identified using the following trigger codes in events.tsv: 65285, 65286 → Resting state (before experiment); 65287, 65288 → Resting state (after experiment) Interested users may also consult Poetry Assessment EEG Dataset 2 to access recordings from the remaining 4 participants excluded from the main analyses. All preprocessing steps, event markers, and metadata structures were applied identically across both datasets (Poetry Assessment EEG Dataset 1 and Poetry Assessment EEG Dataset 2), ensuring consistency. This enables users to apply their own quality control pipelines and include these data if desired. 
Of note, the anonymized participant IDs (e.g., PXXX) are used consistently across all data modalities, enabling reliable cross-referencing between EEG data, behavioural ratings, and psychometric responses. Data collection took place at the Department of Psychology at Goldsmiths, University of London, UK. The project was approved by the Local Ethics Committee at the Department of Psychology, Goldsmiths University of London. The experiment was conducted in accordance with the Declaration of Helsinki. All EEG, behavioural, and psychometric data were anonymized. Participant identifiers were coded (P101–P151), and no names, dates of birth, or other direct identifiers are included. ## Dataset Information | Dataset ID | `DS006648` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Poetry Assessment EEG Dataset 1 | | Author (year) | `Chaudhuri2025_D1` | | Canonical | — | | Importable as | `DS006648`, `Chaudhuri2025_D1` | | Year | 2025 | | Authors | Soma Chaudhuri, Joydeep Bhattacharya | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006648.v1.0.0](https://doi.org/10.18112/openneuro.ds006648.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006648) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) | [Source URL](https://openneuro.org/datasets/ds006648) | ### Copy-paste BibTeX ```bibtex @dataset{ds006648, title = {Poetry Assessment EEG Dataset 1}, author = {Soma Chaudhuri and Joydeep Bhattacharya}, doi = {10.18112/openneuro.ds006648.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006648.v1.0.0}, } ``` ## Technical Details - Subjects: 47 - Recordings: 47 - Tasks: 1 - Channels: 70 - Sampling rate (Hz): 512.0 - Duration (hours): 91.80333333333331 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 45.4 GB - File count: 47 - Format: BIDS - 
License: CC0 - DOI: doi:10.18112/openneuro.ds006648.v1.0.0 - Source: openneuro - OpenNeuro: [ds006648](https://openneuro.org/datasets/ds006648) - NeMAR: [ds006648](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) ## API Reference Use the `DS006648` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 1 * **Study:** `ds006648` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D1` * **Canonical:** — Also importable as: `DS006648`, `Chaudhuri2025_D1`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006648](https://openneuro.org/datasets/ds006648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006648](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) DOI: [https://doi.org/10.18112/openneuro.ds006648.v1.0.0](https://doi.org/10.18112/openneuro.ds006648.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006648 >>> dataset = DS006648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006648) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006673: fnirs dataset, 17 subjects *ball_squeeze_Carlton_2025* Access recordings and metadata through EEGDash. **Citation:** Laura B. Carlton, Miray Altinkaynak, Shannon Kelley, Bernhard Zimmerman, Sreekanth Kura, Eike Middell, Alexander von Luhmann, Meryem A. Yucel, David A. Boas (2025). *ball_squeeze_Carlton_2025*. 
[10.18112/openneuro.ds006673.v1.0.2](https://doi.org/10.18112/openneuro.ds006673.v1.0.2) Modality: fnirs Subjects: 17 Recordings: 67 License: CC0 Source: openneuro Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006673 dataset = DS006673(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006673(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006673( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006673, title = {ball_squeeze_Carlton_2025}, author = {Laura B. Carlton and Miray Altinkaynak and Shannon Kelley and Bernhard Zimmerman and Sreekanth Kura and Eike Middell and Alexander von Luhmann and Meryem A. Yucel and David A. Boas}, doi = {10.18112/openneuro.ds006673.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006673.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DS006673` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ball_squeeze_Carlton_2025 | | Author (year) | `Carlton2025` | | Canonical | — | | Importable as | `DS006673`, `Carlton2025` | | Year | 2025 | | Authors | Laura B. Carlton, Miray Altinkaynak, Shannon Kelley, Bernhard Zimmerman, Sreekanth Kura, Eike Middell, Alexander von Luhmann, Meryem A. Yucel, David A. 
Boas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006673.v1.0.2](https://doi.org/10.18112/openneuro.ds006673.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006673) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) | [Source URL](https://openneuro.org/datasets/ds006673) | ### Copy-paste BibTeX ```bibtex @dataset{ds006673, title = {ball_squeeze_Carlton_2025}, author = {Laura B. Carlton and Miray Altinkaynak and Shannon Kelley and Bernhard Zimmerman and Sreekanth Kura and Eike Middell and Alexander von Luhmann and Meryem A. Yucel and David A. Boas}, doi = {10.18112/openneuro.ds006673.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006673.v1.0.2}, } ``` ## Technical Details - Subjects: 17 - Recordings: 67 - Tasks: 2 - Channels: 2440 (52), 1134 (15) - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 7.8 GB - File count: 67 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006673.v1.0.2 - Source: openneuro - OpenNeuro: [ds006673](https://openneuro.org/datasets/ds006673) - NeMAR: [ds006673](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) ## API Reference Use the `DS006673` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006673(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_Carlton_2025 * **Study:** `ds006673` (OpenNeuro) * **Author (year):** `Carlton2025` * **Canonical:** — Also importable as: `DS006673`, `Carlton2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006673](https://openneuro.org/datasets/ds006673) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006673](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) DOI: [https://doi.org/10.18112/openneuro.ds006673.v1.0.2](https://doi.org/10.18112/openneuro.ds006673.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006673 >>> dataset = DS006673(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006673) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006695: eeg dataset, 19 subjects *Validation of Sleep Staging with Forehead EEG Patch* Access recordings and metadata through EEGDash. **Citation:** Julie Onton, Sarah Mednick (2025). *Validation of Sleep Staging with Forehead EEG Patch*. [10.18112/openneuro.ds006695.v1.0.2](https://doi.org/10.18112/openneuro.ds006695.v1.0.2) Modality: eeg Subjects: 19 Recordings: 19 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006695 dataset = DS006695(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006695(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006695( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX**

```bibtex
@dataset{ds006695,
  title = {Validation of Sleep Staging with Forehead EEG Patch},
  author = {Julie Onton and Sarah Mednick},
  doi = {10.18112/openneuro.ds006695.v1.0.2},
  url = {https://doi.org/10.18112/openneuro.ds006695.v1.0.2},
}
```

## About This Dataset

**UCSD Forehead Patch Sleep Validation Dataset**

Curated EEG recordings for validating sleep staging from a three-electrode forehead patch against standard 33-channel polysomnography.

**What is included**

- CGX patch `.set` files that contain `EEG.VisualHypnogram` and `EEG.SpectralScoring`. Only the three-electrode patch data are included in this release; the 33-channel cap data will be released separately.

**Sleep stage labels**

`EEG.VisualHypnogram` is manual scoring in 30-second epochs using the following integer codes:

- 1 = Wake
- 2 = REM
- 3 = N1
- 4 = N2
- 5 = N3
- 0 = unknown or movement

`EEG.SpectralScoring` is spectral staging from the forehead patch, with one row per patch channel and one column per 30-second epoch (see publication).

**Alignment policy**

The 33-channel cap data used to score polysomnography and the 3-channel patch EEG data do not always start and stop at the same clock times. CGX patch data were aligned to the cap start time based on a spreadsheet completed by the data collector, so the start may be off by a few seconds.
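The stage-code convention above lends itself to a small lookup table. A minimal sketch in Python — `VISUAL_STAGES` and `decode_hypnogram` are illustrative names, not part of the EEGDash API:

```python
# Integer stage codes used by EEG.VisualHypnogram (one code per 30-second epoch).
# Illustrative helper, not part of the EEGDash API.
VISUAL_STAGES = {
    1: "Wake",
    2: "REM",
    3: "N1",
    4: "N2",
    5: "N3",
    0: "unknown/movement",
}

def decode_hypnogram(codes):
    """Translate a sequence of integer stage codes into stage labels."""
    return [VISUAL_STAGES[int(c)] for c in codes]

print(decode_hypnogram([1, 3, 4, 5, 2]))  # → ['Wake', 'N1', 'N2', 'N3', 'REM']
```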
The 3-channel EEG data were segmented into 30-second windows, and the number of these windows should approximately match the number of values in the `EEG.VisualHypnogram` for the same dataset. If the patch data ended up shorter than the visual hypnogram, the hypnogram was trimmed at the end to match the patch length. If the hypnogram was longer, it was left untrimmed. In general, the mismatch at the end of the recording is less than one 30-second window.

**Subject exclusions**

Subjects 113 and 121 are excluded because their CGX patch data were inadequate or unavailable.

**Citation**

Onton JA, Simon KC, Morehouse AB, Shuster AE, Zhang J, Peña AA, Mednick SC. Validation of spectral sleep scoring with polysomnography using forehead EEG device. Frontiers in Sleep. 2024. doi: 10.3389/frsle.2024.1349537.

American Academy of Sleep Medicine. The AASM manual for the scoring of sleep and associated events. 2007 and later.

## Dataset Information

| Dataset ID | `DS006695` |
|----------------|------------|
| Title | Validation of Sleep Staging with Forehead EEG Patch |
| Author (year) | `Onton2025` |
| Canonical | `Onton2024` |
| Importable as | `DS006695`, `Onton2025`, `Onton2024` |
| Year | 2025 |
| Authors | Julie Onton, Sarah Mednick |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds006695.v1.0.2](https://doi.org/10.18112/openneuro.ds006695.v1.0.2) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds006695) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006695) | [Source URL](https://openneuro.org/datasets/ds006695) |

### Copy-paste BibTeX

```bibtex
@dataset{ds006695,
  title = {Validation of Sleep Staging with Forehead EEG Patch},
  author = {Julie Onton and Sarah Mednick},
  doi = {10.18112/openneuro.ds006695.v1.0.2},
  url = {https://doi.org/10.18112/openneuro.ds006695.v1.0.2},
}
```

## Technical Details

- Subjects: 19
- Recordings: 19
- Tasks: 1
- Channels: 3
- Sampling rate (Hz): 500.0
- Duration (hours): 164.26
- Pathology: Healthy
- Modality: Sleep
- Type: Sleep
- Size on disk: 9.4 GB
- File count: 19
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds006695.v1.0.2
- Source: openneuro
- OpenNeuro: [ds006695](https://openneuro.org/datasets/ds006695)
- NeMAR: [ds006695](https://nemar.org/dataexplorer/detail?dataset_id=ds006695)

## API Reference

Use the `DS006695` class to access this dataset programmatically.

### *class* eegdash.dataset.DS006695(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Validation of Sleep Staging with Forehead EEG Patch

* **Study:** `ds006695` (OpenNeuro)
* **Author (year):** `Onton2025`
* **Canonical:** `Onton2024`

Also importable as: `DS006695`, `Onton2025`, `Onton2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter.
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006695](https://openneuro.org/datasets/ds006695) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006695](https://nemar.org/dataexplorer/detail?dataset_id=ds006695) DOI: [https://doi.org/10.18112/openneuro.ds006695.v1.0.2](https://doi.org/10.18112/openneuro.ds006695.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006695 >>> dataset = DS006695(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006695) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006695) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006720: meg dataset, 24 subjects *Alpha power indexes working memory load for durations* Access recordings and metadata through EEGDash. **Citation:** Sophie K. 
Herbst [1], Izem Mangione [1], Charbel-Raphaël Segerie [2], Richard Höchenberger [2], Tadeusz Kononowicz [1, 3, 4], Alexandre Gramfort [2], Virginie van Wassenhove [1] (2025). *Alpha power indexes working memory load for durations*. [10.18112/openneuro.ds006720.v1.0.0](https://doi.org/10.18112/openneuro.ds006720.v1.0.0)

Affiliations: [1] Cognitive Neuroimaging Unit, INSERM, CEA, Université Paris-Saclay, NeuroSpin, 91191 Gif/Yvette, France; [2] Inria, CEA, Université Paris-Saclay, Palaiseau, France; [3] Institute of Psychology, The Polish Academy of Sciences, ul. Jaracza 1, 00-378 Warsaw, Poland; [4] Institut NeuroPSI, UMR9197 CNRS, Université Paris-Saclay.

Modality: meg Subjects: 24 Recordings: 246 License: CC0 Source: openneuro Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS006720

dataset = DS006720(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS006720(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS006720(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds006720,
  title = {Alpha power indexes working memory load for durations},
  author = {Sophie K. Herbst and Izem Mangione and Charbel-Raphaël Segerie and Richard Höchenberger and Tadeusz Kononowicz and Alexandre Gramfort and Virginie van Wassenhove},
  doi = {10.18112/openneuro.ds006720.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006720.v1.0.0},
}
```

## About This Dataset

The dataset contains anonymized raw magnetoencephalography (MEG) recordings of 23 healthy adult participants, performed at NeuroSpin, Gif-sur-Yvette, France. Participants performed an n-item delayed temporal reproduction task: they were presented with a sequence of one or three “empty” intervals (see cover figure), delimited by short pure tones. They had to maintain the sequence in memory (retention), and, upon a prompt, reproduce the whole sequence by pressing a button for each tone. Eight task runs were recorded (~10 min each). The dataset also contains recordings of the electro-oculogram (EOG, horizontal and vertical eye movements) and electrocardiogram (ECG), and the 3D coordinates of the EEG electrodes, four head-position indicator coils, and three fiducial points (nasion, left and right pre-auricular areas). A two-minute-long resting state recording (eyes open) was performed after the task. To improve the spatial resolution of the source reconstruction, individual high-resolution structural Magnetic Resonance Imaging (MRI) recordings were acquired. The data are reusable for researchers with a dedicated interest in the neural dynamics of working memory, but also for a broader community interested in neural dynamics in the healthy adult brain, in relation to auditory stimuli, motor responses, and during periods of rest.

The data were formatted in BIDS and anonymized using MNE-Python version 1.8.0 and MNE-BIDS version 1.6.0.

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019).
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110)

MEG recording: Before undergoing the MEG recording, participants were equipped with external electrodes, positioned to record the electro-oculogram (EOG, horizontal and vertical eye movements) and electrocardiogram (ECG). The positions of the EEG electrodes, four head-position indicator coils, and three fiducial points (nasion, left and right pre-auricular areas) were digitized using a 3D digitizer (Polhemus, US/Canada) for subsequent co-registration with the individual's anatomical MRI. The MEG recordings took place in a magnetically shielded chamber, where the participant was seated in an armchair under the MEG helmet. The electromagnetic brain activity was recorded using a whole-head Elekta Neuromag Vector View 306 MEG system (Neuromag Elekta LTD, Helsinki) with 102 triple-sensor elements (two orthogonal planar gradiometers and one magnetometer per sensor location). Participants were instructed to fixate their gaze on a screen positioned in front of them, at about one meter distance. The chamber was dimly lit. Their head position was measured before each recording run (8 in total) using the head-position indicator coils. MEG recordings were sampled online at 1 kHz, high-pass filtered at 0.03 Hz, and low-pass filtered at 330 Hz.
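One quick consistency check on the acquisition settings above: the 330 Hz anti-aliasing low-pass must sit below the Nyquist frequency of the 1 kHz sampling rate (500 Hz). A minimal sketch — the helper name is illustrative, not an EEGDash or MNE function:

```python
def lowpass_below_nyquist(sfreq_hz, lowpass_hz):
    """True if the low-pass cutoff lies below the Nyquist frequency (sfreq / 2)."""
    return lowpass_hz < sfreq_hz / 2.0

# Settings reported for this dataset: sampled at 1 kHz, low-pass filtered at 330 Hz.
print(lowpass_below_nyquist(1000.0, 330.0))  # → True (330 Hz < 500 Hz)
```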
A two-minute-long resting state recording (eyes open) was performed after the task, used to compute the noise covariance matrix for source reconstruction.

Anatomical MRI recordings: To improve the spatial resolution of the source reconstruction, individual high-resolution structural Magnetic Resonance Imaging (MRI) recordings were used. These were recorded on another day, using a Siemens 3 T Magnetom Prisma Fit MRI scanner. Parameters of the sequence were: slice thickness 1 mm, repetition time TR = 2300 ms, echo time TE = 2.98 ms, and flip angle = 9 degrees.

## Dataset Information

| Dataset ID | `DS006720` |
|----------------|------------|
| Title | Alpha power indexes working memory load for durations |
| Author (year) | `Herbst2025` |
| Canonical | — |
| Importable as | `DS006720`, `Herbst2025` |
| Year | 2025 |
| Authors | Sophie K. Herbst, Izem Mangione, Charbel-Raphaël Segerie, Richard Höchenberger, Tadeusz Kononowicz, Alexandre Gramfort, Virginie van Wassenhove |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds006720.v1.0.0](https://doi.org/10.18112/openneuro.ds006720.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds006720) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006720) | [Source URL](https://openneuro.org/datasets/ds006720) |

### Copy-paste BibTeX

```bibtex
@dataset{ds006720,
  title = {Alpha power indexes working memory load for durations},
  author = {Sophie K. Herbst and Izem Mangione and Charbel-Raphaël Segerie and Richard Höchenberger and Tadeusz Kononowicz and Alexandre Gramfort and Virginie van Wassenhove},
  doi = {10.18112/openneuro.ds006720.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006720.v1.0.0},
}
```

## Technical Details

- Subjects: 24
- Recordings: 246
- Tasks: 3
- Channels: 328 (209), 321 (11), 340 (2), 390
- Sampling rate (Hz): 1000.0 (222), 2000.0
- Duration (hours): 30.72
- Pathology: Healthy
- Modality: Auditory
- Type: Memory
- Size on disk: 136.5 GB
- File count: 246
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds006720.v1.0.0
- Source: openneuro
- OpenNeuro: [ds006720](https://openneuro.org/datasets/ds006720)
- NeMAR: [ds006720](https://nemar.org/dataexplorer/detail?dataset_id=ds006720)

## API Reference

Use the `DS006720` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alpha power indexes working memory load for durations * **Study:** `ds006720` (OpenNeuro) * **Author (year):** `Herbst2025` * **Canonical:** — Also importable as: `DS006720`, `Herbst2025`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 246; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
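Conceptually, the dataset selection and a user-supplied `query` are AND-combined, and a `dataset` key in the user query is rejected. A minimal sketch of that merge — `merge_query` is an illustrative name; the real logic lives inside `EEGDashDataset`:

```python
def merge_query(dataset_id, user_query=None):
    """AND a user-supplied MongoDB-style filter with the fixed dataset filter."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # The dataset filter always applies; user filters only narrow the selection.
    return {"dataset": dataset_id, **user_query}

print(merge_query("ds006720", {"subject": {"$in": ["01", "02"]}}))
# → {'dataset': 'ds006720', 'subject': {'$in': ['01', '02']}}
```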
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006720](https://openneuro.org/datasets/ds006720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006720](https://nemar.org/dataexplorer/detail?dataset_id=ds006720) DOI: [https://doi.org/10.18112/openneuro.ds006720.v1.0.0](https://doi.org/10.18112/openneuro.ds006720.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006720 >>> dataset = DS006720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006720) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006720) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS006735: eeg dataset, 27 subjects *Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding* Access recordings and metadata through EEGDash. **Citation:** Tong Shan, Edmund C. Lalor, Ross K. Maddox (2025). *Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding*. 
[10.18112/openneuro.ds006735.v2.0.0](https://doi.org/10.18112/openneuro.ds006735.v2.0.0) Modality: eeg Subjects: 27 Recordings: 27 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006735 dataset = DS006735(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006735(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006735( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006735, title = {Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding}, author = {Tong Shan and Edmund C. Lalor and Ross K. Maddox}, doi = {10.18112/openneuro.ds006735.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds006735.v2.0.0}, } ``` ## About This Dataset **Details related to access to the data** Please contact the following authors for further information: Tong Shan (email: [tongshan@stanford.edu](mailto:tongshan@stanford.edu)) Ross K. Maddox (email: [rkmaddox@med.umich.edu](mailto:rkmaddox@med.umich.edu)) **Overview** This study examines pitch-time interactions in music processing by introducing “chimeric music,” which pairs two distinct melodies, and exchanges their pitch contours and note onset-times to create two new melodies, thereby distorting musical pattern while maintaining the marginal statistics of the original pieces’ pitch and temporal sequences. 
Data collected from September to November 2023. The details of the experiment can be found in Shan et al. (2024). There were two phases in this experiment. In the first phase, ten trials of one-minute clicks were presented to the subjects. In the second phase, the two types of monophonic music clips (original and chimeric) were presented, with 33 trials per type in shuffled order and a 0.5 s pause between trials. The analysis code for this study can be found in the GitHub repo ([https://github.com/maddoxlab/Chimeric_music](https://github.com/maddoxlab/Chimeric_music)).

**Format**

This dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from subject 001 to subject 027 in raw BrainVision format (.eeg, .vhdr, and .vmrk triplets).

**Subjects**

Twenty-seven subjects (age 22.9 ± 3.9 years, mean ± SD) participated in this study.

**Subject inclusion criteria**

- Age between 18 and 40 years.
- Normal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz.
- English as the primary language.
- Self-reported normal or correctable-to-normal vision.

**Apparatus**

Subjects were seated in a sound-isolating booth on a chair in front of a 24-inch BenQ monitor with a viewing distance of approximately 60 cm.
Stimuli were presented at an average level of 60 dB SPL and a sampling rate of 48000 Hz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. The stimulus presentation for the experiment was controlled by a Python script using a custom package, expyfun. Following the experimental session, participants completed a self-reported musicianship questionnaire (adapted from Whiteford et al., 2025). The questionnaire is included in this repository.

## Dataset Information

| Dataset ID | `DS006735` |
|----------------|------------|
| Title | Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding |
| Author (year) | `Shan2025` |
| Canonical | — |
| Importable as | `DS006735`, `Shan2025` |
| Year | 2025 |
| Authors | Tong Shan, Edmund C. Lalor, Ross K. Maddox |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds006735.v2.0.0](https://doi.org/10.18112/openneuro.ds006735.v2.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds006735) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) | [Source URL](https://openneuro.org/datasets/ds006735) |

### Copy-paste BibTeX

```bibtex @dataset{ds006735, title = {Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding}, author = {Tong Shan and Edmund C. Lalor and Ross K.
Maddox}, doi = {10.18112/openneuro.ds006735.v2.0.0}, url = {https://doi.org/10.18112/openneuro.ds006735.v2.0.0}, } ``` ## Technical Details - Subjects: 27 - Recordings: 27 - Tasks: 1 - Channels: 36 (24), 63 (2), 34 - Sampling rate (Hz): 10000.0 - Duration (hours): 38.84182222222222 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 175.9 GB - File count: 27 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006735.v2.0.0 - Source: openneuro - OpenNeuro: [ds006735](https://openneuro.org/datasets/ds006735) - NeMAR: [ds006735](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) ## API Reference Use the `DS006735` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006735(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding * **Study:** `ds006735` (OpenNeuro) * **Author (year):** `Shan2025` * **Canonical:** — Also importable as: `DS006735`, `Shan2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006735](https://openneuro.org/datasets/ds006735) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006735](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) DOI: [https://doi.org/10.18112/openneuro.ds006735.v2.0.0](https://doi.org/10.18112/openneuro.ds006735.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006735 >>> dataset = DS006735(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006735) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006761: eeg dataset, 31 subjects *Neural decoding of competitive decision-making in Rock-Paper-Scissors* Access recordings and metadata through EEGDash. 
**Citation:** Moerel, Denise, Grootswagers, Tijl, Chin, Jessica L.L., Ciardo, Francesca, Nijhuis, Patti, Quek, Genevieve L., Smit, Sophie, Varlet, Manuel (2025). *Neural decoding of competitive decision-making in Rock-Paper-Scissors*. [10.18112/openneuro.ds006761.v1.0.0](https://doi.org/10.18112/openneuro.ds006761.v1.0.0)

Modality: eeg Subjects: 31 Recordings: 31 License: CC0 Source: openneuro Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS006761

dataset = DS006761(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS006761(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS006761(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds006761,
  title = {Neural decoding of competitive decision-making in Rock-Paper-Scissors},
  author = {Moerel, Denise and Grootswagers, Tijl and Chin, Jessica L.L. and Ciardo, Francesca and Nijhuis, Patti and Quek, Genevieve L. and Smit, Sophie and Varlet, Manuel},
  doi = {10.18112/openneuro.ds006761.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds006761.v1.0.0},
}
```

## About This Dataset

**Experiment Details**

Participants played a computerised version of the competitive Rock-Paper-Scissors game (480 games). We recorded 64-channel EEG from 62 participants, grouped into 31 pairs. Experiment length: 1 hour.

More information: [https://doi.org/10.17605/OSF.IO/YJXKN](https://doi.org/10.17605/OSF.IO/YJXKN) (OSF repository with more information and analysis code)

Moerel, D., Grootswagers, T., Chin, J. L., Ciardo, F., Nijhuis, P., Quek, G.
L., Smit, S. & Varlet, M. (2025). Neural decoding of competitive decision-making in Rock-Paper-Scissors. Social Cognitive And Affective Neuroscience, nsaf101. doi: [https://doi.org/10.1093/scan/nsaf101](https://doi.org/10.1093/scan/nsaf101) ## Dataset Information | Dataset ID | `DS006761` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neural decoding of competitive decision-making in Rock-Paper-Scissors | | Author (year) | `Moerel2025_Neural` | | Canonical | — | | Importable as | `DS006761`, `Moerel2025_Neural` | | Year | 2025 | | Authors | Moerel, Denise, Grootswagers, Tijl, Chin, Jessica L.L., Ciardo, Francesca, Nijhuis, Patti, Quek, Genevieve L., Smit, Sophie, Varlet, Manuel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006761.v1.0.0](https://doi.org/10.18112/openneuro.ds006761.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006761) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) | [Source URL](https://openneuro.org/datasets/ds006761) | ### Copy-paste BibTeX ```bibtex @dataset{ds006761, title = {Neural decoding of competitive decision-making in Rock-Paper-Scissors}, author = {Moerel, Denise and Grootswagers, Tijl and Chin, Jessica L.L. and Ciardo, Francesca and Nijhuis, Patti and Quek, Genevieve L. 
and Smit, Sophie and Varlet, Manuel}, doi = {10.18112/openneuro.ds006761.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006761.v1.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 31 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 2048.0 - Duration (hours): 25.621111111111112 - Pathology: Healthy - Modality: Visual - Type: Decision-making - Size on disk: 78.0 GB - File count: 31 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006761.v1.0.0 - Source: openneuro - OpenNeuro: [ds006761](https://openneuro.org/datasets/ds006761) - NeMAR: [ds006761](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) ## API Reference Use the `DS006761` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural decoding of competitive decision-making in Rock-Paper-Scissors * **Study:** `ds006761` (OpenNeuro) * **Author (year):** `Moerel2025_Neural` * **Canonical:** — Also importable as: `DS006761`, `Moerel2025_Neural`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006761](https://openneuro.org/datasets/ds006761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006761](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) DOI: [https://doi.org/10.18112/openneuro.ds006761.v1.0.0](https://doi.org/10.18112/openneuro.ds006761.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006761 >>> dataset = DS006761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006761) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006768: eeg dataset, 30 subjects *Multiple Object Monitoring (EEG)* Access recordings and metadata through EEGDash. **Citation:** Benjamin G. Lowe, Alexandra Woolgar, Sophie Smit, Anina N. Rich (2025). *Multiple Object Monitoring (EEG)*. 
[10.18112/openneuro.ds006768.v1.1.0](https://doi.org/10.18112/openneuro.ds006768.v1.1.0) Modality: eeg Subjects: 30 Recordings: 210 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006768 dataset = DS006768(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006768(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006768( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006768, title = {Multiple Object Monitoring (EEG)}, author = {Benjamin G. Lowe and Alexandra Woolgar and Sophie Smit and Anina N. Rich}, doi = {10.18112/openneuro.ds006768.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006768.v1.1.0}, } ``` ## About This Dataset Subjects (N = 30) completed a Multiple Object Monitoring (MOM) task. Methodological details can be read within the pre-print: [https://doi.org/10.1101/2025.07.10.663816](https://doi.org/10.1101/2025.07.10.663816) Please email [ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au) if you have any further questions. ## Dataset Information | Dataset ID | `DS006768` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multiple Object Monitoring (EEG) | | Author (year) | `Lowe2025` | | Canonical | — | | Importable as | `DS006768`, `Lowe2025` | | Year | 2025 | | Authors | Benjamin G. Lowe, Alexandra Woolgar, Sophie Smit, Anina N. 
Rich | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006768.v1.1.0](https://doi.org/10.18112/openneuro.ds006768.v1.1.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006768) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) | [Source URL](https://openneuro.org/datasets/ds006768) | ### Copy-paste BibTeX ```bibtex @dataset{ds006768, title = {Multiple Object Monitoring (EEG)}, author = {Benjamin G. Lowe and Alexandra Woolgar and Sophie Smit and Anina N. Rich}, doi = {10.18112/openneuro.ds006768.v1.1.0}, url = {https://doi.org/10.18112/openneuro.ds006768.v1.1.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 210 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.466388888888888 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 6.5 GB - File count: 210 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006768.v1.1.0 - Source: openneuro - OpenNeuro: [ds006768](https://openneuro.org/datasets/ds006768) - NeMAR: [ds006768](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) ## API Reference Use the `DS006768` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multiple Object Monitoring (EEG) * **Study:** `ds006768` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** — Also importable as: `DS006768`, `Lowe2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 210; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006768](https://openneuro.org/datasets/ds006768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006768](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) DOI: [https://doi.org/10.18112/openneuro.ds006768.v1.1.0](https://doi.org/10.18112/openneuro.ds006768.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006768 >>> dataset = DS006768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006768) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006801: eeg dataset, 21 subjects *Resting-state EEG before and after different study methods* Access recordings and metadata through EEGDash. **Citation:** Paloma Victoria de Sales Alves, Antonio Simeão Sobrinho Neto, Carla Alexandra da Silva Moita Minervino (2025). *Resting-state EEG before and after different study methods*. [10.18112/openneuro.ds006801.v1.0.0](https://doi.org/10.18112/openneuro.ds006801.v1.0.0) Modality: eeg Subjects: 21 Recordings: 42 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006801 dataset = DS006801(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006801(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006801( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006801, title = {Resting-state EEG before and after different study methods}, author = {Paloma Victoria de Sales Alves and Antonio Simeão Sobrinho Neto and Carla Alexandra da Silva Moita Minervino}, doi = {10.18112/openneuro.ds006801.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006801.v1.0.0}, } ``` ## About This Dataset The RECAP-EEG: Retrieval with Feedback and Cognitive Adaptation EEG Dataset provides an open-access collection of human electroencephalography (EEG) recordings aimed at investigating the neural correlates of learning processes in educational contexts. The study involved 21 neurotypical undergraduate students (mean age = 23.10 years, SD = 3.92) and was conducted at the Federal University of Paraíba (UFPB), Brazil. Participants were randomly assigned to one of three experimental groups through an automated Python v3.12.7 script that ensured continuous balance among groups. In the active learning group, participants completed a review session using the NeuroShow platform, which consisted of 10 retrieval-practice questions with immediate feedback after each response. In the passive learning group, participants performed a review session based on their own notes taken during the lecture. In the control group, participants watched the same lecture but did not perform any review activity. Before data collection, all participants received detailed written instructions recommending that they avoid consuming caffeine or alcohol for at least 12 hours before the session, maintain a good night’s sleep, and have a proper breakfast on the morning of the experiment. Sessions were scheduled to start at 9:00 a.m., with a maximum delay tolerance of 15 minutes. Upon arrival at the laboratory, participants were briefed about the procedures specific to their group and were given the opportunity to ask questions before the experiment began. 
The first EEG recording (pre-intervention) was then performed, followed by the respective study condition for each group (active, passive, or control), and finally the second EEG recording (post-intervention). EEG signals were recorded using a 32-channel ActiChamp system (Brain Products GmbH, Germany) with active silver/silver chloride (Ag/AgCl) electrodes positioned according to the international 10–20 system. Electrode impedance was kept below 15 kΩ, with the ground at Fpz. Signals were sampled at 500 Hz, filtered between 0.5 and 50 Hz, and recorded at two time points: before and immediately after the study session. Each session lasted approximately nine minutes, comprising four blocks: two eyes-open blocks (2 minutes and 15 seconds each) and two eyes-closed blocks (2 minutes and 15 seconds each). The raw EEG data are organized in compliance with the BIDS (Brain Imaging Data Structure) standard and include .vhdr, .eeg, and .vmrk files, as well as the required metadata and descriptive files. Signal quality was ensured through impedance control and power spectral density (PSD) analysis, which confirmed the integrity and consistency of the recordings. The RECAP-EEG dataset may contribute to research in cognitive neuroscience and learning, particularly studies on retrieval practice with feedback, attentional modulation, and functional reorganization associated with active learning. It also supports interdisciplinary investigations in educational neuroscience, cognitive training, and neural modeling of learning and memory processes. The study was approved by the Research Ethics Committee of the Health Sciences Center at the Federal University of Paraíba (CCS/UFPB) under CAAE number 84958824.1.0000.5188 and approval number 7.400.264. All participants provided written informed consent prior to participation. 
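The session structure described above (four back-to-back blocks of 2 minutes 15 seconds each, sampled at 500 Hz) translates directly into block onsets and sample counts. A minimal sketch, assuming the blocks alternate eyes-open/eyes-closed in that order and run without gaps (the exact ordering of blocks is an assumption here, not stated per-recording):

```python
# DS006801 session layout, per the dataset description:
# four blocks of 2 min 15 s each at 500 Hz.
# The eyes-open/eyes-closed ordering below is assumed for illustration.
SFREQ = 500.0            # sampling rate in Hz
BLOCK_SEC = 2 * 60 + 15  # 135 s per block

blocks = ["eyes_open_1", "eyes_closed_1", "eyes_open_2", "eyes_closed_2"]
onsets_sec = {name: i * BLOCK_SEC for i, name in enumerate(blocks)}
samples_per_block = int(BLOCK_SEC * SFREQ)  # 67500 samples per block
```

These onsets could then be used, for example, to crop an MNE `Raw` object into per-block segments with `raw.copy().crop(tmin, tmax)`.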
The data are released under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction provided that proper credit is given to the original authors. ## Dataset Information | Dataset ID | `DS006801` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Resting-state EEG before and after different study methods | | Author (year) | `Alves2025` | | Canonical | — | | Importable as | `DS006801`, `Alves2025` | | Year | 2025 | | Authors | Paloma Victoria de Sales Alves, Antonio Simeão Sobrinho Neto, Carla Alexandra da Silva Moita Minervino | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006801.v1.0.0](https://doi.org/10.18112/openneuro.ds006801.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006801) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) | [Source URL](https://openneuro.org/datasets/ds006801) | ### Copy-paste BibTeX ```bibtex @dataset{ds006801, title = {Resting-state EEG before and after different study methods}, author = {Paloma Victoria de Sales Alves and Antonio Simeão Sobrinho Neto and Carla Alexandra da Silva Moita Minervino}, doi = {10.18112/openneuro.ds006801.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006801.v1.0.0}, } ``` ## Technical Details - Subjects: 21 - Recordings: 42 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 500.0 - Duration (hours): 6.335461111111111 - Pathology: Healthy - Modality: Resting State - Type: Learning - Size on disk: 1.3 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006801.v1.0.0 - Source: openneuro - OpenNeuro: [ds006801](https://openneuro.org/datasets/ds006801) - NeMAR: [ds006801](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) ## API Reference Use the `DS006801` class to access this 
dataset programmatically. ### *class* eegdash.dataset.DS006801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG before and after different study methods * **Study:** `ds006801` (OpenNeuro) * **Author (year):** `Alves2025` * **Canonical:** — Also importable as: `DS006801`, `Alves2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
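The MongoDB-style filtering described in the notes can be illustrated with a minimal, self-contained matcher. This is illustrative only: the real filtering is performed by the EEGDash metadata service against `ALLOWED_QUERY_FIELDS`, and this sketch supports just equality and the `$in` operator:

```python
def matches(record, query):
    """Minimal MongoDB-style matcher supporting equality and "$in"
    (illustrative sketch; real filtering happens in the EEGDash service)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"subject": "01"}
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
# kept contains the records for subjects "01" and "02"
```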
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006801](https://openneuro.org/datasets/ds006801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006801](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) DOI: [https://doi.org/10.18112/openneuro.ds006801.v1.0.0](https://doi.org/10.18112/openneuro.ds006801.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006801 >>> dataset = DS006801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006801) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006802: eeg dataset, 24 subjects *Collaborative rule learning promotes interbrain information alignment* Access recordings and metadata through EEGDash. **Citation:** Moerel, Denise, Grootswagers, Tijl, Quek, Genevieve L., Smit, Sophie, Varlet, Manuel (2025). *Collaborative rule learning promotes interbrain information alignment*. 
[10.18112/openneuro.ds006802.v1.0.0](https://doi.org/10.18112/openneuro.ds006802.v1.0.0) Modality: eeg Subjects: 24 Recordings: 24 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006802 dataset = DS006802(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006802(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006802( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006802, title = {Collaborative rule learning promotes interbrain information alignment}, author = {Moerel, Denise and Grootswagers, Tijl and Quek, Genevieve L. and Smit, Sophie and Varlet, Manuel}, doi = {10.18112/openneuro.ds006802.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006802.v1.0.0}, } ``` ## About This Dataset Experiment Details We recorded EEG from 24 pairs of participants while they performed a 4-way categorisation task based on rules they first agreed upon together. In addition, participants did a pre- and post-test on the same stimuli. Experiment length: 1 hour More information: Pre-print: Moerel, D., Grootswagers, T., Quek, G.L., Smit, S., & Varlet, M. (2025). Information alignment between interacting brains. bioRxiv. 
doi: [https://doi.org/10.1101/2025.01.07.631802](https://doi.org/10.1101/2025.01.07.631802) Code: [https://doi.org/10.17605/OSF.IO/HE4TU](https://doi.org/10.17605/OSF.IO/HE4TU) ## Dataset Information | Dataset ID | `DS006802` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Collaborative rule learning promotes interbrain information alignment | | Author (year) | `Moerel2025_Collaborative` | | Canonical | — | | Importable as | `DS006802`, `Moerel2025_Collaborative` | | Year | 2025 | | Authors | Moerel, Denise, Grootswagers, Tijl, Quek, Genevieve L., Smit, Sophie, Varlet, Manuel | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006802.v1.0.0](https://doi.org/10.18112/openneuro.ds006802.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006802) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) | [Source URL](https://openneuro.org/datasets/ds006802) | ### Copy-paste BibTeX ```bibtex @dataset{ds006802, title = {Collaborative rule learning promotes interbrain information alignment}, author = {Moerel, Denise and Grootswagers, Tijl and Quek, Genevieve L. and Smit, Sophie and Varlet, Manuel}, doi = {10.18112/openneuro.ds006802.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006802.v1.0.0}, } ``` ## Technical Details - Subjects: 24 - Recordings: 24 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 2048.0 - Duration (hours): 23.414166666666667 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 62.2 GB - File count: 24 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006802.v1.0.0 - Source: openneuro - OpenNeuro: [ds006802](https://openneuro.org/datasets/ds006802) - NeMAR: [ds006802](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) ## API Reference Use the `DS006802` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Collaborative rule learning promotes interbrain information alignment * **Study:** `ds006802` (OpenNeuro) * **Author (year):** `Moerel2025_Collaborative` * **Canonical:** — Also importable as: `DS006802`, `Moerel2025_Collaborative`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
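Conceptually, the dataset class ANDs the user-supplied query with its own fixed `dataset` filter and rejects a `dataset` key in the user query, as the parameter documentation states. A hypothetical sketch of that merge (`merge_query` is not part of the eegdash API; the real logic lives inside `EEGDashDataset`):

```python
def merge_query(dataset_id, user_query=None):
    """Sketch of AND-ing a user query with the dataset filter
    (hypothetical helper; the actual merge is internal to eegdash)."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        # Mirrors the documented constraint on the `query` parameter.
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds006802", {"subject": {"$in": ["01", "02"]}})
```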
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006802](https://openneuro.org/datasets/ds006802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006802](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) DOI: [https://doi.org/10.18112/openneuro.ds006802.v1.0.0](https://doi.org/10.18112/openneuro.ds006802.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006802 >>> dataset = DS006802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006802) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006803: eeg dataset, 63 subjects *NeuroTechs Dataset for Stem Skills* Access recordings and metadata through EEGDash. **Citation:** Tania Yareni Pech-Canul, Roberto Guajardo, Luis Fernando Acosta-Soto, Mónica Sofía Margoya-Constantino, Juan Pablo Rosado-Aíza, Luz María Alonso-Valerdi (2025). *NeuroTechs Dataset for Stem Skills*. 
[10.18112/openneuro.ds006803.v1.1.1](https://doi.org/10.18112/openneuro.ds006803.v1.1.1) Modality: eeg Subjects: 63 Recordings: 126 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006803 dataset = DS006803(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006803(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006803( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006803, title = {NeuroTechs Dataset for Stem Skills}, author = {Tania Yareni Pech-Canul and Roberto Guajardo and Luis Fernando Acosta-Soto and Mónica Sofía Margoya-Constantino and Juan Pablo Rosado-Aíza and Luz María Alonso-Valerdi}, doi = {10.18112/openneuro.ds006803.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006803.v1.1.1}, } ``` ## About This Dataset **README** **Details related to access to the data** - Contact person: Juan Pablo Rosado Aíza [jprosadoa@gmail.com](mailto:jprosadoa@gmail.com) ORCID 0009-0004-5690-1753 - Practical information to access the data: The data units are in microvolts, transformed from raw Unicorn API for Python values. **Overview** Evaluating STEM skills in students - Year(s) that the project ran: May to July 2025 - Brief overview of the tasks in the experiment: Participants answered a computer test through PsychoPy.
The paradigm includes a 2-minute basal state (minute 1 with eyes closed, minute 2 with eyes open) and sections for each skill evaluated: 4 math sections, 1 per basic operation (addition, subtraction, multiplication and division), 1 programming section and 1 spatial ability section. The sections ran until either time or questions ran out. There was a 30-second break between sections. The event markers with each question, answer and time can be found within each subject folder. The point of the paradigm is to compare different class groups and their global performance. The point of the EEG data is to image the brain for potential analysis of band activity to help explain differences in the groups. The experimental group took classes using interactive tools like Google Colab. - Description of the contents of the dataset: 8-channel EEG data for 63 subjects, 23 experimental ("intervention") subjects and 40 control subjects. You can find both raw (Session 1) and preprocessed (Session 2) data. All EEG data start at second 3, since seconds 0-3 were cut in preprocessing. The timestamps in all event markers use this time signature (a timestamp at second 3 corresponds to sample 1; second 4 is sample 251). - Independent variables: group assignment. - Dependent variables: performance, EEG data. - Control variables: time of participation (end of semester), place of data acquisition, status as student. **Methods** **Subjects** All subjects are either experimental or control; IDs use the format XXc for control and XXe for experimental. Subject inclusion/exclusion criteria: only students enrolled in the course at hand. Participants 1e, 3e, 4e, 6e, 9e, 10e, 12e, 14e, 15e, 24e, 25e, 33e, 34e, 36e, 37e, 39e, 40e, 41e, 14c and 16c were outliers on RMS voltage. **Apparatus** The experiment took place in a closed room with a single researcher present to give instructions and answer any questions.
There was a laptop, and the EEG device was mounted using conductive gel. **Initial setup** Participants first signed paper consent forms; afterwards, impedance measurements were made using the UHB recorder software until all signals read "good" in the software. The subjects then answered the test. **Task organization** The test's sections are neither randomized nor counterbalanced; the order is as described above. The questions within each section were randomized. **Task details** Each question answered has a code, an answer and a timestamp, which can be found in the corresponding main section file for each subject. The questions themselves, with codes and correct answers, can be found in the stimuli folder. **Additional data acquired** Average cycle data for female subjects were calculated for each group, anonymously. Refer to extra_metadata.xlsx. **Experimental location** All data were collected in a controlled environment. **Missing data** Subjects 17c, 30e, 32e and 35e were lost in the process of acquisition. All records start at second 3, instead of second 0, to eliminate connectivity noise and drift at the beginning. The basal state lasted 123 seconds to account for this, so the first 120 seconds correspond to the basal states. All responses to "OR4" in the spatial ability sections are invalid, given that the correct answer is not among the options; it was excluded from all calculations shown in extra_metadata.xlsx.
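Given the convention stated in the README (recordings start at second 3, which is sample 1, at a 250 Hz sampling rate, so second 4 is sample 251), converting an event timestamp to a 1-based sample index is a one-line computation. `second_to_sample` below is a hypothetical helper for illustration, not part of eegdash or the dataset:

```python
SFREQ = 250     # Hz, per the dataset description
START_SEC = 3   # recordings begin at second 3, which is sample 1

def second_to_sample(t_sec):
    """Map an event timestamp in seconds to a 1-based sample index
    under the DS006803 convention (hypothetical helper)."""
    return int(round((t_sec - START_SEC) * SFREQ)) + 1

second_to_sample(3)  # sample 1
second_to_sample(4)  # sample 251
```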
## Dataset Information | Dataset ID | `DS006803` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NeuroTechs Dataset for Stem Skills | | Author (year) | `PechCanul2025` | | Canonical | — | | Importable as | `DS006803`, `PechCanul2025` | | Year | 2025 | | Authors | Tania Yareni Pech-Canul, Roberto Guajardo, Luis Fernando Acosta-Soto, Mónica Sofía Margoya-Constantino, Juan Pablo Rosado-Aíza, Luz María Alonso-Valerdi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006803.v1.1.1](https://doi.org/10.18112/openneuro.ds006803.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006803) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) | [Source URL](https://openneuro.org/datasets/ds006803) | ### Copy-paste BibTeX ```bibtex @dataset{ds006803, title = {NeuroTechs Dataset for Stem Skills}, author = {Tania Yareni Pech-Canul and Roberto Guajardo and Luis Fernando Acosta-Soto and Mónica Sofía Margoya-Constantino and Juan Pablo Rosado-Aíza and Luz María Alonso-Valerdi}, doi = {10.18112/openneuro.ds006803.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006803.v1.1.1}, } ``` ## Technical Details - Subjects: 63 - Recordings: 126 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 250.0 - Duration (hours): 45.946035555555554 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 1.4 GB - File count: 126 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006803.v1.1.1 - Source: openneuro - OpenNeuro: [ds006803](https://openneuro.org/datasets/ds006803) - NeMAR: [ds006803](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) ## API Reference Use the `DS006803` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006803(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroTechs Dataset for Stem Skills * **Study:** `ds006803` (OpenNeuro) * **Author (year):** `PechCanul2025` * **Canonical:** — Also importable as: `DS006803`, `PechCanul2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006803](https://openneuro.org/datasets/ds006803) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006803](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) DOI: [https://doi.org/10.18112/openneuro.ds006803.v1.1.1](https://doi.org/10.18112/openneuro.ds006803.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006803 >>> dataset = DS006803(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006803) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006817: eeg dataset, 34 subjects *Visual Attribute-Specific Contextual Trajectory Paradigm 2.0* Access recordings and metadata through EEGDash. **Citation:** Benjamin Lowe ([ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au)), Naohide Yamamoto ([naohide.yamamoto@qut.edu.au](mailto:naohide.yamamoto@qut.edu.au)), Jonathan Robinson ([jonathan.robinson@monash.edu](mailto:jonathan.robinson@monash.edu)), Patrick Johnston ([dr.pat.johnston@icloud.com](mailto:dr.pat.johnston@icloud.com)) (2025). *Visual Attribute-Specific Contextual Trajectory Paradigm 2.0*. 
[10.18112/openneuro.ds006817.v1.0.0](https://doi.org/10.18112/openneuro.ds006817.v1.0.0) Modality: eeg Subjects: 34 Recordings: 34 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006817 dataset = DS006817(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006817(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006817( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006817, title = {Visual Attribute-Specific Contextual Trajectory Paradigm 2.0}, author = {Benjamin Lowe (ben.lowe@mq.edu.au) and Naohide Yamamoto (naohide.yamamoto@qut.edu.au) and Jonathan Robinson (jonathan.robinson@monash.edu) and Patrick Johnston (dr.pat.johnston@icloud.com)}, doi = {10.18112/openneuro.ds006817.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006817.v1.0.0}, } ``` ## About This Dataset TBD upon publication. 
Associated pre-print: [https://doi.org/10.1101/2025.08.18.670829](https://doi.org/10.1101/2025.08.18.670829) ## Dataset Information | Dataset ID | `DS006817` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual Attribute-Specific Contextual Trajectory Paradigm 2.0 | | Author (year) | `Lowe2025` | | Canonical | `VisualContextTrajectory_v2` | | Importable as | `DS006817`, `Lowe2025`, `VisualContextTrajectory_v2` | | Year | 2025 | | Authors | Benjamin Lowe ([ben.lowe@mq.edu.au](mailto:ben.lowe@mq.edu.au)), Naohide Yamamoto ([naohide.yamamoto@qut.edu.au](mailto:naohide.yamamoto@qut.edu.au)), Jonathan Robinson ([jonathan.robinson@monash.edu](mailto:jonathan.robinson@monash.edu)), Patrick Johnston ([dr.pat.johnston@icloud.com](mailto:dr.pat.johnston@icloud.com)) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006817.v1.0.0](https://doi.org/10.18112/openneuro.ds006817.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006817) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) | [Source URL](https://openneuro.org/datasets/ds006817) | ### Copy-paste BibTeX ```bibtex @dataset{ds006817, title = {Visual Attribute-Specific Contextual Trajectory Paradigm 2.0}, author = {Benjamin Lowe (ben.lowe@mq.edu.au) and Naohide Yamamoto (naohide.yamamoto@qut.edu.au) and Jonathan Robinson (jonathan.robinson@monash.edu) and Patrick Johnston (dr.pat.johnston@icloud.com)}, doi = {10.18112/openneuro.ds006817.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006817.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 34 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1024.0 - Duration (hours): 21.68111111111111 - 
Pathology: Not specified - Modality: — - Type: — - Size on disk: 9.7 GB - File count: 34 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006817.v1.0.0 - Source: openneuro - OpenNeuro: [ds006817](https://openneuro.org/datasets/ds006817) - NeMAR: [ds006817](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) ## API Reference Use the `DS006817` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm 2.0 * **Study:** `ds006817` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** `VisualContextTrajectory_v2` Also importable as: `DS006817`, `Lowe2025`, `VisualContextTrajectory_v2`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006817](https://openneuro.org/datasets/ds006817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006817](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) DOI: [https://doi.org/10.18112/openneuro.ds006817.v1.0.0](https://doi.org/10.18112/openneuro.ds006817.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006817 >>> dataset = DS006817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006817) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006839: eeg dataset, 36 subjects *EEG recordings during sham neurofeedback in virtual reality* Access recordings and metadata through EEGDash. **Citation:** C. Brigitte Aguilar Gonzales, Collaborators from the Experimental and Computational Neuroscience Group (2025). *EEG recordings during sham neurofeedback in virtual reality*. 
[10.18112/openneuro.ds006839.v1.0.0](https://doi.org/10.18112/openneuro.ds006839.v1.0.0) Modality: eeg Subjects: 36 Recordings: 144 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006839 dataset = DS006839(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006839(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006839( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006839, title = {EEG recordings during sham neurofeedback in virtual reality}, author = {C. Brigitte Aguilar Gonzales and Collaborators from the Experimental and Computational Neuroscience Group}, doi = {10.18112/openneuro.ds006839.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006839.v1.0.0}, } ``` ## About This Dataset EEG recordings during sham neurofeedback in virtual reality Description This dataset contains EEG recordings acquired during a sham neurofeedback experiment conducted in a virtual reality (VR) environment. The study aimed to investigate how feedback valence (positive, negative, or control) modulates alpha-band activity during an attentional task. EEG signals were recorded using a 32-channel SynAmps RT amplifier (Compumedics NeuroScan Inc., Charlotte, NC, USA) and Ag/AgCl passive electrodes mounted on an elastic cap (Wuhan Greentek Pty. Ltd., China) following the extended 10–20 international system. Each participant completed four conditions: Positive feedback (S##_p.cnt) — sham feedback with a reinforcement valence.
Negative feedback (S##_n.cnt) - sham feedback with a punishment valence. Control (S##_c.cnt) — participants observed the VR environment without any feedback. Resting-state (S##_resting.cnt) — participants alternated between eyes open and eyes closed conditions. Experimental design Feedback blocks: Each feedback condition consisted of four blocks of approximately 2 minutes each. Events: 238 — marks the beginning of each 2-minute feedback block. 222 — indicates an increase in brightness or volume of VR objects. 190 — indicates a decrease in brightness or volume. 126 — marks the beginning and end of eyes open/closed periods during the resting condition. Resting-state order: Eyes open first, followed by eyes closed. Data format Original EEG recordings were collected in .cnt format (NeuroScan). Data were converted to the Brain Imaging Data Structure (BIDS) format using the MNE-BIDS toolbox (Appelhoff et al., 2019). Each subject folder (e.g., sub-01/) contains EEG data files (.eeg), event markers, and corresponding JSON sidecar files with acquisition parameters. Data availability The BIDS-formatted dataset is publicly available on the OpenNeuro repository and linked through the OSF Wiki project. References Appelhoff, S., Sanderson, M., Brooks, T. L., van Vliet, M., Quentin, R., Holdgraf, C., … Gramfort, A. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
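The trigger codes above can be collected into a small lookup table when parsing the events files. An illustrative sketch (the variable names and the toy event list are assumptions for illustration, not EEGDash API):

```python
# Meaning of the numeric trigger codes used in this dataset.
EVENT_CODES = {
    238: "start of a 2-minute feedback block",
    222: "increase in brightness/volume of VR objects",
    190: "decrease in brightness/volume of VR objects",
    126: "start/end of an eyes open/closed period (resting)",
}

# Toy list of (sample, code) events; in practice these would come
# from the BIDS _events.tsv files.
events = [(1000, 238), (5500, 222), (9100, 190), (31000, 238)]

# Keep only the feedback-block onsets (code 238).
block_onsets = [sample for sample, code in events if code == 238]
print(block_onsets)  # [1000, 31000]
```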
## Dataset Information | Dataset ID | `DS006839` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG recordings during sham neurofeedback in virtual reality | | Author (year) | `Gonzales2025` | | Canonical | — | | Importable as | `DS006839`, `Gonzales2025` | | Year | 2025 | | Authors | C. Brigitte Aguilar Gonzales, Collaborators from the Experimental and Computational Neuroscience Group | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006839.v1.0.0](https://doi.org/10.18112/openneuro.ds006839.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006839) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) | [Source URL](https://openneuro.org/datasets/ds006839) | ### Copy-paste BibTeX ```bibtex @dataset{ds006839, title = {EEG recordings during sham neurofeedback in virtual reality}, author = {C. Brigitte Aguilar Gonzales and Collaborators from the Experimental and Computational Neuroscience Group}, doi = {10.18112/openneuro.ds006839.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006839.v1.0.0}, } ``` ## Technical Details - Subjects: 36 - Recordings: 144 - Tasks: 4 - Channels: 29 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.62208888888889 - Pathology: Healthy - Modality: Multisensory - Type: Attention - Size on disk: 10.4 GB - File count: 144 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006839.v1.0.0 - Source: openneuro - OpenNeuro: [ds006839](https://openneuro.org/datasets/ds006839) - NeMAR: [ds006839](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) ## API Reference Use the `DS006839` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006839(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings during sham neurofeedback in virtual reality * **Study:** `ds006839` (OpenNeuro) * **Author (year):** `Gonzales2025` * **Canonical:** — Also importable as: `DS006839`, `Gonzales2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 36; recordings: 144; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006839](https://openneuro.org/datasets/ds006839) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006839](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) DOI: [https://doi.org/10.18112/openneuro.ds006839.v1.0.0](https://doi.org/10.18112/openneuro.ds006839.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006839 >>> dataset = DS006839(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006839) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006840: eeg dataset, 15 subjects *IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset* Access recordings and metadata through EEGDash. **Citation:** Mengpu Cai, Rongrong Fu, Yaodong Wang, Bin Lu, Saiwei Guo, Fangyao Xu (2025). *IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset*. 
[10.18112/openneuro.ds006840.v1.0.0](https://doi.org/10.18112/openneuro.ds006840.v1.0.0) Modality: eeg Subjects: 15 Recordings: 128 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006840 dataset = DS006840(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006840(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006840( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006840, title = {IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset}, author = {Mengpu Cai and Rongrong Fu and Yaodong Wang and Bin Lu and Saiwei Guo and Fangyao Xu}, doi = {10.18112/openneuro.ds006840.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006840.v1.0.0}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T. L., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography.
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS006840` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset | | Author (year) | `Cai2025` | | Canonical | `IACKD` | | Importable as | `DS006840`, `Cai2025`, `IACKD` | | Year | 2025 | | Authors | Mengpu Cai, Rongrong Fu, Yaodong Wang, Bin Lu, Saiwei Guo, Fangyao Xu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006840.v1.0.0](https://doi.org/10.18112/openneuro.ds006840.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006840) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) | [Source URL](https://openneuro.org/datasets/ds006840) | ### Copy-paste BibTeX ```bibtex @dataset{ds006840, title = {IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset}, author = {Mengpu Cai and Rongrong Fu and Yaodong Wang and Bin Lu and Saiwei Guo and Fangyao Xu}, doi = {10.18112/openneuro.ds006840.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006840.v1.0.0}, } ``` ## Technical Details - Subjects: 15 - Recordings: 128 - Tasks: 1 - Channels: 29 (96), 31 (32) - Sampling rate (Hz): 1024.0 - Duration (hours): 14.781475694444444 - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 6.0 GB - File count: 128 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006840.v1.0.0 - Source: openneuro - OpenNeuro: [ds006840](https://openneuro.org/datasets/ds006840) - NeMAR: [ds006840](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) ## API Reference Use the `DS006840` class to access this dataset programmatically.
### *class* eegdash.dataset.DS006840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset * **Study:** `ds006840` (OpenNeuro) * **Author (year):** `Cai2025` * **Canonical:** `IACKD` Also importable as: `DS006840`, `Cai2025`, `IACKD`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006840](https://openneuro.org/datasets/ds006840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006840](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) DOI: [https://doi.org/10.18112/openneuro.ds006840.v1.0.0](https://doi.org/10.18112/openneuro.ds006840.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006840 >>> dataset = DS006840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006840) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006848: eeg dataset, 30 subjects *AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits* Access recordings and metadata through EEGDash. **Citation:** Alexandra I. Kosachenko, Danil I. Syttykov, Dmitry A. Tarasov, Alexander I. Kotyusov, Yuri G. Pavlov (2025). *AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits*. 
[10.18112/openneuro.ds006848.v1.0.0](https://doi.org/10.18112/openneuro.ds006848.v1.0.0) Modality: eeg Subjects: 30 Recordings: 52 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006848 dataset = DS006848(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006848(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006848( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006848, title = {AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits}, author = {Alexandra I. Kosachenko and Danil I. Syttykov and Dmitry A. Tarasov and Alexander I. Kotyusov and Yuri G. Pavlov}, doi = {10.18112/openneuro.ds006848.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006848.v1.0.0}, } ``` ## About This Dataset Overview This dataset consists of raw 64-channel EEG, electrocardiography (ECG), photoplethysmography (PPG), and behavioral data recorded from 30 healthy young adults during two experimental conditions: resting state and a verbal working memory (digit span) task with serial recall. Resting-state recording During the resting-state session, participants alternated between four 1-minute blocks of eyes-closed and eyes-open resting, followed by 3 minutes 52 seconds of passive cartoon watching (“The Man Who Was Afraid of Falling”, 2011). EEG, ECG, and PPG were recorded continuously throughout this session. 
Verbal working memory task In the verbal working memory task, participants were presented visually with sequences of seven digits under four different presentation modes: > 1. Simultaneous – all seven digits presented together for 2800 ms; > 2. Fast sequential – each digit presented for 400 ms; > 3. Fast + delay sequential – each digit presented for 400 ms with a 600 ms inter-stimulus interval (ISI); > 4. Slow sequential – each digit presented for 1000 ms. They were instructed to memorize each sequence and type the digits in serial order using the right hand on the numpad. Behavioral accuracy and partial-score measures were computed for each trial. Data organization Each participant folder (sub-XXX) contains: > - eeg/ — EEG, ECG, and PPG recordings in BrainVision format (.vhdr, .vmrk, .eeg) accompanied by event (_events.tsv) and metadata (.json) files. When available, both the resting-state (task-rest) and working-memory (task-verbalwm) recordings are stored here. - beh/ — behavioral data (_beh.tsv and _beh.json) with trial-by-trial recall accuracy, sequence information, and response measures. Participants The dataset includes 30 participants (age range 18–32 years; 23 females, 7 males). Most were right-handed, with a few left-handed or ambidextrous. All participants contributed working memory EEG and behavioral data. Several lacked resting-state data for EEG, PPG, and ECG: sub-002, sub-003, sub-004, sub-005, sub-006, sub-008, sub-009, sub-011. Potential applications This dataset can be used to: 1. Develop algorithms that classify working memory load. 2. Study neural signals, including event-related potentials and oscillations, alongside peripheral physiology from ECG and PPG during encoding, maintenance, and retrieval at a fine time scale for each sequential item. 3. Examine how neural and physiological signals relate to behavioral accuracy and retrieval time on a trial-by-trial basis.
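As a concrete illustration of a trial-level partial-score measure for serial recall, here is a minimal sketch under one common definition — the fraction of digits reported in the correct serial position. This definition is an assumption for illustration, not necessarily the exact measure used by the dataset's authors:

```python
def partial_score(target: str, response: str) -> float:
    """Assumed measure: fraction of digits recalled in the correct
    serial position (position-wise match between target and response)."""
    hits = sum(t == r for t, r in zip(target, response))
    return hits / len(target)

print(partial_score("7391846", "7391846"))  # 1.0 (perfect serial recall)
print(partial_score("7391846", "7291846"))  # 6/7: one position wrong
```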
## Dataset Information | Dataset ID | `DS006848` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits | | Author (year) | `Kosachenko2025` | | Canonical | — | | Importable as | `DS006848`, `Kosachenko2025` | | Year | 2025 | | Authors | Alexandra I. Kosachenko, Danil I. Syttykov, Dmitry A. Tarasov, Alexander I. Kotyusov, Yuri G. Pavlov | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006848.v1.0.0](https://doi.org/10.18112/openneuro.ds006848.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006848) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) | [Source URL](https://openneuro.org/datasets/ds006848) | ### Copy-paste BibTeX ```bibtex @dataset{ds006848, title = {AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits}, author = {Alexandra I. Kosachenko and Danil I. Syttykov and Dmitry A. Tarasov and Alexander I. Kotyusov and Yuri G. Pavlov}, doi = {10.18112/openneuro.ds006848.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006848.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 52 - Tasks: 2 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 47.409621666666666 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 41.4 GB - File count: 52 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006848.v1.0.0 - Source: openneuro - OpenNeuro: [ds006848](https://openneuro.org/datasets/ds006848) - NeMAR: [ds006848](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) ## API Reference Use the `DS006848` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits * **Study:** `ds006848` (OpenNeuro) * **Author (year):** `Kosachenko2025` * **Canonical:** — Also importable as: `DS006848`, `Kosachenko2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 52; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006848](https://openneuro.org/datasets/ds006848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006848](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) DOI: [https://doi.org/10.18112/openneuro.ds006848.v1.0.0](https://doi.org/10.18112/openneuro.ds006848.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006848 >>> dataset = DS006848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006848) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006850: eeg dataset, 63 subjects *Urban Appraisal: Physiological Recording during Rating of Different Urban Environments* Access recordings and metadata through EEGDash. **Citation:** Carolina Zaehme, Isabelle Sander, Klaus Gramann (2025). *Urban Appraisal: Physiological Recording during Rating of Different Urban Environments*. 
[10.18112/openneuro.ds006850.v1.0.0](https://doi.org/10.18112/openneuro.ds006850.v1.0.0) Modality: eeg Subjects: 63 Recordings: 126 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006850 dataset = DS006850(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006850(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006850( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006850, title = {Urban Appraisal: Physiological Recording during Rating of Different Urban Environments}, author = {Carolina Zaehme and Isabelle Sander and Klaus Gramann}, doi = {10.18112/openneuro.ds006850.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006850.v1.0.0}, } ``` ## About This Dataset **README** **Details related to access to the data** - Data user agreement The EEG dataset is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license. You are free to use, share, and adapt the data, provided appropriate credit is given.
Please ensure compliance with any applicable ethical and institutional guidelines. Ethical approval for the data collection was obtained from the Ethics Board of the Institute of Psychology and Ergonomics at Technische Universität Berlin (ethics protocol BPN_GRA_230,415). - Contact person Isabelle Sander [isabelle.sander@tu-berlin.de](mailto:isabelle.sander@tu-berlin.de) ORCID: 0009-0006-0304-7690 - Practical information to access the data NA **Overview** - Project name (if relevant) Urban Appraisal - Year(s) that the project ran Data was collected between April and July 2024. - Brief overview of the tasks in the experiment The data was recorded to investigate the influence of different urban environments and their elements on urban appraisals and neural responses. Participants were presented with, and rated, Streetview images of different urban environments on a desktop PC. - Description of the contents of the dataset Continuous EEG, ECG, and EDA (GSR) data from 63 participants. The data is separated into two blocks, between which participants took a break. ECG and EDA data were recorded using an ExG amplifier by BrainProducts and are thus included in the EEG datasets as additional channels. Note: while the EDA data is labeled as being recorded in microvolts, the actual unit is microsiemens! - Independent variables 56 different Streetview images (available via github.com/BeMoBIL/urban_appraisal_experiment) presented in combination with 9 different prompts and scales. Semantic segmentation was used to extract the area of each image covered by buildings, greenery, cars, sky, and people, for use as predictors of subjective and neural responses. - Dependent variables Stimulus-onset ERPs (P1 and N1 at the occipital electrode cluster POz, Oz, O1, and O2; P3 and LPP at the parietal cluster CPz, Pz, P3, and P4) as well as subjective ratings on 9 scales. - Control variables The experiment was performed in the same room with the same setup and under the same lighting conditions.
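Given the unit caveat above (EDA stored with a microvolt label but actually measured in microsiemens), the fix is a relabel, not a rescale. A minimal sketch; the channel names and the dict-based unit table are hypothetical stand-ins for whatever appears in the BIDS `channels.tsv`:

```python
def fix_eda_units(units: dict, eda_channels=("GSR",)) -> dict:
    # Relabel only: the stored sample values are already in microsiemens,
    # so no numeric rescaling is applied.
    return {ch: ("µS" if ch in eda_channels else u) for ch, u in units.items()}

# Hypothetical unit table as it might be read from channels.tsv.
units = {"Cz": "µV", "ECG": "µV", "GSR": "µV"}
fixed = fix_eda_units(units)
```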
- Quality assessment of the data Data is of generally good quality. For the preprocessing steps used, see the publication. **Methods** **Subjects** Subjects were recruited from the participant pool of TU Berlin and consisted of students who participated for course credit as well as citizens of Berlin who participated for monetary remuneration. 63 subjects were included (age M = 29.16 years, SD = 7.53, range = 19–61 years; 29 male, 33 female, 1 non-binary). Remember that `Control` or `Patient` status should be defined in the `participants.tsv` using a group column. **Apparatus** Participants were seated. The experiment was presented on a 27” (diagonal) monitor with a 60 Hz refresh rate at a resolution of 2560×1440 using Psychtoolbox (Brainard, 1997; Kleiner et al., 2007) for MATLAB (The MathWorks Inc., Version 2023b). 64-channel EEG data were recorded with actively amplified wet electrodes in the 10-20 system using FCz as reference. ECG data were collected using one electrode at the right clavicle and one at the left shinbone. EDA data were recorded from the middle and ring fingers of the non-dominant hand. The data was sampled at 500 Hz with 16-bit resolution using BrainAmp DC amplifiers from BrainProducts (BrainProducts GmbH, Gilching, Germany) with a 0.016 Hz high-pass filter during data acquisition. **Initial setup** Participants signed consent and were then prepped for EEG. Electrodes were gelled and impedances kept under 10 kOhm. Pre-gelled ECG electrodes were applied after the skin was shaved and cleaned using alcohol. EDA velcro electrodes were applied and gelled with isotonic gel. **Task organization** Two sessions (pre and post break): stimuli were separated into 28 pre and 28 post break. Within blocks, stimulus × scale presentations were randomized.
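The block structure above (28 stimuli before and 28 after the break, with stimulus × scale presentations randomized within each block) can be sketched as follows; the function, seed, and integer stimulus IDs are illustrative assumptions, not the experiment's actual code:

```python
import random

def build_trial_order(n_stimuli=56, n_scales=9, block_size=28, seed=0):
    rng = random.Random(seed)
    stimuli = list(range(n_stimuli))
    rng.shuffle(stimuli)              # assign stimuli to pre/post-break blocks
    blocks = []
    for start in range(0, n_stimuli, block_size):
        # All stimulus x scale pairs for this block, in random order.
        trials = [(s, sc) for s in stimuli[start:start + block_size]
                  for sc in range(n_scales)]
        rng.shuffle(trials)
        blocks.append(trials)
    return blocks

pre_break, post_break = build_trial_order()
```

Each block then contains 28 × 9 = 252 trials, 504 in total, matching the counts described in the task details.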
**Task details** During the experiment, participants were presented with different urban stimuli and had to subsequently rate them on the nine subjective rating scales (arousal, valence, dominance, stress, openness, safety, beauty, hominess, and fascination). Each of the 56 stimuli was rated on each of the nine scales, resulting in 504 trials. The stimulus-scale combinations were randomized individually for each participant and presented across two blocks of 28 stimuli each, separated by a break. Each experimental trial consisted of participants being presented with a word pair for 1000 ms, priming them to the scale they would be presented with (arousal: excited – calm; valence: happy – unhappy; dominance: controlled – in control; stress: relaxed – stressful; openness: narrow – open; safety: unsafe – safe; beauty: ugly – beautiful; hominess: alienated – at home; fascination: boring – fascinating). Subsequently, a fixation cross appeared for 500 ms, followed by a stimulus for 3000 ms. After the stimulus disappeared, the rating scale was presented until participants logged a rating using the computer mouse. At the beginning and end of the experiment there was a 3 min baseline recording in which participants kept their eyes open, looking at the screen. **Additional data acquired** Subjective rating data on the 9 scales per stimulus, as well as sociodemographic data of participants (extraversion, emotional stability, size of the city they spent the first 15 years of their life in), were also collected. This data is also available under TBA. Stimuli used are available under TBA. **Experimental location** Small lab in BeMoBIL at TU Berlin. **Missing data** NA **Notes** Data was recorded by Carolina Zähme, Kim Aljoscha Bressem and Isabelle Sander.
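From the timings above, the fixed (non-self-paced) portion of a session can be bounded with simple arithmetic; a back-of-the-envelope check:

```python
# Fixed trial events: 1000 ms prime + 500 ms fixation + 3000 ms stimulus.
fixed_per_trial_s = (1000 + 500 + 3000) / 1000
n_trials = 56 * 9                 # 504 stimulus-scale trials
baseline_s = 2 * 3 * 60           # 3 min eyes-open baseline at start and end
lower_bound_s = n_trials * fixed_per_trial_s + baseline_s
print(lower_bound_s / 60)         # minutes
```

Ratings are self-paced, so this is only a lower bound (about 44 minutes) on the actual session length.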
## Dataset Information | Dataset ID | `DS006850` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Urban Appraisal: Physiological Recording during Rating of Different Urban Environments | | Author (year) | `Zaehme2025` | | Canonical | — | | Importable as | `DS006850`, `Zaehme2025` | | Year | 2025 | | Authors | Carolina Zaehme, Isabelle Sander, Klaus Gramann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006850.v1.0.0](https://doi.org/10.18112/openneuro.ds006850.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006850) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) | [Source URL](https://openneuro.org/datasets/ds006850) | ### Copy-paste BibTeX ```bibtex @dataset{ds006850, title = {Urban Appraisal: Physiological Recording during Rating of Different Urban Environments}, author = {Carolina Zaehme and Isabelle Sander and Klaus Gramann}, doi = {10.18112/openneuro.ds006850.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006850.v1.0.0}, } ``` ## Technical Details - Subjects: 63 - Recordings: 126 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 500.0 - Duration (hours): 78.82088222222221 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 34.7 GB - File count: 126 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006850.v1.0.0 - Source: openneuro - OpenNeuro: [ds006850](https://openneuro.org/datasets/ds006850) - NeMAR: [ds006850](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) ## API Reference Use the `DS006850` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS006850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Urban Appraisal: Physiological Recording during Rating of Different Urban Environments * **Study:** `ds006850` (OpenNeuro) * **Author (year):** `Zaehme2025` * **Canonical:** — Also importable as: `DS006850`, `Zaehme2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006850](https://openneuro.org/datasets/ds006850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006850](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) DOI: [https://doi.org/10.18112/openneuro.ds006850.v1.0.0](https://doi.org/10.18112/openneuro.ds006850.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006850 >>> dataset = DS006850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006850) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006861: eeg dataset, 120 subjects *Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals* Access recordings and metadata through EEGDash. **Citation:** Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek (2025). *Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals*. 
[10.18112/openneuro.ds006861.v1.0.2](https://doi.org/10.18112/openneuro.ds006861.v1.0.2) Modality: eeg Subjects: 120 Recordings: 239 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006861 dataset = DS006861(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006861(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006861( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006861, title = {Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds006861.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006861.v1.0.2}, } ``` ## About This Dataset **Emotion Processing and Regulation Task (Static Stimuli) — tDCS‑EEG Dataset** This repository provides EEG recordings and behavioral data from the **Emotion Processing and Regulation** task conducted with **transcranial direct current stimulation (tDCS)**.
*Preregistration:* [https://osf.io/qdp3w](https://osf.io/qdp3w) *Preprint:* [https://osf.io/qtm8r](https://osf.io/qtm8r) **Overview** Each participant took part in **two experimental sessions**: * **`ses-1`** — Sham stimulation * **`ses-2`** — Active stimulation The order of sham/active conditions was counterbalanced across participants. **Participants** * **N = 120** right-handed, neurologically healthy adults with normal or corrected-to-normal vision. *Missing data:* Participant `sub-005` completed only `ses-1` due to a recording error during `ses-2`. **Experimental Task** Participants completed 120 trials per session, evenly allocated to a 2 (content: social, non-social) × 3 (regulation requirement: watch-neutral, watch-negative, reappraise-negative) factorial design. On each trial, they viewed a static image for 5 s and either watched or reappraised it as instructed. After each image, participants rated its arousal and then valence on separate 9-point scales. **tDCS Stimulation** *System:* Starstim 8 (Neuroelectrics, Spain) with **NIC2** software. **Electrode Montage** Stimulation was targeted to the dorsolateral prefrontal cortex (dlPFC) in two alternative montages: * **Right dlPFC stimulation** > *Anode:* **F4** > *Returns:* **FP2, FZ, FC2, FC6** * **Left dlPFC stimulation** > *Anode:* **F3** > *Returns:* **FP1, FZ, FC1, FC5** **Electrode areas** *Anodal:* 8 cm² *Return:* π cm² *Ground:* left earlobe **Stimulation Protocol** *Active:* 2 mA for 20 min (with 30 s ramp-up) *Sham:* only ramp-up periods at start and end; no sustained current *Questionnaires:* After each session, participants completed the **tDCS Sensation Questionnaire** (Polish version: [https://osf.io/ufszr](https://osf.io/ufszr)) to evaluate potential side effects.
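The stimulation parameters above imply an anodal current density and a total delivered charge that can be checked with simple arithmetic (the 30 s ramps are ignored for simplicity):

```python
current_mA = 2.0
anode_area_cm2 = 8.0
duration_s = 20 * 60                                # 20 min of sustained stimulation

density_mA_per_cm2 = current_mA / anode_area_cm2    # 0.25 mA/cm^2 under the anode
charge_C = (current_mA / 1000.0) * duration_s       # 2.4 coulombs over the session
```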
Additionally, after the final session, they indicated whether they believed each session involved *real*, *sham*, or *I don't know* stimulation to assess **blinding effectiveness**. **EEG Acquisition** *Cap:* 64-channel QuickCap (32 EEG electrodes used) *Amplifier:* Neuroscan SynampsRT *Sampling rate:* 1000 Hz *Impedance:* kept < 10 kΩ *Active EEG electrodes (32):* FP1, FP2, F7, F3, FZ, F4, F8, FT7, FC3, FCZ, FC4, FT8, T7, C3, CZ, C4, T8, M1, TP7, CP3, CPZ, CP4, TP8, M2, P7, P3, PZ, P4, P8, O1, OZ, O2 **Additional sensors** *EOG:* Horizontal (HEO) and Vertical (VEO) channels were available on the cap but **were not connected** during recording. *Physio:* ECG and GSR/EDA were recorded via auxiliary channels. **EEG Preprocessing** All preprocessing was performed in **MATLAB R2020b** using **EEGLAB 2023.0** and **ERPLAB 9.10**. The full, commented pipeline is provided in `code/Preprocessing_EEG.m`. **Steps** 1. *Band-pass filter:* 0.1–30 Hz (zero-phase Hamming-windowed FIR) 2. *Downsample:* to 250 Hz 3. *Re-reference:* to average mastoids (M1, M2) 4. *Bad-channel detection:* using *clean_rawdata* (autocorrelation criterion = 0.8) 5. *ICA:* with *runica* 6. *Automatic IC rejection:* using **ADJUST** and **SASICA** 7. *Spherical interpolation* of removed channels 8. *Epoching:* −200 to 5000 ms relative to stimulus onset 9. *Baseline correction:* −200 ms pre-stimulus 10. *Artifact rejection:* Step 1 – absolute amplitude on channels 1–30, epochs rejected if amplitude exceeded ±200 µV within −200 to 5000 ms. Step 2 – FASTER epoch_properties on channels 1–30, epochs rejected if any metric exceeded |z| > 2. 11. *Condition-wise averaging:* using ERPLAB **Derivatives & Ancillary Data** **derivatives/processed_erps/** Averaged ERP files (`.erp`) for each participant and session after preprocessing.
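The first two preprocessing steps (0.1–30 Hz zero-phase FIR band-pass, then downsampling from 1000 Hz to 250 Hz) were done in EEGLAB; a rough NumPy-only analogue, assuming a windowed-sinc FIR and naive decimation (this is a sketch, not the dataset's actual code):

```python
import numpy as np

def bandpass_fir(lo_hz, hi_hz, fs, numtaps=1001):
    # Linear-phase band-pass as a difference of two Hamming-windowed sinc low-passes.
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = (2 * hi_hz / fs) * np.sinc(2 * hi_hz * n / fs) \
        - (2 * lo_hz / fs) * np.sinc(2 * lo_hz * n / fs)
    return h * np.hamming(numtaps)

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)  # 10 Hz kept, 60 Hz rejected

h = bandpass_fir(0.1, 30.0, fs)
# Filter forward and backward to approximate zero-phase filtering.
y = np.convolve(x, h, mode="same")
y = np.convolve(y[::-1], h, mode="same")[::-1]
# The 30 Hz cutoff is far below the new 125 Hz Nyquist, so keeping
# every 4th sample is a safe decimation from 1000 Hz to 250 Hz here.
y_250 = y[::4]
```

In practice one would use a filtering library (e.g. SciPy or MNE-Python) rather than hand-rolled sincs, but the sketch makes the anti-aliasing reasoning explicit.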
**derivatives/side_effects_blinding_effectiveness/** * `side_effects_blinding_effectiveness_english.csv` — blinding effectiveness and side effects questionnaire * `side_effects_blinding_effectiveness_data_dictionary.csv` — data dictionary with variable names and value coding **code/** * MATLAB preprocessing script and documentation: `Preprocessing_EEG.m` ## Dataset Information | Dataset ID | `DS006861` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals | | Author (year) | `Maka2025_Targeted` | | Canonical | — | | Importable as | `DS006861`, `Maka2025_Targeted` | | Year | 2025 | | Authors | Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006861.v1.0.2](https://doi.org/10.18112/openneuro.ds006861.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006861) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) | [Source URL](https://openneuro.org/datasets/ds006861) | ### Copy-paste BibTeX ```bibtex @dataset{ds006861, title = {Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds006861.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds006861.v1.0.2}, } ``` ## Technical Details - Subjects: 120 - Recordings: 239 - Tasks: 1 - Channels: 37 - Sampling rate (Hz): 1000.0 - Duration (hours): 99.5760275 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 52.1 GB - File count: 239 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006861.v1.0.2 - Source: openneuro -
OpenNeuro: [ds006861](https://openneuro.org/datasets/ds006861) - NeMAR: [ds006861](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) ## API Reference Use the `DS006861` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006861(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals * **Study:** `ds006861` (OpenNeuro) * **Author (year):** `Maka2025_Targeted` * **Canonical:** — Also importable as: `DS006861`, `Maka2025_Targeted`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 120; recordings: 239; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006861](https://openneuro.org/datasets/ds006861) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006861](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) DOI: [https://doi.org/10.18112/openneuro.ds006861.v1.0.2](https://doi.org/10.18112/openneuro.ds006861.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006861 >>> dataset = DS006861(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006861) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006866: eeg dataset, 148 subjects *Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals* Access recordings and metadata through EEGDash. **Citation:** Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek (2025). *Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals*. 
[10.18112/openneuro.ds006866.v1.0.0](https://doi.org/10.18112/openneuro.ds006866.v1.0.0) Modality: eeg Subjects: 148 Recordings: 148 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006866 dataset = DS006866(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006866(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006866( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006866, title = {Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds006866.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006866.v1.0.0}, } ``` ## About This Dataset **Emotion Processing and Regulation Task (Static Stimuli) — EEG Dataset** This dataset contains EEG recordings and behavioral data from the **Emotion Processing and Regulation** task with static emotional stimuli.
*Preregistration:* [https://osf.io/g8qey](https://osf.io/g8qey) *Preprint:* [https://osf.io/preprints/psyarxiv/v9dt3_v2](https://osf.io/preprints/psyarxiv/v9dt3_v2) **Participants** * **N = 148** right-handed, neurologically healthy adults with normal or corrected-to-normal vision. **Experimental Design** Participants completed 240 trials in a single session, evenly allocated to a 2 (content: social, non-social) × 3 (regulation requirement: watch-neutral, watch-negative, reappraise-negative) factorial design. On each trial, they viewed a static image for 5 s and either passively watched it or reappraised it as instructed. They then rated arousal and subsequently valence of the image on separate 9-point scales. **EEG Data Acquisition** *EEG Cap:* 64-channel QuickCap *Amplifier:* Neuroscan SynampsRT *Sampling rate:* 1000 Hz *Impedance:* below 5 kΩ **Recorded Channels** FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, M1, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, M2, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, CB1, O1, OZ, O2, CB2 **Additional Sensors** * HEO (Horizontal EOG) * VEO (Vertical EOG) * EKG * GSR/EDA **EEG Preprocessing** All preprocessing was conducted in **MATLAB R2020b** using **EEGLAB 2023.0** and **ERPLAB 9.10**. The preprocessing pipeline and fully commented scripts are available in `code/Preprocessing_EEG.m`. **Summary of preprocessing steps** 1. *Band-pass filtering:* 0.1–30 Hz (zero-phase Hamming-windowed FIR) 2. *Downsampling:* 250 Hz 3. *Re-referencing:* average mastoids (M1, M2) 4. *Automatic bad-channel rejection:* *clean_rawdata* (autocorrelation = 0.8) 5. *ICA:* *runica* algorithm 6. *Automatic component rejection:* **ADJUST** and **SASICA** 7.
*Spherical interpolation* of removed channels 8. *Epoching:* −200 to 5000 ms relative to stimulus onset 9. *Baseline correction:* −200 ms pre-stimulus 10. *Artifact rejection:* ±100 µV within a 200 ms moving window (100 ms step) 11. *Condition averaging:* using ERPLAB **Derivatives & Supplemental Data** **derivatives/processed_erps/** Contains averaged ERP files (`.erp`) for each participant after preprocessing. **code/** Includes MATLAB preprocessing scripts and documentation (`Preprocessing_EEG.m`). ## Dataset Information | Dataset ID | `DS006866` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals | | Author (year) | `Maka2025_Discrepancy` | | Canonical | — | | Importable as | `DS006866`, `Maka2025_Discrepancy` | | Year | 2025 | | Authors | Szymon Mąka, Marta Chrustowicz, Łukasz Okruszek | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006866.v1.0.0](https://doi.org/10.18112/openneuro.ds006866.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006866) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) | [Source URL](https://openneuro.org/datasets/ds006866) | ### Copy-paste BibTeX ```bibtex @dataset{ds006866, title = {Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals}, author = {Szymon Mąka and Marta Chrustowicz and Łukasz Okruszek}, doi = {10.18112/openneuro.ds006866.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006866.v1.0.0}, } ``` ## Technical Details - Subjects: 148 - Recordings: 148 - Tasks: 1 - Channels: 69 - Sampling rate (Hz): 1000.0 - Duration (hours): 123.35075777777776 - Pathology: Healthy - Modality: Visual - Type: Affect - Size on disk: 116.2 GB - File
count: 148 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006866.v1.0.0 - Source: openneuro - OpenNeuro: [ds006866](https://openneuro.org/datasets/ds006866) - NeMAR: [ds006866](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) ## API Reference Use the `DS006866` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals * **Study:** `ds006866` (OpenNeuro) * **Author (year):** `Maka2025_Discrepancy` * **Canonical:** — Also importable as: `DS006866`, `Maka2025_Discrepancy`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 148; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006866](https://openneuro.org/datasets/ds006866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006866](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) DOI: [https://doi.org/10.18112/openneuro.ds006866.v1.0.0](https://doi.org/10.18112/openneuro.ds006866.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006866 >>> dataset = DS006866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006866) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006890: ieeg dataset, 2 subjects *Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata* Access recordings and metadata through EEGDash. **Citation:** Huixiang Yang, Ryohei Fukuma, Tomoyuki Namima, Kotaro Okuda, Asaya Nishi, Takamitsu Iwata, Abdi Reza, Kota S Sasaki, Taro Kaiju, Gurlal Gill, Haruhiko Kishima, Shinji Nishimoto, Takufumi Yanagisawa (2025). *Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata*. 
[10.18112/openneuro.ds006890.v1.0.0](https://doi.org/10.18112/openneuro.ds006890.v1.0.0) Modality: ieeg Subjects: 2 Recordings: 870 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006890 dataset = DS006890(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006890(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006890( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006890, title = {Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata}, author = {Huixiang Yang and Ryohei Fukuma and Tomoyuki Namima and Kotaro Okuda and Asaya Nishi and Takamitsu Iwata and Abdi Reza and Kota S Sasaki and Taro Kaiju and Gurlal Gill and Haruhiko Kishima and Shinji Nishimoto and Takufumi Yanagisawa}, doi = {10.18112/openneuro.ds006890.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006890.v1.0.0}, } ``` ## About This Dataset Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata — README **Overview** This repository contains a wireless subdural ECoG (iEEG) dataset from *Macaca fuscata* monkeys, organized in compliance with the iEEG-BIDS specification. Recordings were acquired several times each week using a wireless, inductively powered implant. The data were curated and organized in BIDS format to facilitate reproducible research in neuroscience. 
Keywords: wireless subdural ECoG, iEEG, Macaca fuscata, BIDS-compliant dataset, longitudinal recordings, task-based neurophysiology ### View full README **BIDS Organization** - dataset_description.json - participants.tsv, participants.json - README.md, CHANGES.md - sub-/ses-/ieeg/ (with \*_ieeg.edf, \*_ieeg.json, \*_channels.tsv, \*_events.tsv, \*_scans.tsv, \*_electrodes.tsv, \*_electrodes.json, \*_coordsystem.json) **Tasks** Tasks include rest, pressing, reaching, listening, sep. Only curated and validated tasks are exported. **Signals and Channels** - Uniform sampling rate per file. - channels.tsv lists physiological (ECoG), trigger (TRIGGER) and auxiliary channels (MISC). **Usage** This dataset can be loaded with BIDS-compatible toolboxes such as MNE-Python, FieldTrip, or EEGLAB. Inspect \*_events.tsv for task timing and \*_channels.tsv for channel information. **Participants** Each subject corresponds to an individual monkey (e.g., sub-monkeyb, sub-monkeyc). **Ethics** All animal procedures complied with Japanese laws and institutional regulations, including the Science Council of Japan Guidelines for Proper Conduct of Animal Experiments and national standards on pain relief and euthanasia, and were approved by the Animal Experiment Committee — The University of Osaka (approval FBS-25-002). 
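The **Usage** note above recommends inspecting `*_channels.tsv` for channel information. A minimal standard-library sketch (the TSV content below is synthetic and for illustration only — real files live under `sub-*/ses-*/ieeg/` in the BIDS tree) separating the ECoG channels from trigger and auxiliary channels:

```python
import csv
import io

# Synthetic *_channels.tsv content (illustrative, not from the dataset).
channels_tsv = "name\ttype\tunits\nECOG01\tECOG\tuV\nTRIG\tTRIGGER\tn/a\nAUX1\tMISC\tn/a\n"

# channels.tsv is tab-separated with a header row, so DictReader works directly.
rows = list(csv.DictReader(io.StringIO(channels_tsv), delimiter="\t"))
ecog_channels = [r["name"] for r in rows if r["type"] == "ECOG"]
print(ecog_channels)
# ['ECOG01']
```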
**License and Citation** License: CC BY 4.0 Citation: [Authors], “[Dataset Title],” [Repository/DOI], [Year]. **Contact** Maintainer: Huixiang Yang, The University of Osaka, [yanghuixiang@bci.med.osaka-u.ac.jp](mailto:yanghuixiang@bci.med.osaka-u.ac.jp) For issues, please use the repository issue tracker. ## Dataset Information | Dataset ID | `DS006890` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata | | Author (year) | `Yang2025_Longitudinal` | | Canonical | — | | Importable as | `DS006890`, `Yang2025_Longitudinal` | | Year | 2025 | | Authors | Huixiang Yang, Ryohei Fukuma, Tomoyuki Namima, Kotaro Okuda, Asaya Nishi, Takamitsu Iwata, Abdi Reza, Kota S Sasaki, Taro Kaiju, Gurlal Gill, Haruhiko Kishima, Shinji Nishimoto, Takufumi Yanagisawa | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006890.v1.0.0](https://doi.org/10.18112/openneuro.ds006890.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006890) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) | [Source URL](https://openneuro.org/datasets/ds006890) | ### Copy-paste BibTeX ```bibtex @dataset{ds006890, title = {Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata}, author = {Huixiang Yang and Ryohei Fukuma and Tomoyuki Namima and Kotaro Okuda and Asaya Nishi and Takamitsu Iwata and Abdi Reza and Kota S Sasaki and Taro Kaiju and Gurlal Gill and Haruhiko Kishima and Shinji Nishimoto and Takufumi Yanagisawa}, doi = {10.18112/openneuro.ds006890.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006890.v1.0.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 870 - Tasks: 5 - Channels: 50 (471), 66 (399) - Sampling rate (Hz): 1000.0 - Duration (hours): 
105.82138916666666 - Pathology: Healthy - Modality: Multisensory - Type: Motor - Size on disk: 41.2 GB - File count: 870 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006890.v1.0.0 - Source: openneuro - OpenNeuro: [ds006890](https://openneuro.org/datasets/ds006890) - NeMAR: [ds006890](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) ## API Reference Use the `DS006890` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006890(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata * **Study:** `ds006890` (OpenNeuro) * **Author (year):** `Yang2025_Longitudinal` * **Canonical:** — Also importable as: `DS006890`, `Yang2025_Longitudinal`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 870; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
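To make the MongoDB-style filtering mentioned above concrete, here is a toy matcher evaluated against made-up metadata records. This is a local sketch only — EEGDash sends the query to its metadata database rather than evaluating it like this, and `matches` is a hypothetical helper:

```python
def matches(record: dict, query: dict) -> bool:
    """Evaluate the two filter shapes used on this page:
    {'field': value} equality and {'field': {'$in': [...]}} membership."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True


# Made-up records; real task names for this dataset include
# rest, pressing, reaching, listening, and sep.
records = [
    {"subject": "monkeyb", "task": "rest"},
    {"subject": "monkeyb", "task": "pressing"},
    {"subject": "monkeyc", "task": "rest"},
]
selected = [r["subject"] for r in records if matches(r, {"task": "rest"})]
print(selected)
# ['monkeyb', 'monkeyc']
```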
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006890](https://openneuro.org/datasets/ds006890) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006890](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) DOI: [https://doi.org/10.18112/openneuro.ds006890.v1.0.0](https://doi.org/10.18112/openneuro.ds006890.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006890 >>> dataset = DS006890(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006890) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006902: fnirs dataset, 42 subjects *Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy* Access recordings and metadata through EEGDash. **Citation:** Maria Geisler, Marco Herbsleb, Feliberto de la Cruz, Sabrina von Au, Andy Schumann, Ilona Croy, Karl-Jürgen Bär (2025). *Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy*. 
[10.18112/openneuro.ds006902.v1.1.1](https://doi.org/10.18112/openneuro.ds006902.v1.1.1) Modality: fnirs Subjects: 42 Recordings: 42 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006902 dataset = DS006902(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006902(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006902( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006902, title = {Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy}, author = {Maria Geisler and Marco Herbsleb and Feliberto de la Cruz and Sabrina von Au and Andy Schumann and Ilona Croy and Karl-Jürgen Bär}, doi = {10.18112/openneuro.ds006902.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006902.v1.1.1}, } ``` ## About This Dataset Regular physical activity is an important treatment constituent for chronic pain. To unravel the neuronal influence of exercise on pain, we investigated the neuronal changes during exercise-induced hypoalgesia in endurance athletes and controls. Twenty-two athletes (mean age: 33.3 ± 10.8 years) and twenty non-athletes (mean age: 28.9 ± 9.0 years) underwent High-Intensity Interval Training (HIIT) and pressure pain tests, while brain oxygenation was monitored using functional near-infrared spectroscopy to cover key regions of pain processing: the prefrontal cortex (PFC), sensory motor cortices, and posterior parietal cortex (PPC). 
During HIIT, both groups exhibited a steady increase in PFC oxyhemoglobin, with athletes showing a greater increase in the PPC area than non-athletes. As expected, athletes showed a significant reduction in pain perception after HIIT, whereas non-athletes did not. In line with this, athletes showed a significant decrease in oxyhemoglobin levels in all brain areas post-HIIT, while non-athletes only showed a decrease in sensory motor areas. Interestingly, in athletes, pain reduction correlated with the decrease in PFC oxyhemoglobin during painful stimulation, whereas no significant correlation was observed in non-athletes. The pronounced HIIT-induced increase in oxyhemoglobin in athletes may elevate baseline neural activity to a level where additional activation is limited, potentially reducing the salience of pain-related signals. This athlete-specific response may result from endurance training adaptations, such as enhanced microvascularization and oxygen delivery, promoting greater neural efficiency during high-intensity exercise. These findings highlight HIIT’s potential as a targeted pain management strategy for athletes and the need for tailored approaches in non-athletes. 
Dataset note: sub01–sub27 are athletes; sub29–sub53 are non-athletes. ## Dataset Information | Dataset ID | `DS006902` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy | | Author (year) | `Geisler2025` | | Canonical | — | | Importable as | `DS006902`, `Geisler2025` | | Year | 2025 | | Authors | Maria Geisler, Marco Herbsleb, Feliberto de la Cruz, Sabrina von Au, Andy Schumann, Ilona Croy, Karl-Jürgen Bär | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006902.v1.1.1](https://doi.org/10.18112/openneuro.ds006902.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006902) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) | [Source URL](https://openneuro.org/datasets/ds006902) | ### Copy-paste BibTeX ```bibtex @dataset{ds006902, title = {Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy}, author = {Maria Geisler and Marco Herbsleb and Feliberto de la Cruz and Sabrina von Au and Andy Schumann and Ilona Croy and Karl-Jürgen Bär}, doi = {10.18112/openneuro.ds006902.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006902.v1.1.1}, } ``` ## Technical Details - Subjects: 42 - Recordings: 42 - Tasks: 1 - Channels: 112 - Sampling rate (Hz): 7.63 - Duration (hours): 27.68 - Pathology: Healthy - Modality: Motor - Type: Perception - Size on disk: 5.5 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006902.v1.1.1 - Source: openneuro - OpenNeuro: [ds006902](https://openneuro.org/datasets/ds006902) - NeMAR: [ds006902](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) ## API 
Reference Use the `DS006902` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy * **Study:** `ds006902` (OpenNeuro) * **Author (year):** `Geisler2025` * **Canonical:** — Also importable as: `DS006902`, `Geisler2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
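The README note for this dataset assigns sub01–sub27 to the athlete group and sub29–sub53 to the non-athlete group. A group-level query can be built from those ID ranges (assuming `subject` is an allowed query field, as in the Quickstart; IDs in the ranges that do not exist in the dataset simply match nothing):

```python
# Subject-ID ranges taken from the dataset README.
athletes = [f"{i:02d}" for i in range(1, 28)]       # "01" .. "27"
non_athletes = [f"{i:02d}" for i in range(29, 54)]  # "29" .. "53"

# MongoDB-style filter for the athlete group; pass as `query=` to DS006902.
athlete_query = {"subject": {"$in": athletes}}
print(len(athletes), len(non_athletes))
# 27 25
```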
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006902](https://openneuro.org/datasets/ds006902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006902](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) DOI: [https://doi.org/10.18112/openneuro.ds006902.v1.1.1](https://doi.org/10.18112/openneuro.ds006902.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006902 >>> dataset = DS006902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006902) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006903: fnirs dataset, 17 subjects *ball_squeeze_2025* Access recordings and metadata through EEGDash. **Citation:** Enter author names here (2025). *ball_squeeze_2025*. 
[10.18112/openneuro.ds006903.v1.0.0](https://doi.org/10.18112/openneuro.ds006903.v1.0.0) Modality: fnirs Subjects: 17 Recordings: 67 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006903 dataset = DS006903(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006903(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006903( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006903, title = {ball_squeeze_2025}, author = {Enter author names here}, doi = {10.18112/openneuro.ds006903.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006903.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS006903` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ball_squeeze_2025 | | Author (year) | `here2025` | | Canonical | — | | Importable as | `DS006903`, `here2025` | | Year | 2025 | | Authors | Enter author names here | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006903.v1.0.0](https://doi.org/10.18112/openneuro.ds006903.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006903) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) | [Source URL](https://openneuro.org/datasets/ds006903) | ### Copy-paste BibTeX ```bibtex @dataset{ds006903, title = {ball_squeeze_2025}, author = {Enter author names here}, doi = {10.18112/openneuro.ds006903.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006903.v1.0.0}, } ``` ## Technical Details - Subjects: 17 - Recordings: 67 - Tasks: 2 - Channels: 1134 (66), 2440 - Sampling rate (Hz): 4.324324324324325 (38), 8.98876404494382 (15), 4.324324324324324 (14) - Duration (hours): Not calculated - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 5.4 GB - File count: 67 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006903.v1.0.0 - Source: openneuro - OpenNeuro: [ds006903](https://openneuro.org/datasets/ds006903) - NeMAR: [ds006903](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) ## API Reference Use the `DS006903` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006903(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_2025 * **Study:** `ds006903` (OpenNeuro) * **Author (year):** `here2025` * **Canonical:** — Also importable as: `DS006903`, `here2025`. 
Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006903](https://openneuro.org/datasets/ds006903) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006903](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) DOI: [https://doi.org/10.18112/openneuro.ds006903.v1.0.0](https://doi.org/10.18112/openneuro.ds006903.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006903 >>> dataset = DS006903(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006903) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS006910: ieeg dataset, 121 subjects *Auditory Naming EC* Access recordings and metadata through EEGDash. **Citation:** Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano (2025). *Auditory Naming EC*. [10.18112/openneuro.ds006910.v1.0.1](https://doi.org/10.18112/openneuro.ds006910.v1.0.1) Modality: ieeg Subjects: 121 Recordings: 384 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006910 dataset = DS006910(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006910(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006910( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006910, title = {Auditory Naming EC}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Masaki Sonoda and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006910.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006910.v1.0.1}, } ``` ## About This Dataset This dataset, used in the analysis reported by Kochi et al. (2025), contains intracranial EEG recordings from 121 individuals who performed an auditory‑naming task. Electrode coordinates are provided in MNI‑305 space. Each EDF file is tagged for the auditory naming task with the following event codes: 401 – stimulus onset; 402 – stimulus offset; 501 – response onset. Reference: Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano. Whole-Brain Millisecond-Scale Effective Connectivity Atlases of Speech ## Dataset Information | Dataset ID | `DS006910` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory Naming EC | | Author (year) | `Kochi2025_Auditory_Naming_EC` | | Canonical | — | | Importable as | `DS006910`, `Kochi2025_Auditory_Naming_EC` | | Year | 2025 | | Authors | Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006910.v1.0.1](https://doi.org/10.18112/openneuro.ds006910.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006910) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) | [Source URL](https://openneuro.org/datasets/ds006910) | ### Copy-paste BibTeX ```bibtex @dataset{ds006910, title = {Auditory Naming EC}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Masaki Sonoda and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006910.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006910.v1.0.1}, } ``` ## Technical Details - Subjects: 121 - Recordings: 384 - Tasks: 1 - Channels: 128 (269), 138 (14), 136 (11), 112 (9), 140 (8), 164 (8), 134 (7), 110 (6), 142 (5), 156 (5), 150 (5), 132 (4), 148 (4), 144 (4), 130 (4), 160 (3), 154 (3), 84 (3), 118 (3), 96 (3), 152 (3), 64 (2), 58 - Sampling rate (Hz): 1000.0 - Duration (hours): 130.07328611111112 - Pathology: Not specified - Modality: Auditory - Type: Other - Size on disk: 44.6 GB - File count: 384 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006910.v1.0.1 - Source: openneuro - OpenNeuro: [ds006910](https://openneuro.org/datasets/ds006910) - NeMAR: [ds006910](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) ## API Reference Use the `DS006910` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006910(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Naming EC * **Study:** `ds006910` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_Naming_EC` * **Canonical:** — Also importable as: `DS006910`, `Kochi2025_Auditory_Naming_EC`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 121; recordings: 384; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006910](https://openneuro.org/datasets/ds006910) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006910](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) DOI: [https://doi.org/10.18112/openneuro.ds006910.v1.0.1](https://doi.org/10.18112/openneuro.ds006910.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006910 >>> dataset = DS006910(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006910) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006914: ieeg dataset, 110 subjects *Visual Naming EC* Access recordings and metadata through EEGDash. 
**Citation:** Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano (2025). *Visual Naming EC*. [10.18112/openneuro.ds006914.v1.0.3](https://doi.org/10.18112/openneuro.ds006914.v1.0.3) Modality: ieeg Subjects: 110 Recordings: 353 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006914 dataset = DS006914(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006914(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006914( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006914, title = {Visual Naming EC}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Masaki Sonoda and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006914.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006914.v1.0.3}, } ``` ## About This Dataset This dataset, used in the analysis reported by Kochi et al. (2025), contains intracranial EEG recordings from 110 individuals who performed a visual‑naming task. Electrode coordinates are provided in MNI‑305 space. Each EDF file is tagged for the visual naming task with the following event codes: 401 – stimulus onset; 501 – response onset. Reference: Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano. 
Whole-Brain Millisecond-Scale Effective Connectivity Atlases of Speech ## Dataset Information | Dataset ID | `DS006914` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual Naming EC | | Author (year) | `Kochi2025_Visual_Naming_EC` | | Canonical | — | | Importable as | `DS006914`, `Kochi2025_Visual_Naming_EC` | | Year | 2025 | | Authors | Ryuzaburo Kochi, Aya Kanno, Hiroshi Uda, Keisuke Hatano, Masaki Sonoda, Hidenori Endo, Michael Cools, Robert Rothermel, Aimee F. Luat, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006914.v1.0.3](https://doi.org/10.18112/openneuro.ds006914.v1.0.3) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006914) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) | [Source URL](https://openneuro.org/datasets/ds006914) | ### Copy-paste BibTeX ```bibtex @dataset{ds006914, title = {Visual Naming EC}, author = {Ryuzaburo Kochi and Aya Kanno and Hiroshi Uda and Keisuke Hatano and Masaki Sonoda and Hidenori Endo and Michael Cools and Robert Rothermel and Aimee F. 
Luat and Eishi Asano}, doi = {10.18112/openneuro.ds006914.v1.0.3}, url = {https://doi.org/10.18112/openneuro.ds006914.v1.0.3}, } ``` ## Technical Details - Subjects: 110 - Recordings: 353 - Tasks: 1 - Channels: 128 (245), 136 (19), 138 (19), 140 (8), 112 (6), 110 (6), 156 (5), 150 (5), 164 (4), 134 (4), 130 (4), 148 (4), 96 (3), 84 (3), 160 (3), 152 (3), 118 (3), 144 (3), 154 (3), 64 (2), 58 - Sampling rate (Hz): 1000.0 - Duration (hours): 0.7271305555555556 - Pathology: Epilepsy - Modality: Visual - Type: Other - Size on disk: 17.5 GB - File count: 353 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006914.v1.0.3 - Source: openneuro - OpenNeuro: [ds006914](https://openneuro.org/datasets/ds006914) - NeMAR: [ds006914](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) ## API Reference Use the `DS006914` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006914(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Naming EC * **Study:** `ds006914` (OpenNeuro) * **Author (year):** `Kochi2025_Visual_Naming_EC` * **Canonical:** — Also importable as: `DS006914`, `Kochi2025_Visual_Naming_EC`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 110; recordings: 353; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006914](https://openneuro.org/datasets/ds006914) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006914](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) DOI: [https://doi.org/10.18112/openneuro.ds006914.v1.0.3](https://doi.org/10.18112/openneuro.ds006914.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006914 >>> dataset = DS006914(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006914) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS006921: eeg dataset, 38 subjects *High Density Resting State EEG of Phantom Limb Pain and Controls* Access recordings and metadata through EEGDash. 
**Citation:** Ramne, M., Damercheli, S., Apelgren, F., Pettersson, I., Lendaro, E. (2025). *High Density Resting State EEG of Phantom Limb Pain and Controls*. [10.18112/openneuro.ds006921.v1.1.1](https://doi.org/10.18112/openneuro.ds006921.v1.1.1) Modality: eeg Subjects: 38 Recordings: 152 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006921 dataset = DS006921(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006921(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006921( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006921, title = {High Density Resting State EEG of Phantom Limb Pain and Controls}, author = {Ramne, M. and Damercheli, S. and Apelgren, F. and Pettersson, I. and Lendaro, E.}, doi = {10.18112/openneuro.ds006921.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006921.v1.1.1}, } ``` ## About This Dataset **High Density Resting State EEG of Phantom Limb Pain and Controls** This dataset comprises resting-state high-density EEG data (64 or 128 channels) collected from three categories of subjects: amputees with phantom limb pain, amputees without phantom limb pain, and intact, pain-free controls. The data has been organised according to the BIDS standard for more accessible reuse. Recordings are approximately 7 minutes long with eyes open or closed, as indicated by task. 
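The MongoDB-style `$in` filter shown in the Advanced query example above keeps records whose field value falls within a given set. A minimal stand-in sketch of that semantics over plain dicts (illustrative only — the `matches_in` helper is hypothetical, and EEGDash evaluates such filters against its metadata index for you):

```python
def matches_in(record: dict, field: str, values: list) -> bool:
    """Emulate MongoDB's {field: {"$in": values}} test on a plain dict."""
    return record.get(field) in values

# Stand-in metadata records; real ones come from the EEGDash index.
records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches_in(r, "subject", ["01", "02"])]
print([r["subject"] for r in kept])  # subjects 01 and 02 remain
```

The same shape generalizes to any field in `ALLOWED_QUERY_FIELDS`; EEGDash ANDs such filters with the dataset selection.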
**Usage** For loading and using the data in MATLAB we recommend using pop_importbids by EEGLAB, example usage here: [https://eeglab.org/tutorials/11_Scripting/Analyzing_EEG_BIDS_data_in_EEGLAB.html](https://eeglab.org/tutorials/11_Scripting/Analyzing_EEG_BIDS_data_in_EEGLAB.html) For a complete pipeline for resting-state EEG preprocessing and feature extraction in MATLAB we recommend DISCOVER-EEG: Cristina Gil. (2024). crisglav/discover-eeg: 2.0.0 (2.0.0). Zenodo. [https://doi.org/10.5281/zenodo.10797803](https://doi.org/10.5281/zenodo.10797803) **Phenotype data note** Session-level questionnaire data are stored in `phenotype/pain-questionnaire_sessions.tsv` with descriptions of the corresponding questionnaire items in `phenotype/pain-questionnaire_sessions.json`. The phenotype files are currently ignored by the BIDS Validator due to incomplete support for phenotype indexing across multiple sessions. **License** CC0 ## Dataset Information | Dataset ID | `DS006921` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | High Density Resting State EEG of Phantom Limb Pain and Controls | | Author (year) | `Ramne2025` | | Canonical | — | | Importable as | `DS006921`, `Ramne2025` | | Year | 2025 | | Authors | Ramne, M., Damercheli, S., Apelgren, F., Pettersson, I., Lendaro, E. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006921.v1.1.1](https://doi.org/10.18112/openneuro.ds006921.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006921) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) | [Source URL](https://openneuro.org/datasets/ds006921) | ### Copy-paste BibTeX ```bibtex @dataset{ds006921, title = {High Density Resting State EEG of Phantom Limb Pain and Controls}, author = {Ramne, M. and Damercheli, S. and Apelgren, F. and Pettersson, I. 
and Lendaro, E.}, doi = {10.18112/openneuro.ds006921.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds006921.v1.1.1}, } ``` ## Technical Details - Subjects: 38 - Recordings: 152 - Tasks: 2 - Channels: 128 (124), 64 (28) - Sampling rate (Hz): 2400.0 - Duration (hours): 16.961872800925924 - Pathology: Other - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 64.4 GB - File count: 152 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006921.v1.1.1 - Source: openneuro - OpenNeuro: [ds006921](https://openneuro.org/datasets/ds006921) - NeMAR: [ds006921](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) ## API Reference Use the `DS006921` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006921(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High Density Resting State EEG of Phantom Limb Pain and Controls * **Study:** `ds006921` (OpenNeuro) * **Author (year):** `Ramne2025` * **Canonical:** — Also importable as: `DS006921`, `Ramne2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 38; recordings: 152; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006921](https://openneuro.org/datasets/ds006921) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006921](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) DOI: [https://doi.org/10.18112/openneuro.ds006921.v1.1.1](https://doi.org/10.18112/openneuro.ds006921.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006921 >>> dataset = DS006921(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006921) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006923: eeg dataset, 140 subjects *Dataset of Electroencephalograms of Juvenile Offenders* Access recordings and metadata through EEGDash. **Citation:** Aura Polo, Elmer León, Mariana Pino-Melgarejo, Julie Viloria-Porto (2025). *Dataset of Electroencephalograms of Juvenile Offenders*. 
[10.18112/openneuro.ds006923.v1.0.0](https://doi.org/10.18112/openneuro.ds006923.v1.0.0) Modality: eeg Subjects: 140 Recordings: 280 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006923 dataset = DS006923(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006923(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006923( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006923, title = {Dataset of Electroencephalograms of Juvenile Offenders}, author = {Aura Polo and Elmer León and Mariana Pino-Melgarejo and Julie Viloria-Porto}, doi = {10.18112/openneuro.ds006923.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006923.v1.0.0}, } ``` ## About This Dataset **Dataset of Electroencephalograms of Juvenile Offenders** **Project’s name** Desarrollo de un sistema inteligente multiparamétrico para el reconocimiento de patrones asociados a disfunciones neurocognitivas en jóvenes en conflicto con la ley en el departamento del Atlántico. **Year of project execution** 2021 **Authors and acknowledgment** Aura Polo, Elmer León, Mariana Pino-Melgarejo and Julie Viloria-Porto. 
Ronald Ruiz for his assistance during the data collection process, and Sergio Miranda for his dedication to data processing and cleaning. **Work team** \* MAGMA Ingeniería research group \* Hogares Claret foundation **Institutions** - Institución Universitaria de Barranquilla (sede Soledad) - Universidad del Magdalena - Universidad Autónoma del Caribe **Description** This repository contains resting-state EEG data collected with the Biosemi ActiveTwo from 140 participants: - 74 juvenile offenders (JO) - 66 juvenile non-offender controls Exclusion criteria: No psychiatric treatment, dental/orthodontic appliances. Recruitment: JO Hogares Claret Foundation (Centro de Reeducación el Oasis & Fundación Luz de Esperanza). Controls: Institución Nacional de Educación Media INEM Miguel Antonio Caro (Barranquilla). **Contents of the dataset** **Core Files** - `dataset_description.json`: General information about the study - `participants.json`: Demographic and group assignment data - `participants.tsv`: Demographic and group assignment data in table format **Features Data (EEGJODataset/code)** **Feature file nomenclature** Files are named using the pattern: `FR_Dats_band_{BAND}_EP_{EYESTATE}_{EPOCH#}_can_{CHANNEL}.xlsx`
```text
| Component          | Example | Description                                                                 |
|--------------------|---------|-----------------------------------------------------------------------------|
| **FR_Dats_band**   | Fixed   | Prefix = "Feature Results Dataset"                                          |
| **{BAND}**         | `ALFA`  | EEG frequency band: `ALFA` = Alpha (8-13 Hz); `BETA` = Beta (13-30 Hz); `DELTA` = Delta (1-4 Hz); `THETA` = Theta (4-8 Hz) |
| **EP_{EYESTATE}_** | `EP_C_` | Eye state during epoch: `C` = Eyes closed; `O` = Eyes open                  |
| **{EPOCH#}**       | `1`     | Epoch number (1 or 2); two epochs per eye state                             |
| **can_**           | Fixed   | "Channel" prefix                                                            |
| **{CHANNEL}**      | `A1`    | Electrode position (ABCD system): first letter = A, B, C, or D; number = electrode ID (1-32) |
```
**File Contents:** Each Excel file contains 7 features for 
the specified band/channel/epoch combination: 1. Mean Power 2. RMS of PSD 3. Standard Deviation 4. Min Power 5. Max Power 6. Skewness 7. Kurtosis **Examples:** 1. `FR_Dats_band_ALFA_EP_C_1_can_A1.xlsx` - Alpha band features - First closed-eyes epoch - Channel A1 (Frontal electrode 1) 2. `FR_Dats_band_THETA_EP_O_2_can_C15.xlsx` - Theta band features - Second open-eyes epoch - Channel C15 (Posterior electrode 15) 3. `FR_Dats_band_BETA_EP_C_2_can_B7.xlsx` - Beta band features - Second closed-eyes epoch - Channel B7 (Central electrode 7) **Dataset Structure:** - 4 epochs per subject: - 2 closed-eyes: `EP_C_1`, `EP_C_2` - 2 open-eyes: `EP_O_1`, `EP_O_2` - 128 channels (A1-D32) - 4 frequency bands - Total files per subject: 4 epochs × 128 channels × 4 bands = 2,048 files **EEG Data** ```text EEG_JO_Dataset/ ``` ```text ├── code/ ├── sub-{Subject ID}{Group}/ | ├── eeg/ | | ├── sub-{Subject ID}{Group}_coordsystem.json | | ├── sub-{Subject ID}{Group}_electrodes.tsv | | ├── sub-{Subject ID}{Group}_task-{Task Name}_acq-{Datatype}_eeg.json # Epoched data sidecar json | | ├── sub-{Subject ID}{Group}_task-{Task Name}_acq-{Datatype}_eeg.set # Epoched data | | ├── sub-{Subject ID}{Group}_task-{Task Name}_channels.tsv | | ├── sub-{Subject ID}{Group}_task-{Task Name}_desc-{Datatype}_eeg.json # Preprocessed data sidecar json | | └── sub-{Subject ID}{Group}_task-{Task Name}_desc-{Datatype}_eeg.set # Preprocessed data ├── ... 
├── CHANGES ├── dataset_description.json ├── participants.json ├── participants.tsv └── README.md ``` **File Nomenclature**
```text
| Denomination   | Value                       | Description                                                      |
|----------------|-----------------------------|------------------------------------------------------------------|
| `sub-`         | Fixed                       | Subject prefix                                                   |
| `{Subject ID}` | Fixed                       | **Unique identifier**: first digit = group (`1`=sg, `1`=sg2, `2`=cg); last 3 digits = subject ID |
| `{Group}`      | `cg`/`sg`/`sg2`             | **Group**: `cg`=control, `sg`=study group 1, `sg2`=study group 2 |
| `{Task Name}`  | `restingstate`              | **Task name** (resting state)                                    |
| `acq-` `desc-` | `acq-`/`desc-`              | **Label**: `acq-` = acquisition, `desc-` = description           |
| `{Datatype}`   | `epochs`/`preprocessed`     | Acquisition type                                                 |
| `eeg`          | Electroencephalography data | Data type                                                        |
| Extension      | `.set`                      | **File type**: processed                                         |
```
**Examples** 1. `sub-1005sg_task-restingstate_acq-epochs_eeg.set` = Epochs EEG for **study group 1** subject 005 (full ID 1005) 2. 
`sub-1005sg_task-restingstate_desc-preprocessing_eeg.set` = Preprocessed EEG for **study group 1** subject 005 (full ID 1005) **Methods** **EEG Acquisition** - **Device**: Biosemi ActiveTwo system - **Electrodes**: 128 channels (radial placement, 10-20 system reference) - **Additional channels**: EOG, ECG recorded - **Sampling rate**: 2048 Hz (downsampled to 128 Hz during preprocessing) - **Online filtering**: 0.1-100 Hz bandpass - **Setup**: - Participants seated awake - Continuous monitoring for movements/sleep - Event markers via serial communication (paradigm triggers) **Paradigms** *(Dataset contains only resting-state recordings)* - **Resting State (RS)**: > - Total duration: 12 minutes > - Sequence: > - 4 min alternating eyes closed/open (COCO: Closed-Open-Closed-Open) > - 8 min eyes closed (excluded from current dataset) - **Segment trimming**: : - 5s post-event onset - 5s pre-event offset (to avoid transition artifacts) **Preprocessing pipeline (EEGLAB/MATLAB)** 1. **Visual inspection**: - Raw data review using BDFreader - Identification of bad channels/artifacts 2. **Downsampling**: - 2048 Hz → 128 Hz (resting-state data) 3. **Rereferencing**: - Average reference (replaced failed earlobe reference) 4. **Filtering**: - Bandpass FIR: 1-40 Hz - High-pass: 1 Hz (0.5 Hz cutoff, 425 points) - Low-pass: 40 Hz (45 Hz cutoff, 45 points) 5. **Artifact Removal**: - Bad channel rejection: > - Flat signals > 5s > - SD > 4 > - Correlation < 0.8 with neighbors - ASR (Artifact Subspace Reconstruction) - ICA + ICLabel (components >90% non-brain removed) **Feature Extraction** - **PSD Calculation**: Welch’s method (50% overlap, Hamming window) - **Frequency bands**: - Delta (δ): 1-4 Hz - Theta (θ): 4-8 Hz - Alpha (α): 8-13 Hz - Beta (β): 13-30 Hz - **Features per band/channel**: 1. Mean Power 2. RMS of PSD 3. Standard Deviation 4. Minimum Power 5. Maximum Power 6. Skewness 7. 
Kurtosis - **Feature volume**: 14,336 features/subject (4 bands × 128 channels × 4 segments × 7 features) **Technical Specifications** - **Processing Hardware**: - Intel Core i5-9400F @2.9GHz - 16GB RAM - Windows 10 (64-bit) - **Software**: - MATLAB 2020a - EEGLAB toolbox - Python (scikit-learn, pandas for feature selection) - **Processing Time**: ~10 minutes/subject **Funding** This research was funded by the SISTEMA GENERAL DE REGALÍAS - SGR and the MINISTERIO DE CIENCIA TECNOLOGÍA E INNOVACIÓN - MINCIENCIAS from Colombia, in the framework of the project “Desarrollo de un sistema inteligente multiparamétrico para el reconocimiento de patrones asociados a disfunciones neurocognitivas en jóvenes en conflicto con la ley en el departamento del Atlántico”, with grant number BPIN 2020000100006. **Support** Correspondence: Aura Polo ([apolol@unimagdalena.edu.co](mailto:apolol@unimagdalena.edu.co)); Elmer León ([elmerleondb@unimagdalena.edu.co](mailto:elmerleondb@unimagdalena.edu.co)); Julie Viloria-Porto ([julieviloriapp@unimagdalena.edu.co](mailto:julieviloriapp@unimagdalena.edu.co)) ## Dataset Information | Dataset ID | `DS006923` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset of Electroencephalograms of Juvenile Offenders | | Author (year) | `Polo2025` | | Canonical | — | | Importable as | `DS006923`, `Polo2025` | | Year | 2025 | | Authors | Aura Polo, Elmer León, Mariana Pino-Melgarejo, Julie Viloria-Porto | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006923.v1.0.0](https://doi.org/10.18112/openneuro.ds006923.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006923) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) | [Source URL](https://openneuro.org/datasets/ds006923) | ### Copy-paste BibTeX ```bibtex @dataset{ds006923, title = {Dataset 
of Electroencephalograms of Juvenile Offenders}, author = {Aura Polo and Elmer León and Mariana Pino-Melgarejo and Julie Viloria-Porto}, doi = {10.18112/openneuro.ds006923.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006923.v1.0.0}, } ``` ## Technical Details - Subjects: 140 - Recordings: 280 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 128.0 - Duration (hours): 37.333333333333336 - Pathology: Other - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 8.1 GB - File count: 280 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006923.v1.0.0 - Source: openneuro - OpenNeuro: [ds006923](https://openneuro.org/datasets/ds006923) - NeMAR: [ds006923](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) ## API Reference Use the `DS006923` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006923(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Electroencephalograms of Juvenile Offenders * **Study:** `ds006923` (OpenNeuro) * **Author (year):** `Polo2025` * **Canonical:** — Also importable as: `DS006923`, `Polo2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 140; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006923](https://openneuro.org/datasets/ds006923) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006923](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) DOI: [https://doi.org/10.18112/openneuro.ds006923.v1.0.0](https://doi.org/10.18112/openneuro.ds006923.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006923 >>> dataset = DS006923(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006923) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006940: eeg dataset, 7 subjects *Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals* Access recordings and metadata through EEGDash. 
**Citation:** Shantanu Sarkar, Kevin Nathan, Jose L. Contreras-Vidal (2025). *Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals*. [10.18112/openneuro.ds006940.v1.0.0](https://doi.org/10.18112/openneuro.ds006940.v1.0.0) Modality: eeg Subjects: 7 Recordings: 935 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006940 dataset = DS006940(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006940(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006940( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006940, title = {Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals}, author = {Shantanu Sarkar and Kevin Nathan and Jose L. Contreras-Vidal}, doi = {10.18112/openneuro.ds006940.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006940.v1.0.0}, } ``` ## About This Dataset EEG-Controlled Exoskeleton for Walking and Standing A Longitudinal Motor Imagery Study in Healthy Adults Dataset Overview This dataset contains multimodal recordings from a brain–machine interface (BMI) training study involving seven healthy adult participants (ages 20–30, Mean = 24.3, SD = 3.8). The study focused on open-loop and closed-loop control of a lower-limb exoskeleton (Rex Bionics) using EEG and inertial sensor data. Each participant completed nine sessions over several weeks, structured into training and trial phases. 
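The trial phase described under Task Structure below labels its 12 closed-loop trials `trial01` to `trial12` and groups them into three blocks of four. A small helper capturing that mapping (a sketch; `block_of` is not part of the dataset tooling):

```python
def block_of(trial: int) -> int:
    """Map a trial number (1-12) to its block: 1-4 -> 1, 5-8 -> 2, 9-12 -> 3."""
    if not 1 <= trial <= 12:
        raise ValueError("trial number must be between 1 and 12")
    return (trial - 1) // 4 + 1

# Build the trial01..trial12 labels and their block assignments.
labels = [f"trial{t:02d}" for t in range(1, 13)]
blocks = {lab: block_of(t) for t, lab in enumerate(labels, start=1)}
```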
Experimental Design

* Participants: 7 healthy adults (4 male, 3 female)
* Sessions: 9 per participant
* Training Phase: Motor imagery calibration
* Trial Phase: Closed-loop BMI control (walk/stop)
* Conditions: Walk / Stop (motor imagery)

Task Structure and Naming Convention Each session includes multiple motor imagery tasks organized as follows: Training: The training phase is used to calibrate the BMI decoder. Participants perform motor imagery tasks without feedback. TrialXX: The trial phase consists of 12 closed-loop BMI trials per session, labeled trial01 to trial12. During these trials, participants use motor imagery to control the exoskeleton in real time.
* Block 1: Trials 1–4
* Block 2: Trials 5–8
* Block 3: Trials 9–12

walk6min / stop6min: After completing the 12 trials, participants perform two extended motor imagery tasks:

* walk6min – Imagining continuous walking for 6 minutes
* stop6min – Imagining standing still for 6 minutes

Data Modalities

* EEG: 60 scalp channels + 4 EOG channels
* IMU: 3-axis accelerometer, gyroscope, magnetometer, and quaternion
* Sensor Placement: IMUs mounted on participant forehead and exosuit back brace
* Decoder Signals/Feedback: Logged control signals and BMI predictions

Additional Materials

* MIQ-RS: Motor Imagery Questionnaire – Revised Second Version (PDFs in derivatives/MIQ-RS/)
* Validation Tables: Data availability, synchronization, and electrode placement (derivatives/validation/)
* Raw Data: Provided without filtering or artifact removal

BIDS Structure

* dataset_description.json: Metadata and provenance
* sub-XX/ses-YY/: EEG and IMU recordings per session
* derivatives/: MIQ-RS responses and validation spreadsheets

## Dataset Information | Dataset ID | `DS006940` | |----------------|----------------| | Title | Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals | | Author (year) | `Sarkar2025_StudyOF` | | Canonical | — | | Importable as | `DS006940`, `Sarkar2025_StudyOF` | | Year | 2025 | | Authors | Shantanu Sarkar, Kevin Nathan, Jose L.
Contreras-Vidal | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006940.v1.0.0](https://doi.org/10.18112/openneuro.ds006940.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006940) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) | [Source URL](https://openneuro.org/datasets/ds006940) | ### Copy-paste BibTeX ```bibtex @dataset{ds006940, title = {Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals}, author = {Shantanu Sarkar and Kevin Nathan and Jose L. Contreras-Vidal}, doi = {10.18112/openneuro.ds006940.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006940.v1.0.0}, } ``` ## Technical Details - Subjects: 7 - Recordings: 935 - Tasks: 15 - Channels: 64 - Sampling rate (Hz): 100.0 - Duration (hours): 34.022077777777774 - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 3.6 GB - File count: 935 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006940.v1.0.0 - Source: openneuro - OpenNeuro: [ds006940](https://openneuro.org/datasets/ds006940) - NeMAR: [ds006940](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) ## API Reference Use the `DS006940` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals * **Study:** `ds006940` (OpenNeuro) * **Author (year):** `Sarkar2025_StudyOF` * **Canonical:** — Also importable as: `DS006940`, `Sarkar2025_StudyOF`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 935; tasks: 15. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006940](https://openneuro.org/datasets/ds006940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006940](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) DOI: [https://doi.org/10.18112/openneuro.ds006940.v1.0.0](https://doi.org/10.18112/openneuro.ds006940.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006940 >>> dataset = DS006940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006940) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006945: eeg dataset, 5 subjects *Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles)* Access recordings and metadata through EEGDash. **Citation:** Shantanu Sarkar, Kevin Nathan, Jose L. Contreras-Vidal (2025). *Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles)*. [10.18112/openneuro.ds006945.v1.2.1](https://doi.org/10.18112/openneuro.ds006945.v1.2.1) Modality: eeg Subjects: 5 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006945 dataset = DS006945(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006945(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006945( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006945, title = {Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles)}, author = {Shantanu Sarkar and Kevin Nathan and Jose L. Contreras-Vidal}, doi = {10.18112/openneuro.ds006945.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds006945.v1.2.1}, } ``` ## About This Dataset Neuroimaging Data Collected During Kinesthetic Motor Imagery of Walking vs. Rest This dataset includes multimodal neuroimaging recordings from five participants performing kinesthetic motor imagery (KI) while viewing themselves walking in an exoskeleton. The dataset includes synchronized MRI (structural and functional) and EEG recordings organized according to the BIDS specification. Functional MRI data were acquired in two runs while participants viewed a 10-minute video, along with a separate baseline scan during which participants simulated a resting state for approximately 5 minutes. MRI sessions were conducted after participants completed nine sessions of EEG‑controlled exoskeleton walking and standing experiments. Dataset link: [https://openneuro.org/datasets/ds006940](https://openneuro.org/datasets/ds006940)

MRI Acquisition:

- Scanner: Philips Ingenia 3.0T (Koninklijke Philips N.V., The Netherlands)
- Structural scans: T1‑weighted anatomical images
- Functional scans (fMRI): Participants viewed a 10‑minute video of themselves walking in the exoskeleton, filmed from a first‑person perspective. The video contained 11 Stop‑Walk‑Stop (SWS) cycles. During viewing, participants were instructed to evoke KI in synchrony with the exoskeleton movements.
- Baseline condition: Participants mentally simulated a resting state for approximately 5 minutes while fMRI data were recorded.

EEG Acquisition:

- MR‑compatible EEG cap (Brain Products GmbH, Gilching, Germany)
- Electrode locations are provided in EEGLAB format.
- 59 scalp channels + 4 EOG channels + 1 ECG channel

Stimuli:

- A video stimulus (`stimuli/walking_exoskeleton_S1.mp4`) was presented during walking tasks.

Participants: Five healthy adults out of seven participated in the EEG‑controlled exoskeleton experiments. Participants S6 and S7 did not undergo MRI scanning due to a pause in data collection during the COVID‑19 pandemic.

Folder Structure (Example: Participant S1)



```text
├── dataset_description.json
├── README
├── derivatives
│   └── sub-01
│       └── ses-01
│           ├── anat
│           │   └── sub-01_ses-01_T1w.nii
│           ├── dwi
│           │   ├── sub-01_ses-01_run-001_dwi.json
│           │   ├── sub-01_ses-01_run-001_dwi.bval
│           │   ├── sub-01_ses-01_run-001_dwi.bvec
│           │   └── sub-01_ses-01_run-001_dwi.nii.gz
│           │
│           └── spm
│               ├── sub-01_ses-01_beta_0001.nii
│               ├── ...
│               ├── sub-01_ses-01_beta_0008.nii
│               ├── sub-01_ses-01_con_0001.nii
│               ├── ...
│               ├── sub-01_ses-01_con_0004.nii
│               ├── sub-01_ses-01_smpt_0001.nii
│               ├── ...
│               ├── sub-01_ses-01_smpt_0004.nii
│               ├── sub-01_ses-01_mask.mat
│               ├── sub-01_ses-01_resms.mat
│               ├── sub-01_ses-01_rpv.mat
│               └── sub-01_ses-01_spm.mat
│
├── stimuli
│   └── walking_exoskeleton_S1.mp4
│
├── sub-01
│   └── ses-01
│       ├── anat
│       │   ├── sub-01_ses-01_T1w.json
│       │   └── sub-01_ses-01_T1w.nii
│       ├── eeg
│       │   ├── sub-01_ses-01_coordsystem.json
│       │   ├── sub-01_ses-01_electrodes.json
│       │   ├── sub-01_ses-01_electrodes.tsv
│       │   ├── sub-01_ses-01_task-baseline_eeg.eeg
│       │   ├── sub-01_ses-01_task-baseline_eeg.json
│       │   ├── sub-01_ses-01_task-baseline_eeg.vhdr
│       │   ├── sub-01_ses-01_task-baseline_eeg.vmrk
│       │   ├── sub-01_ses-01_task-walking1_eeg.eeg
│       │   ├── ...
│       │   └── sub-01_ses-01_task-walking2_eeg.vmrk
│       │
│       └── func
│           ├── sub-01_ses-01_task-baseline_run-001_bold.json
│           ├── sub-01_ses-01_task-baseline_run-001_bold.nii.gz
│           ├── sub-01_ses-01_task-walking1_run-001_bold.json
│           ├── sub-01_ses-01_task-walking1_run-001_bold.nii.gz
│           ├── sub-01_ses-01_task-walking2_run-001_bold.json
│           └── sub-01_ses-01_task-walking2_run-001_bold.nii.gz
```
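The filenames in the tree above follow the BIDS key-value entity convention (`sub-`, `ses-`, `task-`, `run-`, then a suffix). A minimal parser is sketched below; it is illustrative only and not part of eegdash — for real work, a BIDS-aware library such as mne-bids provides this parsing:

```python
# Illustrative sketch (not part of eegdash): split a BIDS filename into its
# key-value entities plus the trailing suffix (e.g. "bold", "eeg").
def parse_bids_name(filename: str) -> dict:
    stem = filename.split(".", 1)[0]       # drop extension(s), incl. .nii.gz
    *pairs, suffix = stem.split("_")       # last token is the suffix
    entities = dict(p.split("-", 1) for p in pairs)
    entities["suffix"] = suffix
    return entities

print(parse_bids_name("sub-01_ses-01_task-walking1_run-001_bold.nii.gz"))
# {'sub': '01', 'ses': '01', 'task': 'walking1', 'run': '001', 'suffix': 'bold'}
```

This recovers, for instance, which task (`baseline`, `walking1`, `walking2`) a given `func` or `eeg` file belongs to.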

**Validation Data** A validation file (`derivatives/MRI_DataValidation.xls`) is provided to summarize dataset completeness and quality checks.

- **Sheet: Files**: Lists presence/absence of EEG, MRI, and SPM outputs across subjects (S1–S5). Includes counts for beta, con, spmT maps, and DTI volumes.
- **Sheet: VMRK-R128**: Reports event marker counts (R128 triggers) for the baseline, walking1, and walking2 tasks.
- **Sheet: EEG-Duration**: Provides task durations (minutes) for the baseline, walking1, and walking2 EEG recordings.

**Notes on Organization**

- Raw data (anat, func, eeg) are stored under each subject directory (sub-XX/ses-YY).
- Derivatives: Preprocessed outputs are stored separately under derivatives/sub-XX/ses-YY, including:
  - Statistical Parametric Mapping (SPM) outputs
  - SPM-normalized (warped) anatomical scans
  - Diffusion Tensor Imaging (DTI) derivatives
  - Validation Excel file
- The video stimulus is stored in the top-level stimuli/ folder.
- Naming conventions follow BIDS entities:
  - sub-\<label\>: subject identifier
  - ses-\<label\>: session identifier
  - task-\<label\>: task name (baseline, walking1, walking2)
  - run-\<index\>: run number

**Citation** If you use this dataset, please cite the associated study and acknowledge the contributors.

## Dataset Information | Dataset ID | `DS006945` | |----------------|----------------| | Title | Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles) | | Author (year) | `Sarkar2025_T1_Weighted_Structural` | | Canonical | — | | Importable as | `DS006945`, `Sarkar2025_T1_Weighted_Structural` | | Year | 2025 | | Authors | Shantanu Sarkar, Kevin Nathan, Jose L.
Contreras-Vidal | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006945.v1.2.1](https://doi.org/10.18112/openneuro.ds006945.v1.2.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006945) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) | [Source URL](https://openneuro.org/datasets/ds006945) | ### Copy-paste BibTeX ```bibtex @dataset{ds006945, title = {Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles)}, author = {Shantanu Sarkar and Kevin Nathan and Jose L. Contreras-Vidal}, doi = {10.18112/openneuro.ds006945.v1.2.1}, url = {https://doi.org/10.18112/openneuro.ds006945.v1.2.1}, } ``` ## Technical Details - Subjects: 5 - Recordings: 14 - Tasks: 3 - Channels: 64 - Sampling rate (Hz): 5000.0 - Duration (hours): 2.110611111111111 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 5.4 GB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006945.v1.2.1 - Source: openneuro - OpenNeuro: [ds006945](https://openneuro.org/datasets/ds006945) - NeMAR: [ds006945](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) ## API Reference Use the `DS006945` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006945(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles) * **Study:** `ds006945` (OpenNeuro) * **Author (year):** `Sarkar2025_T1_Weighted_Structural` * **Canonical:** — Also importable as: `DS006945`, `Sarkar2025_T1_Weighted_Structural`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 14; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006945](https://openneuro.org/datasets/ds006945) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006945](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) DOI: [https://doi.org/10.18112/openneuro.ds006945.v1.2.1](https://doi.org/10.18112/openneuro.ds006945.v1.2.1) ### Examples ```pycon >>> from eegdash.dataset import DS006945 >>> dataset = DS006945(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006945) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006963: eeg dataset, 32 subjects *Motor Control Processes Moderate Visual Working Memory Gating Dataset* Access recordings and metadata through EEGDash. **Citation:** Şahcan Özdemir, Eren Günseli, Daniel Schneider (2025). *Motor Control Processes Moderate Visual Working Memory Gating Dataset*. [10.18112/openneuro.ds006963.v1.0.0](https://doi.org/10.18112/openneuro.ds006963.v1.0.0) Modality: eeg Subjects: 32 Recordings: 32 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006963 dataset = DS006963(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006963(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006963( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds006963, title = {Motor Control Processes Moderate Visual Working Memory Gating Dataset}, author = {Şahcan Özdemir and Eren Günseli and Daniel Schneider}, doi = {10.18112/openneuro.ds006963.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006963.v1.0.0}, } ``` ## About This Dataset This dataset accompanies the paper “Motor Control Processes Moderate Working Memory Gating,” published in The Journal of Neuroscience. It contains the raw EEG recordings (not preprocessed) from the study, as well as each participant’s behavioral data within the EEG dataset struct (labeled as EEG.behaviordata). Mean age of subjects is 23.7 (SD = 2.9). For any inquiries, please reach out to the corresponding author: [oezdemir@ifado.de](mailto:oezdemir@ifado.de). You can find the explanation of the triggers in “task-VisuomotorDelayedMatchToSampleWithInterference_events.json”. While using the dataset, please cite: Özdemir, Ş., Günseli, E., & Schneider, D. (2025). Motor control processes moderate visual working memory gating.
The Journal of Neuroscience, 45(47), e0673252025. [https://doi.org/10.1523/JNEUROSCI.0673-25.2025](https://doi.org/10.1523/JNEUROSCI.0673-25.2025) To access the analysis code, please visit the OSF project ([https://osf.io/7fve8](https://osf.io/7fve8)). Participants’ subject numbers were randomized to ensure anonymity and do not reflect the order of data collection. The dataset includes 32 participants in total. Two participants were excluded from all analyses due to misunderstanding the task rules (one participant did not follow the interference task, and the other tried to use the response knobs during the target presentation). One participant was included only in the behavioral analysis because of abnormal EEG data, and one participant was excluded based on predefined exclusion criteria. However, these excluded participants are shared within this dataset to further ensure transparency. All cases are documented in the relevant notes and the participant info list “participants.tsv”. For detailed methodological information, please refer to the paper or the associated OSF project ([https://osf.io/7fve8](https://osf.io/7fve8)). A brief summary of the experimental procedure is provided here. The experiment used a 2×2 within-subject design with four conditions: same-hand motor interference, different-hand motor interference, same-hand visuomotor interference, and different-hand visuomotor interference. Participants also completed 240 baseline trials with no interference. The experiment included 10 blocks, each containing 120 trials. Each trial began with a colored square or diamond presented for 500 ms. After a 2900 ms delay, participants reported the target color using a color wheel controlled by the left or right knob. The shape of the stimulus indicated which hand to use, and this mapping was counterbalanced across participants. Participants had 4000 ms to respond, and each trial ended with an inter-trial interval between 800 and 1400 ms.
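As a quick sanity check, the design numbers above can be restated in a few lines (illustrative only; the condition labels are shorthand, not the dataset's actual trigger codes):

```python
# The 2x2 within-subject grid described above: interference type x hand.
interference = ("motor", "visuomotor")
hand = ("same", "different")
conditions = [(i, h) for i in interference for h in hand]
assert len(conditions) == 4  # the four interference conditions

# 10 blocks of 120 trials each (baseline trials are included in this total).
n_blocks, trials_per_block = 10, 120
total_trials = n_blocks * trials_per_block
print(total_trials)  # -> 1200
```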
In two-thirds of the trials, an interference task occurred during the delay. At 900 ms, a left- or right-pointing triangle appeared, and participants pressed the knob with the corresponding hand. In the motor interference condition, these triangles were gray. In the visuomotor interference condition, the triangles were colored, with their hue shifted 60–90 degrees from the target color, introducing visual interference. In the remaining no-interference trials, a gray up- or down-pointing triangle appeared, and participants made no response until the color wheel appeared. ## Dataset Information | Dataset ID | `DS006963` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Motor Control Processes Moderate Visual Working Memory Gating Dataset | | Author (year) | `Ozdemir2025` | | Canonical | — | | Importable as | `DS006963`, `Ozdemir2025` | | Year | 2025 | | Authors | Şahcan Özdemir, Eren Günseli, Daniel Schneider | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006963.v1.0.0](https://doi.org/10.18112/openneuro.ds006963.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006963) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) | [Source URL](https://openneuro.org/datasets/ds006963) | ### Copy-paste BibTeX ```bibtex @dataset{ds006963, title = {Motor Control Processes Moderate Visual Working Memory Gating Dataset}, author = {Şahcan Özdemir and Eren Günseli and Daniel Schneider}, doi = {10.18112/openneuro.ds006963.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds006963.v1.0.0}, } ``` ## Technical Details - Subjects: 32 - Recordings: 32 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 85.26408194444444 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 52.8 GB - File count: 32 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds006963.v1.0.0 - Source: openneuro - OpenNeuro: [ds006963](https://openneuro.org/datasets/ds006963) - NeMAR: [ds006963](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) ## API Reference Use the `DS006963` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Control Processes Moderate Visual Working Memory Gating Dataset * **Study:** `ds006963` (OpenNeuro) * **Author (year):** `Ozdemir2025` * **Canonical:** — Also importable as: `DS006963`, `Ozdemir2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006963](https://openneuro.org/datasets/ds006963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006963](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) DOI: [https://doi.org/10.18112/openneuro.ds006963.v1.0.0](https://doi.org/10.18112/openneuro.ds006963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006963 >>> dataset = DS006963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006963) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS006979: eeg dataset, 53 subjects *Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study* Access recordings and metadata through EEGDash. **Citation:** Hanane Ramzaoui, Melissa Beck (2025). *Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study*. 
[10.18112/openneuro.ds006979.v1.0.1](https://doi.org/10.18112/openneuro.ds006979.v1.0.1) Modality: eeg Subjects: 53 Recordings: 56 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS006979 dataset = DS006979(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS006979(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS006979( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds006979, title = {Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study}, author = {Hanane Ramzaoui and Melissa Beck}, doi = {10.18112/openneuro.ds006979.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006979.v1.0.1}, } ``` ## About This Dataset **BIDS-EEG Dataset: Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study** **Authors: Hanane Ramzaoui, Melissa Beck** **1. Description: Project Overview** We present an electrophysiological dataset recorded from fifty-three subjects performing a bilateral change-detection task to investigate how perceptual grouping, based on color repetition, influences Visual Working Memory (VWM) processing efficiency. The study is designed to temporally isolate and measure the neural correlates of several critical VWM stages: **individuation encoding**, **maintenance**, **initial comparison**, **percept-memory comparison**, and **decision making/late comparison**.
This is achieved using specific **Event-Related Potential (ERP) markers** (N2pc, CDA, N2, FN400). ### View full README **BIDS-EEG Dataset: Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study** **Authors: Hanane Ramzaoui, Melissa Beck** **1. Description: Project Overview** We present an electrophysiological dataset recorded from fifty-three subjects performing a bilateral change-detection task to investigate how perceptual grouping, based on color repetition, influences Visual Working Memory (VWM) processing efficiency. The study is designed to temporally isolate and measure the neural correlates of several critical VWM stages: **individuation encoding**, **maintenance**, **initial comparison**, **percept-memory comparison**, and **decision making/late comparison**. This is achieved using specific **Event-Related Potential (ERP) markers** (N2pc, CDA, N2, FN400). **2. Experimental Task and Conditions** Subjects were cued to encode the colors of 2 or 3 squares in one visual hemifield. After a maintenance period, a single-item probe was presented to determine if its color had changed. **Key Manipulations** The memory array contained four primary conditions:

* **Unrepeated (UR):** Arrays with 2 or 3 unique colors (**2-UR, 3-UR**).
* **Repeated (R):** Arrays with 3 items, where two colors were repeated. This condition was further subdivided based on spatial arrangement:
  * Two repeated colors with strong spatial proximity (**3-RSP**).
  * Two repeated colors with weak spatial proximity (**3-RWP**).

**The Probe in Repeated Conditions** In the repeated conditions (3-RSP and 3-RWP), the single-item probe could test two different item types for change detection:

* **Repeated Item:** The probe tests one of the two squares that share the same color.
* **Unrepeated Item (Singleton):** The probe tests the single square with the unique color.

**3. 
Primary Neurophysiological Measurements** The study leverages the following ERP components to index different VWM processing stages:

| VWM Stage | ERP Marker | Event Locking |
| :--- | :--- | :--- |
| **Individuation Encoding** | **N2pc** | Stimulus-Locked |
| **Maintenance/Load** | **CDA** (Contralateral Delay Activity) | Stimulus-Locked |
| **Initial Comparison** | **N2pc** | Probe-Locked |
| **Percept-Memory Comparison** | **N2** | Probe-Locked |
| **Decision Making/Late Comparison** | **FN400** | Probe-Locked |

**4. Acquisition Details and Structure** **Acquisition Parameters**

| Parameter | Detail |
| :--- | :--- |
| **Subjects (N)** | 53 (N=39 used for stimulus-locked ERPs; see `participants.tsv` for details) |
| **Electrode System** | BioSemi ActiveTwo System |
| **Number of Channels** | 71 (64 scalp, 3 EOG, 2 Mastoid, 1 CMS/DRL) |
| **Sampling Rate (Acquisition)** | 512 Hz |
| **Total Trials** | 1248 trials |

**BIDS Compliance** The data are structured following the Brain Imaging Data Structure (BIDS) standard for EEG.

* **Acquisition Parameters:** Detailed recording specifications (e.g., 512 Hz sampling rate, Sinc filter details) are provided in the task-level BIDS JSON files (`task-myexperiment_eeg.json`).
* **Methodology:** Comprehensive details on offline preprocessing (e.g., re-referencing to average mastoids, ICA artifact removal, 0.1 Hz high-pass filtering) and the precise analysis plan (e.g., ERP measurement windows, HEOG artifact thresholds, channel clusters) are provided in the Stage 1 protocol on OSF ([https://doi.org/10.17605/OSF.IO/8ZS96](https://doi.org/10.17605/OSF.IO/8ZS96)).

**5. Event Codes/Triggers** The following table maps the trigger codes recorded in the EEG data to the specific experimental events. **Acronym Key:** UR = Unrepeated; RWP = Repeated Weak Proximity; RSP = Repeated Strong Proximity. 
| Trigger Code | Event Description |
| :---: | :--- |
| "11" | Stimulus: 2-UR \| Left Cue \| Change \| Unrepeated Probe |
| "12" | Stimulus: 3-UR \| Left Cue \| Change \| Unrepeated Probe |
| "13" | Stimulus: 3-RWP \| Left Cue \| Change \| Unrepeated Probe |
| "14" | Stimulus: 3-RSP \| Left Cue \| Change \| Unrepeated Probe |
| "17" | Stimulus: 3-RWP \| Left Cue \| Change \| Repeated Probe |
| "18" | Stimulus: 3-RSP \| Left Cue \| Change \| Repeated Probe |
| "19" | Stimulus: 2-UR \| Left Cue \| No-Change \| Unrepeated Probe |
| "20" | Stimulus: 3-UR \| Left Cue \| No-Change \| Unrepeated Probe |
| "21" | Stimulus: 3-RWP \| Left Cue \| No-Change \| Unrepeated Probe |
| "22" | Stimulus: 3-RSP \| Left Cue \| No-Change \| Unrepeated Probe |
| "25" | Stimulus: 3-RWP \| Left Cue \| No-Change \| Repeated Probe |
| "26" | Stimulus: 3-RSP \| Left Cue \| No-Change \| Repeated Probe |
| "27" | Stimulus: 2-UR \| Right Cue \| Change \| Unrepeated Probe |
| "28" | Stimulus: 3-UR \| Right Cue \| Change \| Unrepeated Probe |
| "29" | Stimulus: 3-RWP \| Right Cue \| Change \| Unrepeated Probe |
| "30" | Stimulus: 3-RSP \| Right Cue \| Change \| Unrepeated Probe |
| "33" | Stimulus: 3-RWP \| Right Cue \| Change \| Repeated Probe |
| "34" | Stimulus: 3-RSP \| Right Cue \| Change \| Repeated Probe |
| "35" | Stimulus: 2-UR \| Right Cue \| No-Change \| Unrepeated Probe |
| "36" | Stimulus: 3-UR \| Right Cue \| No-Change \| Unrepeated Probe |
| "37" | Stimulus: 3-RWP \| Right Cue \| No-Change \| Unrepeated Probe |
| "38" | Stimulus: 3-RSP \| Right Cue \| No-Change \| Unrepeated Probe |
| "41" | Stimulus: 3-RWP \| Right Cue \| No-Change \| Repeated Probe |
| "42" | Stimulus: 3-RSP \| Right Cue \| No-Change \| Repeated Probe |
| "51" | Probe Onset event: 2-UR \| Left Cue |
| "52" | Probe Onset event: 3-UR \| Left Cue |
| "53" | Probe Onset event: 3-RWP \| Left Cue |
| "54" | Probe Onset event: 3-RSP \| Left Cue |
| "55" | Probe Onset event: 2-UR \| Right Cue |
| "56" | Probe Onset event: 3-UR \| Right Cue |
| "57" | Probe Onset event: 3-RWP \| Right Cue |
| "58" | Probe Onset event: 3-RSP \| Right Cue |
| "120" | Manual Response: Correct. |
| "121" | Manual Response: Incorrect. |

**6. Protocol Registration and Reference** For this dataset project, the approved Stage 1 protocol (registered report) can be found at this OSF link (2024, October 15): [https://doi.org/10.17605/OSF.IO/8ZS96](https://doi.org/10.17605/OSF.IO/8ZS96)

**7. Contact and Ethics**

* **Affiliation:** Louisiana State University
* **Ethical Approval:** Institutional Review Board of Louisiana State University (IRBAM-23-0273 from March 1, 2023)
* **Contact:** hramzaoui@lsu.edu

## Dataset Information | Dataset ID | `DS006979` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study | | Author (year) | `Ramzaoui2025` | | Canonical | `Ramzaoui2024` | | Importable as | `DS006979`, `Ramzaoui2025`, `Ramzaoui2024` | | Year | 2025 | | Authors | Hanane Ramzaoui, Melissa Beck | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds006979.v1.0.1](https://doi.org/10.18112/openneuro.ds006979.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds006979) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) | [Source URL](https://openneuro.org/datasets/ds006979) | ### Copy-paste BibTeX ```bibtex @dataset{ds006979, title = {Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study}, author = {Hanane Ramzaoui and Melissa Beck}, doi = {10.18112/openneuro.ds006979.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds006979.v1.0.1}, } ``` ## Technical Details - Subjects: 53 - Recordings: 56 - Tasks: 3 - Channels: 69 (53), 72 - Sampling rate (Hz): 512.0 (55), 500.0 
- Duration (hours): 148.12 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 38.5 GB - File count: 56 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds006979.v1.0.1 - Source: openneuro - OpenNeuro: [ds006979](https://openneuro.org/datasets/ds006979) - NeMAR: [ds006979](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) ## API Reference Use the `DS006979` class to access this dataset programmatically. ### *class* eegdash.dataset.DS006979(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study * **Study:** `ds006979` (OpenNeuro) * **Author (year):** `Ramzaoui2025` * **Canonical:** `Ramzaoui2024` Also importable as: `DS006979`, `Ramzaoui2025`, `Ramzaoui2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 53; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
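The query-merge behavior can be sketched in a few lines. This is an illustrative sketch only — `merge_query` is a hypothetical helper, not the library's actual internals:

```python
# Hypothetical helper showing how a user query could be AND-merged with the
# fixed dataset filter. The real EEGDashDataset implementation may differ.
def merge_query(dataset_id, user_query=None):
    user_query = user_query or {}
    if "dataset" in user_query:
        # Mirrors the documented constraint: `query` must not contain `dataset`.
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds006979", {"subject": {"$in": ["01", "02"]}})
print(merged)  # {'dataset': 'ds006979', 'subject': {'$in': ['01', '02']}}
```
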
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006979](https://openneuro.org/datasets/ds006979) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006979](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) DOI: [https://doi.org/10.18112/openneuro.ds006979.v1.0.1](https://doi.org/10.18112/openneuro.ds006979.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006979 >>> dataset = DS006979(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds006979) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007006: eeg dataset, 10 subjects *VR-Compassion Cultivation Training* Access recordings and metadata through EEGDash. **Citation:** Ying Wu, Enrique Carrillosulub, Leon Lange, Chloe Tanega, Nicole Wells, Erik Virre, Cassandra Vieten (2025). *VR-Compassion Cultivation Training*. 
[10.18112/openneuro.ds007006.v1.0.0](https://doi.org/10.18112/openneuro.ds007006.v1.0.0) Modality: eeg Subjects: 10 Recordings: 50 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007006 dataset = DS007006(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007006(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007006( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007006, title = {VR-Compassion Cultivation Training}, author = {Ying Wu and Enrique Carrillosulub and Leon Lange and Chloe Tanega and Nicole Wells and Erik Virre and Cassandra Vieten}, doi = {10.18112/openneuro.ds007006.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007006.v1.0.0}, } ``` ## About This Dataset VR-CCT Dataset Compassion Island was a social world implemented in AltspaceVR by technology collaborators Origami Air. It was specifically created for the study of VR-based augmentation of compassion cultivation training (CCT). It featured three main settings – a meditation hall, a garden courtyard with a large willow tree, and a clinic. During experimental sessions, participants interacted with two characters in these spaces, represented as avatars – namely, a guide, who helped ### View full README VR-CCT Dataset Compassion Island was a social world implemented in AltspaceVR by technology collaborators Origami Air. It was specifically created for the study of VR-based augmentation of compassion cultivation training (CCT). 
It featured three main settings – a meditation hall, a garden courtyard with a large willow tree, and a clinic. During experimental sessions, participants interacted with two characters in these spaces, represented as avatars – namely, a guide, who helped the volunteer navigate from setting to setting and offered other assistance as needed, and Ivan, who was an agitated patient in the clinic. Both characters were animated by live actors in separate locations. Participants were able to converse freely with these characters whenever they were co-present with either character in the same space. All sessions began in the meditation hall, which featured a pulsating orb designed to help participants regulate their breathing during an audio-recorded guided meditation. Next, participants were ushered outside to the garden, where they were invited to contemplate a tree with a glowing core while listening to an audio-recorded compassion meditation and performing visualization exercises that centered on universal compassion for all beings. Lastly, participants were directed into a virtual clinic to converse with Ivan, an agitated patient waiting inside the clinic, where participants would have the opportunity to practice exercising the feeling of universal compassion from the garden meditation. 
## Dataset Information | Dataset ID | `DS007006` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | VR-Compassion Cultivation Training | | Author (year) | `Wu2025` | | Canonical | — | | Importable as | `DS007006`, `Wu2025` | | Year | 2025 | | Authors | Ying Wu, Enrique Carrillosulub, Leon Lange, Chloe Tanega, Nicole Wells, Erik Virre, Cassandra Vieten | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007006.v1.0.0](https://doi.org/10.18112/openneuro.ds007006.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007006) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) | [Source URL](https://openneuro.org/datasets/ds007006) | ### Copy-paste BibTeX ```bibtex @dataset{ds007006, title = {VR-Compassion Cultivation Training}, author = {Ying Wu and Enrique Carrillosulub and Leon Lange and Chloe Tanega and Nicole Wells and Erik Virre and Cassandra Vieten}, doi = {10.18112/openneuro.ds007006.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007006.v1.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 50 - Tasks: 5 - Channels: 64 - Sampling rate (Hz): 256.0 - Duration (hours): 3.606944444444444 - Pathology: Healthy - Modality: Multisensory - Type: Affect - Size on disk: 918.7 MB - File count: 50 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007006.v1.0.0 - Source: openneuro - OpenNeuro: [ds007006](https://openneuro.org/datasets/ds007006) - NeMAR: [ds007006](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) ## API Reference Use the `DS007006` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007006(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VR-Compassion Cultivation Training * **Study:** `ds007006` (OpenNeuro) * **Author (year):** `Wu2025` * **Canonical:** — Also importable as: `DS007006`, `Wu2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 10; recordings: 50; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
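Since each item is a recording with metadata exposed via `dataset.description`, per-task bookkeeping reduces to plain Python. A self-contained sketch over mock metadata — the field names `subject`/`task` and the task labels are assumptions; inspect `dataset.description` on real data for the actual fields:

```python
from collections import Counter

# Mock recording-level metadata; with a real dataset this information would
# come from dataset.description. Field names and task labels are hypothetical.
records = [
    {"subject": "01", "task": "meditation"},
    {"subject": "01", "task": "clinic"},
    {"subject": "02", "task": "meditation"},
]
per_task = Counter(rec["task"] for rec in records)
print(dict(per_task))  # {'meditation': 2, 'clinic': 1}
```
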
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007006](https://openneuro.org/datasets/ds007006) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007006](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) DOI: [https://doi.org/10.18112/openneuro.ds007006.v1.0.0](https://doi.org/10.18112/openneuro.ds007006.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007006 >>> dataset = DS007006(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007006) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007020: eeg dataset, 94 subjects *EEG Mortality Dataset in Parkinson’s Disease* Access recordings and metadata through EEGDash. **Citation:** Simin Jamshidi, Arturo Espinoza, Soura Dasgupta, Nandakumar Narayanan (2025). *EEG Mortality Dataset in Parkinson’s Disease*. 
[10.18112/openneuro.ds007020.v1.0.0](https://doi.org/10.18112/openneuro.ds007020.v1.0.0) Modality: eeg Subjects: 94 Recordings: 94 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007020 dataset = DS007020(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007020(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007020( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007020, title = {EEG Mortality Dataset in Parkinson's Disease}, author = {Simin Jamshidi and Arturo Espinoza and Soura Dasgupta and Nandakumar Narayanan}, doi = {10.18112/openneuro.ds007020.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007020.v1.0.0}, } ``` ## About This Dataset This dataset contains de-identified resting-state EEG recordings from individuals with Parkinson’s disease (PD) and age-matched healthy control subjects. All EEG data were recorded using standard clinical EEG systems at Neurology Clinic. Dataset Purpose: This dataset was originally used to evaluate whether resting-state EEG can help distinguish subjects who were later deceased from those who remained living (mortality classification). Only de-identified EEG data and mortality labels are included. Participant Information: - Participants are labeled as either “living” or “deceased” in participants.tsv - No other demographic or clinical information (age, cognition, UPDRS, disease duration, etc.) is included per data-sharing guidelines. 
- All participant IDs are anonymized following BIDS convention (e.g., sub-PD1301). EEG Acquisition Details: - Recording type: Resting-state EEG (eyes open) ### View full README This dataset contains de-identified resting-state EEG recordings from individuals with Parkinson’s disease (PD) and age-matched healthy control subjects. All EEG data were recorded using standard clinical EEG systems at Neurology Clinic. Dataset Purpose: This dataset was originally used to evaluate whether resting-state EEG can help distinguish subjects who were later deceased from those who remained living (mortality classification). Only de-identified EEG data and mortality labels are included. Participant Information: - Participants are labeled as either “living” or “deceased” in participants.tsv - No other demographic or clinical information (age, cognition, UPDRS, disease duration, etc.) is included per data-sharing guidelines. - All participant IDs are anonymized following BIDS convention (e.g., sub-PD1301). EEG Acquisition Details: - Recording type: Resting-state EEG (eyes open) - Device: Clinical BrainVision EEG system - File formats: .vhdr, .eeg, .vmrk - Sampling rate: 500 Hz - Montage: Standard 10–20 international system - Recording condition: “task-rest” (no task) Data Organization: Data are structured following the BIDS (Brain Imaging Data Structure) EEG standard:

```text
sub-<label>/
  ses-01/
    eeg/
      sub-<label>_ses-01_task-rest_eeg.vhdr
      sub-<label>_ses-01_task-rest_eeg.eeg
      sub-<label>_ses-01_task-rest_eeg.vmrk
```

Mortality Label Format: - Living subjects: survival_status = “living” - Deceased subjects: survival_status = “deceased” Ethics & Privacy: All subjects provided consent for EEG recording at the University of Iowa Hospitals and Clinics. The publicly shared version here is fully de-identified and contains no clinical or personal health information other than mortality classification. 
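The `survival_status` labels described above live in `participants.tsv`, so they can be read with the standard library alone. A sketch over an inline TSV snippet (sub-PD1301 appears in the README; sub-PD1302 is an illustrative placeholder):

```python
import csv
import io

# Inline stand-in for the dataset's participants.tsv; on real data, open the
# file from the BIDS root instead. The second subject ID is hypothetical.
tsv = io.StringIO(
    "participant_id\tsurvival_status\n"
    "sub-PD1301\tliving\n"
    "sub-PD1302\tdeceased\n"
)
labels = {row["participant_id"]: row["survival_status"]
          for row in csv.DictReader(tsv, delimiter="\t")}
print(labels["sub-PD1301"])  # living
```
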
Suggested Use: This dataset can be used to explore EEG biomarkers of mortality risk, EEG signal characteristics in PD, or to build machine learning models for classification. Questions or requests: Please contact [nandakumar-narayanan@uiowa.edu](mailto:nandakumar-narayanan@uiowa.edu). ## Dataset Information | Dataset ID | `DS007020` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Mortality Dataset in Parkinson’s Disease | | Author (year) | `Jamshidi2025` | | Canonical | — | | Importable as | `DS007020`, `Jamshidi2025` | | Year | 2025 | | Authors | Simin Jamshidi, Arturo Espinoza, Soura Dasgupta, Nandakumar Narayanan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007020.v1.0.0](https://doi.org/10.18112/openneuro.ds007020.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007020) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) | [Source URL](https://openneuro.org/datasets/ds007020) | ### Copy-paste BibTeX ```bibtex @dataset{ds007020, title = {EEG Mortality Dataset in Parkinson's Disease}, author = {Simin Jamshidi and Arturo Espinoza and Soura Dasgupta and Nandakumar Narayanan}, doi = {10.18112/openneuro.ds007020.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007020.v1.0.0}, } ``` ## Technical Details - Subjects: 94 - Recordings: 94 - Tasks: 1 - Channels: 63 (76), 64 (18) - Sampling rate (Hz): 500.0 - Duration (hours): 4.106407222222223 - Pathology: Parkinson’s - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 1.7 GB - File count: 94 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007020.v1.0.0 - Source: openneuro - OpenNeuro: [ds007020](https://openneuro.org/datasets/ds007020) - NeMAR: [ds007020](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) ## API Reference Use the `DS007020` class to 
access this dataset programmatically. ### *class* eegdash.dataset.DS007020(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Mortality Dataset in Parkinson’s Disease * **Study:** `ds007020` (OpenNeuro) * **Author (year):** `Jamshidi2025` * **Canonical:** — Also importable as: `DS007020`, `Jamshidi2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 94; recordings: 94; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007020](https://openneuro.org/datasets/ds007020) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007020](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) DOI: [https://doi.org/10.18112/openneuro.ds007020.v1.0.0](https://doi.org/10.18112/openneuro.ds007020.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007020 >>> dataset = DS007020(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007020) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007028: eeg dataset, 3 subjects *Auditory Cortex Macaque Monkey DISC Data* Access recordings and metadata through EEGDash. **Citation:** Yoshinao Kajikawa, Charles Schroeder (2025). *Auditory Cortex Macaque Monkey DISC Data*. 
[10.18112/openneuro.ds007028.v1.0.0](https://doi.org/10.18112/openneuro.ds007028.v1.0.0) Modality: eeg Subjects: 3 Recordings: 3 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007028 dataset = DS007028(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007028(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007028( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007028, title = {Auditory Cortex Macaque Monkey DISC Data}, author = {Yoshinao Kajikawa and Charles Schroeder}, doi = {10.18112/openneuro.ds007028.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007028.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
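The technical summary below lists an unusually high 20 kHz sampling rate. A back-of-envelope estimate (assuming 4-byte float samples, an assumption about the on-disk encoding) shows why downsampling before analysis is worthwhile:

```python
# Raw data rate at this dataset's acquisition settings:
# 64 channels x 20 kHz, assuming 4 bytes per sample.
channels, sfreq, bytes_per_sample = 64, 20_000, 4
bytes_per_hour = channels * sfreq * bytes_per_sample * 3600
print(f"{bytes_per_hour / 1e9:.1f} GB per hour")  # 18.4 GB per hour
```

At roughly 0.81 hours of recording, that works out to about 15 GB uncompressed, broadly in line with the 13.9 GB size on disk reported below.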
## Dataset Information | Dataset ID | `DS007028` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Auditory Cortex Macaque Monkey DISC Data | | Author (year) | `Kajikawa2025` | | Canonical | `Kajikawa2000` | | Importable as | `DS007028`, `Kajikawa2025`, `Kajikawa2000` | | Year | 2025 | | Authors | Yoshinao Kajikawa, Charles Schroeder | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007028.v1.0.0](https://doi.org/10.18112/openneuro.ds007028.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007028) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) | [Source URL](https://openneuro.org/datasets/ds007028) | ### Copy-paste BibTeX ```bibtex @dataset{ds007028, title = {Auditory Cortex Macaque Monkey DISC Data}, author = {Yoshinao Kajikawa and Charles Schroeder}, doi = {10.18112/openneuro.ds007028.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007028.v1.0.0}, } ``` ## Technical Details - Subjects: 3 - Recordings: 3 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 20000.0 - Duration (hours): 0.8074388888888889 - Pathology: Other - Modality: Auditory - Type: Perception - Size on disk: 13.9 GB - File count: 3 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007028.v1.0.0 - Source: openneuro - OpenNeuro: [ds007028](https://openneuro.org/datasets/ds007028) - NeMAR: [ds007028](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) ## API Reference Use the `DS007028` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Macaque Monkey DISC Data * **Study:** `ds007028` (OpenNeuro) * **Author (year):** `Kajikawa2025` * **Canonical:** `Kajikawa2000` Also importable as: `DS007028`, `Kajikawa2025`, `Kajikawa2000`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007028](https://openneuro.org/datasets/ds007028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007028](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) DOI: [https://doi.org/10.18112/openneuro.ds007028.v1.0.0](https://doi.org/10.18112/openneuro.ds007028.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007028 >>> dataset = DS007028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007028) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007052: eeg dataset, 288 subjects *PURSUE N400 Word Processing* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed, C.L. (2025). *PURSUE N400 Word Processing*. 
[10.18112/openneuro.ds007052.v1.1.2](https://doi.org/10.18112/openneuro.ds007052.v1.1.2) Modality: eeg Subjects: 288 Recordings: 288 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007052 dataset = DS007052(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007052(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007052( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007052, title = {PURSUE N400 Word Processing}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007052.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds007052.v1.1.2}, } ``` ## About This Dataset **README** Word Processing Task from the PURSUE project (pursureerp.com). Data collected from participants at 3 different primarily undergraduate academic institutions (Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design can be found in the publication by Kappenman et al.(2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Details of task are found in the supplementary materials. 
Race Key (“Levels”): x1 = White; x2 = Black/African American; x3 = Native American; x4 = Asian; x5 = Pacific Islander; x6 = Hispanic/Latino; x7 = Other; x8 = Prefer not to respond; x9 = Chose more than one response; a blank value means no response was recorded. ## Dataset Information | Dataset ID | `DS007052` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE N400 Word Processing | | Author (year) | `Couperus2025_N400` | | Canonical | `Couperus2021_N400` | | Importable as | `DS007052`, `Couperus2025_N400`, `Couperus2021_N400` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007052.v1.1.2](https://doi.org/10.18112/openneuro.ds007052.v1.1.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007052) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) | [Source URL](https://openneuro.org/datasets/ds007052) | ### Copy-paste BibTeX ```bibtex @dataset{ds007052, title = {PURSUE N400 Word Processing}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007052.v1.1.2}, url = {https://doi.org/10.18112/openneuro.ds007052.v1.1.2}, } ``` ## Technical Details - Subjects: 288 - Recordings: 288 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 40.00969111111111 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 9.0 GB - File count: 288 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007052.v1.1.2 - Source: openneuro - OpenNeuro: [ds007052](https://openneuro.org/datasets/ds007052) - NeMAR: [ds007052](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) ## API Reference Use the `DS007052` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007052(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N400 Word Processing * **Study:** `ds007052` (OpenNeuro) * **Author (year):** `Couperus2025_N400` * **Canonical:** `Couperus2021_N400` Also importable as: `DS007052`, `Couperus2025_N400`, `Couperus2021_N400`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 288; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
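The `$in` operator used in the advanced-query examples accepts a record when the field's value is one of the listed values. A minimal, purely local sketch of that matching semantics (`matches` is a hypothetical illustration; the real filtering is evaluated against the metadata database):

```python
def matches(record, query):
    """Toy matcher for MongoDB-style filters: equality and ``$in`` only.

    Illustrative -- actual query evaluation is done by the metadata backend.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
```

Run against three toy records, the `$in` filter above keeps subjects "01" and "02" and drops "03", mirroring what `query={"subject": {"$in": ["01", "02"]}}` selects.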
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007052](https://openneuro.org/datasets/ds007052) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007052](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) DOI: [https://doi.org/10.18112/openneuro.ds007052.v1.1.2](https://doi.org/10.18112/openneuro.ds007052.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS007052 >>> dataset = DS007052(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007052) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007056: eeg dataset, 286 subjects *PURSUE P300 Visual Oddball* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed, C.L. (2025). *PURSUE P300 Visual Oddball*. 
[10.18112/openneuro.ds007056.v1.1.1](https://doi.org/10.18112/openneuro.ds007056.v1.1.1) Modality: eeg Subjects: 286 Recordings: 286 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007056 dataset = DS007056(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007056(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007056( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007056, title = {PURSUE P300 Visual Oddball}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007056.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds007056.v1.1.1}, } ``` ## About This Dataset Visual Oddball Experiment from the PURSUE project (pursureerp.com). Data collected from participants at 3 different primarily undergraduate academic institutions (Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design can be found in the publication by Kappenman et al.(2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Details of task are found in the supplementary materials. 
Race Key (“Levels”): x1 = White; x2 = Black/African American; x3 = Native American; x4 = Asian; x5 = Pacific Islander; x6 = Hispanic/Latino; x7 = Other; x8 = Prefer not to respond; x9 = Chose more than one response; a blank value means no response was recorded. ## Dataset Information | Dataset ID | `DS007056` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE P300 Visual Oddball | | Author (year) | `Couperus2025_P300` | | Canonical | `Couperus2021_P300` | | Importable as | `DS007056`, `Couperus2025_P300`, `Couperus2021_P300` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007056.v1.1.1](https://doi.org/10.18112/openneuro.ds007056.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007056) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) | [Source URL](https://openneuro.org/datasets/ds007056) | ### Copy-paste BibTeX ```bibtex @dataset{ds007056, title = {PURSUE P300 Visual Oddball}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007056.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds007056.v1.1.1}, } ``` ## Technical Details - Subjects: 286 - Recordings: 286 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 34.86793055555555 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 7.8 GB - File count: 286 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007056.v1.1.1 - Source: openneuro - OpenNeuro: [ds007056](https://openneuro.org/datasets/ds007056) - NeMAR: [ds007056](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) ## API Reference Use the `DS007056` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007056(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE P300 Visual Oddball * **Study:** `ds007056` (OpenNeuro) * **Author (year):** `Couperus2025_P300` * **Canonical:** `Couperus2021_P300` Also importable as: `DS007056`, `Couperus2025_P300`, `Couperus2021_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 286; recordings: 286; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007056](https://openneuro.org/datasets/ds007056) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007056](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) DOI: [https://doi.org/10.18112/openneuro.ds007056.v1.1.1](https://doi.org/10.18112/openneuro.ds007056.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007056 >>> dataset = DS007056(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007056) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007069: eeg dataset, 281 subjects *PURSUE MMN Auditory Oddball* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed, C.L. (2025). *PURSUE MMN Auditory Oddball*. 
[10.18112/openneuro.ds007069.v1.0.0](https://doi.org/10.18112/openneuro.ds007069.v1.0.0) Modality: eeg Subjects: 281 Recordings: 281 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007069 dataset = DS007069(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007069(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007069( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007069, title = {PURSUE MMN Auditory Oddball}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007069.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007069.v1.0.0}, } ``` ## About This Dataset Passive Auditory Oddball Experiment from the PURSUE project (pursureerp.com). Data collected from participants at 3 different primarily undergraduate academic institutions (Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design can be found in the publication by Kappenman et al.(2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Details of task are found in the supplementary materials. 
Race Key (“Levels”): x1 = White; x2 = Black/African American; x3 = Native American; x4 = Asian; x5 = Pacific Islander; x6 = Hispanic/Latino; x7 = Other; x8 = Prefer not to respond; x9 = Chose more than one response; a blank value means no response was recorded. ## Dataset Information | Dataset ID | `DS007069` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE MMN Auditory Oddball | | Author (year) | `Couperus2025_MMN` | | Canonical | `Couperus2021_MMN` | | Importable as | `DS007069`, `Couperus2025_MMN`, `Couperus2021_MMN` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007069.v1.0.0](https://doi.org/10.18112/openneuro.ds007069.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007069) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) | [Source URL](https://openneuro.org/datasets/ds007069) | ### Copy-paste BibTeX ```bibtex @dataset{ds007069, title = {PURSUE MMN Auditory Oddball}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007069.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007069.v1.0.0}, } ``` ## Technical Details - Subjects: 281 - Recordings: 281 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 54.90152944444444 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 12.4 GB - File count: 281 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007069.v1.0.0 - Source: openneuro - OpenNeuro: [ds007069](https://openneuro.org/datasets/ds007069) - NeMAR: [ds007069](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) ## API Reference Use the `DS007069` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007069(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE MMN Auditory Oddball * **Study:** `ds007069` (OpenNeuro) * **Author (year):** `Couperus2025_MMN` * **Canonical:** `Couperus2021_MMN` Also importable as: `DS007069`, `Couperus2025_MMN`, `Couperus2021_MMN`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 281; recordings: 281; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007069](https://openneuro.org/datasets/ds007069) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007069](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) DOI: [https://doi.org/10.18112/openneuro.ds007069.v1.0.0](https://doi.org/10.18112/openneuro.ds007069.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007069 >>> dataset = DS007069(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007069) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007081: eeg dataset, 41 subjects *Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load* Access recordings and metadata through EEGDash. **Citation:** Yakup Yılmaz, Nursena Ataseven Özdemir, Wouter Kruijne, Elkan Akyürek, Eren Günseli (2025). *Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load*. 
[10.18112/openneuro.ds007081.v1.0.0](https://doi.org/10.18112/openneuro.ds007081.v1.0.0) Modality: eeg Subjects: 41 Recordings: 41 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007081 dataset = DS007081(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007081(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007081( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007081, title = {Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load}, author = {Yakup Yılmaz and Nursena Ataseven Özdemir and Wouter Kruijne and Elkan Akyürek and Eren Günseli}, doi = {10.18112/openneuro.ds007081.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007081.v1.0.0}, } ``` ## About This Dataset Each trial began with a fixation dot presented for a jittered intertrial interval (ITI) between 600 and 1000 ms. The first memory screen (1000 ms) showed two objects on one lateral side that participants were instructed to memorize (indicated by a wedge cue), and two objects on the opposite side to balance visual input. Depending on the block condition, the to-be-memorized objects on the first screen could be studied (learned in the learning phase) or novel/unstudied. 
After a 1400 ms interstimulus interval, a second memory screen (1000 ms) presented additional items vertically around fixation (one above and one below fixation); these items were always novel/unstudied and were placed near fixation to avoid influencing lateral EEG indices from the first screen. In the extra-load expectation condition, additional second-screen items appeared on 80% of trials (and were omitted on 20% of trials), whereas in the low-load expectation condition this probability was reversed (20% appear, 80% omitted). After a 400 ms interstimulus interval, a probe from either the first or second memory screen was presented and participants reported the probed object’s color by moving the mouse; the probe color updated continuously along an invisible color wheel whose orientation was randomly rotated on each trial. After the response, absolute angular error feedback was displayed for 400 ms; for studied objects, if the error exceeded 40°, the correct color was displayed for 1000 ms as corrective feedback. 
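Putting the fixed durations above together gives the event onsets within a trial. The sketch below assumes the midpoint of the 600–1000 ms ITI jitter (an assumption, since the jitter is random per trial) and stops at the probe, which is self-paced:

```python
# Cumulative event onsets for one trial, from the durations described above.
# The 800 ms ITI is an assumption (midpoint of the 600-1000 ms jitter).
segments = [
    ("fixation_iti", 800),
    ("memory_screen_1", 1000),
    ("isi_1", 1400),
    ("memory_screen_2", 1000),
    ("isi_2", 400),
]

onsets_ms, t = {}, 0
for name, duration in segments:
    onsets_ms[name] = t  # each event starts where the previous one ended
    t += duration
# With the assumed ITI, the probe appears ~4.6 s after trial start.
```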
## Dataset Information | Dataset ID | `DS007081` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load | | Author (year) | `Ylmaz2025` | | Canonical | — | | Importable as | `DS007081`, `Ylmaz2025` | | Year | 2025 | | Authors | Yakup Yılmaz, Nursena Ataseven Özdemir, Wouter Kruijne, Elkan Akyürek, Eren Günseli | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007081.v1.0.0](https://doi.org/10.18112/openneuro.ds007081.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007081) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) | [Source URL](https://openneuro.org/datasets/ds007081) | ### Copy-paste BibTeX ```bibtex @dataset{ds007081, title = {Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load}, author = {Yakup Yılmaz and Nursena Ataseven Özdemir and Wouter Kruijne and Elkan Akyürek and Eren Günseli}, doi = {10.18112/openneuro.ds007081.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007081.v1.0.0}, } ``` ## Technical Details - Subjects: 41 - Recordings: 41 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.319111111111116 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 11.3 GB - File count: 41 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007081.v1.0.0 - Source: openneuro - OpenNeuro: [ds007081](https://openneuro.org/datasets/ds007081) - NeMAR: [ds007081](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) ## API Reference Use the `DS007081` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007081(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load * **Study:** `ds007081` (OpenNeuro) * **Author (year):** `Ylmaz2025` * **Canonical:** — Also importable as: `DS007081`, `Ylmaz2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007081](https://openneuro.org/datasets/ds007081) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007081](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) DOI: [https://doi.org/10.18112/openneuro.ds007081.v1.0.0](https://doi.org/10.18112/openneuro.ds007081.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007081 >>> dataset = DS007081(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007081) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007095: ieeg dataset, 8 subjects *RNS_Epilepsy-iBIDS* Access recordings and metadata through EEGDash. **Citation:** Chen Feng, Haoqi Ni, Zhoule Zhu, Hongjie Jiang, Zhe Zheng, Wenjie Ming, Shuang Wang, Kedi Xu, Junming Zhu (2025). *RNS_Epilepsy-iBIDS*. 
[10.18112/openneuro.ds007095.v1.0.0](https://doi.org/10.18112/openneuro.ds007095.v1.0.0) Modality: ieeg Subjects: 8 Recordings: 6019 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007095 dataset = DS007095(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007095(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007095( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007095, title = {RNS_Epilepsy-iBIDS}, author = {Chen Feng and Haoqi Ni and Zhoule Zhu and Hongjie Jiang and Zhe Zheng and Wenjie Ming and Shuang Wang and Kedi Xu and Junming Zhu}, doi = {10.18112/openneuro.ds007095.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007095.v1.0.0}, } ``` ## About This Dataset A dataset of long-term iEEG recorded invasively in epilepsy patients implanted with a responsive neurostimulation (RNS) system. We provide a long-term intracranial electroencephalography (iEEG) dataset from 8 epilepsy patients implanted with responsive neurostimulation (RNS) devices. The dataset consists of iEEG data recorded from bilateral epileptic lesion areas. Each recording contains roughly 90 seconds of dual-channel iEEG around a stimulation: 60 seconds before stimulation onset and about 30 seconds after stimulation offset. The stimulation markers are contained in the events.tsv files, including the onset and duration of each stimulus. 
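Given the fixed segment layout just described (60 s before stimulation onset, about 30 s after stimulation offset) and the 200 Hz sampling rate listed under Technical Details, the sample range around a stimulation can be computed directly. A small sketch; `stimulation_window` is a hypothetical helper, with onsets expressed in seconds as in events.tsv:

```python
SFREQ = 200.0  # Hz, from this dataset's Technical Details

def stimulation_window(onset_s, pre_s=60.0, post_s=30.0, sfreq=SFREQ):
    """Hypothetical helper: (start, stop) sample indices around one
    stimulation, following the ~90 s segment layout described above."""
    start = int(round((onset_s - pre_s) * sfreq))
    stop = int(round((onset_s + post_s) * sfreq))
    return max(start, 0), stop  # clip to the start of the recording

# A full 90 s segment spans 18,000 samples at 200 Hz:
start, stop = stimulation_window(onset_s=60.0)
```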
The ieeg.json files contain the electrical stimulation parameters for each session, set by the neurosurgeon at each regular clinical follow-up. The iEEG data were saved in EDF format, organized according to the Brain Imaging Data Structure (BIDS), and published on OpenNeuro. Patients were included in this dataset if their seizure events were recorded intracranially for more than six months. For each subject, one week is treated as a session, which includes all seizures within a day with high-frequency seizure onset during that week. The dataset can be used to evaluate alterations in seizure-onset patterns during the development of epilepsy, as well as changes in iEEG characteristics after electrical stimulation. We technically validated the dataset through signal analyses such as power spectral analysis, calculation of envelope length, and calculation of phase-locking value. ## Dataset Information | Dataset ID | `DS007095` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | RNS_Epilepsy-iBIDS | | Author (year) | `Feng2025` | | Canonical | — | | Importable as | `DS007095`, `Feng2025` | | Year | 2025 | | Authors | Chen Feng, Haoqi Ni, Zhoule Zhu, Hongjie Jiang, Zhe Zheng, Wenjie Ming, Shuang Wang, Kedi Xu, Junming Zhu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007095.v1.0.0](https://doi.org/10.18112/openneuro.ds007095.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007095) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) | [Source URL](https://openneuro.org/datasets/ds007095) | ### Copy-paste BibTeX ```bibtex @dataset{ds007095, title = {RNS_Epilepsy-iBIDS}, author = {Chen Feng and Haoqi Ni and Zhoule Zhu and Hongjie Jiang and Zhe Zheng and Wenjie Ming and Shuang Wang and Kedi 
Xu and Junming Zhu}, doi = {10.18112/openneuro.ds007095.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007095.v1.0.0}, } ``` ## Technical Details - Subjects: 8 - Recordings: 6019 - Tasks: 1 - Channels: 2 - Sampling rate (Hz): 200.0 - Duration (hours): 154.5963888888889 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 497.8 MB - File count: 6019 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007095.v1.0.0 - Source: openneuro - OpenNeuro: [ds007095](https://openneuro.org/datasets/ds007095) - NeMAR: [ds007095](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) ## API Reference Use the `DS007095` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RNS_Epilepsy-iBIDS * **Study:** `ds007095` (OpenNeuro) * **Author (year):** `Feng2025` * **Canonical:** — Also importable as: `DS007095`, `Feng2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 6019; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007095](https://openneuro.org/datasets/ds007095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007095](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) DOI: [https://doi.org/10.18112/openneuro.ds007095.v1.0.0](https://doi.org/10.18112/openneuro.ds007095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007095 >>> dataset = DS007095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007095) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS007096: eeg dataset, 292 subjects *PURSUE N170 Face Perception* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed,C.L. (2025). *PURSUE N170 Face Perception*. 
[10.18112/openneuro.ds007096.v1.0.0](https://doi.org/10.18112/openneuro.ds007096.v1.0.0) Modality: eeg Subjects: 292 Recordings: 292 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007096 dataset = DS007096(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007096(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007096( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007096, title = {PURSUE N170 Face Perception}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007096.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007096.v1.0.0}, } ``` ## About This Dataset **README** Face Perception Task from the PURSUE project (pursureerp.com). Data collected from participants at three primarily undergraduate academic institutions (Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design can be found in the publication by Kappenman et al. (2021), ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Details of the task are found in the supplementary materials. 
Race Key: “Levels”: { “x1”: “White”, “x2”: “Black/African American”, “x3”: “Native American”, “x4”: “Asian”, “x5”: “Pacific Islander”, “x6”: “Hispanic/Latino”, “x7”: “Other”, “x8”: “Prefer not to respond”, “x9”: “Chose more than one response”, “”: “empty” } ## Dataset Information | Dataset ID | `DS007096` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE N170 Face Perception | | Author (year) | `Couperus2025_PURSUE_N170_Face` | | Canonical | `Couperus2017` | | Importable as | `DS007096`, `Couperus2025_PURSUE_N170_Face`, `Couperus2017` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007096.v1.0.0](https://doi.org/10.18112/openneuro.ds007096.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007096) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) | [Source URL](https://openneuro.org/datasets/ds007096) | ### Copy-paste BibTeX ```bibtex @dataset{ds007096, title = {PURSUE N170 Face Perception}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007096.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007096.v1.0.0}, } ``` ## Technical Details - Subjects: 292 - Recordings: 292 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 51.75700166666667 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 11.6 GB - File count: 292 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007096.v1.0.0 - Source: openneuro - OpenNeuro: [ds007096](https://openneuro.org/datasets/ds007096) - NeMAR: [ds007096](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) ## API Reference Use the `DS007096` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007096(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N170 Face Perception * **Study:** `ds007096` (OpenNeuro) * **Author (year):** `Couperus2025_PURSUE_N170_Face` * **Canonical:** `Couperus2017` Also importable as: `DS007096`, `Couperus2025_PURSUE_N170_Face`, `Couperus2017`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
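The MongoDB-style matching described in the notes above can be illustrated with a small, self-contained sketch. This is plain Python with no EEGDash required; the `matches` helper and the toy record dicts are hypothetical stand-ins for the server-side filter, not the library's actual implementation:

```python
def matches(record: dict, query: dict) -> bool:
    """Tiny evaluator for a subset of MongoDB-style filters ($in, $gte, equality)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            if "$in" in cond and value not in cond["$in"]:
                return False
            if "$gte" in cond and not (value is not None and value >= cond["$gte"]):
                return False
        elif value != cond:
            return False
    return True

# Toy metadata records, shaped loosely like the per-recording summaries above.
records = [
    {"dataset": "ds007096", "subject": "01", "sfreq": 500.0},
    {"dataset": "ds007096", "subject": "02", "sfreq": 500.0},
    {"dataset": "ds007096", "subject": "17", "sfreq": 500.0},
]

# The fixed dataset filter is AND-merged with the user query
# (which, per the parameters above, must not set "dataset" itself).
user_query = {"subject": {"$in": ["01", "02"]}}
merged = {"dataset": "ds007096", **user_query}
selected = [r["subject"] for r in records if matches(r, merged)]
print(selected)  # ['01', '02']
```

The same `{"subject": {"$in": [...]}}` dictionary can be passed directly as the `query` argument in the quickstart examples.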
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007096](https://openneuro.org/datasets/ds007096) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007096](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) DOI: [https://doi.org/10.18112/openneuro.ds007096.v1.0.0](https://doi.org/10.18112/openneuro.ds007096.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007096 >>> dataset = DS007096(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007096) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007118: ieeg dataset, 65 subjects *iEEG_comprehensive_HFA_model_part1* Access recordings and metadata through EEGDash. **Citation:** Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano (2025). *iEEG_comprehensive_HFA_model_part1*. 
[10.18112/openneuro.ds007118.v1.0.0](https://doi.org/10.18112/openneuro.ds007118.v1.0.0) Modality: ieeg Subjects: 65 Recordings: 82 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007118 dataset = DS007118(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007118(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007118( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007118, title = {iEEG_comprehensive_HFA_model_part1}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007118.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007118.v1.0.0}, } ``` ## About This Dataset This dataset contains intracranial EEG data recorded during non-REM sleep and used in Hatano et al. (in press). Authors: Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. 
Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano Funding: National Institutes of Health (NIH; NS064033 to E.A.); Uehara Memorial Foundation Postdoctoral Fellowship (202441017 to K.H.; 20210301 to H.U.); Japan Society for the Promotion of Science (JP22J23281, JP22KJ0323, and 202560576 to N.K.; 202560628 to H.U.; JP19K09494 and 22K09296 to M.I.) ## Dataset Information | Dataset ID | `DS007118` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG_comprehensive_HFA_model_part1 | | Author (year) | `Hatano2025_part1` | | Canonical | `Hatano` | | Importable as | `DS007118`, `Hatano2025_part1`, `Hatano` | | Year | 2025 | | Authors | Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007118.v1.0.0](https://doi.org/10.18112/openneuro.ds007118.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007118) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) | [Source URL](https://openneuro.org/datasets/ds007118) | ### Copy-paste BibTeX ```bibtex @dataset{ds007118, title = {iEEG_comprehensive_HFA_model_part1}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. 
Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007118.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007118.v1.0.0}, } ``` ## Technical Details - Subjects: 65 - Recordings: 82 - Tasks: 1 - Channels: 128 (21), 112 (17), 124 (6), 102 (5), 108 (4), 120 (4), 68 (3), 116 (3), 138 (3), 118 (3), 106 (2), 144 (2), 64 (2), 122, 114, 74, 94, 36, 132, 58 - Sampling rate (Hz): 1000.0 - Duration (hours): 44.215 - Pathology: Not specified - Modality: Sleep - Type: Sleep - Size on disk: 33.8 GB - File count: 82 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007118.v1.0.0 - Source: openneuro - OpenNeuro: [ds007118](https://openneuro.org/datasets/ds007118) - NeMAR: [ds007118](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) ## API Reference Use the `DS007118` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part1 * **Study:** `ds007118` (OpenNeuro) * **Author (year):** `Hatano2025_part1` * **Canonical:** `Hatano` Also importable as: `DS007118`, `Hatano2025_part1`, `Hatano`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 65; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007118](https://openneuro.org/datasets/ds007118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007118](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) DOI: [https://doi.org/10.18112/openneuro.ds007118.v1.0.0](https://doi.org/10.18112/openneuro.ds007118.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007118 >>> dataset = DS007118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007118) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS007119: ieeg dataset, 103 subjects *iEEG_comprehensive_HFA_model_part3* Access recordings and metadata through EEGDash. **Citation:** Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano (2025). *iEEG_comprehensive_HFA_model_part3*. [10.18112/openneuro.ds007119.v1.0.0](https://doi.org/10.18112/openneuro.ds007119.v1.0.0) Modality: ieeg Subjects: 103 Recordings: 106 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007119 dataset = DS007119(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007119(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007119( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007119, title = {iEEG_comprehensive_HFA_model_part3}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007119.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007119.v1.0.0}, } ``` ## About This Dataset This dataset contains intracranial EEG data recorded during non-REM sleep and used in Hatano et al. (in press). Authors: Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano Funding: National Institutes of Health (NIH; NS064033 to E.A.); Uehara Memorial Foundation Postdoctoral Fellowship (202441017 to K.H.; 20210301 to H.U.); Japan Society for the Promotion of Science (JP22J23281, JP22KJ0323, and 202560576 to N.K.; 202560628 to H.U.; JP19K09494 and 22K09296 to M.I.) ## Dataset Information | Dataset ID | `DS007119` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG_comprehensive_HFA_model_part3 | | Author (year) | `Hatano2025_part3` | | Canonical | — | | Importable as | `DS007119`, `Hatano2025_part3` | | Year | 2025 | | Authors | Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. 
Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007119.v1.0.0](https://doi.org/10.18112/openneuro.ds007119.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007119) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) | [Source URL](https://openneuro.org/datasets/ds007119) | ### Copy-paste BibTeX ```bibtex @dataset{ds007119, title = {iEEG_comprehensive_HFA_model_part3}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007119.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007119.v1.0.0}, } ``` ## Technical Details - Subjects: 103 - Recordings: 106 - Tasks: 1 - Channels: 128 (12), 124 (5), 86 (4), 58 (4), 134 (4), 120 (4), 102 (4), 78 (4), 94 (4), 100 (3), 110 (3), 118 (3), 136 (2), 112 (2), 96 (2), 130 (2), 132 (2), 148 (2), 64 (2), 108 (2), 74 (2), 84 (2), 34 (2), 72 (2), 140 (2), 122, 126, 52, 6, 116, 73, 114, 90, 76, 70, 48, 88, 54, 146, 180, 135, 138, 142, 28, 152, 82, 46, 38, 144, 44, 104 - Sampling rate (Hz): 1000.0 - Duration (hours): 41.775 - Pathology: Not specified - Modality: Sleep - Type: Sleep - Size on disk: 32.6 GB - File count: 106 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007119.v1.0.0 - Source: openneuro - OpenNeuro: [ds007119](https://openneuro.org/datasets/ds007119) - NeMAR: [ds007119](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) ## API Reference Use the `DS007119` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part3 * **Study:** `ds007119` (OpenNeuro) * **Author (year):** `Hatano2025_part3` * **Canonical:** — Also importable as: `DS007119`, `Hatano2025_part3`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 103; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007119](https://openneuro.org/datasets/ds007119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007119](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) DOI: [https://doi.org/10.18112/openneuro.ds007119.v1.0.0](https://doi.org/10.18112/openneuro.ds007119.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007119 >>> dataset = DS007119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007119) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS007120: ieeg dataset, 65 subjects *iEEG_comprehensive_HFA_model_part2* Access recordings and metadata through EEGDash. **Citation:** Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano (2025). *iEEG_comprehensive_HFA_model_part2*. 
[10.18112/openneuro.ds007120.v1.0.0](https://doi.org/10.18112/openneuro.ds007120.v1.0.0) Modality: ieeg Subjects: 65 Recordings: 70 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007120 dataset = DS007120(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007120(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007120( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007120, title = {iEEG_comprehensive_HFA_model_part2}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007120.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007120.v1.0.0}, } ``` ## About This Dataset This dataset contains intracranial EEG data recorded during non-REM sleep and used in Hatano et al. (in press). Authors: Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. 
Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano Funding: National Institutes of Health (NIH; NS064033 to E.A.); Uehara Memorial Foundation Postdoctoral Fellowship (202441017 to K.H.; 20210301 to H.U.); Japan Society for the Promotion of Science (JP22J23281, JP22KJ0323, and 202560576 to N.K.; 202560628 to H.U.; JP19K09494 and 22K09296 to M.I.) ## Dataset Information | Dataset ID | `DS007120` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | iEEG_comprehensive_HFA_model_part2 | | Author (year) | `Hatano2025_part2` | | Canonical | — | | Importable as | `DS007120`, `Hatano2025_part2` | | Year | 2025 | | Authors | Keisuke Hatano, Naoto Kuroda, Hiroshi Uda, Kazuki Sakakura, Michael J. Cools, Aimee F. Luat, Shin-Ichiro Osawa, Hitoshi Nemoto, Kazushi Ukishiro, Hidenori Endo, Nobukazu Nakasato, Yutaro Takayama, Keiya Iijima, Masaki Iwasaki, Eishi Asano | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007120.v1.0.0](https://doi.org/10.18112/openneuro.ds007120.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007120) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) | [Source URL](https://openneuro.org/datasets/ds007120) | ### Copy-paste BibTeX ```bibtex @dataset{ds007120, title = {iEEG_comprehensive_HFA_model_part2}, author = {Keisuke Hatano and Naoto Kuroda and Hiroshi Uda and Kazuki Sakakura and Michael J. Cools and Aimee F. 
Luat and Shin-Ichiro Osawa and Hitoshi Nemoto and Kazushi Ukishiro and Hidenori Endo and Nobukazu Nakasato and Yutaro Takayama and Keiya Iijima and Masaki Iwasaki and Eishi Asano}, doi = {10.18112/openneuro.ds007120.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007120.v1.0.0}, } ``` ## Technical Details - Subjects: 65 - Recordings: 70 - Tasks: 1 - Channels: 128 (13), 112 (10), 104 (5), 132 (4), 110 (4), 118 (3), 138 (2), 150 (2), 56 (2), 126 (2), 120 (2), 108 (2), 130 (2), 140 (2), 106 (2), 100 (2), 34, 144, 156, 116, 122, 136, 84, 134, 124, 98, 164 - Sampling rate (Hz): 1000.0 - Duration (hours): 41.86305555555556 - Pathology: Epilepsy - Modality: Sleep - Type: Sleep - Size on disk: 33.0 GB - File count: 70 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007120.v1.0.0 - Source: openneuro - OpenNeuro: [ds007120](https://openneuro.org/datasets/ds007120) - NeMAR: [ds007120](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) ## API Reference Use the `DS007120` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part2 * **Study:** `ds007120` (OpenNeuro) * **Author (year):** `Hatano2025_part2` * **Canonical:** — Also importable as: `DS007120`, `Hatano2025_part2`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 65; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
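The cache layout implied by these parameters can be sketched with `pathlib`: the class documents its local cache directory as `cache_dir / dataset_id`, and BIDS datasets place each participant under a `sub-<label>` folder. The subject path here is illustrative, not a guarantee of the exact on-disk tree:

```python
from pathlib import Path

cache_dir = Path("./data")
data_dir = cache_dir / "ds007120"   # documented as cache_dir / dataset_id
subject_dir = data_dir / "sub-01"   # standard BIDS subject directory (illustrative)
print(data_dir.as_posix())          # data/ds007120
```

Creating the directory up front (e.g. `data_dir.mkdir(parents=True, exist_ok=True)`) mirrors what a caching layer typically does before downloading.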
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007120](https://openneuro.org/datasets/ds007120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007120](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) DOI: [https://doi.org/10.18112/openneuro.ds007120.v1.0.0](https://doi.org/10.18112/openneuro.ds007120.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007120 >>> dataset = DS007120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007120) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS007137: eeg dataset, 294 subjects *PURSUE N2pc Visual Search* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed, C.L. (2025). *PURSUE N2pc Visual Search*. [10.18112/openneuro.ds007137.v1.0.0](https://doi.org/10.18112/openneuro.ds007137.v1.0.0) Modality: eeg Subjects: 294 Recordings: 294 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007137 dataset = DS007137(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007137(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007137( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007137, title = {PURSUE N2pc Visual Search}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007137.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007137.v1.0.0}, } ``` ## About This Dataset Visual Search Experiment from the PURSUE project (pursureerp.com). 
Data collected from participants at 3 different primarily undergraduate academic institutions (Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design can be found in the publication by Kappenman et al.(2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465. Details of task are found in the supplementary materials. Race Key: “Levels”: { “x1”: “White”, “x2”: “Black/African American”, “x3”: “Native American”, “x4”: “Asian”, “x5”: “Pacific Islander”, “x6”: “Hispanic/Latino”, “x7”: “Other”, “x8”: “Prefer not to respond”, “x9”: “Chose more than one response”, “” : “empty” } ## Dataset Information | Dataset ID | `DS007137` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE N2pc Visual Search | | Author (year) | `Couperus2025_N2PC` | | Canonical | `Couperus2021_N2pc` | | Importable as | `DS007137`, `Couperus2025_N2PC`, `Couperus2021_N2pc` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007137.v1.0.0](https://doi.org/10.18112/openneuro.ds007137.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007137) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) | [Source URL](https://openneuro.org/datasets/ds007137) | ### Copy-paste BibTeX ```bibtex @dataset{ds007137, title = {PURSUE N2pc Visual Search}, author = {Couperus, J.W. and Bukach, C.M. 
and Reed, C.L.}, doi = {10.18112/openneuro.ds007137.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007137.v1.0.0}, } ``` ## Technical Details - Subjects: 294 - Recordings: 294 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 54.44088055555556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 12.2 GB - File count: 294 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007137.v1.0.0 - Source: openneuro - OpenNeuro: [ds007137](https://openneuro.org/datasets/ds007137) - NeMAR: [ds007137](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) ## API Reference Use the `DS007137` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N2pc Visual Search * **Study:** `ds007137` (OpenNeuro) * **Author (year):** `Couperus2025_N2PC` * **Canonical:** `Couperus2021_N2pc` Also importable as: `DS007137`, `Couperus2025_N2PC`, `Couperus2021_N2pc`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 294; recordings: 294; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007137](https://openneuro.org/datasets/ds007137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007137](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) DOI: [https://doi.org/10.18112/openneuro.ds007137.v1.0.0](https://doi.org/10.18112/openneuro.ds007137.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007137 >>> dataset = DS007137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007137) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007139: eeg dataset, 292 subjects *PURSUE LRP/ERN Flanker* Access recordings and metadata through EEGDash. **Citation:** Couperus, J.W., Bukach, C.M., Reed, C.L. (2025). *PURSUE LRP/ERN Flanker*. 
[10.18112/openneuro.ds007139.v1.0.0](https://doi.org/10.18112/openneuro.ds007139.v1.0.0) Modality: eeg Subjects: 292 Recordings: 292 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007139 dataset = DS007139(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007139(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007139( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007139, title = {PURSUE LRP/ERN Flanker}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007139.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007139.v1.0.0}, } ``` ## About This Dataset Flanker Experiment from the PURSUE project (pursureerp.com). Data were collected from participants at three primarily undergraduate academic institutions (in Southern California, Massachusetts, and Virginia) in 2017 and 2018. The task design follows Kappenman et al. (2021), ERP CORE: An open resource for human event-related potential research, NeuroImage, 225, 117465; details of the task are given in that paper's supplementary materials.
Race Key: “Levels”: { “x1”: “White”, “x2”: “Black/African American”, “x3”: “Native American”, “x4”: “Asian”, “x5”: “Pacific Islander”, “x6”: “Hispanic/Latino”, “x7”: “Other”, “x8”: “Prefer not to respond”, “x9”: “Chose more than one response”, “” : “empty” } ## Dataset Information | Dataset ID | `DS007139` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PURSUE LRP/ERN Flanker | | Author (year) | `Couperus2025_LRP` | | Canonical | `Couperus2021_LRP` | | Importable as | `DS007139`, `Couperus2025_LRP`, `Couperus2021_LRP` | | Year | 2025 | | Authors | Couperus, J.W., Bukach, C.M., Reed, C.L. | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007139.v1.0.0](https://doi.org/10.18112/openneuro.ds007139.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007139) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) | [Source URL](https://openneuro.org/datasets/ds007139) | ### Copy-paste BibTeX ```bibtex @dataset{ds007139, title = {PURSUE LRP/ERN Flanker}, author = {Couperus, J.W. and Bukach, C.M. and Reed, C.L.}, doi = {10.18112/openneuro.ds007139.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007139.v1.0.0}, } ``` ## Technical Details - Subjects: 292 - Recordings: 292 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 64.58555888888888 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 14.5 GB - File count: 292 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007139.v1.0.0 - Source: openneuro - OpenNeuro: [ds007139](https://openneuro.org/datasets/ds007139) - NeMAR: [ds007139](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) ## API Reference Use the `DS007139` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE LRP/ERN Flanker * **Study:** `ds007139` (OpenNeuro) * **Author (year):** `Couperus2025_LRP` * **Canonical:** `Couperus2021_LRP` Also importable as: `DS007139`, `Couperus2025_LRP`, `Couperus2021_LRP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007139](https://openneuro.org/datasets/ds007139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007139](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) DOI: [https://doi.org/10.18112/openneuro.ds007139.v1.0.0](https://doi.org/10.18112/openneuro.ds007139.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007139 >>> dataset = DS007139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007139) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007162: eeg dataset, 34 subjects *Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)* Access recordings and metadata through EEGDash. **Citation:** [Unspecified1], [Unspecified2] (2026). *Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)*. 
[10.18112/openneuro.ds007162.v1.0.0](https://doi.org/10.18112/openneuro.ds007162.v1.0.0) Modality: eeg Subjects: 34 Recordings: 69 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007162 dataset = DS007162(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007162(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007162( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007162, title = {Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)}, author = {[Unspecified1] and [Unspecified2]}, doi = {10.18112/openneuro.ds007162.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007162.v1.0.0}, } ``` ## About This Dataset **Dataset Description** This dataset contains the EEG data accompanying the study **“Adaptive recruitment of cortex-wide recurrence for visual object recognition”** (Link to preprint: [https://www.biorxiv.org/content/10.1101/2025.10.17.682937v2](https://www.biorxiv.org/content/10.1101/2025.10.17.682937v2)). **Please cite the above paper if you use this data.** **Dataset Overview** **- 34 participants, each with 1 session** **Experimental Design** The EEG experiment used a stimulus set of 242 images (121 “challenge” and 121 “control” images) derived from comparisons between human behavioural performance and AlexNet. - *Main task:* Each trial consisted of a single image presented for 200 ms followed by a 100 ms blank. Trials were grouped into sequences of 14 images. 
At the end of each sequence, participants reported whether a paper clip appeared anywhere in that sequence. **Derivatives** The derivatives/ folder contains outputs from the decoding analyses, including time-resolved decoding accuracy matrices for object identity. ## Dataset Information | Dataset ID | `DS007162` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG) | | Author (year) | `DS7162_VisualRecognition` | | Canonical | — | | Importable as | `DS007162`, `DS7162_VisualRecognition` | | Year | 2026 | | Authors | [Unspecified1], [Unspecified2] | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007162.v1.0.0](https://doi.org/10.18112/openneuro.ds007162.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007162) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) | [Source URL](https://openneuro.org/datasets/ds007162) | ### Copy-paste BibTeX ```bibtex @dataset{ds007162, title = {Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG)}, author = {[Unspecified1] and [Unspecified2]}, doi = {10.18112/openneuro.ds007162.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007162.v1.0.0}, } ``` ## Technical Details - Subjects: 34 - Recordings: 69 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 71.81638055555555 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 60.9 GB - File count: 69 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007162.v1.0.0 - Source: openneuro - OpenNeuro: [ds007162](https://openneuro.org/datasets/ds007162) - NeMAR: [ds007162](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) ## API Reference Use the `DS007162` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG) * **Study:** `ds007162` (OpenNeuro) * **Author (year):** `DS7162_VisualRecognition` * **Canonical:** — Also importable as: `DS007162`, `DS7162_VisualRecognition`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
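Given the trial structure described for this dataset (a 200 ms image followed by a 100 ms blank, grouped into sequences of 14), the expected within-sequence onsets follow directly from the 300 ms stimulus onset asynchrony; a sketch assuming the 1000 Hz sampling rate listed in the technical details:

```python
# Trial-timing sketch derived from the dataset description; the exact event
# coding in the BIDS events.tsv should be checked against the data itself.
STIM_MS = 200       # image duration
BLANK_MS = 100      # blank interval after each image
SEQ_LEN = 14        # images per sequence
SFREQ = 1000.0      # Hz, from the technical details

soa_ms = STIM_MS + BLANK_MS                       # stimulus onset asynchrony
onsets_ms = [i * soa_ms for i in range(SEQ_LEN)]  # onset of each image
onset_samples = [int(t * SFREQ / 1000.0) for t in onsets_ms]

# A full 14-image sequence spans 14 * 300 ms = 4.2 s of EEG.
seq_duration_s = SEQ_LEN * soa_ms / 1000.0
```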
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007162](https://openneuro.org/datasets/ds007162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007162](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) DOI: [https://doi.org/10.18112/openneuro.ds007162.v1.0.0](https://doi.org/10.18112/openneuro.ds007162.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007162 >>> dataset = DS007162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007162) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007169: eeg dataset, 18 subjects *Multimodal Cognitive Workload n-back Task, 4 Difficulties* Access recordings and metadata through EEGDash. **Citation:** Matthew Barras, Liam Booth (2026). *Multimodal Cognitive Workload n-back Task, 4 Difficulties*. 
[10.18112/openneuro.ds007169.v1.0.5](https://doi.org/10.18112/openneuro.ds007169.v1.0.5) Modality: eeg Subjects: 18 Recordings: 18 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007169 dataset = DS007169(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007169(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007169( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007169, title = {Multimodal Cognitive Workload n-back Task, 4 Difficulties}, author = {Matthew Barras and Liam Booth}, doi = {10.18112/openneuro.ds007169.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds007169.v1.0.5}, } ``` ## About This Dataset This dataset was generated from LSL/XDF recordings. Converted to BIDS with instructions and code [presented here](https://github.com/LMBooth/QT-nback_study/tree/main/conversion_package) - Original recordings are stored under sourcedata/xdf/ as .xdf files (non-BIDS). - EEG was converted to BrainVision format (.vhdr/.eeg/.vmrk) under each sub-\*/eeg/. - \*_events.tsv was generated from marker streams and then aligned so onset is relative to the EEG start time. - Marker streams include task markers (n-backMarkers) and acquisition dropout annotations (UoHDataOffsetStream); events include a marker_stream column and marker definitions are in task-nback_events.json. 
- Pupil Labs gaze/pupil data was exported from the XDF pupil_capture stream into sub-\*/pupil as \*_task-nback_pupil.tsv + \*_task-nback_eyetrack.json (PhysioType=eyetrack). - ECG is captured on the EEG system; the ECG channel is typed in \*_channels.tsv and exported as \*_recording-ecg_physio.tsv + \*_recording-ecg_physio.json under sub-\*/ecg. - Analysis note: participants excluded from the analysis remain in participants.tsv with analysis_included=false; no epoch rejection was applied to this raw dataset. - Participant IDs match the original XDF filenames; missing IDs correspond to excluded participants.
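The events alignment described above — marker onsets re-referenced to the EEG start time — reduces to subtracting the EEG stream's first LSL timestamp from each marker timestamp; a sketch with made-up clock values:

```python
def align_onsets(marker_ts, eeg_first_ts):
    """Re-reference LSL marker timestamps so onsets are relative to EEG start.

    This mirrors the alignment step described in the README; the timestamps
    used below are invented purely for illustration.
    """
    return [t - eeg_first_ts for t in marker_ts]

# Hypothetical LSL clock values (seconds)
eeg_first_ts = 1052.25
marker_ts = [1060.25, 1061.0, 1061.75]           # one marker per trial
onsets = align_onsets(marker_ts, eeg_first_ts)   # [8.0, 8.75, 9.5]
```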
Participants - N_recorded: 20 - N_released: 18 - Exclusions: 2 participants excluded due to data quality failures (sub-013, sub-017). - Demographics in participants.tsv: age (years), sex, handedness. - Excluded IDs remain in participants.tsv with analysis_included=false. Hardware and data collection - Combined EEG+ECG mobile EEG system (Bateson and Asghar, 2021; Clewett et al., 2016) and Pupil Labs Pupil Core, synchronized via Lab Streaming Layer (LSL). - EEG: 19-channel 10-20 montage (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2), Ag/AgCl electrodes with linked-ear reference, 250 Hz; impedances checked and Neurgel EEG gel applied. - ECG: 3-lead on the same system; positive lead right shoulder/clavicle, negative lead left shoulder/clavicle, feedback lead lower left torso. - Pupillometry: Pupil Labs Pupil Core eye tracking with infrared illuminators; LSL relay with asynchronous sampling (timestamps per sample). Protocol summary - Tutorial phase with feedback: 20 trials at each level (1-back through 4-back) after a 60 s fixation. - Main experiment: 100 trials at each level (1-back through 4-back) with no feedback. - Each level begins with a 6.0 s instruction screen (“Remember N steps back”). - Each trial shows a letter for 1.0 s, followed by a 0.7 s blank interval. - Task events encode nback_level, key_press, matched, response_accuracy, and tutorial flags in task-nback_events.json. Task: nback Release notes - Recorded 20 participants; released 18. - Reason: data quality failures. - Participant IDs match original XDF filenames; missing IDs indicate excluded participants. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. 
Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 Clewett CJ, Langley P, Bateson AD et al (2016) Non-invasive, home-based electroencephalography hypoglycaemia warning system for personal monitoring using skin surface electrodes: a single-case feasibility study. Healthc Technol Lett 3:2-5. [https://doi.org/10.1049/htl.2015.0037](https://doi.org/10.1049/htl.2015.0037) Bateson AD, Asghar AUR (2021) Development and evaluation of a smartphone-based electroencephalography (EEG) system. IEEE Access. [https://doi.org/10.1109/ACCESS.2021.3079992](https://doi.org/10.1109/ACCESS.2021.3079992) ## Dataset Information | Dataset ID | `DS007169` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multimodal Cognitive Workload n-back Task, 4 Difficulties | | Author (year) | `Barras2026_Multimodal` | | Canonical | `Barras2021` | | Importable as | `DS007169`, `Barras2026_Multimodal`, `Barras2021` | | Year | 2026 | | Authors | Matthew Barras, Liam Booth | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007169.v1.0.5](https://doi.org/10.18112/openneuro.ds007169.v1.0.5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007169) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) | [Source URL](https://openneuro.org/datasets/ds007169) | ### Copy-paste BibTeX ```bibtex @dataset{ds007169, title = {Multimodal Cognitive Workload n-back Task, 4 Difficulties}, author = {Matthew Barras and Liam Booth}, doi = {10.18112/openneuro.ds007169.v1.0.5}, url = {https://doi.org/10.18112/openneuro.ds007169.v1.0.5}, }
``` ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 24 - Sampling rate (Hz): 250.0 - Duration (hours): 5.090633333333333 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 421.7 MB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007169.v1.0.5 - Source: openneuro - OpenNeuro: [ds007169](https://openneuro.org/datasets/ds007169) - NeMAR: [ds007169](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) ## API Reference Use the `DS007169` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal Cognitive Workload n-back Task, 4 Difficulties * **Study:** `ds007169` (OpenNeuro) * **Author (year):** `Barras2026_Multimodal` * **Canonical:** `Barras2021` Also importable as: `DS007169`, `Barras2026_Multimodal`, `Barras2021`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007169](https://openneuro.org/datasets/ds007169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007169](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) DOI: [https://doi.org/10.18112/openneuro.ds007169.v1.0.5](https://doi.org/10.18112/openneuro.ds007169.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS007169 >>> dataset = DS007169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007169) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007172: eeg dataset, 100 subjects *EEG-Asymmetries Dataset* Access recordings and metadata through EEGDash. **Citation:** Petunia Reinke, Lisa Deneke, Sebastian Ocklenburg (2026). *EEG-Asymmetries Dataset*. 
[10.18112/openneuro.ds007172.v1.0.0](https://doi.org/10.18112/openneuro.ds007172.v1.0.0) Modality: eeg Subjects: 100 Recordings: 501 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007172 dataset = DS007172(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007172(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007172( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007172, title = {EEG-Asymmetries Dataset}, author = {Petunia Reinke and Lisa Deneke and Sebastian Ocklenburg}, doi = {10.18112/openneuro.ds007172.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007172.v1.0.0}, } ``` ## About This Dataset **References BIDS** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 **References Dataset** Reinke, P., Deneke, L., & Ocklenburg, S. (2025).
Hemispheric asymmetries in the EEG: Is there an association between N1 lateralization and alpha asymmetry? Laterality, 1–50. Advance online publication. [https://doi.org/10.1080/1357650X.2025.2591660](https://doi.org/10.1080/1357650X.2025.2591660) ### View full README **References BIDS** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 **References Dataset** Reinke, P., Deneke, L., & Ocklenburg, S. (2025). Hemispheric asymmetries in the EEG: Is there an association between N1 lateralization and alpha asymmetry? Laterality, 1–50. Advance online publication. [https://doi.org/10.1080/1357650X.2025.2591660](https://doi.org/10.1080/1357650X.2025.2591660) **Dataset description** The dataset comprises 100 participants (53 females, 46 males, 1 diverse individual). 27 of the females were right-handed; the rest were non-right-hand dominant. Of the males, 24 were right-handed, while the rest were non-right-hand dominant. The mean age of the participants was 25.6 years (SD 4.91). All participants reported normal or corrected-to-normal vision, had no unilateral sensory or motor deficits, no history of mental illnesses or neurologic disorders, and were currently not taking any medication.
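The three-digit trigger codes listed under “Trigger description” below follow a positional scheme (hundreds digit: response hand; tens digit: face sex; units digit: face variant). A hypothetical decoder sketch, not part of the dataset or of EEGDash:

```python
# Positional decoding of the three-digit faces/emotions trigger codes
# described in this README (illustrative helper, not dataset code).
HAND = {1: "right", 2: "left"}
SEX = {1: "male", 2: "female"}

def decode_face_trigger(code: int) -> tuple[str, str, int]:
    """Split e.g. 127 into (hand, sex, variant-digit)."""
    return HAND[code // 100], SEX[(code // 10) % 10], code % 10

print(decode_face_trigger(127))  # ('right', 'female', 7)
```

The variant digit maps to race in the faces task (7/8) and to emotion in the emotions task (1–4), per the trigger table below.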
All participants started with a resting state (RS) of approximately eight minutes, alternating periods of open and closed eyes (each period lasted 63 seconds, giving 4.2 minutes of eyes-open and 4.2 minutes of eyes-closed resting state). After the RS, each participant completed four tasks in randomized order. Each task was constructed in the same way: the participants were instructed verbally as well as in written form directly before each trial began. They were told to react only to the target stimuli (animal names, female faces, and houses with pitched roofs) via a press on the space bar. Each trial consisted of three blocks: one short practice block, one block where answers were given with the right hand, and one block where answers were given with the left hand. The starting hand was randomized across participants. During the trials, words (words task) or pictures (faces, emotions, and houses tasks) were shown in the center of the screen for one second, followed by a fixation cross for 500–700 ms. After 80 stimuli, the response hand was changed, giving a total of 160 stimulus presentations per task. For more details, see: Reinke, P., Deneke, L., & Ocklenburg, S. (2025). Hemispheric asymmetries in the EEG: Is there an association between N1 lateralization and alpha asymmetry? Laterality, 1–50. Advance online publication.
[https://doi.org/10.1080/1357650X.2025.2591660](https://doi.org/10.1080/1357650X.2025.2591660)

**Trigger description**

Resting State (“rest”):
- Rest/Open: 1
- Rest/Closed: 2

Words Task (“words”):
- Right Hand & animal name: 13
- Right Hand & non-animal word: 14
- Left Hand & animal name: 23
- Left Hand & non-animal word: 24

Faces Task (“faces”):
- Right Hand – Male – Black: 117
- Right Hand – Male – White: 118
- Right Hand – Female – Black: 127
- Right Hand – Female – White: 128
- Left Hand – Male – Black: 217
- Left Hand – Male – White: 218
- Left Hand – Female – Black: 227
- Left Hand – Female – White: 228

Emotions Task (“emotions”):
- Right Hand – Male – Angry: 111
- Right Hand – Male – Fearful: 112
- Right Hand – Male – Happy (mouth open): 113
- Right Hand – Male – Happy (mouth closed): 114
- Right Hand – Female – Angry: 121
- Right Hand – Female – Fearful: 122
- Right Hand – Female – Happy (mouth open): 123
- Right Hand – Female – Happy (mouth closed): 124
- Left Hand – Male – Angry: 211
- Left Hand – Male – Fearful: 212
- Left Hand – Male – Happy (mouth open): 213
- Left Hand – Male – Happy (mouth closed): 214
- Left Hand – Female – Angry: 221
- Left Hand – Female – Fearful: 222
- Left Hand – Female – Happy (mouth open): 223
- Left Hand – Female – Happy (mouth closed): 224

Houses Task (“houses”):
- Right Hand & Pitched Roof: 11
- Right Hand & Flat Roof: 12
- Left Hand & Pitched Roof: 21
- Left Hand & Flat Roof: 22

## Dataset Information

| Dataset ID | `DS007172` |
|---|---|
| Title | EEG-Asymmetries Dataset |
| Author (year) | `Reinke2026` |
| Canonical | `EEGAsymmetries` |
| Importable as | `DS007172`, `Reinke2026`, `EEGAsymmetries` |
| Year | 2026 |
| Authors | Petunia Reinke, Lisa Deneke, Sebastian Ocklenburg |
| License | CC0 |
| Citation / DOI |
[doi:10.18112/openneuro.ds007172.v1.0.0](https://doi.org/10.18112/openneuro.ds007172.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007172) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) | [Source URL](https://openneuro.org/datasets/ds007172) | ### Copy-paste BibTeX ```bibtex @dataset{ds007172, title = {EEG-Asymmetries Dataset}, author = {Petunia Reinke and Lisa Deneke and Sebastian Ocklenburg}, doi = {10.18112/openneuro.ds007172.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007172.v1.0.0}, } ``` ## Technical Details - Subjects: 100 - Recordings: 501 - Tasks: 6 - Channels: 32 (496), 29 (5) - Sampling rate (Hz): 500.0 (496), 1000.0 (5) - Duration (hours): 50.98400083333333 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 11.0 GB - File count: 501 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007172.v1.0.0 - Source: openneuro - OpenNeuro: [ds007172](https://openneuro.org/datasets/ds007172) - NeMAR: [ds007172](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) ## API Reference Use the `DS007172` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Asymmetries Dataset * **Study:** `ds007172` (OpenNeuro) * **Author (year):** `Reinke2026` * **Canonical:** `EEGAsymmetries` Also importable as: `DS007172`, `Reinke2026`, `EEGAsymmetries`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 100; recordings: 501; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007172](https://openneuro.org/datasets/ds007172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007172](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) DOI: [https://doi.org/10.18112/openneuro.ds007172.v1.0.0](https://doi.org/10.18112/openneuro.ds007172.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007172 >>> dataset = DS007172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007172) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007175: eeg dataset, 41 subjects *FFR-active-listening* Access recordings and metadata through EEGDash. **Citation:** [Unspecified] (2026). *FFR-active-listening*. [10.18112/openneuro.ds007175.v1.0.1](https://doi.org/10.18112/openneuro.ds007175.v1.0.1) Modality: eeg Subjects: 41 Recordings: 41 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007175 dataset = DS007175(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007175(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007175( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007175, title = {FFR-active-listening}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds007175.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007175.v1.0.1}, } ``` ## About This Dataset Title ## Dataset Information | Dataset ID | `DS007175` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FFR-active-listening | | Author (year) | `DS7175_FFR_ActiveListening` | | Canonical | — | | Importable as | `DS007175`, `DS7175_FFR_ActiveListening` | | Year | 2026 | | Authors | [Unspecified] | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007175.v1.0.1](https://doi.org/10.18112/openneuro.ds007175.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007175) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) | [Source URL](https://openneuro.org/datasets/ds007175) | ### Copy-paste BibTeX ```bibtex @dataset{ds007175, title = {FFR-active-listening}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds007175.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007175.v1.0.1}, } ``` ## Technical Details - Subjects: 41 - Recordings: 41 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 5000.0 - Duration (hours): 46.89787261111111 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 200.4 GB - File count: 41 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007175.v1.0.1 - Source: openneuro - OpenNeuro: [ds007175](https://openneuro.org/datasets/ds007175) - NeMAR: [ds007175](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) ## API Reference Use the `DS007175` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FFR-active-listening * **Study:** `ds007175` (OpenNeuro) * **Author (year):** `DS7175_FFR_ActiveListening` * **Canonical:** — Also importable as: `DS007175`, `DS7175_FFR_ActiveListening`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
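The technical details above (5000.0 Hz sampling, 65 channels, ~46.9 hours across 41 recordings, 200.4 GB on disk) make this FFR dataset unusually heavy for EEG. A back-of-envelope estimate of what a single recording costs, assuming float64 samples in memory as MNE typically uses; all input numbers are taken from the summary above:

```python
# Rough per-recording cost for this FFR dataset, from the summary stats.
sfreq_hz = 5000.0      # sampling rate
n_channels = 65
total_hours = 46.9     # duration summed over all recordings
n_recordings = 41
disk_gb = 200.4        # size on disk for the whole dataset

seconds_per_rec = total_hours / n_recordings * 3600
samples_per_rec = seconds_per_rec * sfreq_hz
ram_gb = samples_per_rec * n_channels * 8 / 1e9  # float64 = 8 bytes

print(f"~{disk_gb / n_recordings:.1f} GB on disk, "
      f"~{ram_gb:.1f} GB in RAM per recording")
```

At this sampling rate, avoiding eager preloading and downsampling early in the pipeline are advisable.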
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007175](https://openneuro.org/datasets/ds007175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007175](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) DOI: [https://doi.org/10.18112/openneuro.ds007175.v1.0.1](https://doi.org/10.18112/openneuro.ds007175.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007175 >>> dataset = DS007175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007175) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007176: eeg dataset, 45 subjects *Longitudinal EEG Test-Retest Reliability in Healthy Individuals* Access recordings and metadata through EEGDash. **Citation:** Verónica Henao Isaza, Valeria Cadavid Castro, Luisa María Zapata Saldarriaga, Yorguin-Jose Mantilla-Ramos, Jazmín Ximena Suarez Revelo, Carlos Andrés Tobón Quintero, John Fredy Ochoa Gómez (2026). *Longitudinal EEG Test-Retest Reliability in Healthy Individuals*. 
[10.18112/openneuro.ds007176.v1.0.1](https://doi.org/10.18112/openneuro.ds007176.v1.0.1) Modality: eeg Subjects: 45 Recordings: 300 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007176 dataset = DS007176(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007176(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007176( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007176, title = {Longitudinal EEG Test-Retest Reliability in Healthy Individuals}, author = {Verónica Henao Isaza and Valeria Cadavid Castro and Luisa María Zapata Saldarriaga and Yorguin-Jose Mantilla-Ramos and Jazmín Ximena Suarez Revelo and Carlos Andrés Tobón Quintero and John Fredy Ochoa Gómez}, doi = {10.18112/openneuro.ds007176.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007176.v1.0.1}, } ``` ## About This Dataset **Longitudinal EEG Test-Retest Reliability in Healthy Individuals** **Dataset Description** This dataset contains longitudinal resting-state EEG recordings from 43 healthy adults, collected over four sessions spanning approximately two years, with an average interval of 7.2 months between sessions. The dataset includes raw EEG data and relevant metadata following the BIDS standard. 
### View full README **Longitudinal EEG Test-Retest Reliability in Healthy Individuals** **Dataset Description** This dataset contains longitudinal resting-state EEG recordings from 43 healthy adults, collected over four sessions spanning approximately two years, with an average interval of 7.2 months between sessions. The dataset includes raw EEG data and relevant metadata following the BIDS standard. **Purpose** The dataset was acquired to assess the test-retest reliability of EEG signals using an automated preprocessing pipeline, including independent component analysis and wavelet-enhanced artifact removal. It allows for analysis of neural components, relative power in regions of interest (ROIs), and longitudinal stability of EEG measures. **Data Structure** - `dataset_description.json` : Dataset metadata and authorship information. - `participants.tsv` : Participant demographics and IDs. - `sub-XX/eeg/` : Folder for each participant containing EEG data files. **EEG Data** Each participant folder contains EEG recordings in BIDS-compliant format. Data include: - Raw EEG signals (`.eeg`, `.vhdr`, `.vmrk`) - Associated metadata files (`.json`) describing recording parameters and task information. **Usage Notes** - All participants provided written informed consent. - Data are de-identified and do not contain personally identifiable information. - Users should cite the following paper when using this dataset: Henao Isaza V, et al. Longitudinal test-retest reliability of quantitative EEG in healthy individuals using an automated preprocessing approach. DOI: 10.1016/j.bspc.2026.109484 **License** This dataset is publicly available under a Creative Commons CC0 license. 
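With four sessions per participant, test-retest analyses compare within-subject session pairs. A sketch of building those pairs from recording metadata; the metadata rows here are hypothetical stand-ins for what `dataset.description` exposes, and the entity values are assumptions:

```python
from itertools import combinations

# Hypothetical metadata rows; real subject/session values may differ.
records = [
    {"subject": "01", "session": "1"},
    {"subject": "01", "session": "2"},
    {"subject": "01", "session": "3"},
    {"subject": "02", "session": "1"},
    {"subject": "02", "session": "2"},
]

def within_subject_pairs(records):
    """All within-subject session pairs — the unit of comparison for
    test-retest reliability measures such as the ICC."""
    by_subject = {}
    for rec in records:
        by_subject.setdefault(rec["subject"], []).append(rec["session"])
    return {
        subj: list(combinations(sorted(sessions), 2))
        for subj, sessions in by_subject.items()
    }

pairs = within_subject_pairs(records)
# pairs["01"] -> [("1", "2"), ("1", "3"), ("2", "3")]
```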
## Dataset Information | Dataset ID | `DS007176` | |----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Longitudinal EEG Test-Retest Reliability in Healthy Individuals | | Author (year) | `Isaza2026_Longitudinal` | | Canonical | — | | Importable as | `DS007176`, `Isaza2026_Longitudinal` | | Year | 2026 | | Authors | Verónica Henao Isaza, Valeria Cadavid Castro, Luisa María Zapata Saldarriaga, Yorguin-Jose Mantilla-Ramos, Jazmín Ximena Suarez Revelo, Carlos Andrés Tobón Quintero, John Fredy Ochoa Gómez | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007176.v1.0.1](https://doi.org/10.18112/openneuro.ds007176.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007176) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) | [Source URL](https://openneuro.org/datasets/ds007176) | ### Copy-paste BibTeX ```bibtex @dataset{ds007176, title = {Longitudinal EEG Test-Retest Reliability in Healthy Individuals}, author = {Verónica Henao Isaza and Valeria Cadavid Castro and Luisa María Zapata Saldarriaga and Yorguin-Jose Mantilla-Ramos and Jazmín Ximena Suarez Revelo and Carlos Andrés Tobón Quintero and John Fredy Ochoa Gómez}, doi = {10.18112/openneuro.ds007176.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007176.v1.0.1}, } ``` ## Technical Details - Subjects: 45 - Recordings: 300 - Tasks: 2 - Channels: 60 - Sampling rate (Hz): 1000.0 - Duration (hours): 26.174016666666667 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 21.1 GB - File count: 300 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007176.v1.0.1 - Source: openneuro - OpenNeuro: [ds007176](https://openneuro.org/datasets/ds007176) - NeMAR: [ds007176](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) ## API Reference Use the `DS007176` 
class to access this dataset programmatically. ### *class* eegdash.dataset.DS007176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal EEG Test-Retest Reliability in Healthy Individuals * **Study:** `ds007176` (OpenNeuro) * **Author (year):** `Isaza2026_Longitudinal` * **Canonical:** — Also importable as: `DS007176`, `Isaza2026_Longitudinal`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 45; recordings: 300; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007176](https://openneuro.org/datasets/ds007176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007176](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) DOI: [https://doi.org/10.18112/openneuro.ds007176.v1.0.1](https://doi.org/10.18112/openneuro.ds007176.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007176 >>> dataset = DS007176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007176) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007180: eeg dataset, 25 subjects *Exo-EEG Experiment* Access recordings and metadata through EEGDash. **Citation:** Águeda Fuentes-Guerra, Elisa Martín Arévalo, Freek van Ede, Carlos González-García (2026). *Exo-EEG Experiment*. 
[10.18112/openneuro.ds007180.v1.0.0](https://doi.org/10.18112/openneuro.ds007180.v1.0.0) Modality: eeg Subjects: 25 Recordings: 25 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007180 dataset = DS007180(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007180(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007180( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007180, title = {Exo-EEG Experiment}, author = {Águeda Fuentes-Guerra and Elisa Martín Arévalo and Freek van Ede and Carlos González-García}, doi = {10.18112/openneuro.ds007180.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007180.v1.0.0}, } ``` ## About This Dataset Exo-EEG Experiment Participants: see participants.tsv Task: exo (see \*_eeg.json) Contact: [aguedafgt@ugr.es](mailto:aguedafgt@ugr.es) ## Dataset Information | Dataset ID | `DS007180` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Exo-EEG Experiment | | Author (year) | `FuentesGuerra2026` | | Canonical | `FuentesGuerra2024` | | Importable as | `DS007180`, `FuentesGuerra2026`, `FuentesGuerra2024` | | Year | 2026 | | Authors | Águeda Fuentes-Guerra, Elisa Martín Arévalo, Freek van Ede, Carlos González-García | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007180.v1.0.0](https://doi.org/10.18112/openneuro.ds007180.v1.0.0) | 
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007180) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) | [Source URL](https://openneuro.org/datasets/ds007180) | ### Copy-paste BibTeX ```bibtex @dataset{ds007180, title = {Exo-EEG Experiment}, author = {Águeda Fuentes-Guerra and Elisa Martín Arévalo and Freek van Ede and Carlos González-García}, doi = {10.18112/openneuro.ds007180.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007180.v1.0.0}, } ``` ## Technical Details - Subjects: 25 - Recordings: 25 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 500.0 - Duration (hours): 34.725588888888886 - Pathology: Healthy - Modality: — - Type: — - Size on disk: 14.7 GB - File count: 25 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007180.v1.0.0 - Source: openneuro - OpenNeuro: [ds007180](https://openneuro.org/datasets/ds007180) - NeMAR: [ds007180](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) ## API Reference Use the `DS007180` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exo-EEG Experiment * **Study:** `ds007180` (OpenNeuro) * **Author (year):** `FuentesGuerra2026` * **Canonical:** `FuentesGuerra2024` Also importable as: `DS007180`, `FuentesGuerra2026`, `FuentesGuerra2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007180](https://openneuro.org/datasets/ds007180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007180](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) DOI: [https://doi.org/10.18112/openneuro.ds007180.v1.0.0](https://doi.org/10.18112/openneuro.ds007180.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007180 >>> dataset = DS007180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007180) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007181: eeg dataset, 59 subjects *Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia* Access recordings and metadata through EEGDash. **Citation:** Li Li, Qi Han, Haolei Bai, Xiaolong Zhang, Yong Liu, Chao He (2026). *Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia*. [10.18112/openneuro.ds007181.v1.0.1](https://doi.org/10.18112/openneuro.ds007181.v1.0.1) Modality: eeg Subjects: 59 Recordings: 59 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007181 dataset = DS007181(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007181(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007181( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX**

```bibtex
@dataset{ds007181,
  title = {Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia},
  author = {Li Li and Qi Han and Haolei Bai and Xiaolong Zhang and Yong Liu and Chao He},
  doi = {10.18112/openneuro.ds007181.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds007181.v1.0.1},
}
```

## About This Dataset

**Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia**

**Summary**

This dataset includes anatomical T1-weighted MRI and raw multi-echo resting-state fMRI, as well as PSG data from a study investigating the difference between healthy controls (HC) and zoster-associated neuralgia (ZAN) patients. MRI and PSG data were partially overlapping across participants. Participants with available data in at least one modality were included in the dataset, following the BIDS specification.

- For project code and full analysis pipelines (including between-subject comparisons, functional connectivity analyses, and correlation-based statistical modeling), see: [project-zan-neuro](https://github.com/ellebai/zan-neuro)

**Participants**
- 32 healthy adults and 27 zoster-associated neuralgia adults for PSG data.
- See `participants.tsv` for sex, age, and group (HC vs. ZAN).

**Tasks**

- Functional scans are resting-state.

**What's included**

- `sub-*/anat/`
  - Defaced T1w MRI: `sub-XX_T1w.nii.gz` (+ JSON sidecar if available).
- `sub-*/func/`
  - Raw multi-echo BOLD: `sub-XX_task-rest_bold.nii.gz`.
- `sub-*/eeg/`
  - Sleep PSG/EEG recordings: `sub-XX_task-sleep_acq-PSG_eeg.edf`, `sub-XX_task-sleep_acq-PSG_channels.tsv`, `sub-XX_task-sleep_acq-PSG_events.tsv`.
- Top-level: `participants.tsv`, `task-rest_bold.json`, `README.md`.

**Notes on data quality & privacy**

- T1w images were defaced prior to sharing.
- Functional files are raw (converted with dcm2niix); files with SPM-style prefixes (r/w/y/s\*) were excluded.
- Sleep stages were manually scored from the polysomnography (PSG) data according to the criteria of the American Academy of Sleep Medicine (AASM), using the standard AASM classification: Wake (W), Non-REM stages N1, N2, N3, and Rapid Eye Movement (REM) sleep. Stage N3 corresponds to slow-wave sleep as defined in the AASM manual; no separate N4 stage was used.
- EEG signals were recorded with reference to linked mastoids (M1/M2), and channel names reflect the referenced configuration (e.g., Fp1–M2).

**Folder conventions (BIDS)**

```text
zan/
  sub-01/
    anat/sub-01_T1w.nii.gz
    func/sub-01_task-rest_bold.nii.gz
  sub-02/
    anat/sub-02_T1w.nii.gz
    func/sub-02_task-rest_bold.nii.gz
    eeg/
      sub-02_task-sleep_acq-PSG_eeg.edf
  sub-03/
  ...
```

**How to cite**

These data are associated with a manuscript currently under revision. Please cite the dataset DOI when using these data.

**Contacts**

- Haolei Bai — [ellebai83@gmail.com](mailto:ellebai83@gmail.com) (You may also contact the corresponding author from the manuscript.)

**License**

This dataset is shared under **CC0**.
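The AASM staging described above uses the labels W, N1, N2, N3, and REM. When working with the scored stages programmatically, a small mapping to integer codes is often convenient. This is a minimal sketch; the exact label strings stored in the `*_events.tsv` files are an assumption to check against the actual annotations, and the 30-second epoch length is the conventional AASM default rather than something stated for this dataset:

```python
# Map AASM sleep-stage labels to integer codes (the label strings are
# assumptions -- verify against the dataset's *_events.tsv files).
AASM_STAGES = {"W": 0, "N1": 1, "N2": 2, "N3": 3, "REM": 4}


def stages_to_codes(labels):
    """Convert a sequence of AASM stage labels to integer codes.

    Unknown labels raise a KeyError rather than being silently mapped.
    """
    return [AASM_STAGES[label] for label in labels]


def time_in_stage(labels, stage, epoch_sec=30.0):
    """Total seconds spent in `stage`, assuming fixed-length epochs.

    AASM scoring conventionally uses 30-second epochs; pass a
    different `epoch_sec` if the annotations use another length.
    """
    return sum(epoch_sec for label in labels if label == stage)
```

For example, `time_in_stage(["W", "N2", "N2", "REM"], "N2")` returns 60.0 under the default 30-second epochs.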
## Dataset Information | Dataset ID | `DS007181` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia | | Author (year) | `Li2026` | | Canonical | — | | Importable as | `DS007181`, `Li2026` | | Year | 2026 | | Authors | Li Li, Qi Han, Haolei Bai, Xiaolong Zhang, Yong Liu, Chao He | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007181.v1.0.1](https://doi.org/10.18112/openneuro.ds007181.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007181) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) | [Source URL](https://openneuro.org/datasets/ds007181) | ### Copy-paste BibTeX ```bibtex @dataset{ds007181, title = {Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia}, author = {Li Li and Qi Han and Haolei Bai and Xiaolong Zhang and Yong Liu and Chao He}, doi = {10.18112/openneuro.ds007181.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007181.v1.0.1}, } ``` ## Technical Details - Subjects: 59 - Recordings: 59 - Tasks: 1 - Channels: 24 - Sampling rate (Hz): 1024.0 - Duration (hours): 454.4836111111111 - Pathology: Other - Modality: Sleep - Type: Clinical/Intervention - Size on disk: 59.2 GB - File count: 59 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007181.v1.0.1 - Source: openneuro - OpenNeuro: [ds007181](https://openneuro.org/datasets/ds007181) - NeMAR: [ds007181](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) ## API Reference Use the `DS007181` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia * **Study:** `ds007181` (OpenNeuro) * **Author (year):** `Li2026` * **Canonical:** — Also importable as: `DS007181`, `Li2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 59; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007181](https://openneuro.org/datasets/ds007181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007181](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) DOI: [https://doi.org/10.18112/openneuro.ds007181.v1.0.1](https://doi.org/10.18112/openneuro.ds007181.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007181 >>> dataset = DS007181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007181) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007216: eeg dataset, 24 subjects *A multi-session simultaneous EEG-fMRI dataset with online experience sampling* Access recordings and metadata through EEGDash. **Citation:** Aaron Kucyi, Lotus Shareef-Trudeau, David Braun, Huiling Peng, Tiara Bounyarith, Janet Z. Li (2026). *A multi-session simultaneous EEG-fMRI dataset with online experience sampling*. 
[10.18112/openneuro.ds007216.v1.0.0](https://doi.org/10.18112/openneuro.ds007216.v1.0.0) Modality: eeg Subjects: 24 Recordings: 187 License: CC0 Source: openneuro Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS007216

dataset = DS007216(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS007216(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS007216(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds007216,
  title = {A multi-session simultaneous EEG-fMRI dataset with online experience sampling},
  author = {Aaron Kucyi and Lotus Shareef-Trudeau and David Braun and Huiling Peng and Tiara Bounyarith and Janet Z. Li},
  doi = {10.18112/openneuro.ds007216.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds007216.v1.0.0},
}
```

## About This Dataset

**A multi-session simultaneous EEG-fMRI dataset with online experience sampling**

Here we introduce a multi-session dataset that includes simultaneous EEG-fMRI with online measures of continuous behavior and spontaneous mental experience. Data components, organized in Brain Imaging Data Structure (BIDS) format, include simultaneous EEG-fMRI recordings with carbon wire loop sensors embedded in EEG caps for artifact removal, electrocardiogram (ECG), behavioral task responses, experience sampling ratings, and mental health surveys, from 24 healthy individuals aged 18-35.
Tasks performed during EEG-fMRI were the Gradual Onset Continuous Performance Task (GradCPT) and a resting state condition with intermittent experience sampling assessing 13 unique dimensions of thought contents and processes (36 trials including 468 total ratings per participant).
The same task protocol was completed on two different days, resulting in approximately 1 hour 20 minutes of EEG-fMRI data per individual. These data enable the study of the neural bases of various spontaneous cognitive processes, including attentional fluctuations and mind-wandering, thereby promoting insights into the behavioral relevance of resting state brain activity. The dataset also provides a means to study the reliability of temporal relationships between fMRI and EEG data features across different sessions within the same individuals. **Inclusion Criteria** Participants in the study met the following inclusion criteria: - Aged 18 to 35 years old - Spoke English - Right handed - Normal/corrected-to-normal vision **Exclusion Criteria** Participants meeting any of the following criteria were excluded from the study: - Unable to provide consent - Reported having a current or history of psychiatric/neurological disorders - Reported having a chronic medical condition - Were pregnant - Were prisoners - Were unable to understand English - Were contraindicated for MRI - Had allergies to saline gel - Had cold, flu or COVID-19 symptoms within the two weeks preceding participation - Were unable to fit into one of our three EEG cap sizes (54cm, 56cm, and 58cm circumference) - Wore glasses and did not have access to contacts. **Consent** Institutional review board approval and consent were obtained. Participants were recruited from the Philadelphia, Pennsylvania area, primarily drawing from communities in and around Drexel University and Temple University **Clinical Measures** - State Trait Anxiety Inventory (STAI-S) - DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure-Adult (DSM XC) - WHO Disability Assessment Schedule-12 (WHODAS-12) - General Anxiety Disorder-7 (GAD-7) - Patient Health Questionnaire-9 (PHQ-9) - Ruminative Response Scale (RRS) - Mind-Wandering Deliberate-Spontaneous scale (MWD-S) **MRI Acquisition Parameters** MRI data acquisition. 
MRI data were acquired using a 64-channel head coil on a 3.0 Tesla Siemens MAGNETOM Prisma. To accommodate the EEG-fMRI set-up, the 64-channel head coil included an aperture for routing cables from the top of the EEG cap into the scanner bore to connect to EEG amplifiers. The first 22 subjects and the first session of the 23rd subject were run on Siemens software version Syngo MR E11. The second session of the 23rd subject and both sessions of the 24th subject were run on software version Syngo MR XA30. All acquisition parameters remained consistent across both software versions with the exception of “nonlinear gradient correction,” set to “false” for the first 22 subjects and set to “true” after the software upgrade. This change reflected differences in the default settings of the scanner rather than a deliberate change made by the researchers.

After localizer scans, a 3D high-resolution MPRAGE structural T1-weighted image was acquired (TR = 2400 ms; TI = 1000 ms; TE = 2.22 ms; slices = 208; FoV = 256 x 240 mm; voxel size = 0.8 mm³ isotropic; flip angle = 8°; partial Fourier off; pixel bandwidth = 220 Hz/Px) during the first scanning session. During a participant’s second scanning session, a fast T1-weighted, 2D MRI sequence was collected (TR = 190 ms; TE = 2.46 ms; in-plane voxel size = 0.8 x 0.8 mm; slices = 25; slice thickness = 4.00 mm; flip angle = 70°; pixel bandwidth = 280 Hz/Px). Following the T1-weighted scan, we ran a B0 field map for the purpose of correcting echo-planar imaging (EPI) fMRI images (TR = 789 ms; TE 1 = 4.92 ms; TE 2 = 7.38 ms; slices = 81; FoV = 192 mm x 192 mm; voxel size = 2.0 mm³ isotropic; flip angle = 45°; partial Fourier off; pixel bandwidth = 668 Hz/Px).
All BOLD fMRI sequences were acquired with a CMRR multiband gradient-echo EPI single-shot sequence aligned with the AC-PC plane (TR = 2000 ms; TE = 25.00 ms; flip angle = 70°; slices = 81; FoV = 192 mm x 192 mm; voxel size = 2.0 mm³ isotropic; multi-band acceleration factor = 3; phase encoding direction = anterior-to-posterior). A total of four fMRI runs across two tasks (one GradCPT run, three rs-ES runs) were run for 46 out of 47 scanning sessions; one session (sub-003_ses-001) was ended early due to time constraints and only 3 runs were collected.

**EEG Acquisition**

EEG data were collected using an MR-compatible system (Brain Products GmbH, Gilching, Germany), including the BrainAmp MRPlus, the BrainAmp ExG, and the BrainCap MR with four Carbon Wire Loop (CWL) sensors embedded. Additional EEG-fMRI specific equipment included the Brain Products PowerPack (MR-compatible battery pack for the amplifiers), Triggerbox (sends and receives MRI and stimulus triggers to EEG recording software), SyncBox interface and main unit (syncs EEG phase with MR clock), the BrainVision USB 2 Adapter (connection hub for all EEG components), and the MR sled (mobile surface for mounting the EEG components into the scanner bore). Recorded channels included 4 CWLs, 31 cortical channels, and one electrocardiography (ECG) channel (channel 32) placed on the back. In addition, the cap also contained a reference and ground electrode. The cap’s built-in serial current-limiting resistors provided 15 kΩ of resistance for the ground and reference electrodes, 10 kΩ for all 31 EEG electrodes, and 20 kΩ for the ECG electrode. All electrodes were connected to the BrainAmp MRPlus. The four CWLs sit over the frontal left, frontal right, parietal left, and parietal right locations on the cap and were connected to the bipolar BrainAmp ExG.
The CWL channels record minute head movements in the scanner environment, used to perform advanced cardioballistic, helium-pump, and movement artifact correction. Cortical electrodes were arranged according to the international 10-20 system. Electrodes were filled using high-chloride abrasive electrolyte gel (Abralyt HiCl). A layer of Surgilast, an elastic dressing retainer, was placed over the cap after set-up to ensure electrode contact with the scalp and minimize cap movement. Electrode impedance was kept below 20 kΩ and measured twice: once during set-up, then inside the scanner just before the closing of the head coil. The EEG data were recorded using BrainVision Recorder V.1.25.0201 software at a sampling rate of 5 kHz on a Windows operating system.

**TR markers in EEG data**

As MRI TR markers come in, they are recorded in the EEG data to sync timing between EEG and fMRI data during EEG artifact correction and analyses. The length of the resting state task varied from run to run depending on the participant’s pace and the randomized duration of the fixation cross across trials. This necessitated manual termination of the scan upon conclusion of each rs-MDES run. In the majority of cases, the forced stop would end the functional scan in the middle of the collection of a volume, meaning that the EEG recording would receive an extra fMRI volume marker appended to the end of the recording even though the scanner did not complete the measurement. This appended an extra fMRI marker (“T 1”) to the end of the EEG data which did not correspond to an fMRI volume.
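This trailing-marker caveat has to be accounted for before aligning EEG and fMRI timelines. The following is a minimal sketch of such a check, assuming you have already extracted the sample indices of the volume markers from the EEG annotations and know the number of fMRI volumes actually reconstructed; the function and variable names are illustrative, not part of the EEGDash API:

```python
def trim_trailing_markers(marker_samples, n_volumes):
    """Drop spurious volume markers appended after a forced scan stop.

    marker_samples: sample indices of the 'T 1' volume markers found
        in the EEG recording, in temporal order.
    n_volumes: number of fMRI volumes actually reconstructed.

    Only trailing markers are removed, since all markers are aligned
    with the first fMRI volume. A surplus larger than two, or a
    deficit of markers, is left for manual inspection.
    """
    extra = len(marker_samples) - n_volumes
    if extra < 0:
        raise ValueError("fewer EEG markers than fMRI volumes; align manually")
    if extra > 2:
        raise ValueError("unexpected marker/volume mismatch; inspect the run")
    return list(marker_samples[:n_volumes])
```

The sketch refuses to guess for runs that have fewer markers than volumes, which matches the documented exceptions where manual alignment is needed.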
Exceptions to this rule are two runs which had two additional fMRI markers at the end of the EEG recording (sub-013_ses-002_run-003; sub-019_ses-001_run-003), four runs where there was one fewer fMRI marker in the EEG data than fMRI volumes recorded (sub-001_ses-002_run-003; sub-023_ses-002 all runs), and eight runs where the recorded fMRI markers and volumes were correctly aligned (sub-002_ses-001_run-002, run-003; sub-003_ses-002_run-001; sub-008_ses-001_run-001; sub-008_ses-002_run-002; sub-009_ses-002_run-002; sub-010_ses-001_run-001; sub-012_ses-002_run-003). This issue affects only the last markers in the run, where up to two volume markers in the EEG data may need to be omitted from analyses. All fMRI markers present in the EEG data are aligned with the first fMRI volume.

**References for BIDS conversion**

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography.
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8

## Dataset Information

| Dataset ID | `DS007216` |
|----------------|--------------|
| Title | A multi-session simultaneous EEG-fMRI dataset with online experience sampling |
| Author (year) | `Kucyi2026` |
| Canonical | `Kucyi2024` |
| Importable as | `DS007216`, `Kucyi2026`, `Kucyi2024` |
| Year | 2026 |
| Authors | Aaron Kucyi, Lotus Shareef-Trudeau, David Braun, Huiling Peng, Tiara Bounyarith, Janet Z. Li |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007216.v1.0.0](https://doi.org/10.18112/openneuro.ds007216.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007216) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007216) | [Source URL](https://openneuro.org/datasets/ds007216) |

### Copy-paste BibTeX

```bibtex
@dataset{ds007216,
  title = {A multi-session simultaneous EEG-fMRI dataset with online experience sampling},
  author = {Aaron Kucyi and Lotus Shareef-Trudeau and David Braun and Huiling Peng and Tiara Bounyarith and Janet Z. Li},
  doi = {10.18112/openneuro.ds007216.v1.0.0},
  url = {https://doi.org/10.18112/openneuro.ds007216.v1.0.0},
}
```

## Technical Details

- Subjects: 24
- Recordings: 187
- Tasks: 2
- Channels: 36
- Sampling rate (Hz): 5000.0
- Duration (hours): 33.30
- Pathology: Healthy
- Modality: Visual
- Type: Attention
- Size on disk: 104.7 GB
- File count: 187
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds007216.v1.0.0
- Source: openneuro
- OpenNeuro: [ds007216](https://openneuro.org/datasets/ds007216)
- NeMAR: [ds007216](https://nemar.org/dataexplorer/detail?dataset_id=ds007216)

## API Reference

Use the `DS007216` class to access this dataset programmatically.
### *class* eegdash.dataset.DS007216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-session simultaneous EEG-fMRI dataset with online experience sampling * **Study:** `ds007216` (OpenNeuro) * **Author (year):** `Kucyi2026` * **Canonical:** `Kucyi2024` Also importable as: `DS007216`, `Kucyi2026`, `Kucyi2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 187; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007216](https://openneuro.org/datasets/ds007216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007216](https://nemar.org/dataexplorer/detail?dataset_id=ds007216) DOI: [https://doi.org/10.18112/openneuro.ds007216.v1.0.0](https://doi.org/10.18112/openneuro.ds007216.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007216 >>> dataset = DS007216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007216) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007216) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007221: eeg dataset, 84 subjects *Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset* Access recordings and metadata through EEGDash. **Citation:** Sun Xinwei, Wang Kun, Pan Lincong, Cao Yupei, Meng Lin (2026). *Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset*. 
[10.18112/openneuro.ds007221.v1.0.1](https://doi.org/10.18112/openneuro.ds007221.v1.0.1) Modality: eeg Subjects: 84 Recordings: 1265 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007221 dataset = DS007221(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007221(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007221( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007221, title = {Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset}, author = {Sun Xinwei and Wang Kun and Pan Lincong and Cao Yupei and Meng Lin}, doi = {10.18112/openneuro.ds007221.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007221.v1.0.1}, } ``` ## About This Dataset **Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset** **Description** This dataset contains EEG recordings collected during motor imagery (MI) tasks under different recording environments. The primary dataset includes recordings from a controlled laboratory setting and a simulated hospital environment, designed to investigate the impact of environmental variability on EEG-based brain–computer interface (BCI) performance.In addition, a small supplementary dataset recorded in a space station environment is provided, extending the dataset to a non-terrestrial recording condition. 
**Participants**

- Main dataset: 84 healthy participants
- Supplementary dataset: 3 participants (space station environment)

All participants completed motor imagery tasks following similar experimental protocols.

**Experimental Design**

**Recording Environments**

- Laboratory environment: electromagnetically shielded, low-noise condition
- Simulated hospital environment: multi-sensory setup including visual, auditory, and contextual elements
- Space station environment (supplementary): microgravity condition with a distinct acquisition setup

**Motor Imagery Tasks**

- Left-hand motor imagery
- Right-hand motor imagery
- Resting state (space station dataset only)

**Paradigms**

The main dataset includes multiple paradigms:

- Graz motor imagery paradigm
- SSMVEP-MI paradigm
- Hybrid paradigms (including video and SSVideo conditions)

**Trial Structure**

Each trial consists of:

- Cue period: −2 to 0 s
- Task period: 0 to 4 s
- Rest period: 4 s

**Data Format**

- Main dataset: organized in BIDS format
- Space station dataset (supplementary): MATLAB `.mat` files

**EEG Recording**

- Main dataset: 64-channel EEG system (10–20 system), 60 channels used; sampling rate 1000 Hz (raw), 250 Hz (processed)
- Space station dataset: EGGO system, 32 dry electrodes, data stored directly in `.mat` format

**This dataset can be used for:** Motor imagery BCI algorithm development Cross-subject
decoding
- Cross-environment analysis
- Robustness evaluation under different recording conditions
- Benchmarking EEG signal processing methods

**Citation**

If you use this dataset, please cite: Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset, Sun Xinwei, Wang Kun, Cao Yupei, OpenNeuro, Version 1.0.0. Available at: [https://openneuro.org/datasets/ds007221](https://openneuro.org/datasets/ds007221). A related manuscript describing this dataset is currently under review.

**Notes**

The main dataset (84 participants) is designed for systematic analysis across environments. The space station dataset is provided as a supplementary resource due to its different acquisition setup and limited sample size.

**License**

This dataset is released under an open-access license. Please refer to the repository for details.

**Recording Summary**

```text
| Subject | Session | Task | Acquisition | Run | Trials | Fs (Hz) | Ch | Duration (s) | Line Freq | Ref | Events | File |
|----------|----------|------|--------------|------|----------|---------|------|--------------|-----------|------|---------|------|
| sub-01 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 342.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-01 |
| sub-01 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 342.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-02 |
| sub-01 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 331.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-03 |
| sub-01 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 330.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-04 |
| sub-01 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 331.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-05 |
| sub-01 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 331.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-01_task-graz_run-06 |
| sub-01 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 349.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-01 |
| sub-01 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 337.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-02 |
| sub-01 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 332.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-03 |
| sub-01 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 332.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-04 |
| sub-01 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 332.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-05 |
| sub-01 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 334.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-01_ses-02_task-graz_run-06 |
| sub-02 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 336.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-01 |
| sub-02 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 333.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-02 |
| sub-02 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 333.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-03 |
| sub-02 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 331.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-04 |
| sub-02 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 331.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-05 |
| sub-02 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 335.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-01_task-graz_run-06 |
| sub-02 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 331.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-01 |
| sub-02 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 339.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-02 |
| sub-02 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 331.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-03 |
| sub-02 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 328.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-04 |
| sub-02 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 352.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-05 |
| sub-02 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 330.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-02_ses-02_task-graz_run-06 |
| sub-03 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 372.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-01 |
| sub-03 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 374.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-02 |
| sub-03 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 371.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-03 |
| sub-03 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 377.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-04 |
| sub-03 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 373.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-05 |
| sub-03 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 381.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-01_task-graz_run-06 |
| sub-03 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 329.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-01 |
| sub-03 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 327.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-02 |
| sub-03 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 329.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-03 |
| sub-03 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 331.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-04 |
| sub-03 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 332.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-05 |
| sub-03 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 331.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-03_ses-02_task-graz_run-06 |
| sub-04 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 372.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-01 |
| sub-04 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 390.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-02 |
| sub-04 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 372.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-03 |
| sub-04 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 375.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-04 |
| sub-04 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 368.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-05 |
| sub-04 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 368.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-01_task-graz_run-06 |
| sub-04 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 331.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-01 |
| sub-04 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 330.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-02 |
| sub-04 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 379.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-03 |
| sub-04 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 370.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-04 |
| sub-04 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 379.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-05 |
| sub-04 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 375.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-06 |
| sub-04 | ses-02 | graz | N/A | 7 | 40 | 1000.0 | 69 | 370.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-04_ses-02_task-graz_run-07 |
| sub-05 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 372.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-01 |
| sub-05 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 372.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-02 |
| sub-05 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 376.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-03 |
| sub-05 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 373.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-04 |
| sub-05 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 371.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-05 |
| sub-05 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 373.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-01_task-graz_run-06 |
| sub-05 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 434.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-01 |
| sub-05 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 369.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-02 |
| sub-05 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 372.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-03 |
| sub-05 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 369.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-04 |
| sub-05 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 370.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-05 |
| sub-05 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 374.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-05_ses-02_task-graz_run-06 |
| sub-06 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 432.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-01 |
| sub-06 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 372.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-02 |
| sub-06 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 392.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-03 |
| sub-06 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 376.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-04 |
| sub-06 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 379.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-05 |
| sub-06 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 429.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-01_task-graz_run-06 |
| sub-06 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 373.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-01 |
| sub-06 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 379.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-02 |
| sub-06 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 384.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-03 |
| sub-06 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 380.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-04 |
| sub-06 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 377.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-05 |
| sub-06 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 375.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-06_ses-02_task-graz_run-06 |
| sub-07 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 382.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-01 |
| sub-07 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 404.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-02 |
| sub-07 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 373.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-03 |
| sub-07 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 371.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-04 |
| sub-07 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 371.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-05 |
| sub-07 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 370.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-01_task-graz_run-06 |
| sub-07 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 395.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-01 |
| sub-07 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 371.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-02 |
| sub-07 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 374.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-03 |
| sub-07 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 380.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-04 |
| sub-07 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 378.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-05 |
| sub-07 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 366.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-07_ses-02_task-graz_run-06 |
| sub-08 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 372.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-01 |
| sub-08 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 377.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-02 |
| sub-08 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 383.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-03 |
| sub-08 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 374.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-04 |
| sub-08 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 381.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-05 |
| sub-08 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 376.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-01_task-graz_run-06 |
| sub-08 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 389.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-01 |
| sub-08 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 375.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-02 |
| sub-08 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 381.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-03 |
| sub-08 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 377.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-04 |
| sub-08 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 383.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-05 |
| sub-08 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 374.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-08_ses-02_task-graz_run-06 |
| sub-09 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 392.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-01 |
| sub-09 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 399.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-02 |
| sub-09 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 373.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-03 |
| sub-09 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 384.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-04 |
| sub-09 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 381.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-05 |
| sub-09 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 375.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-01_task-graz_run-06 |
| sub-09 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 391.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-01 |
| sub-09 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 387.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-02 |
| sub-09 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 390.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-03 |
| sub-09 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 404.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-04 |
| sub-09 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 375.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-05 |
| sub-09 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 373.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-09_ses-02_task-graz_run-06 |
| sub-10 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 374.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-01 |
| sub-10 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 369.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-02 |
| sub-10 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 370.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-03 |
| sub-10 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 373.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-04 |
| sub-10 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 373.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-05 |
| sub-10 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 369.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-01_task-graz_run-06 |
| sub-10 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 379.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-01 |
| sub-10 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 369.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-02 |
| sub-10 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 379.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-03 |
| sub-10 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 379.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-04 |
| sub-10 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 374.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-05 |
| sub-10 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 372.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-10_ses-02_task-graz_run-06 |
| sub-11 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 376.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-01 |
| sub-11 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 382.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-02 |
| sub-11 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 375.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-03 |
| sub-11 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 381.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-04 |
| sub-11 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 380.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-05 |
| sub-11 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 373.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-01_task-graz_run-06 |
| sub-11 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 371.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-01 |
| sub-11 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 381.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-02 |
| sub-11 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 371.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-03 |
| sub-11 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 374.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-04 |
| sub-11 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 383.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-05 |
| sub-11 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 374.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-11_ses-02_task-graz_run-06 |
| sub-12 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 372.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-01 |
| sub-12 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 367.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-02 |
| sub-12 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 375.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-03 |
| sub-12 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 383.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-04 |
| sub-12 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 369.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-05 |
| sub-12 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 369.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-01_task-graz_run-06 |
| sub-12 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 367.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-01 |
| sub-12 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 387.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-02 |
| sub-12 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 369.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-03 |
| sub-12 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 377.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-04 |
| sub-12 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 369.0 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-05 |
| sub-12 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 369.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-12_ses-02_task-graz_run-06 |
| sub-13 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 620.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-01 |
| sub-13 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 376.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-02 |
| sub-13 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 370.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-03 |
| sub-13 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 380.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-04 |
| sub-13 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 372.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-05 |
| sub-13 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 372.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-06 |
| sub-13 | ses-01 | graz | N/A | 7 | 40 | 1000.0 | 69 | 374.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-01_task-graz_run-07 |
| sub-13 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 379.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-01 |
| sub-13 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 376.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-02 |
| sub-13 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 378.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-03 |
| sub-13 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 379.1 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-04 |
| sub-13 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 397.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-05 |
| sub-13 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 371.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-13_ses-02_task-graz_run-06 |
| sub-14 | ses-01 | graz | N/A | 1 | 40 | 1000.0 | 69 | 369.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-01 |
| sub-14 | ses-01 | graz | N/A | 2 | 40 | 1000.0 | 69 | 373.7 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-02 |
| sub-14 | ses-01 | graz | N/A | 3 | 40 | 1000.0 | 69 | 375.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-03 |
| sub-14 | ses-01 | graz | N/A | 4 | 40 | 1000.0 | 69 | 376.6 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-04 |
| sub-14 | ses-01 | graz | N/A | 5 | 40 | 1000.0 | 69 | 383.4 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-05 |
| sub-14 | ses-01 | graz | N/A | 6 | 40 | 1000.0 | 69 | 373.3 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-01_task-graz_run-06 |
| sub-14 | ses-02 | graz | N/A | 1 | 40 | 1000.0 | 69 | 570.5 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-01 |
| sub-14 | ses-02 | graz | N/A | 2 | 40 | 1000.0 | 69 | 374.8 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-02 |
| sub-14 | ses-02 | graz | N/A | 3 | 40 | 1000.0 | 69 | 378.9 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-03 |
| sub-14 | ses-02 | graz | N/A | 4 | 40 | 1000.0 | 69 | 372.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-04 |
| sub-14 | ses-02 | graz | N/A | 5 | 40 | 1000.0 | 69 | 376.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-05 |
| sub-14 | ses-02 | graz | N/A | 6 | 40 | 1000.0 | 69 | 376.2 | 50 | nose | left_hand,right_hand,feet,rest | sub-14_ses-02_task-graz_run-06 |
| sub-15 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 329.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-01 |
| sub-15 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 327.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-02 |
| sub-15 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 330.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-03 |
| sub-15 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 355.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-04 |
| sub-15 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 360.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-05 |
| sub-15 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 339.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-01_task-ssmvepmi_run-06 |
| sub-15 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 349.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-02_task-ssmvepmi_run-01 |
| sub-15 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 332.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-02_task-ssmvepmi_run-02 |
| sub-15 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 333.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-02_task-ssmvepmi_run-03 |
| sub-15 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 351.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-02_task-ssmvepmi_run-04 |
| sub-15 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 347.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-15_ses-02_task-ssmvepmi_run-05 |
| sub-16 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 346.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-01 |
| sub-16 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-02 |
| sub-16 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 339.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-03 |
| sub-16 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 347.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-04 |
| sub-16 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 343.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-05 |
| sub-16 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 341.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-01_task-ssmvepmi_run-06 |
| sub-16 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 385.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-01 |
| sub-16 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 340.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-02 |
| sub-16 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 342.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-03 |
| sub-16 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 428.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-04 |
| sub-16 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 338.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-05 |
| sub-16 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 338.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-16_ses-02_task-ssmvepmi_run-06 |
| sub-17 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 351.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-01 |
| sub-17 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-02 |
| sub-17 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 343.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-03 |
| sub-17 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 341.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-04 |
| sub-17 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 350.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-05 |
| sub-17 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 378.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-01_task-ssmvepmi_run-06 |
| sub-17 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 351.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-01 |
| sub-17 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 408.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-02 |
| sub-17 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 342.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-03 |
| sub-17 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 346.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-04 |
| sub-17 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 341.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-05 |
| sub-17 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 340.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-17_ses-02_task-ssmvepmi_run-06 |
| sub-18 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 349.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-01 |
| sub-18 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 347.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-02 |
| sub-18 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 344.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-03 |
| sub-18 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 344.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-04 |
| sub-18 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 391.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-05 |
| sub-18 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 345.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-01_task-ssmvepmi_run-06 |
| sub-18 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 407.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-01 |
| sub-18 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 366.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-02 |
| sub-18 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 346.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-03 |
| sub-18 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 350.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-04 |
| sub-18 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 349.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-05 |
| sub-18 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 340.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-18_ses-02_task-ssmvepmi_run-06 |
| sub-19 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 362.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-01 |
| sub-19 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 334.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-02 |
| sub-19 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 336.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-03 |
| sub-19 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-04 |
| sub-19 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 345.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-05 |
| sub-19 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 344.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-01_task-ssmvepmi_run-06 |
| sub-19 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 354.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-01 |
| sub-19 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-02 |
| sub-19 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 342.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-03 |
| sub-19 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 400.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-04 |
| sub-19 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 346.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-05 |
| sub-19 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 348.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-19_ses-02_task-ssmvepmi_run-06 |
| sub-20 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 351.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-01 |
| sub-20 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-02 |
| sub-20 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 340.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-03 |
| sub-20 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-04 |
| sub-20 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 345.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-05 |
| sub-20 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 340.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-01_task-ssmvepmi_run-06 |
| sub-20 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 331.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-01 |
| sub-20 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 349.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-02 |
| sub-20 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 344.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-03 |
| sub-20 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-04 |
| sub-20 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-05 |
| sub-20 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 339.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-20_ses-02_task-ssmvepmi_run-06 |
| sub-21 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 330.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-01 |
| sub-21 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 325.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-02 |
| sub-21 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 331.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-03 |
| sub-21 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 340.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-04 |
| sub-21 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 336.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-05 |
| sub-21 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 329.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-01_task-ssmvepmi_run-06 |
| sub-21 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 344.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-01 |
| sub-21 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 344.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-02 |
| sub-21 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 333.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-03 |
| sub-21 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 350.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-04 |
| sub-21 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-05 |
| sub-21 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 345.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-21_ses-02_task-ssmvepmi_run-06 |
| sub-22 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 365.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-01 |
| sub-22 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-02 |
| sub-22 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 343.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-03 |
| sub-22 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 341.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-04 |
| sub-22 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 339.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-05 |
| sub-22 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 346.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-01_task-ssmvepmi_run-06 |
| sub-22 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 348.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-01 |
| sub-22 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-02 |
| sub-22 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 342.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-03 |
| sub-22 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 344.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-04 |
| sub-22 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 352.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-05 |
| sub-22 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 343.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-22_ses-02_task-ssmvepmi_run-06 |
| sub-23 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 344.2 | 50 | nose |
left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-01 | | sub-23 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 343.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-02 | | sub-23 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 340.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-03 | | sub-23 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 341.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-04 | | sub-23 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 341.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-05 | | sub-23 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 349.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-01_task-ssmvepmi_run-06 | | sub-23 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 364.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-01 | | sub-23 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 338.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-02 | | sub-23 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 352.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-03 | | sub-23 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-04 | | sub-23 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 342.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-05 | | sub-23 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-23_ses-02_task-ssmvepmi_run-06 | | sub-24 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 330.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-01_task-ssmvepmi_run-01 | | sub-24 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 346.5 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-24_ses-01_task-ssmvepmi_run-02 | | sub-24 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 335.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-01_task-ssmvepmi_run-03 | | sub-24 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 330.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-01_task-ssmvepmi_run-04 | | sub-24 | ses-01 | ssmvepmi | N/A | 5 | 64 | 1000.0 | 69 | 678.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-01_task-ssmvepmi_run-05 | | sub-24 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 380.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-01 | | sub-24 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 348.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-02 | | sub-24 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 346.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-03 | | sub-24 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 345.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-04 | | sub-24 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 353.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-05 | | sub-24 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 344.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-24_ses-02_task-ssmvepmi_run-06 | | sub-25 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 347.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-01 | | sub-25 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 345.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-02 | | sub-25 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 353.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-03 | | sub-25 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.9 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-04 | | sub-25 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-05 | | sub-25 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-01_task-ssmvepmi_run-06 | | sub-25 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 356.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-01 | | sub-25 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 374.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-02 | | sub-25 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 359.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-03 | | sub-25 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-04 | | sub-25 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-05 | | sub-25 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 340.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-25_ses-02_task-ssmvepmi_run-06 | | sub-26 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 329.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-01 | | sub-26 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 331.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-02 | | sub-26 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 347.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-03 | | sub-26 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 331.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-04 | | sub-26 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 329.4 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-05 | | sub-26 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 333.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-01_task-ssmvepmi_run-06 | | sub-26 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 328.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-01 | | sub-26 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 336.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-02 | | sub-26 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 338.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-03 | | sub-26 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 346.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-04 | | sub-26 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 343.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-05 | | sub-26 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 335.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-26_ses-02_task-ssmvepmi_run-06 | | sub-27 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 346.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-01 | | sub-27 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 351.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-02 | | sub-27 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 351.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-03 | | sub-27 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 344.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-04 | | sub-27 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 351.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-05 | | sub-27 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 359.9 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-27_ses-01_task-ssmvepmi_run-06 | | sub-27 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 340.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-01 | | sub-27 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 347.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-02 | | sub-27 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 349.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-03 | | sub-27 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 345.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-04 | | sub-27 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 345.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-05 | | sub-27 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 345.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-27_ses-02_task-ssmvepmi_run-06 | | sub-28 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 352.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-01 | | sub-28 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 345.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-02 | | sub-28 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 345.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-03 | | sub-28 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 349.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-04 | | sub-28 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 345.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-05 | | sub-28 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 349.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-01_task-ssmvepmi_run-06 | | sub-28 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 344.7 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-01 | | sub-28 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-02 | | sub-28 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 342.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-03 | | sub-28 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-04 | | sub-28 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 343.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-05 | | sub-28 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 347.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-28_ses-02_task-ssmvepmi_run-06 | | sub-29 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 349.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-01 | | sub-29 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-02 | | sub-29 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 343.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-03 | | sub-29 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-04 | | sub-29 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-05 | | sub-29 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-01_task-ssmvepmi_run-06 | | sub-29 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 343.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-01 | | sub-29 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 344.0 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-02 | | sub-29 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 345.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-03 | | sub-29 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 345.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-04 | | sub-29 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 347.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-05 | | sub-29 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 343.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-29_ses-02_task-ssmvepmi_run-06 | | sub-30 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 347.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-01 | | sub-30 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 349.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-02 | | sub-30 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 348.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-03 | | sub-30 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 344.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-04 | | sub-30 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 347.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-05 | | sub-30 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 346.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-01_task-ssmvepmi_run-06 | | sub-30 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 345.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-01 | | sub-30 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 341.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-02 | | sub-30 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 328.9 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-03 | | sub-30 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 340.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-04 | | sub-30 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 340.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-05 | | sub-30 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-30_ses-02_task-ssmvepmi_run-06 | | sub-31 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 368.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-01 | | sub-31 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-02 | | sub-31 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 341.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-03 | | sub-31 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 341.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-04 | | sub-31 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 343.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-05 | | sub-31 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 341.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-01_task-ssmvepmi_run-06 | | sub-31 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 348.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-01 | | sub-31 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-02 | | sub-31 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 339.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-03 | | sub-31 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 339.6 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-04 | | sub-31 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-05 | | sub-31 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 339.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-31_ses-02_task-ssmvepmi_run-06 | | sub-32 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 355.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-01 | | sub-32 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 354.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-02 | | sub-32 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 349.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-03 | | sub-32 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 352.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-04 | | sub-32 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-05 | | sub-32 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 348.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-01_task-ssmvepmi_run-06 | | sub-32 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 347.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-01 | | sub-32 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 349.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-02 | | sub-32 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 348.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-03 | | sub-32 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 348.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-04 | | sub-32 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.4 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-05 | | sub-32 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-32_ses-02_task-ssmvepmi_run-06 | | sub-33 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 348.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-01 | | sub-33 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 345.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-02 | | sub-33 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 345.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-03 | | sub-33 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-04 | | sub-33 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 342.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-05 | | sub-33 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 342.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-01_task-ssmvepmi_run-06 | | sub-33 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 332.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-01 | | sub-33 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 342.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-02 | | sub-33 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 345.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-03 | | sub-33 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 369.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-04 | | sub-33 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 344.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-05 | | sub-33 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 345.7 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-33_ses-02_task-ssmvepmi_run-06 | | sub-34 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 353.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-01 | | sub-34 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 346.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-02 | | sub-34 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 347.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-03 | | sub-34 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-04 | | sub-34 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 346.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-05 | | sub-34 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 346.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-01_task-ssmvepmi_run-06 | | sub-34 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 344.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-01 | | sub-34 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 337.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-02 | | sub-34 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 338.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-03 | | sub-34 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 343.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-04 | | sub-34 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 339.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-05 | | sub-34 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 338.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-34_ses-02_task-ssmvepmi_run-06 | | sub-35 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 349.2 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-01 | | sub-35 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 347.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-02 | | sub-35 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 357.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-03 | | sub-35 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 328.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-04 | | sub-35 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 435.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-05 | | sub-35 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 327.8 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-01_task-ssmvepmi_run-06 | | sub-35 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 338.9 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-01 | | sub-35 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 347.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-02 | | sub-35 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 352.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-03 | | sub-35 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 342.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-04 | | sub-35 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 330.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-05 | | sub-35 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 334.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-35_ses-02_task-ssmvepmi_run-06 | | sub-36 | ses-01 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 329.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-01 | | sub-36 | ses-01 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 345.0 | 50 | nose | 
left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-02 | | sub-36 | ses-01 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 332.5 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-03 | | sub-36 | ses-01 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 346.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-04 | | sub-36 | ses-01 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 330.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-05 | | sub-36 | ses-01 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 341.6 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-01_task-ssmvepmi_run-06 | | sub-36 | ses-02 | ssmvepmi | N/A | 1 | 32 | 1000.0 | 69 | 377.2 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-01 | | sub-36 | ses-02 | ssmvepmi | N/A | 2 | 32 | 1000.0 | 69 | 333.3 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-02 | | sub-36 | ses-02 | ssmvepmi | N/A | 3 | 32 | 1000.0 | 69 | 346.0 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-03 | | sub-36 | ses-02 | ssmvepmi | N/A | 4 | 32 | 1000.0 | 69 | 332.1 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-04 | | sub-36 | ses-02 | ssmvepmi | N/A | 5 | 32 | 1000.0 | 69 | 379.7 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-05 | | sub-36 | ses-02 | ssmvepmi | N/A | 6 | 32 | 1000.0 | 69 | 361.4 | 50 | nose | left_MI,right_MI,left_AO,right_AO | sub-36_ses-02_task-ssmvepmi_run-06 | | sub-37 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 453.6 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-graz_run-01 | | sub-37 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.8 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-graz_run-02 | | sub-37 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 417.3 | 50 | nose | left_hand,right_hand | 
sub-37_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-37 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.1 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-37 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 427.9 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-37 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 415.6 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-37 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 414.4 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-video_run-01 | | sub-37 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 418.1 | 50 | nose | left_hand,right_hand | sub-37_ses-01_task-hybrid_acq-video_run-02 | | sub-37 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 418.2 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-graz_run-01 | | sub-37 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.3 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-graz_run-02 | | sub-37 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-37 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-37 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 443.3 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-37 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-37 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 435.9 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-video_run-01 | | sub-37 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-37_ses-02_task-hybrid_acq-video_run-02 | | sub-38 | 
ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 424.1 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-graz_run-01 |
| sub-38 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-graz_run-02 |
| sub-38 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-38 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 417.1 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-38 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.0 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-38 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.2 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-video_run-01 |
| sub-38 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.1 | 50 | nose | left_hand,right_hand | sub-38_ses-01_task-hybrid_acq-video_run-02 |
| sub-38 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 420.8 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-graz_run-01 |
| sub-38 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.8 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-graz_run-02 |
| sub-38 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 423.1 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-38 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 417.0 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-38 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 423.4 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-38 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-38 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 418.4 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-video_run-01 |
| sub-38 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 416.3 | 50 | nose | left_hand,right_hand | sub-38_ses-02_task-hybrid_acq-video_run-02 |
| sub-39 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-graz_run-01 |
| sub-39 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-graz_run-02 |
| sub-39 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 423.8 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-39 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-39 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 411.9 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-39 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-39 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-video_run-01 |
| sub-39 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-39_ses-01_task-hybrid_acq-video_run-02 |
| sub-39 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 423.2 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-graz_run-01 |
| sub-39 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.4 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-graz_run-02 |
| sub-39 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.7 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-39 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.2 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-39 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 423.5 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-39 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.8 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-39 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 417.5 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-video_run-01 |
| sub-39 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-39_ses-02_task-hybrid_acq-video_run-02 |
| sub-40 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 420.2 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-graz_run-01 |
| sub-40 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 419.3 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-graz_run-02 |
| sub-40 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 415.5 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-40 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 415.1 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-40 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 439.1 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-40 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 415.3 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-40 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 417.8 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-video_run-01 |
| sub-40 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-40_ses-01_task-hybrid_acq-video_run-02 |
| sub-40 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-graz_run-01 |
| sub-40 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-graz_run-02 |
| sub-40 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-40 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.9 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-40 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 434.2 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-40 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 442.8 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-40 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-video_run-01 |
| sub-40 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.7 | 50 | nose | left_hand,right_hand | sub-40_ses-02_task-hybrid_acq-video_run-02 |
| sub-41 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-graz_run-01 |
| sub-41 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-graz_run-02 |
| sub-41 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-41 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.6 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-41 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 422.5 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-41 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.7 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-41 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 414.9 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-video_run-01 |
| sub-41 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 423.3 | 50 | nose | left_hand,right_hand | sub-41_ses-01_task-hybrid_acq-video_run-02 |
| sub-41 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 417.4 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-graz_run-01 |
| sub-41 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 423.6 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-graz_run-02 |
| sub-41 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 419.1 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-41 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 430.5 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-41 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 436.9 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-41 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-41 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.2 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-video_run-01 |
| sub-41 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 451.0 | 50 | nose | left_hand,right_hand | sub-41_ses-02_task-hybrid_acq-video_run-02 |
| sub-42 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-graz_run-01 |
| sub-42 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 440.1 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-graz_run-02 |
| sub-42 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 416.1 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-42 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.0 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-42 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-42 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.2 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-42 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 424.2 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-video_run-01 |
| sub-42 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.9 | 50 | nose | left_hand,right_hand | sub-42_ses-01_task-hybrid_acq-video_run-02 |
| sub-42 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 409.8 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-graz_run-01 |
| sub-42 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 418.7 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-graz_run-02 |
| sub-42 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 417.8 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-42 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 447.9 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-42 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-42 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.4 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-42 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 619.9 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-video_run-01 |
| sub-42 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.7 | 50 | nose | left_hand,right_hand | sub-42_ses-02_task-hybrid_acq-video_run-02 |
| sub-43 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 418.6 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-graz_run-01 |
| sub-43 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 417.4 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-graz_run-02 |
| sub-43 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-43 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 417.4 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-43 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.7 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-43 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.7 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-43 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.3 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-video_run-01 |
| sub-43 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 421.6 | 50 | nose | left_hand,right_hand | sub-43_ses-01_task-hybrid_acq-video_run-02 |
| sub-43 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 426.2 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-graz_run-01 |
| sub-43 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-graz_run-02 |
| sub-43 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-43 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-43 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 415.1 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-43 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.1 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-43 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-video_run-01 |
| sub-43 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 417.4 | 50 | nose | left_hand,right_hand | sub-43_ses-02_task-hybrid_acq-video_run-02 |
| sub-44 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-graz_run-01 |
| sub-44 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 414.0 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-graz_run-02 |
| sub-44 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-44 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 414.8 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-44 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 416.3 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-44 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 417.5 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-44 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 422.3 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-video_run-01 |
| sub-44 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-44_ses-01_task-hybrid_acq-video_run-02 |
| sub-44 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 409.4 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-graz_run-01 |
| sub-44 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.2 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-graz_run-02 |
| sub-44 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-44 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 498.8 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-44 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 409.4 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-44 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.3 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-44 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 408.4 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-video_run-01 |
| sub-44 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 418.4 | 50 | nose | left_hand,right_hand | sub-44_ses-02_task-hybrid_acq-video_run-02 |
| sub-45 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-graz_run-01 |
| sub-45 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 408.4 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-graz_run-02 |
| sub-45 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-45 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.5 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-45 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 408.7 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-45 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-45 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-video_run-01 |
| sub-45 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 409.2 | 50 | nose | left_hand,right_hand | sub-45_ses-01_task-hybrid_acq-video_run-02 |
| sub-45 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.8 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-graz_run-01 |
| sub-45 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 407.4 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-graz_run-02 |
| sub-45 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 409.8 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-45 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.2 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-45 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 406.7 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-45 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.5 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-45 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 422.2 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-video_run-01 |
| sub-45 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 421.0 | 50 | nose | left_hand,right_hand | sub-45_ses-02_task-hybrid_acq-video_run-02 |
| sub-46 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-graz_run-01 |
| sub-46 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-graz_run-02 |
| sub-46 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.5 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-46 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-46 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-46 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-46 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.9 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-video_run-01 |
| sub-46 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.5 | 50 | nose | left_hand,right_hand | sub-46_ses-01_task-hybrid_acq-video_run-02 |
| sub-46 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.8 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-graz_run-01 |
| sub-46 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-graz_run-02 |
| sub-46 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-46 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-46 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 418.8 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-46 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.8 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-46 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 429.3 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-video_run-01 |
| sub-46 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 421.0 | 50 | nose | left_hand,right_hand | sub-46_ses-02_task-hybrid_acq-video_run-02 |
| sub-47 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-graz_run-01 |
| sub-47 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-graz_run-02 |
| sub-47 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-47 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 418.0 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-47 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 454.8 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-47 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.9 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-47 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-video_run-01 |
| sub-47 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-47_ses-01_task-hybrid_acq-video_run-02 |
| sub-47 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-graz_run-01 |
| sub-47 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 421.1 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-graz_run-02 |
| sub-47 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-47 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-47 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 434.5 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-47 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-47 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.4 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-video_run-01 |
| sub-47 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-47_ses-02_task-hybrid_acq-video_run-02 |
| sub-48 | ses-01 | hybrid | graz | 1 | 39 | 1000.0 | 69 | 415.7 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-graz_run-01 |
| sub-48 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 422.4 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-graz_run-02 |
| sub-48 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-48 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-48 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-48 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.9 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-48 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 420.4 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-video_run-01 |
| sub-48 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 416.6 | 50 | nose | left_hand,right_hand | sub-48_ses-01_task-hybrid_acq-video_run-02 |
| sub-48 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 442.2 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-graz_run-01 |
| sub-48 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-graz_run-02 |
| sub-48 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 426.4 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-48 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-48 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 416.9 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-48 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 418.8 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-48 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 414.2 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-video_run-01 |
| sub-48 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.9 | 50 | nose | left_hand,right_hand | sub-48_ses-02_task-hybrid_acq-video_run-02 |
| sub-49 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 421.9 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-graz_run-01 |
| sub-49 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 420.6 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-graz_run-02 |
| sub-49 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-49 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-49 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.1 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-49 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-49 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-video_run-01 |
| sub-49 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-49_ses-01_task-hybrid_acq-video_run-02 |
| sub-49 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 436.8 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-graz_run-01 |
| sub-49 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 425.9 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-graz_run-02 |
| sub-49 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 429.8 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-49 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.1 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-49 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-49 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.9 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-49 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 417.0 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-video_run-01 |
| sub-49 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 422.6 | 50 | nose | left_hand,right_hand | sub-49_ses-02_task-hybrid_acq-video_run-02 |
| sub-50 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.3 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-graz_run-01 |
| sub-50 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 408.0 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-graz_run-02 |
| sub-50 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-50 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 407.8 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-50 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 428.8 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-50 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-50 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-video_run-01 |
| sub-50 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 408.1 | 50 | nose | left_hand,right_hand | sub-50_ses-01_task-hybrid_acq-video_run-02 |
| sub-50 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 419.5 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-graz_run-01 |
| sub-50 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.8 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-graz_run-02 |
| sub-50 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 419.3 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-50 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.7 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-50 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 447.6 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-50 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 417.6 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-50 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 437.3 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-video_run-01 |
| sub-50 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 430.8 | 50 | nose | left_hand,right_hand | sub-50_ses-02_task-hybrid_acq-video_run-02 |
| sub-51 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 404.6 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-graz_run-01 |
| sub-51 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 406.8 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-graz_run-02 |
| sub-51 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 410.2 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-51 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.2 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-51 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 414.0 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-51 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 407.2 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-51 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 408.9 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-video_run-01 |
| sub-51 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-51_ses-01_task-hybrid_acq-video_run-02 |
| sub-51 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.1 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-graz_run-01 |
| sub-51 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-graz_run-02 |
| sub-51 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 410.6 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-51 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.4 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-51 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 409.4 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-51 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 406.4 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-51 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 407.5 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-video_run-01 |
| sub-51 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 406.5 | 50 | nose | left_hand,right_hand | sub-51_ses-02_task-hybrid_acq-video_run-02 |
| sub-52 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 425.8 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-graz_run-01 |
| sub-52 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-graz_run-02 |
| sub-52 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.7 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-52 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.2 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-52 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 420.1 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-52 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.9 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-52 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 427.7 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-video_run-01 |
| sub-52 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 428.5 | 50 | nose | left_hand,right_hand | sub-52_ses-01_task-hybrid_acq-video_run-02 |
| sub-52 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 420.9 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-graz_run-01 |
| sub-52 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-graz_run-02 |
| sub-52 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.7 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-52 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.9 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-52 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 426.7 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-52 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 418.8 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-52 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-video_run-01 |
| sub-52 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-52_ses-02_task-hybrid_acq-video_run-02 |
| sub-53 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-graz_run-01 |
| sub-53 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 408.3 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-graz_run-02 |
| sub-53 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-53 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.8 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-53 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 408.9 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-53 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 408.4 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-53 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-video_run-01 |
| sub-53 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 408.5 | 50 | nose | left_hand,right_hand | sub-53_ses-01_task-hybrid_acq-video_run-02 |
| sub-53 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.9 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-graz_run-01 |
| sub-53 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.5 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-graz_run-02 |
| sub-53 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-53 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-53 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-53 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 419.5 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-53 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 422.0 |
50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-video_run-01 | | sub-53 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 409.9 | 50 | nose | left_hand,right_hand | sub-53_ses-02_task-hybrid_acq-video_run-02 | | sub-54 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 409.7 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-graz_run-01 | | sub-54 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-graz_run-02 | | sub-54 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-54 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-54 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 407.2 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-54 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 407.8 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-54 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-video_run-01 | | sub-54 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 407.6 | 50 | nose | left_hand,right_hand | sub-54_ses-01_task-hybrid_acq-video_run-02 | | sub-54 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-graz_run-01 | | sub-54 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 437.9 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-graz_run-02 | | sub-54 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-54 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 454.4 | 50 | nose | left_hand,right_hand | 
sub-54_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-54 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 453.9 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-54 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-54 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 435.9 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-video_run-01 | | sub-54 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-54_ses-02_task-hybrid_acq-video_run-02 | | sub-55 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.1 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-graz_run-01 | | sub-55 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-graz_run-02 | | sub-55 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-55 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-55 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 419.8 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-55 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 415.7 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-55 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-video_run-01 | | sub-55 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 420.4 | 50 | nose | left_hand,right_hand | sub-55_ses-01_task-hybrid_acq-video_run-02 | | sub-55 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.3 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-graz_run-01 | | sub-55 | 
ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 414.8 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-graz_run-02 | | sub-55 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 422.8 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-55 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 410.5 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-55 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 423.8 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-55 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-55 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.6 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-video_run-01 | | sub-55 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.9 | 50 | nose | left_hand,right_hand | sub-55_ses-02_task-hybrid_acq-video_run-02 | | sub-56 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-graz_run-01 | | sub-56 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 576.5 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-graz_run-02 | | sub-56 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 409.2 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-56 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 417.7 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-56 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-56 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-56 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.9 | 
50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-video_run-01 | | sub-56 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-56_ses-01_task-hybrid_acq-video_run-02 | | sub-56 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 422.3 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-graz_run-01 | | sub-56 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 431.0 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-graz_run-02 | | sub-56 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-56 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.5 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-56 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 419.2 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-56 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 419.8 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-56 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 432.9 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-video_run-01 | | sub-56 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 416.5 | 50 | nose | left_hand,right_hand | sub-56_ses-02_task-hybrid_acq-video_run-02 | | sub-57 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 413.0 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-graz_run-01 | | sub-57 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 409.1 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-graz_run-02 | | sub-57 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.2 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-57 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 409.2 | 50 | nose | left_hand,right_hand | 
sub-57_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-57 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 417.5 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-57 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.8 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-57 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.5 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-video_run-01 | | sub-57 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-57_ses-01_task-hybrid_acq-video_run-02 | | sub-57 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 409.7 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-graz_run-01 | | sub-57 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-graz_run-02 | | sub-57 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-57 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.7 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-57 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 415.7 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-57 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 409.5 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-57 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 408.8 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-video_run-01 | | sub-57 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-57_ses-02_task-hybrid_acq-video_run-02 | | sub-58 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 415.6 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-graz_run-01 | | sub-58 | 
ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 420.3 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-graz_run-02 | | sub-58 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 421.4 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-58 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.8 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-58 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 418.8 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-58 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.0 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-58 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 418.2 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-video_run-01 | | sub-58 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-58_ses-01_task-hybrid_acq-video_run-02 | | sub-58 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 415.7 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-graz_run-01 | | sub-58 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-graz_run-02 | | sub-58 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 418.3 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-58 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 415.3 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-58 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-58 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-58 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.3 | 
50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-video_run-01 | | sub-58 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-58_ses-02_task-hybrid_acq-video_run-02 | | sub-59 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 414.3 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-graz_run-01 | | sub-59 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.0 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-graz_run-02 | | sub-59 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 452.2 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-59 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-59 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 409.2 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-59 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-59 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.5 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-video_run-01 | | sub-59 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-59_ses-01_task-hybrid_acq-video_run-02 | | sub-59 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 416.6 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-graz_run-01 | | sub-59 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-graz_run-02 | | sub-59 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-59 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 415.9 | 50 | nose | left_hand,right_hand | 
sub-59_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-59 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 422.9 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-59 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.9 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-59 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.9 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-video_run-01 | | sub-59 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.6 | 50 | nose | left_hand,right_hand | sub-59_ses-02_task-hybrid_acq-video_run-02 | | sub-60 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-graz_run-01 | | sub-60 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-graz_run-02 | | sub-60 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-60 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 407.9 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-60 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-60 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 407.8 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-60 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-video_run-01 | | sub-60 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 415.9 | 50 | nose | left_hand,right_hand | sub-60_ses-01_task-hybrid_acq-video_run-02 | | sub-60 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 419.4 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-graz_run-01 | | sub-60 | 
ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-graz_run-02 | | sub-60 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-60 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-60 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 424.0 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-60 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.5 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-60 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-video_run-01 | | sub-60 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 417.2 | 50 | nose | left_hand,right_hand | sub-60_ses-02_task-hybrid_acq-video_run-02 | | sub-61 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-graz_run-01 | | sub-61 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-graz_run-02 | | sub-61 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-61 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 407.9 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-61 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-61 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 407.8 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-61 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.8 | 
50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-video_run-01 | | sub-61 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 415.9 | 50 | nose | left_hand,right_hand | sub-61_ses-01_task-hybrid_acq-video_run-02 | | sub-61 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 409.5 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-graz_run-01 | | sub-61 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 414.4 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-graz_run-02 | | sub-61 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 417.0 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-61 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 418.1 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-61 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 427.5 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-61 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 421.4 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-61 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-video_run-01 | | sub-61 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 418.2 | 50 | nose | left_hand,right_hand | sub-61_ses-02_task-hybrid_acq-video_run-02 | | sub-62 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-graz_run-01 | | sub-62 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 417.1 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-graz_run-02 | | sub-62 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 420.1 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-62 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 419.9 | 50 | nose | left_hand,right_hand | 
sub-62_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-62 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 424.7 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-62 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.5 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-62 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 453.1 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-video_run-01 | | sub-62 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-62_ses-01_task-hybrid_acq-video_run-02 | | sub-62 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-graz_run-01 | | sub-62 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 415.9 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-graz_run-02 | | sub-62 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.9 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-62 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-62 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-62 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 409.8 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-62 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-video_run-01 | | sub-62 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 409.9 | 50 | nose | left_hand,right_hand | sub-62_ses-02_task-hybrid_acq-video_run-02 | | sub-63 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-graz_run-01 | | sub-63 | 
ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-graz_run-02 | | sub-63 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-63 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.6 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-63 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-63 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 417.5 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-63 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.8 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-video_run-01 | | sub-63 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-63_ses-01_task-hybrid_acq-video_run-02 | | sub-63 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.1 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-graz_run-01 | | sub-63 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.2 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-graz_run-02 | | sub-63 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.1 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-63 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.3 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-63 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 428.0 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-63 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 417.1 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-63 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.4 | 
50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-video_run-01 | | sub-63 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 409.8 | 50 | nose | left_hand,right_hand | sub-63_ses-02_task-hybrid_acq-video_run-02 | | sub-64 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.9 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-graz_run-01 | | sub-64 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-graz_run-02 | | sub-64 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 415.2 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-64 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 430.8 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-64 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 425.1 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-64 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 415.1 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-64 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-video_run-01 | | sub-64 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-64_ses-01_task-hybrid_acq-video_run-02 | | sub-64 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-graz_run-01 | | sub-64 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.8 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-graz_run-02 | | sub-64 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-ssmvep_run-01 | | sub-64 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 414.1 | 50 | nose | left_hand,right_hand | 
sub-64_ses-02_task-hybrid_acq-ssmvep_run-02 | | sub-64 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 410.2 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-ssvideo_run-01 | | sub-64 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 408.2 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-ssvideo_run-02 | | sub-64 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.2 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-video_run-01 | | sub-64 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 418.3 | 50 | nose | left_hand,right_hand | sub-64_ses-02_task-hybrid_acq-video_run-02 | | sub-65 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 448.9 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-graz_run-01 | | sub-65 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 409.9 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-graz_run-02 | | sub-65 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 409.5 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-ssmvep_run-01 | | sub-65 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-ssmvep_run-02 | | sub-65 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 419.0 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-ssvideo_run-01 | | sub-65 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.0 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-ssvideo_run-02 | | sub-65 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-video_run-01 | | sub-65 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.2 | 50 | nose | left_hand,right_hand | sub-65_ses-01_task-hybrid_acq-video_run-02 | | sub-65 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 425.5 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-graz_run-01 | | sub-65 | 
ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-graz_run-02 |
| sub-65 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 424.8 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-65 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-65 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 414.1 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-65 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-65 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-video_run-01 |
| sub-65 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.5 | 50 | nose | left_hand,right_hand | sub-65_ses-02_task-hybrid_acq-video_run-02 |
| sub-66 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-graz_run-01 |
| sub-66 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-graz_run-02 |
| sub-66 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 414.4 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-66 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-66 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.5 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-66 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.7 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-66 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-video_run-01 |
| sub-66 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-66_ses-01_task-hybrid_acq-video_run-02 |
| sub-66 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.0 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-graz_run-01 |
| sub-66 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-graz_run-02 |
| sub-66 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-66 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-66 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-66 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 409.5 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-66 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 429.5 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-video_run-01 |
| sub-66 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-66_ses-02_task-hybrid_acq-video_run-02 |
| sub-67 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 417.2 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-graz_run-01 |
| sub-67 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 430.3 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-graz_run-02 |
| sub-67 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 428.6 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-67 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 421.2 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-67 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 418.6 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-67 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 416.7 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-67 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 424.6 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-video_run-01 |
| sub-67 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 417.1 | 50 | nose | left_hand,right_hand | sub-67_ses-01_task-hybrid_acq-video_run-02 |
| sub-67 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-graz_run-01 |
| sub-67 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 418.4 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-graz_run-02 |
| sub-67 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 464.8 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-67 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.1 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-67 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 411.7 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-67 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 419.7 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-67 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 409.5 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-video_run-01 |
| sub-67 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 434.3 | 50 | nose | left_hand,right_hand | sub-67_ses-02_task-hybrid_acq-video_run-02 |
| sub-68 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-graz_run-01 |
| sub-68 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-graz_run-02 |
| sub-68 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 417.7 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-68 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 419.8 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-68 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-68 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.7 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-68 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.3 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-video_run-01 |
| sub-68 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 419.9 | 50 | nose | left_hand,right_hand | sub-68_ses-01_task-hybrid_acq-video_run-02 |
| sub-68 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 415.1 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-graz_run-01 |
| sub-68 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.0 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-graz_run-02 |
| sub-68 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 423.9 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-68 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-68 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.5 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-68 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-68 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 417.0 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-video_run-01 |
| sub-68 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 416.3 | 50 | nose | left_hand,right_hand | sub-68_ses-02_task-hybrid_acq-video_run-02 |
| sub-69 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 421.4 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-graz_run-01 |
| sub-69 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-graz_run-02 |
| sub-69 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 416.6 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-69 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 412.0 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-69 | ses-01 | hybrid | ssvideo | 1 | 37 | 1000.0 | 69 | 384.8 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-69 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 410.7 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-69 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-video_run-01 |
| sub-69 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 410.2 | 50 | nose | left_hand,right_hand | sub-69_ses-01_task-hybrid_acq-video_run-02 |
| sub-69 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 416.3 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-graz_run-01 |
| sub-69 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 418.9 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-graz_run-02 |
| sub-69 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.3 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-69 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.0 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-69 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 410.8 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-69 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.8 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-69 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.1 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-video_run-01 |
| sub-69 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 408.4 | 50 | nose | left_hand,right_hand | sub-69_ses-02_task-hybrid_acq-video_run-02 |
| sub-70 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-graz_run-01 |
| sub-70 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 414.1 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-graz_run-02 |
| sub-70 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.6 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-70 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-70 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-70 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 411.8 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-70 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 418.4 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-video_run-01 |
| sub-70 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 414.3 | 50 | nose | left_hand,right_hand | sub-70_ses-01_task-hybrid_acq-video_run-02 |
| sub-70 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 415.4 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-graz_run-01 |
| sub-70 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.0 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-graz_run-02 |
| sub-70 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.4 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-70 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 419.4 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-70 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 415.5 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-70 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.2 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-70 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 414.5 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-video_run-01 |
| sub-70 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 415.6 | 50 | nose | left_hand,right_hand | sub-70_ses-02_task-hybrid_acq-video_run-02 |
| sub-71 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-graz_run-01 |
| sub-71 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-graz_run-02 |
| sub-71 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-71 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 414.2 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-71 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 407.3 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-71 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.9 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-71 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.3 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-video_run-01 |
| sub-71 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 408.5 | 50 | nose | left_hand,right_hand | sub-71_ses-01_task-hybrid_acq-video_run-02 |
| sub-71 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 427.9 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-graz_run-01 |
| sub-71 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-graz_run-02 |
| sub-71 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 413.3 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-71 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 413.1 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-71 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 432.3 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-71 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 413.5 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-71 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-video_run-01 |
| sub-71 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 435.5 | 50 | nose | left_hand,right_hand | sub-71_ses-02_task-hybrid_acq-video_run-02 |
| sub-72 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 408.7 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-graz_run-01 |
| sub-72 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 407.9 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-graz_run-02 |
| sub-72 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 412.8 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-72 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 410.9 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-72 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.1 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-72 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 409.6 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-72 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 415.1 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-video_run-01 |
| sub-72 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 413.8 | 50 | nose | left_hand,right_hand | sub-72_ses-01_task-hybrid_acq-video_run-02 |
| sub-72 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 412.2 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-graz_run-01 |
| sub-72 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-graz_run-02 |
| sub-72 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.6 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-72 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 408.6 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-72 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 431.8 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-72 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 408.9 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-72 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 410.5 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-video_run-01 |
| sub-72 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 408.8 | 50 | nose | left_hand,right_hand | sub-72_ses-02_task-hybrid_acq-video_run-02 |
| sub-73 | ses-01 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 426.5 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-graz_run-01 |
| sub-73 | ses-01 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 418.6 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-graz_run-02 |
| sub-73 | ses-01 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 411.2 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-ssmvep_run-01 |
| sub-73 | ses-01 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 416.4 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-ssmvep_run-02 |
| sub-73 | ses-01 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 412.4 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-ssvideo_run-01 |
| sub-73 | ses-01 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 414.6 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-ssvideo_run-02 |
| sub-73 | ses-01 | hybrid | video | 1 | 40 | 1000.0 | 69 | 411.0 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-video_run-01 |
| sub-73 | ses-01 | hybrid | video | 2 | 40 | 1000.0 | 69 | 422.7 | 50 | nose | left_hand,right_hand | sub-73_ses-01_task-hybrid_acq-video_run-02 |
| sub-73 | ses-02 | hybrid | graz | 1 | 40 | 1000.0 | 69 | 416.5 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-graz_run-01 |
| sub-73 | ses-02 | hybrid | graz | 2 | 40 | 1000.0 | 69 | 417.0 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-graz_run-02 |
| sub-73 | ses-02 | hybrid | ssmvep | 1 | 40 | 1000.0 | 69 | 408.9 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-ssmvep_run-01 |
| sub-73 | ses-02 | hybrid | ssmvep | 2 | 40 | 1000.0 | 69 | 415.8 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-ssmvep_run-02 |
| sub-73 | ses-02 | hybrid | ssvideo | 1 | 40 | 1000.0 | 69 | 417.8 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-ssvideo_run-01 |
| sub-73 | ses-02 | hybrid | ssvideo | 2 | 40 | 1000.0 | 69 | 421.8 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-ssvideo_run-02 |
| sub-73 | ses-02 | hybrid | video | 1 | 40 | 1000.0 | 69 | 417.3 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-video_run-01 |
| sub-73 | ses-02 | hybrid | video | 2 | 40 | 1000.0 | 69 | 419.4 | 50 | nose | left_hand,right_hand | sub-73_ses-02_task-hybrid_acq-video_run-02 |
| sub-74 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 328.44 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-74 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 337.64 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-74 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 372.0 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-74 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 344.68 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-74 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 335.08 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-74 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 339.8 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-74 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 387.68 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-74 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 332.2 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-74 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 363.8 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-74 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 407.12 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-video_run-01 |
| sub-74 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 356.68 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-video_run-02 |
| sub-74 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 394.16 | 50 | nose | left_hand,right_hand | sub-74_ses-01_task-hybridonline_acq-videoonline_run-01 |
| sub-74 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 332.76 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-graz_run-01 |
| sub-74 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 331.28 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-graz_run-02 |
| sub-74 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 343.52 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-74 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 329.08 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-ssmvep_run-01 |
| sub-74 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 338.12 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-ssmvep_run-02 |
| sub-74 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 349.32 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-74 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 329.48 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-ssvideo_run-01 |
| sub-74 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 382.08 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-ssvideo_run-02 |
| sub-74 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 347.84 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-74 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 415.28 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-video_run-01 |
| sub-74 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 378.4 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-video_run-02 |
| sub-74 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 396.64 | 50 | nose | left_hand,right_hand | sub-74_ses-02_task-hybridonline_acq-videoonline_run-01 |
| sub-75 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 350.96 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-75 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 328.92 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-75 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 331.72 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-75 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 343.44 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-75 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 339.68 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-75 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 369.68 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-75 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 333.96 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-75 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 335.8 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-75 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 525.52 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-75 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 407.92 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-video_run-01 |
| sub-75 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 389.48 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-video_run-02 |
| sub-75 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 545.92 | 50 | nose | left_hand,right_hand | sub-75_ses-01_task-hybridonline_acq-videoonline_run-01 |
| sub-75 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 329.28 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-graz_run-01 |
| sub-75 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.96 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-graz_run-02 |
| sub-75 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 345.96 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-75 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 330.08 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-ssmvep_run-01 |
| sub-75 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 332.0 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-ssmvep_run-02 |
| sub-75 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 374.76 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-75 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 352.96 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-ssvideo_run-01 |
| sub-75 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 327.0 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-ssvideo_run-02 |
| sub-75 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 333.48 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-75 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 398.56 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-video_run-01 |
| sub-75 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 389.12 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-video_run-02 |
| sub-75 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 401.32 | 50 | nose | left_hand,right_hand | sub-75_ses-02_task-hybridonline_acq-videoonline_run-01 |
| sub-76 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 335.0 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-76 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 335.84 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-76 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 395.4 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-76 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 335.2 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-76 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 329.48 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-76 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 353.4 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-76 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 356.16 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-76 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 329.28 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-76 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 656.56 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-76 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 425.68 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-video_run-01 |
| sub-76 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 378.44 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-video_run-02 |
| sub-76 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 418.84 | 50 | nose | left_hand,right_hand | sub-76_ses-01_task-hybridonline_acq-videoonline_run-01 |
| sub-76 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 333.32 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-graz_run-01 |
| sub-76 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.36 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-graz_run-02 |
| sub-76 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 473.8 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-76 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 340.36 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-ssmvep_run-01 |
| sub-76 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 338.48 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-ssmvep_run-02 |
| sub-76 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 334.92 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-76 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 337.36 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-ssvideo_run-01 |
| sub-76 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 330.24 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-ssvideo_run-02 |
| sub-76 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 332.68 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-76 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 391.44 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-video_run-01 |
| sub-76 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 410.16 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-video_run-02 |
| sub-76 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 393.0 | 50 | nose | left_hand,right_hand | sub-76_ses-02_task-hybridonline_acq-videoonline_run-01 |
| sub-77 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 338.0 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-77 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 336.84 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-77 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 411.04 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-77 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 330.56 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-77 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 338.68 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-77 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 332.6 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-77 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 331.28 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-77 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 330.36 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-77 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 341.28 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-77 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 560.52 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-video_run-01 |
| sub-77 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 358.8 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-video_run-02 |
| sub-77 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 425.4 | 50 | nose | left_hand,right_hand | sub-77_ses-01_task-hybridonline_acq-videoonline_run-01 |
| sub-77 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 386.28 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-graz_run-01 |
| sub-77 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 416.44 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-graz_run-02 |
| sub-77 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 415.76 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-77 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 343.68 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-ssmvep_run-01 |
| sub-77 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 378.88 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-ssmvep_run-02 |
| sub-77 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 361.12 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-77 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 474.92 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-ssvideo_run-01 |
| sub-77 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 329.0 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-ssvideo_run-02 |
| sub-77 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 396.04 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-77 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 440.32 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-video_run-01 |
| sub-77 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 370.44 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-video_run-02 |
| sub-77 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 443.6 | 50 | nose | left_hand,right_hand | sub-77_ses-02_task-hybridonline_acq-videoonline_run-01 |
| sub-78 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 338.72 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-78 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 330.76 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-78 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 328.24 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-78 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 328.44 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-78 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 328.2 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-78 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 328.44 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-78 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 334.2 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-78 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 328.32 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-78 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 327.44 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-78 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 386.72 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-video_run-01 |
| sub-78 | ses-01 | hybridonline | video | 2 | 39 | 1000.0 | 68 | 362.4 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-video_run-02 |
| sub-78 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 389.84 | 50 | nose | left_hand,right_hand | sub-78_ses-01_task-hybridonline_acq-videoonline_run-01 |
| sub-78 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 337.6 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-graz_run-01 |
| sub-78 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 330.32 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-graz_run-02 |
| sub-78 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 340.52 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-78 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 372.4 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-ssmvep_run-01 |
| sub-78 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 329.84 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-ssmvep_run-02 |
| sub-78 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 334.96 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-grazonline_run-01 |
| sub-78 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 328.36 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-ssvideo_run-01 |
| sub-78 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 330.36 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-ssvideo_run-02 |
| sub-78 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 344.36 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-78 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 394.96 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-video_run-01 |
| sub-78 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 529.64 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-video_run-02 |
| sub-78 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 418.8 | 50 | nose | left_hand,right_hand | sub-78_ses-02_task-hybridonline_acq-videoonline_run-01 |
| sub-79 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 330.56 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-graz_run-01 |
| sub-79 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 345.6 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-graz_run-02 |
| sub-79 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 402.96 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-79 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 352.68 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-ssmvep_run-01 |
| sub-79 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 328.44 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-ssmvep_run-02 |
| sub-79 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 332.4 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-grazonline_run-01 |
| sub-79 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 331.0 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-ssvideo_run-01 |
| sub-79 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 328.64 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-ssvideo_run-02 |
| sub-79 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 331.48 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-ssvideoonline_run-01 |
| sub-79 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 411.76 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-video_run-01 |
| sub-79 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 474.76 | 50 | nose | left_hand,right_hand | 
sub-79_ses-01_task-hybridonline_acq-video_run-02 | | sub-79 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 393.32 | 50 | nose | left_hand,right_hand | sub-79_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-79 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 64 | 335.68 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-graz_run-01 | | sub-79 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 64 | 345.0 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-graz_run-02 | | sub-79 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 64 | 332.04 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-79 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 64 | 330.88 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-79 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 64 | 332.16 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-79 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 64 | 332.2 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-79 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 64 | 347.0 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-79 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 64 | 337.04 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-79 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 64 | 345.68 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-79 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 64 | 408.2 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-video_run-01 | | sub-79 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 64 | 382.44 | 50 | nose | left_hand,right_hand | 
sub-79_ses-02_task-hybridonline_acq-video_run-02 | | sub-79 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 64 | 379.4 | 50 | nose | left_hand,right_hand | sub-79_ses-02_task-hybridonline_acq-videoonline_run-01 | | sub-80 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 328.08 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-graz_run-01 | | sub-80 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.52 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-graz_run-02 | | sub-80 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 342.92 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-80 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 336.6 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-ssmvep_run-01 | | sub-80 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 337.28 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-ssmvep_run-02 | | sub-80 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 364.12 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-80 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 328.44 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-ssvideo_run-01 | | sub-80 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 337.36 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-ssvideo_run-02 | | sub-80 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 355.2 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-ssvideoonline_run-01 | | sub-80 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 397.12 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-video_run-01 | | sub-80 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 366.32 | 50 | nose | left_hand,right_hand | 
sub-80_ses-01_task-hybridonline_acq-video_run-02 | | sub-80 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 401.52 | 50 | nose | left_hand,right_hand | sub-80_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-80 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 335.0 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-graz_run-01 | | sub-80 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 328.84 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-graz_run-02 | | sub-80 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 351.2 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-80 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 330.72 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-80 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 327.32 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-80 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 327.8 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-80 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 328.4 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-80 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 326.56 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-80 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 336.36 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-80 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 384.0 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-video_run-01 | | sub-80 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 356.8 | 50 | nose | left_hand,right_hand | 
sub-80_ses-02_task-hybridonline_acq-video_run-02 | | sub-80 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 379.92 | 50 | nose | left_hand,right_hand | sub-80_ses-02_task-hybridonline_acq-videoonline_run-01 | | sub-81 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 329.72 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-graz_run-01 | | sub-81 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 330.52 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-graz_run-02 | | sub-81 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 343.08 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-81 | ses-01 | hybridonline | ssmvep | 1 | 39 | 1000.0 | 68 | 329.44 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-ssmvep_run-01 | | sub-81 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 332.16 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-ssmvep_run-02 | | sub-81 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 332.08 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-81 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 338.48 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-ssvideo_run-01 | | sub-81 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 350.52 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-ssvideo_run-02 | | sub-81 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 330.4 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-ssvideoonline_run-01 | | sub-81 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 392.16 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-video_run-01 | | sub-81 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 365.92 | 50 | nose | left_hand,right_hand | 
sub-81_ses-01_task-hybridonline_acq-video_run-02 | | sub-81 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 400.36 | 50 | nose | left_hand,right_hand | sub-81_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-81 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 328.28 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-graz_run-01 | | sub-81 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.76 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-graz_run-02 | | sub-81 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 333.32 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-81 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 329.56 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-81 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 327.68 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-81 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 327.68 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-81 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 328.56 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-81 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 327.8 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-81 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 333.32 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-81 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 380.32 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-video_run-01 | | sub-81 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 348.56 | 50 | nose | left_hand,right_hand | 
sub-81_ses-02_task-hybridonline_acq-video_run-02 | | sub-81 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 374.36 | 50 | nose | left_hand,right_hand | sub-81_ses-02_task-hybridonline_acq-videoonline_run-01 | | sub-82 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 335.96 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-graz_run-01 | | sub-82 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 329.48 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-graz_run-02 | | sub-82 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 331.64 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-82 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 329.64 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-ssmvep_run-01 | | sub-82 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 327.48 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-ssmvep_run-02 | | sub-82 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 332.4 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-82 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 327.48 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-ssvideo_run-01 | | sub-82 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 327.4 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-ssvideo_run-02 | | sub-82 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 329.64 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-ssvideoonline_run-01 | | sub-82 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 395.76 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-video_run-01 | | sub-82 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 359.0 | 50 | nose | left_hand,right_hand | 
sub-82_ses-01_task-hybridonline_acq-video_run-02 | | sub-82 | ses-01 | hybridonline | videoonline | 1 | 39 | 1000.0 | 68 | 404.52 | 50 | nose | left_hand,right_hand | sub-82_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-82 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 340.96 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-graz_run-01 | | sub-82 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.56 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-graz_run-02 | | sub-82 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 329.0 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-82 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 334.24 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-82 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 327.84 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-82 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 340.6 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-82 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 327.56 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-82 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 327.4 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-82 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 331.12 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-82 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 378.56 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-video_run-01 | | sub-82 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 345.08 | 50 | nose | left_hand,right_hand | 
sub-82_ses-02_task-hybridonline_acq-video_run-02 | | sub-82 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 373.96 | 50 | nose | left_hand,right_hand | sub-82_ses-02_task-hybridonline_acq-videoonline_run-01 | | sub-83 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 333.16 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-graz_run-01 | | sub-83 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.6 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-graz_run-02 | | sub-83 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 328.96 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-83 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 328.96 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-ssmvep_run-01 | | sub-83 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 338.8 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-ssmvep_run-02 | | sub-83 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 333.04 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-83 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 330.72 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-ssvideo_run-01 | | sub-83 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 327.64 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-ssvideo_run-02 | | sub-83 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 337.36 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-ssvideoonline_run-01 | | sub-83 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 626.96 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-video_run-01 | | sub-83 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 360.92 | 50 | nose | left_hand,right_hand | 
sub-83_ses-01_task-hybridonline_acq-video_run-02 | | sub-83 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 392.4 | 50 | nose | left_hand,right_hand | sub-83_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-83 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 330.76 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-graz_run-01 | | sub-83 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 328.96 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-graz_run-02 | | sub-83 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 338.36 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-83 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 344.56 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-83 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 335.08 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-83 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 339.88 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-83 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 331.8 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-83 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 332.2 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-83 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 328.28 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-83 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 377.52 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-video_run-01 | | sub-83 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 352.32 | 50 | nose | left_hand,right_hand | 
sub-83_ses-02_task-hybridonline_acq-video_run-02 | | sub-83 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 384.96 | 50 | nose | left_hand,right_hand | sub-83_ses-02_task-hybridonline_acq-videoonline_run-01 | | sub-84 | ses-01 | hybridonline | graz | 1 | 40 | 1000.0 | 68 | 329.44 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-graz_run-01 | | sub-84 | ses-01 | hybridonline | graz | 2 | 40 | 1000.0 | 68 | 327.44 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-graz_run-02 | | sub-84 | ses-01 | hybridonline | grazonline | 1 | 39 | 1000.0 | 68 | 381.44 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-84 | ses-01 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 68 | 338.24 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-ssmvep_run-01 | | sub-84 | ses-01 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 68 | 328.4 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-ssmvep_run-02 | | sub-84 | ses-01 | hybridonline | grazonline | 1 | 40 | 1000.0 | 68 | 352.44 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-grazonline_run-01 | | sub-84 | ses-01 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 68 | 333.92 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-ssvideo_run-01 | | sub-84 | ses-01 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 68 | 328.32 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-ssvideo_run-02 | | sub-84 | ses-01 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 68 | 327.4 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-ssvideoonline_run-01 | | sub-84 | ses-01 | hybridonline | video | 1 | 40 | 1000.0 | 68 | 398.28 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-video_run-01 | | sub-84 | ses-01 | hybridonline | video | 2 | 40 | 1000.0 | 68 | 352.08 | 50 | nose | left_hand,right_hand | 
sub-84_ses-01_task-hybridonline_acq-video_run-02 | | sub-84 | ses-01 | hybridonline | videoonline | 1 | 40 | 1000.0 | 68 | 390.76 | 50 | nose | left_hand,right_hand | sub-84_ses-01_task-hybridonline_acq-videoonline_run-01 | | sub-84 | ses-02 | hybridonline | graz | 1 | 40 | 1000.0 | 64 | 332.36 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-graz_run-01 | | sub-84 | ses-02 | hybridonline | graz | 2 | 40 | 1000.0 | 64 | 334.4 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-graz_run-02 | | sub-84 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 64 | 328.48 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-84 | ses-02 | hybridonline | ssmvep | 1 | 40 | 1000.0 | 64 | 335.6 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-ssmvep_run-01 | | sub-84 | ses-02 | hybridonline | ssmvep | 2 | 40 | 1000.0 | 64 | 330.2 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-ssmvep_run-02 | | sub-84 | ses-02 | hybridonline | grazonline | 1 | 40 | 1000.0 | 64 | 332.72 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-grazonline_run-01 | | sub-84 | ses-02 | hybridonline | ssvideo | 1 | 40 | 1000.0 | 64 | 331.88 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-ssvideo_run-01 | | sub-84 | ses-02 | hybridonline | ssvideo | 2 | 40 | 1000.0 | 64 | 339.12 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-ssvideo_run-02 | | sub-84 | ses-02 | hybridonline | ssvideoonline | 1 | 40 | 1000.0 | 64 | 514.16 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-ssvideoonline_run-01 | | sub-84 | ses-02 | hybridonline | video | 1 | 40 | 1000.0 | 64 | 379.16 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-video_run-01 | | sub-84 | ses-02 | hybridonline | video | 2 | 40 | 1000.0 | 64 | 354.64 | 50 | nose | left_hand,right_hand | 
sub-84_ses-02_task-hybridonline_acq-video_run-02 | | sub-84 | ses-02 | hybridonline | videoonline | 1 | 40 | 1000.0 | 64 | 345.76 | 50 | nose | left_hand,right_hand | sub-84_ses-02_task-hybridonline_acq-videoonline_run-01 |
```

**Notes**

- Reference electrode: **nose**

## Dataset Information

| Dataset ID | `DS007221` |
|----------------|------------|
| Title | Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset |
| Author (year) | `Xinwei2026` |
| Canonical | — |
| Importable as | `DS007221`, `Xinwei2026` |
| Year | 2026 |
| Authors | Sun Xinwei, Wang Kun, Pan Lincong, Cao Yupei, Meng Lin |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007221.v1.0.1](https://doi.org/10.18112/openneuro.ds007221.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007221) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007221) \| [Source URL](https://openneuro.org/datasets/ds007221) |

### Copy-paste BibTeX

```bibtex
@dataset{ds007221,
  title = {Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset},
  author = {Sun Xinwei and Wang Kun and Pan Lincong and Cao Yupei and Meng Lin},
  doi = {10.18112/openneuro.ds007221.v1.0.1},
  url = {https://doi.org/10.18112/openneuro.ds007221.v1.0.1},
}
```

## Technical Details

- Subjects: 84
- Recordings: 1265
- Tasks: 4
- Channels: 69 (1023 recordings), 68 (220), 64 (22)
- Sampling rate (Hz): 1000.0
- Duration (hours): 135.31
- Pathology: Healthy
- Modality: Visual
- Type: Motor
- Size on disk: 124.8 GB
- File count: 1265
- Format: BIDS
- License: CC0
- DOI: doi:10.18112/openneuro.ds007221.v1.0.1
- Source: openneuro
- OpenNeuro: [ds007221](https://openneuro.org/datasets/ds007221)
- NeMAR: [ds007221](https://nemar.org/dataexplorer/detail?dataset_id=ds007221)

## API Reference

Use the `DS007221` class to access this dataset
programmatically.

### *class* eegdash.dataset.DS007221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset

* **Study:** `ds007221` (OpenNeuro)
* **Author (year):** `Xinwei2026`
* **Canonical:** —

Also importable as: `DS007221`, `Xinwei2026`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 84; recordings: 1265; tasks: 4.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds007221](https://openneuro.org/datasets/ds007221)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007221](https://nemar.org/dataexplorer/detail?dataset_id=ds007221)

DOI: [https://doi.org/10.18112/openneuro.ds007221.v1.0.1](https://doi.org/10.18112/openneuro.ds007221.v1.0.1)

### Examples

```pycon
>>> from eegdash.dataset import DS007221
>>> dataset = DS007221(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ds007221)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007221)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# DS007262: EEG dataset, 18 subjects

*Cognitive Workload 8-level arithmetic*

Access recordings and metadata through EEGDash.

**Citation:** Matthew Barras, Liam Booth (2026). *Cognitive Workload 8-level arithmetic*.
[10.18112/openneuro.ds007262.v1.0.6](https://doi.org/10.18112/openneuro.ds007262.v1.0.6)

- Modality: eeg
- Subjects: 18
- Recordings: 18
- License: CC0
- Source: openneuro
- Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import DS007262

dataset = DS007262(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = DS007262(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = DS007262(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ds007262,
  title = {Cognitive Workload 8-level arithmetic},
  author = {Matthew Barras and Liam Booth},
  doi = {10.18112/openneuro.ds007262.v1.0.6},
  url = {https://doi.org/10.18112/openneuro.ds007262.v1.0.6},
}
```

## About This Dataset

This dataset was generated from LSL/XDF recordings and converted to BIDS with the instructions and code [presented here](https://github.com/LMBooth/QT-arithmetic_study/tree/main/conversion_package).

- Original recordings are stored under sourcedata/xdf/ as .xdf files (non-BIDS).
- EEG was converted to BrainVision format (.vhdr/.eeg/.vmrk) under each sub-\*/eeg/.
- \*_events.tsv was generated from marker streams and then aligned so onset is relative to the EEG start time.
- Marker streams include task markers (arithmetic-Markers) and acquisition dropout annotations (UoHDataOffsetStream); events include a marker_stream column and marker definitions are in task-arithmetic_events.json.
- Pupil Labs gaze/pupil data was exported from the XDF pupil_capture stream into sub-\*/eeg/ as eyetrack physio files (\*_recording-eyetrack_physio.tsv.gz + \*_recording-eyetrack_physio.json; PhysioType=eyetrack). - ECG is captured on the EEG system; the ECG channel is typed in \*_channels.tsv and exported as \*_recording-ecg_physio.tsv.gz + \*_recording-ecg_physio.json. - ML analysis note: participants excluded from the ML analysis remain in participants.tsv with analysis_included=false; no epoch rejection was applied to this raw dataset. - Participant IDs match the original XDF filenames; missing IDs correspond to excluded participants.
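The physio exports described above follow the BIDS physiological-recording convention: a headerless, gzipped TSV whose column names live in the accompanying JSON sidecar. As a minimal, standard-library sketch — the sidecar fields and sample values below are illustrative, not taken from this dataset — such a pair can be parsed like this:

```python
import csv
import gzip
import io
import json

# Hypothetical sidecar content: BIDS physio .json files carry the column
# names for the otherwise headerless .tsv.gz; exact fields are illustrative.
sidecar = json.loads('{"Columns": ["timestamp", "pupil_diameter"], "PhysioType": "eyetrack"}')

# Synthetic stand-in for a *_recording-eyetrack_physio.tsv.gz payload.
payload = gzip.compress(b"0.000\t3.1\n0.004\t3.2\n")

def read_physio(gz_bytes: bytes, columns: list[str]) -> dict[str, list[float]]:
    """Parse a gzipped, headerless TSV into a {column: values} mapping."""
    text = gzip.decompress(gz_bytes).decode("utf-8")
    rows = csv.reader(io.StringIO(text), delimiter="\t")
    out: dict[str, list[float]] = {c: [] for c in columns}
    for row in rows:
        for col, val in zip(columns, row):
            out[col].append(float(val))
    return out

physio = read_physio(payload, sidecar["Columns"])
print(physio["pupil_diameter"])  # [3.1, 3.2]
```

In practice the bytes would come from opening the `*_physio.tsv.gz` file on disk, and the `Columns` list from the matching `*_physio.json` sidecar.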
Participants - N_recorded: 20 - N_released: 18 - Exclusions: 2 participants excluded due to multi-modal acquisition failures (sub-002, sub-017). - Demographics in participants.tsv: age (years), sex, handedness. - Excluded IDs remain in participants.tsv with analysis_included=false. Hardware and data collection - Combined EEG+ECG mobile EEG system (Bateson and Asghar, 2021; Clewett et al., 2016) and Pupil Labs Pupil Core, synchronized via Lab Streaming Layer (LSL). - EEG: 19-channel 10-20 montage (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2), Ag/AgCl electrodes with linked-ear reference, 250 Hz; impedances checked and Neurgel EEG gel applied. - ECG: 3-lead on the same system; positive lead right shoulder/clavicle, negative lead left shoulder/clavicle, feedback lead lower left torso. - Pupillometry: Pupil Labs Pupil Core eye tracking with infrared illuminators; LSL relay with asynchronous sampling (timestamps per sample). Protocol summary - Arithmetic task difficulty was defined using Q-value ranges and randomized order across trials. - Task events encode difficulty in `trial_type` and `difficulty_range` (e.g., baseline, 0.6-1.5, 1.5-2.4, …, 6.0-6.9). - A 60-second baseline was followed by 70 questions: 10 at each difficulty level, each presented for 6 seconds. Task: arithmetic Release notes - Recorded 20 participants; released 18. - Reason: multi-modal acquisition QC failure. - Participant IDs match original XDF filenames; missing IDs indicate excluded participants. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K.
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 Clewett, C. J., Langley, P., Bateson, A. D., et al. (2016). Non-invasive, home-based electroencephalography hypoglycaemia warning system for personal monitoring using skin surface electrodes: a single-case feasibility study. Healthcare Technology Letters, 3, 2-5. https://doi.org/10.1049/htl.2015.0037 Bateson, A. D., Asghar, A. U. R. (2021). Development and evaluation of a smartphone-based electroencephalography (EEG) system. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3079992 ## Dataset Information

| Dataset ID | `DS007262` |
|----------------|----------------|
| Title | Cognitive Workload 8-level arithmetic |
| Author (year) | `Barras2026_Cognitive` |
| Canonical | `Barras2025` |
| Importable as | `DS007262`, `Barras2026_Cognitive`, `Barras2025` |
| Year | 2026 |
| Authors | Matthew Barras, Liam Booth |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007262.v1.0.6](https://doi.org/10.18112/openneuro.ds007262.v1.0.6) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007262) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) \| [Source URL](https://openneuro.org/datasets/ds007262) |

### Copy-paste BibTeX ```bibtex @dataset{ds007262, title = {Cognitive Workload 8-level arithmetic}, author = {Matthew Barras and Liam Booth}, doi = {10.18112/openneuro.ds007262.v1.0.6}, url = {https://doi.org/10.18112/openneuro.ds007262.v1.0.6}, } ``` ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 24 - Sampling rate (Hz): 250.0 - Duration (hours): 4.58342 - Pathology: Healthy -
Modality: — - Type: Attention - Size on disk: 378.9 MB - File count: 18 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007262.v1.0.6 - Source: openneuro - OpenNeuro: [ds007262](https://openneuro.org/datasets/ds007262) - NeMAR: [ds007262](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) ## API Reference Use the `DS007262` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Workload 8-level arithmetic * **Study:** `ds007262` (OpenNeuro) * **Author (year):** `Barras2026_Cognitive` * **Canonical:** `Barras2025` Also importable as: `DS007262`, `Barras2026_Cognitive`, `Barras2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007262](https://openneuro.org/datasets/ds007262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007262](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) DOI: [https://doi.org/10.18112/openneuro.ds007262.v1.0.6](https://doi.org/10.18112/openneuro.ds007262.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS007262 >>> dataset = DS007262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007262) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007314: eeg dataset, 2 subjects *tACS for Patients with Post-Stroke Anomia* Access recordings and metadata through EEGDash. **Citation:** Maria Martzoukou, Nefeli K. Dimitriou, Binbin Xu, Malo Renaud-d’Ambra, Anastasia Nousia, Alexandre Aksenov, Anne Beuter, Grigorios Nasios (2026). *tACS for Patients with Post-Stroke Anomia*. 
[10.18112/openneuro.ds007314.v1.0.0](https://doi.org/10.18112/openneuro.ds007314.v1.0.0) Modality: eeg Subjects: 2 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007314 dataset = DS007314(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007314(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007314( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007314, title = {tACS for Patients with Post-Stroke Anomia}, author = {Maria Martzoukou and Nefeli K. Dimitriou and Binbin Xu and Malo Renaud-d’Ambra and Anastasia Nousia and Alexandre Aksenov and Anne Beuter and Grigorios Nasios}, doi = {10.18112/openneuro.ds007314.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007314.v1.0.0}, } ``` ## About This Dataset *Title:* Transcranial Alternating Current Stimulation (tACS) for Patients with Post-Stroke Anomia: Preliminary Data on Picture Naming Performance **Dataset Description:** This dataset includes EEG recordings from two post-stroke patients with chronic anomia who participated in an 8-week individualized neuromodulation intervention using transcranial alternating current stimulation (tACS). The intervention alternated between stimulation and non-stimulation phases every two weeks and was designed to enhance naming abilities via cortical entrainment, guided by individual EEG profiles.
*Data Overview:* - *Participants:* 2 individuals with post-stroke anomia (1 left-hemisphere lesion, 1 right-hemisphere lesion) - *Sessions:* EEG recorded every two weeks during the intervention (W1, W2, W4, W6, W8), and at follow-ups (W12, W20) - *Stimulation:* tACS applied during alternating weeks; frequency and montage were personalized based on initial EEG - *Tasks:* Picture naming task using a standardized set of stimuli; EEG recorded during task execution - *Modality:* EEG (recorded using Starstim-32), processed in EEGLAB and prepared for BIDS **Experimental Design:** A single-case experimental design (SCED, ABAB type) was employed. Behavioral and EEG data were collected across 24 naming sessions and 6 EEG recording sessions per participant. The data include tACS and no-tACS conditions. **Purpose:** To investigate whether tACS improves naming accuracy and latency in chronic aphasia and whether those effects are sustained after the intervention. *Data Notes:* - EEG recordings are organized in BIDS format, with sessions labeled by week (e.g., `week-01`, `week-12`) - Session and run numbers reflect weeks of intervention **Ethics:** All participants provided written informed consent. The study was approved by the Ethics Committee of the Medical School of Ioannina (approval nr. 49625) and conducted in accordance with the Declaration of Helsinki.
**Contact:** For questions about the dataset, contact Maria Martzoukou (<[m.martzoukou@uoi.gr](mailto:m.martzoukou@uoi.gr)>) ## Dataset Information

| Dataset ID | `DS007314` |
|----------------|----------------|
| Title | tACS for Patients with Post-Stroke Anomia |
| Author (year) | `Martzoukou2026_tACS` |
| Canonical | `Martzoukou2024_Post` |
| Importable as | `DS007314`, `Martzoukou2026_tACS`, `Martzoukou2024_Post` |
| Year | 2026 |
| Authors | Maria Martzoukou, Nefeli K. Dimitriou, Binbin Xu, Malo Renaud-d’Ambra, Anastasia Nousia, Alexandre Aksenov, Anne Beuter, Grigorios Nasios |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007314.v1.0.0](https://doi.org/10.18112/openneuro.ds007314.v1.0.0) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007314) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) \| [Source URL](https://openneuro.org/datasets/ds007314) |

### Copy-paste BibTeX ```bibtex @dataset{ds007314, title = {tACS for Patients with Post-Stroke Anomia}, author = {Maria Martzoukou and Nefeli K.
Dimitriou and Binbin Xu and Malo Renaud-d’Ambra and Anastasia Nousia and Alexandre Aksenov and Anne Beuter and Grigorios Nasios}, doi = {10.18112/openneuro.ds007314.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007314.v1.0.0}, } ``` ## Technical Details - Subjects: 2 - Recordings: 14 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 4.888012222222223 - Pathology: Other - Modality: Visual - Type: Clinical/Intervention - Size on disk: 1.1 GB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007314.v1.0.0 - Source: openneuro - OpenNeuro: [ds007314](https://openneuro.org/datasets/ds007314) - NeMAR: [ds007314](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) ## API Reference Use the `DS007314` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007314(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007314` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS` * **Canonical:** `Martzoukou2024_Post` Also importable as: `DS007314`, `Martzoukou2026_tACS`, `Martzoukou2024_Post`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007314](https://openneuro.org/datasets/ds007314) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007314](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) DOI: [https://doi.org/10.18112/openneuro.ds007314.v1.0.0](https://doi.org/10.18112/openneuro.ds007314.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007314 >>> dataset = DS007314(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007314) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007315: eeg dataset, 2 subjects *tACS for Patients with Post-Stroke Anomia* Access recordings and metadata through EEGDash. **Citation:** Maria Martzoukou, Nefeli K. 
Dimitriou, Binbin Xu, Malo Renaud-d’Ambra, Anastasia Nousia, Alexandre Aksenov, Anne Beuter, Grigorios Nasios (2026). *tACS for Patients with Post-Stroke Anomia*. [10.18112/openneuro.ds007315.v1.0.1](https://doi.org/10.18112/openneuro.ds007315.v1.0.1) Modality: eeg Subjects: 2 Recordings: 14 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007315 dataset = DS007315(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007315(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007315( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007315, title = {tACS for Patients with Post-Stroke Anomia}, author = {Maria Martzoukou and Nefeli K. Dimitriou and Binbin Xu and Malo Renaud-d’Ambra and Anastasia Nousia and Alexandre Aksenov and Anne Beuter and Grigorios Nasios}, doi = {10.18112/openneuro.ds007315.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007315.v1.0.1}, } ``` ## About This Dataset *Title:* Transcranial Alternating Current Stimulation (tACS) for Patients with Post-Stroke Anomia: Preliminary Data on Picture Naming Performance **Dataset Description:** This dataset includes EEG recordings from two post-stroke patients with chronic anomia who participated in an 8-week individualized neuromodulation intervention using transcranial alternating current stimulation (tACS).
The intervention alternated between stimulation and non-stimulation phases every two weeks and was designed to enhance naming abilities via cortical entrainment, guided by individual EEG profiles. *Data Overview:* - *Participants:* 2 individuals with post-stroke anomia (1 left-hemisphere lesion, 1 right-hemisphere lesion) - *Sessions:* EEG recorded every two weeks during the intervention (W1, W2, W4, W6, W8), and at follow-ups (W12, W20) - *Stimulation:* tACS applied during alternating weeks; frequency and montage were personalized based on initial EEG - *Tasks:* Picture naming task using a standardized set of stimuli; EEG recorded during task execution - *Modality:* EEG (recorded using Starstim-32), processed in EEGLAB and prepared for BIDS **Experimental Design:** A single-case experimental design (SCED, ABAB type) was employed. Behavioral and EEG data were collected across 24 naming sessions and 6 EEG recording sessions per participant. The data include tACS and no-tACS conditions. **Purpose:** To investigate whether tACS improves naming accuracy and latency in chronic aphasia and whether those effects are sustained after the intervention. *Data Notes:* - EEG recordings are organized in BIDS format, with sessions labeled by week (e.g., `week-01`, `week-12`) - Session and run numbers reflect weeks of intervention **Ethics:** All participants provided written informed consent. The study was approved by the Ethics Committee of the Medical School of Ioannina (approval nr. 49625) and conducted in accordance with the Declaration of Helsinki.
**Contact:** For questions about the dataset, contact Maria Martzoukou (<[m.martzoukou@uoi.gr](mailto:m.martzoukou@uoi.gr)>) ## Dataset Information

| Dataset ID | `DS007315` |
|----------------|----------------|
| Title | tACS for Patients with Post-Stroke Anomia |
| Author (year) | `Martzoukou2026_tACS_Patients` |
| Canonical | `Martzoukou2024_Post_A` |
| Importable as | `DS007315`, `Martzoukou2026_tACS_Patients`, `Martzoukou2024_Post_A` |
| Year | 2026 |
| Authors | Maria Martzoukou, Nefeli K. Dimitriou, Binbin Xu, Malo Renaud-d’Ambra, Anastasia Nousia, Alexandre Aksenov, Anne Beuter, Grigorios Nasios |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007315.v1.0.1](https://doi.org/10.18112/openneuro.ds007315.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007315) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) \| [Source URL](https://openneuro.org/datasets/ds007315) |

### Copy-paste BibTeX ```bibtex @dataset{ds007315, title = {tACS for Patients with Post-Stroke Anomia}, author = {Maria Martzoukou and Nefeli K.
Dimitriou and Binbin Xu and Malo Renaud-d’Ambra and Anastasia Nousia and Alexandre Aksenov and Anne Beuter and Grigorios Nasios}, doi = {10.18112/openneuro.ds007315.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007315.v1.0.1}, } ``` ## Technical Details - Subjects: 2 - Recordings: 14 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 4.888012222222223 - Pathology: Other - Modality: Visual - Type: Clinical/Intervention - Size on disk: 1.1 GB - File count: 14 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007315.v1.0.1 - Source: openneuro - OpenNeuro: [ds007315](https://openneuro.org/datasets/ds007315) - NeMAR: [ds007315](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) ## API Reference Use the `DS007315` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007315` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS_Patients` * **Canonical:** `Martzoukou2024_Post_A` Also importable as: `DS007315`, `Martzoukou2026_tACS_Patients`, `Martzoukou2024_Post_A`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007315](https://openneuro.org/datasets/ds007315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007315](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) DOI: [https://doi.org/10.18112/openneuro.ds007315.v1.0.1](https://doi.org/10.18112/openneuro.ds007315.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007315 >>> dataset = DS007315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007315) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007322: eeg dataset, 57 subjects *Personalized smartphone notifications bias auditory salience across processing stages* Access recordings and metadata through EEGDash. 
**Citation:** Prakash Mishra, Tapan K Gandhi, Saurabh R. Gandhi (2026). *Personalized smartphone notifications bias auditory salience across processing stages*. [10.18112/openneuro.ds007322.v1.0.1](https://doi.org/10.18112/openneuro.ds007322.v1.0.1) Modality: eeg Subjects: 57 Recordings: 57 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007322 dataset = DS007322(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007322(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007322( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007322, title = {Personalized smartphone notifications bias auditory salience across processing stages}, author = {Prakash Mishra and Tapan K Gandhi and Saurabh R. 
Gandhi}, doi = {10.18112/openneuro.ds007322.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007322.v1.0.1}, } ``` ## About This Dataset An auditory oddball experiment investigating how personalized smartphone notifications bias auditory salience across processing stages. ## Dataset Information

| Dataset ID | `DS007322` |
|----------------|----------------|
| Title | Personalized smartphone notifications bias auditory salience across processing stages |
| Author (year) | `Mishra2026` |
| Canonical | `Mishra2024` |
| Importable as | `DS007322`, `Mishra2026`, `Mishra2024` |
| Year | 2026 |
| Authors | Prakash Mishra, Tapan K Gandhi, Saurabh R. Gandhi |
| License | CC0 |
| Citation / DOI | [doi:10.18112/openneuro.ds007322.v1.0.1](https://doi.org/10.18112/openneuro.ds007322.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ds007322) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) \| [Source URL](https://openneuro.org/datasets/ds007322) |

### Copy-paste BibTeX ```bibtex @dataset{ds007322, title = {Personalized smartphone notifications bias auditory salience across processing stages}, author = {Prakash Mishra and Tapan K Gandhi and Saurabh R.
Gandhi}, doi = {10.18112/openneuro.ds007322.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007322.v1.0.1}, } ``` ## Technical Details - Subjects: 57 - Recordings: 57 - Tasks: 1 - Channels: 64 (31), 66 (26) - Sampling rate (Hz): 1000.0 - Duration (hours): 48.70112305555556 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 42.5 GB - File count: 57 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007322.v1.0.1 - Source: openneuro - OpenNeuro: [ds007322](https://openneuro.org/datasets/ds007322) - NeMAR: [ds007322](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) ## API Reference Use the `DS007322` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007322(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Personalized smartphone notifications bias auditory salience across processing stages * **Study:** `ds007322` (OpenNeuro) * **Author (year):** `Mishra2026` * **Canonical:** `Mishra2024` Also importable as: `DS007322`, `Mishra2026`, `Mishra2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007322](https://openneuro.org/datasets/ds007322) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007322](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) DOI: [https://doi.org/10.18112/openneuro.ds007322.v1.0.1](https://doi.org/10.18112/openneuro.ds007322.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007322 >>> dataset = DS007322(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007322) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007338: eeg dataset, 1 subject *EEGEyeNet Dataset* Access recordings and metadata through EEGDash. **Citation:** Martyna Beata Płomecka, Ard Kastrati, Nicolas Langer (2026). *EEGEyeNet Dataset*.
[10.18112/openneuro.ds007338.v1.0.0](https://doi.org/10.18112/openneuro.ds007338.v1.0.0) Modality: eeg Subjects: 1 Recordings: 1 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007338 dataset = DS007338(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007338(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007338( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007338, title = {EEGEyeNet Dataset}, author = {Martyna Beata Płomecka and Ard Kastrati and Nicolas Langer}, doi = {10.18112/openneuro.ds007338.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007338.v1.0.0}, } ``` ## About This Dataset This is a BIDS standardized version of simultaneously collected EEG and eye-tracking data, taken from one subject from the [EEGEYENET](https://osf.io/ktv7m/) dataset. Acknowledgements go to Martyna Beata Płomecka, Ard Kastrati, and Nicolas Langer who designed the study, collected the data, and published the dataset to Open Science Framework. For access to the full dataset, please refer to the dataset DOI. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Pernet, C. 
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS007338` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEGEyeNet Dataset | | Author (year) | `Plomecka2026` | | Canonical | `EEGEyeNet_v2`, `EEGEYENET` | | Importable as | `DS007338`, `Plomecka2026`, `EEGEyeNet_v2`, `EEGEYENET` | | Year | 2026 | | Authors | Martyna Beata Płomecka, Ard Kastrati, Nicolas Langer | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007338.v1.0.0](https://doi.org/10.18112/openneuro.ds007338.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007338) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) | [Source URL](https://openneuro.org/datasets/ds007338) | ### Copy-paste BibTeX ```bibtex @dataset{ds007338, title = {EEGEyeNet Dataset}, author = {Martyna Beata Płomecka and Ard Kastrati and Nicolas Langer}, doi = {10.18112/openneuro.ds007338.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007338.v1.0.0}, } ``` ## Technical Details - Subjects: 1 - Recordings: 1 - Tasks: 1 - Channels: 129 - Sampling rate (Hz): 500.0 - Duration (hours): 0.0898511111111111 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 39.9 MB - File count: 1 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007338.v1.0.0 - Source: openneuro - OpenNeuro: [ds007338](https://openneuro.org/datasets/ds007338) - NeMAR: [ds007338](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) ## API Reference Use the `DS007338` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds007338` (OpenNeuro) * **Author (year):** `Plomecka2026` * **Canonical:** `EEGEyeNet_v2`, `EEGEYENET` Also importable as: `DS007338`, `Plomecka2026`, `EEGEyeNet_v2`, `EEGEYENET`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
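Because `query` takes a plain MongoDB-style dictionary, a filter can be assembled and sanity-checked before the dataset is constructed. The sketch below is illustrative only: it mimics the documented constraint that the user query must not contain the `dataset` key, and shows how such a filter would be ANDed with the dataset selection (the class performs this merge internally; no EEGDash import is needed for the sketch itself).

```python
# Illustrative sketch of the documented query rules; EEGDashDataset
# performs this merge internally when you pass `query=`.
user_query = {"subject": {"$in": ["01", "02"]}}

# The constructor rejects user filters that try to set `dataset` ...
if "dataset" in user_query:
    raise ValueError("`query` must not contain the key 'dataset'")

# ... and ANDs its own dataset selection with the remaining filters.
merged = {**user_query, "dataset": "ds007338"}
print(merged)
# {'subject': {'$in': ['01', '02']}, 'dataset': 'ds007338'}
```

In practice you would pass only `user_query` to `DS007338(cache_dir="./data", query=user_query)`; the merged form is shown here just to make the documented behavior concrete.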
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007338](https://openneuro.org/datasets/ds007338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007338](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) DOI: [https://doi.org/10.18112/openneuro.ds007338.v1.0.0](https://doi.org/10.18112/openneuro.ds007338.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007338 >>> dataset = DS007338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007338) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007347: eeg dataset, 5 subjects *Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain* Access recordings and metadata through EEGDash. **Citation:** W. Jeffrey Elias, Chang-Chia Liu, Divine Nwafor, Patrick H. Finan, Mark Quigg, Shayan Moosa (2026). *Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain*. 
[10.18112/openneuro.ds007347.v1.0.0](https://doi.org/10.18112/openneuro.ds007347.v1.0.0) Modality: eeg Subjects: 5 Recordings: 10 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007347 dataset = DS007347(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007347(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007347( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007347, title = {Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain}, author = {W. Jeffrey Elias and Chang-Chia Liu and Divine Nwafor and Patrick H. Finan and Mark Quigg and Shayan Moosa}, doi = {10.18112/openneuro.ds007347.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007347.v1.0.0}, } ``` ## About This Dataset **README** **WARNING** Below is a template to write a README file for this BIDS dataset. If this message is still present, it means that the person exporting the file has decided not to update the template. If you are the researcher editing this README file, please remove this warning section. The README is usually the starting point for researchers using your data and serves as a guidepost for users of your data. A clear and informative README makes your data much more usable. In general you can include information in the README that is not captured by some other files in the BIDS dataset (dataset_description.json, events.tsv, …). 
It can also be useful to include information that might already be present in another file of the dataset but might be important for users to be aware of before preprocessing or analysing the data. If the README gets too long you have the possibility to create a `/doc` folder and add it to the `.bidsignore` file to make sure it is ignored by the BIDS validator. More info here: [https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3](https://neurostars.org/t/where-in-a-bids-dataset-should-i-put-notes-about-individual-mri-acqusitions/17315/3) **Details related to access to the data** - Data user agreement If the dataset requires a data user agreement, link to the relevant information. - Contact person Indicate the name and contact details (email and ORCID) of the person responsible for additional information. 
- Practical information to access the data If there is any special information related to access rights or how to download the data, make sure to include it. For example, if the dataset was curated using datalad, make sure to include the relevant section from the datalad handbook: http://handbook.datalad.org/en/latest/basics/101-180-FAQ.html#how-can-i-help-others-get-started-with-a-shared-dataset **Overview** - Project name (if relevant) - Year(s) that the project ran If no `scans.tsv` is included, this could at least cover when the data acquisition started and ended. Local time of day is particularly relevant to subject state. - Brief overview of the tasks in the experiment A paragraph giving an overview of the experiment. This should include the goals or purpose and a discussion about how the experiment tries to achieve these goals. - Description of the contents of the dataset An easy thing to add is the output of the bids-validator that describes what type of data and the number of subjects one can expect to find in the dataset. - Independent variables A brief discussion of condition variables (sometimes called contrasts or independent variables) that were varied across the experiment. - Dependent variables A brief discussion of the response variables (sometimes called the dependent variables) that were measured and/or calculated to assess the effects of varying the condition variables. This might also include questionnaires administered to assess behavioral aspects of the experiment. - Control variables A brief discussion of the control variables, that is, what aspects were explicitly controlled in this experiment. The control variables might include subject pool, environmental conditions, set up, or other things that were explicitly controlled. - Quality assessment of the data Provide a short summary of the quality of the data, ideally with descriptive statistics if relevant and with a link to a more comprehensive description (like with MRIQC) if possible. 
**Methods** **Subjects** A brief sentence about the subject pool in this experiment. Remember that `Control` or `Patient` status should be defined in `participants.tsv` using a group column. - Information about the recruitment procedure - Subject inclusion criteria (if relevant) - Subject exclusion criteria (if relevant) **Apparatus** A summary of the equipment and environment setup for the experiment. For example, was the experiment performed in a shielded room with the subject seated in a fixed position. **Initial setup** A summary of what setup was performed when a subject arrived. **Task organization** How the tasks were organized for a session. This is particularly important because BIDS datasets usually have task data separated into different files. - Was task order counter-balanced? - What other activities were interspersed between tasks? - In what order were the tasks and other activities performed? **Task details** As much detail as possible about the task and the events that were recorded. **Additional data acquired** A brief indication of data other than the imaging data that was acquired as part of this experiment. In addition to data from other modalities and behavioral data, this might include questionnaires and surveys, swabs, and clinical information. Indicate the availability of this data. This is especially relevant if the data are not included in a `phenotype` folder. https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#phenotypic-and-assessment-data **Experimental location** This should include any additional information regarding the geographical location and facility that cannot be included in the relevant json files. **Missing data** Mention something if some participants are missing some aspects of the data. This can take the form of a processing log and/or abnormalities about the dataset. 
Some examples: - A brain lesion or defect only present in one participant - Some experimental conditions missing on a given run for a participant because of some technical issue - Any noticeable feature of the data for certain participants - Differences (even slight) in protocol for certain participants. **Notes** Any additional information or pointers to information that might be helpful to users of the dataset. Include qualitative information related to how the data acquisition went. ## Dataset Information | Dataset ID | `DS007347` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain | | Author (year) | `Elias2026` | | Canonical | — | | Importable as | `DS007347`, `Elias2026` | | Year | 2026 | | Authors | W. Jeffrey Elias, Chang-Chia Liu, Divine Nwafor, Patrick H. Finan, Mark Quigg, Shayan Moosa | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007347.v1.0.0](https://doi.org/10.18112/openneuro.ds007347.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007347) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) | [Source URL](https://openneuro.org/datasets/ds007347) | ### Copy-paste BibTeX ```bibtex @dataset{ds007347, title = {Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain}, author = {W. Jeffrey Elias and Chang-Chia Liu and Divine Nwafor and Patrick H. 
Finan and Mark Quigg and Shayan Moosa}, doi = {10.18112/openneuro.ds007347.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007347.v1.0.0}, } ``` ## Technical Details - Subjects: 5 - Recordings: 10 - Tasks: 1 - Channels: 50 (6), 102 (4) - Sampling rate (Hz): 256.0 (6), 512.0 (4) - Duration (hours): 4.473611111111111 - Pathology: Cancer - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 1.6 GB - File count: 10 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007347.v1.0.0 - Source: openneuro - OpenNeuro: [ds007347](https://openneuro.org/datasets/ds007347) - NeMAR: [ds007347](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) ## API Reference Use the `DS007347` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sterotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain * **Study:** `ds007347` (OpenNeuro) * **Author (year):** `Elias2026` * **Canonical:** — Also importable as: `DS007347`, `Elias2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Cancer`. Subjects: 5; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007347](https://openneuro.org/datasets/ds007347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007347](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) DOI: [https://doi.org/10.18112/openneuro.ds007347.v1.0.0](https://doi.org/10.18112/openneuro.ds007347.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007347 >>> dataset = DS007347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007347) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007353: eeg, meg dataset, 32 subjects *HAD-MEEG* Access recordings and metadata through EEGDash. **Citation:** Guohao Zhang, Sai Ma, Ming Zhou, Shaohua Tang, Shuyi Zhen, Zheng Li, Zonglei Zhen (2026). *HAD-MEEG*. 
[10.18112/openneuro.ds007353.v1.0.0](https://doi.org/10.18112/openneuro.ds007353.v1.0.0) Modality: eeg, meg Subjects: 32 Recordings: 473 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007353 dataset = DS007353(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007353(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007353( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007353, title = {HAD-MEEG}, author = {Guohao Zhang and Sai Ma and Ming Zhou and Shaohua Tang and Shuyi Zhen and Zheng Li and Zonglei Zhen}, doi = {10.18112/openneuro.ds007353.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007353.v1.0.0}, } ``` ## About This Dataset Human action recognition is a core component of social cognition, engaging spatially distributed and temporally evolving neural responses that encode visual information and infer intention. To map the brain’s spatial organization supporting this process, we previously released the Human Action Dataset (HAD), a functional magnetic resonance imaging (fMRI) resource. However, fMRI’s limited temporal resolution constrains its ability to capture rapid neural dynamics. Here, we present the HAD-MEEG dataset, which extends HAD-fMRI, leveraging the millisecond-level temporal resolution of magnetoencephalography (MEG) and electroencephalography (EEG). 
HAD-MEEG was recorded from the same participants and with the same stimuli as HAD-fMRI, in which 30 participants viewed 21,600 video clips spanning 180 categories of human action. By integrating the temporal precision of M/EEG with the spatial precision of fMRI, HAD enables comprehensive spatiotemporal investigation of the neural mechanisms underlying human action recognition. ## Dataset Information | Dataset ID | `DS007353` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HAD-MEEG | | Author (year) | `Zhang2026` | | Canonical | `HAD_MEEG`, `HADMEEG` | | Importable as | `DS007353`, `Zhang2026`, `HAD_MEEG`, `HADMEEG` | | Year | 2026 | | Authors | Guohao Zhang, Sai Ma, Ming Zhou, Shaohua Tang, Shuyi Zhen, Zheng Li, Zonglei Zhen | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007353.v1.0.0](https://doi.org/10.18112/openneuro.ds007353.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007353) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) | [Source URL](https://openneuro.org/datasets/ds007353) | ### Copy-paste BibTeX ```bibtex @dataset{ds007353, title = {HAD-MEEG}, author = {Guohao Zhang and Sai Ma and Ming Zhou and Shaohua Tang and Shuyi Zhen and Zheng Li and Zonglei Zhen}, doi = {10.18112/openneuro.ds007353.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007353.v1.0.0}, } ``` ## Technical Details - Subjects: 32 - Recordings: 473 - Tasks: 2 - Channels: 409 (240), 64 (224), 378 (9) - Sampling rate (Hz): 1200.0 (249), 1000.0 (224) - Duration (hours): 44.82657291666667 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 180.6 GB - File count: 473 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007353.v1.0.0 - Source: openneuro - OpenNeuro: [ds007353](https://openneuro.org/datasets/ds007353) - NeMAR: 
[ds007353](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) ## API Reference Use the `DS007353` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007353(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAD-MEEG * **Study:** `ds007353` (OpenNeuro) * **Author (year):** `Zhang2026` * **Canonical:** `HAD_MEEG`, `HADMEEG` Also importable as: `DS007353`, `Zhang2026`, `HAD_MEEG`, `HADMEEG`. Modality: `eeg, meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 32; recordings: 473; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
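Because HAD-MEEG mixes MEG runs sampled at 1200 Hz with EEG runs at 1000 Hz, it can be convenient to bucket recordings by sampling rate before preprocessing (for example, before resampling to a common rate). A minimal sketch, using a hand-written stand-in for the recording-level metadata that `dataset.description` exposes; the field names here are assumptions for illustration only:

```python
# Stand-in for recording-level metadata; the real values come from
# `dataset.description` (see Notes above). The sampling rates mirror the
# summary: MEG runs at 1200 Hz, EEG runs at 1000 Hz.
recordings = [
    {"subject": "01", "modality": "meg", "sfreq": 1200.0},
    {"subject": "01", "modality": "eeg", "sfreq": 1000.0},
    {"subject": "02", "modality": "meg", "sfreq": 1200.0},
]

# Group recording indices by sampling rate so EEG and MEG runs can be
# preprocessed separately.
by_sfreq = {}
for i, rec in enumerate(recordings):
    by_sfreq.setdefault(rec["sfreq"], []).append(i)

print(by_sfreq)
# {1200.0: [0, 2], 1000.0: [1]}
```

The resulting index lists can then be used to subselect the corresponding recordings from the dataset for modality-specific pipelines.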
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007353](https://openneuro.org/datasets/ds007353) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007353](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) DOI: [https://doi.org/10.18112/openneuro.ds007353.v1.0.0](https://doi.org/10.18112/openneuro.ds007353.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007353 >>> dataset = DS007353(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007353) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) * [eegdash.dataset.DS003645](eegdash.dataset.DS003645.md) # DS007358: eeg dataset, 2000 subjects *A subset of large-scale EEG dataset (India + Tanzania)* Access recordings and metadata through EEGDash. **Citation:** John Mary Vianney, Shailender Swaminathan, Jennifer Jane Newson, Dhanya Parameshwaran, Narayan Puthanmadam Subramaniyam, Swaeta Singha Roy, Revocatus Machunda, Achiwa Sapuli, Santanu Pramanik, John Victor Arun Kumar, Pramod Tiwari, G. 
Nelson Mathews Mathuram, Laurent Boniface Bembeleza, Joyce Philemon Laiser, Winifrida Julius Luhwago, Theresia Pastory Maduka, John Olais Mollel, Neema Gadiely Mollel, Adella Aloys Mugizi, Isaac Lwaga Mwamakula, Raymond Edwin Rweyemamu, Upendo Firimini Samweli, James Isaac Simpito, Kelvin Ewald Shirima, Anand Anbalagan, Suresh Kumar Arumugam, Vinitha Dhanapal, Kanimozhi Gunasekaran, Neelu Kashyap, Dheeraj Kumar, Durgesh Pandey, Poonam Pandey, Arunkumar Panneerselvam, Sonam Rai, Porselvi Rajendran, Santhoshkumar Sekar, Oliazhagan Sivalingam, Prahalad Soni, Pushpkala Soni, Tara C. Thiagarajan (2026). *A subset of large-scale EEG dataset (India + Tanzania)*. [10.18112/openneuro.ds007358.v1.0.0](https://doi.org/10.18112/openneuro.ds007358.v1.0.0) Modality: eeg Subjects: 2000 Recordings: 6000 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007358 dataset = DS007358(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007358(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007358( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007358, title = {A subset of large-scale EEG dataset (India + Tanzania)}, author = {John Mary Vianney and Shailender Swaminathan and Jennifer Jane Newson and Dhanya Parameshwaran and Narayan Puthanmadam Subramaniyam and Swaeta Singha Roy and Revocatus Machunda and Achiwa Sapuli and Santanu Pramanik and John Victor Arun Kumar and Pramod Tiwari and G. 
Nelson Mathews Mathuram and Laurent Boniface Bembeleza and Joyce Philemon Laiser and Winifrida Julius Luhwago and Theresia Pastory Maduka and John Olais Mollel and Neema Gadiely Mollel and Adella Aloys Mugizi and Isaac Lwaga Mwamakula and Raymond Edwin Rweyemamu and Upendo Firimini Samweli and James Isaac Simpito and Kelvin Ewald Shirima and Anand Anbalagan and Suresh Kumar Arumugam and Vinitha Dhanapal and Kanimozhi Gunasekaran and Neelu Kashyap and Dheeraj Kumar and Durgesh Pandey and Poonam Pandey and Arunkumar Panneerselvam and Sonam Rai and Porselvi Rajendran and Santhoshkumar Sekar and Oliazhagan Sivalingam and Prahalad Soni and Pushpkala Soni and Tara C. Thiagarajan}, doi = {10.18112/openneuro.ds007358.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007358.v1.0.0}, } ``` ## About This Dataset There is a growing imperative to understand the neurophysiological impact of our rapidly changing and diverse technological, social, chemical, and physical environments. To untangle the multidimensional and interacting effects requires data at scale across diverse populations, taking measurement out of a controlled lab environment and into the field. Electroencephalography (EEG), which has correlates with various environmental factors as well as cognitive and mental health outcomes, has the advantage of both portability and cost-effectiveness for this purpose. However, with numerous field researchers spread across diverse locations, data quality issues and researcher idle time due to insufficient participants can quickly become unmanageable and expensive problems. In programs we have established in India and Tanzania, we demonstrate that with appropriate training, structured teams, and daily automated analysis and feedback on data quality, nonspecialists can reliably collect EEG data alongside various survey and assessments with consistently high throughput and quality. 
Over a 30 week period, research teams were able to maintain an average of 25.6 participants per week, collecting data from a diverse sample of 7,933 participants ranging from Hadzabe hunter-gatherers to office workers. Furthermore, data quality, computed on the first 5,831 records using two common methods, PREP and FASTER, was comparable to benchmark datasets from controlled lab conditions. Altogether this resulted in a cost per participant of under $50, a fraction of the cost typical of such data collection, opening up the possibility for large-scale programs particularly in low- and middle-income countries. A subset of large-scale EEG recordings from India and Tanzania are uploaded here along with metadata like age, mental health quotient (MHQ) score, income and sex. This BIDS dataset was generated using MNE-BIDS from EDF source files. **References** Vianney JM, Swaminathan S, Newson JJ, Parameshwaran D, Subramaniyam NP, Roy SS, Machunda R, Sapuli A, Pramanik S, Kumar JV, Tiwari P. EEG Data Quality in Large-Scale Field Studies in India and Tanzania. Eneuro. 2025 Jul 1;12(7). Newson JJ, Pastukh V, Thiagarajan TC. Assessment of population well-being with the Mental Health Quotient: validation study. JMIR Mental Health. 2022 Apr 20;9(4):e34105. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `DS007358` | |----------------|------------| | Title | A subset of large-scale EEG dataset (India + Tanzania) | | Author (year) | `Vianney2026` | | Canonical | `Vianney2025` | | Importable as | `DS007358`, `Vianney2026`, `Vianney2025` | | Year | 2026 | | Authors | John Mary Vianney, Shailender Swaminathan, Jennifer Jane Newson, Dhanya Parameshwaran, Narayan Puthanmadam Subramaniyam, Swaeta Singha Roy, Revocatus Machunda, Achiwa Sapuli, Santanu Pramanik, John Victor Arun Kumar, Pramod Tiwari, G. 
Nelson Mathews Mathuram, Laurent Boniface Bembeleza, Joyce Philemon Laiser, Winifrida Julius Luhwago, Theresia Pastory Maduka, John Olais Mollel, Neema Gadiely Mollel, Adella Aloys Mugizi, Isaac Lwaga Mwamakula, Raymond Edwin Rweyemamu, Upendo Firimini Samweli, James Isaac Simpito, Kelvin Ewald Shirima, Anand Anbalagan, Suresh Kumar Arumugam, Vinitha Dhanapal, Kanimozhi Gunasekaran, Neelu Kashyap, Dheeraj Kumar, Durgesh Pandey, Poonam Pandey, Arunkumar Panneerselvam, Sonam Rai, Porselvi Rajendran, Santhoshkumar Sekar, Oliazhagan Sivalingam, Prahalad Soni, Pushpkala Soni, Tara C. Thiagarajan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007358.v1.0.0](https://doi.org/10.18112/openneuro.ds007358.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007358) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) | [Source URL](https://openneuro.org/datasets/ds007358) | ### Copy-paste BibTeX ```bibtex @dataset{ds007358, title = {A subset of large-scale EEG dataset (India + Tanzania)}, author = {John Mary Vianney and Shailender Swaminathan and Jennifer Jane Newson and Dhanya Parameshwaran and Narayan Puthanmadam Subramaniyam and Swaeta Singha Roy and Revocatus Machunda and Achiwa Sapuli and Santanu Pramanik and John Victor Arun Kumar and Pramod Tiwari and G. Nelson Mathews Mathuram and Laurent Boniface Bembeleza and Joyce Philemon Laiser and Winifrida Julius Luhwago and Theresia Pastory Maduka and John Olais Mollel and Neema Gadiely Mollel and Adella Aloys Mugizi and Isaac Lwaga Mwamakula and Raymond Edwin Rweyemamu and Upendo Firimini Samweli and James Isaac Simpito and Kelvin Ewald Shirima and Anand Anbalagan and Suresh Kumar Arumugam and Vinitha Dhanapal and Kanimozhi Gunasekaran and Neelu Kashyap and Dheeraj Kumar and Durgesh Pandey and Poonam Pandey and Arunkumar Panneerselvam and Sonam Rai and Porselvi Rajendran and Santhoshkumar Sekar and Oliazhagan Sivalingam and Prahalad Soni and Pushpkala Soni and Tara C. 
Thiagarajan}, doi = {10.18112/openneuro.ds007358.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007358.v1.0.0}, } ``` ## Technical Details - Subjects: 2000 - Recordings: 6000 - Tasks: 3 - Channels: 62 (2408), 60 (833), 74 (811), 72 (770), 68 (707), 50 (216), 66 (150), 56 (63), 48 (29), 44 (6), 54 (4), 65 (3) - Sampling rate (Hz): 128.0 (5733), 256.0 (267) - Duration (hours): 276.1350466579861 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 16.1 GB - File count: 6000 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007358.v1.0.0 - Source: openneuro - OpenNeuro: [ds007358](https://openneuro.org/datasets/ds007358) - NeMAR: [ds007358](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) ## API Reference Use the `DS007358` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007358(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A subset of large-scale EEG dataset (India + Tanzania) * **Study:** `ds007358` (OpenNeuro) * **Author (year):** `Vianney2026` * **Canonical:** `Vianney2025` Also importable as: `DS007358`, `Vianney2026`, `Vianney2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 2000; recordings: 6000; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007358](https://openneuro.org/datasets/ds007358) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007358](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) DOI: [https://doi.org/10.18112/openneuro.ds007358.v1.0.0](https://doi.org/10.18112/openneuro.ds007358.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007358 >>> dataset = DS007358(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007358) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007406: eeg dataset, 10 subjects *EEG dataset on consumer responses to extreme versus traditional marketing videos* Access recordings and metadata through EEGDash. **Citation:** Allison Edit, Attila Pohlmann (2026). 
*EEG dataset on consumer responses to extreme versus traditional marketing videos*. [10.18112/openneuro.ds007406.v1.0.0](https://doi.org/10.18112/openneuro.ds007406.v1.0.0) Modality: eeg Subjects: 10 Recordings: 10 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007406 dataset = DS007406(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007406(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007406( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007406, title = {EEG dataset on consumer responses to extreme versus traditional marketing videos}, author = {Allison Edit and Attila Pohlmann}, doi = {10.18112/openneuro.ds007406.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007406.v1.0.0}, } ``` ## About This Dataset This dataset comprises EEG recordings from ten participants exposed to six marketing video stimuli from three companies (Red Bull, GoPro, Columbia Sportswear), categorized as traditional product-focused advertisements versus “extreme” authentic documentary-style videos. Data were collected using a 14-channel EMOTIV EPOC X headset. 
## Dataset Information | Dataset ID | `DS007406` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG dataset on consumer responses to extreme versus traditional marketing videos | | Author (year) | `Edit2026` | | Canonical | `Edit2024` | | Importable as | `DS007406`, `Edit2026`, `Edit2024` | | Year | 2026 | | Authors | Allison Edit, Attila Pohlmann | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007406.v1.0.0](https://doi.org/10.18112/openneuro.ds007406.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007406) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) | [Source URL](https://openneuro.org/datasets/ds007406) | ### Copy-paste BibTeX ```bibtex @dataset{ds007406, title = {EEG dataset on consumer responses to extreme versus traditional marketing videos}, author = {Allison Edit and Attila Pohlmann}, doi = {10.18112/openneuro.ds007406.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007406.v1.0.0}, } ``` ## Technical Details - Subjects: 10 - Recordings: 10 - Tasks: 1 - Channels: 14 - Sampling rate (Hz): 256.0 - Duration (hours): 0.5000651041666667 - Pathology: Healthy - Modality: Multisensory - Type: Affect - Size on disk: 25.8 MB - File count: 10 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007406.v1.0.0 - Source: openneuro - OpenNeuro: [ds007406](https://openneuro.org/datasets/ds007406) - NeMAR: [ds007406](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) ## API Reference Use the `DS007406` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset on consumer responses to extreme versus traditional marketing videos * **Study:** `ds007406` (OpenNeuro) * **Author (year):** `Edit2026` * **Canonical:** `Edit2024` Also importable as: `DS007406`, `Edit2026`, `Edit2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007406](https://openneuro.org/datasets/ds007406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007406](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) DOI: [https://doi.org/10.18112/openneuro.ds007406.v1.0.0](https://doi.org/10.18112/openneuro.ds007406.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007406 >>> dataset = DS007406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007406) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007420: fnirs dataset, 12 subjects *A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task* Access recordings and metadata through EEGDash. **Citation:** Gao, Yuanyuan, Rogers, De’Ja, von Lühmann, Alexander, Ortega-Martinez, Antonio, Boas, David, Yücel, Meryem (2026). *A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task*. 
[10.18112/openneuro.ds007420.v1.0.2](https://doi.org/10.18112/openneuro.ds007420.v1.0.2) Modality: fnirs Subjects: 12 Recordings: 60 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007420 dataset = DS007420(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007420(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007420( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007420, title = {A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task}, author = {Gao, Yuanyuan and Rogers, De’Ja and von Lühmann, Alexander and Ortega-Martinez, Antonio and Boas, David and Yücel, Meryem}, doi = {10.18112/openneuro.ds007420.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds007420.v1.0.2}, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `DS007420` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task | | Author (year) | `Gao2026_Light_Weight_Multi` | | Canonical | `Gao2024` | | Importable as | `DS007420`, `Gao2026_Light_Weight_Multi`, `Gao2024` | | Year | 2026 | | Authors | Gao, Yuanyuan, Rogers, De’Ja, von Lühmann, Alexander, Ortega-Martinez, Antonio, Boas, David, Yücel, Meryem | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007420.v1.0.2](https://doi.org/10.18112/openneuro.ds007420.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007420) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) | [Source URL](https://openneuro.org/datasets/ds007420) | ### Copy-paste BibTeX ```bibtex @dataset{ds007420, title = {A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task}, author = {Gao, Yuanyuan and Rogers, De’Ja and von Lühmann, Alexander and Ortega-Martinez, Antonio and Boas, David and Yücel, Meryem}, doi = {10.18112/openneuro.ds007420.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds007420.v1.0.2}, } ``` ## Technical Details - Subjects: 12 - Recordings: 60 - Tasks: 4 - Channels: 200 - Sampling rate (Hz): 8.719308035714286 (52), 11.625744047619047 (4), 8.719308035714288 (3), 11.625744047619051 - Duration (hours): Not calculated - Pathology: Healthy - Modality: Motor - Type: Motor - Size on disk: 560.7 MB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007420.v1.0.2 - Source: openneuro - OpenNeuro: [ds007420](https://openneuro.org/datasets/ds007420) - NeMAR: [ds007420](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) ## API Reference Use the 
`DS007420` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task * **Study:** `ds007420` (OpenNeuro) * **Author (year):** `Gao2026_Light_Weight_Multi` * **Canonical:** `Gao2024` Also importable as: `DS007420`, `Gao2026_Light_Weight_Multi`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 60; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
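Since this dataset reports several near-duplicate native sampling rates (e.g. 8.719308035714286 vs 8.719308035714288 Hz in the Technical Details above), batching code often buckets recordings by rounded rate before processing. A minimal bookkeeping sketch over plain metadata dicts — the record fields here are hypothetical, not the library's schema:

```python
from collections import defaultdict

def group_by_sfreq(records, ndigits=6):
    """Bucket metadata records by sampling rate, absorbing float noise."""
    groups = defaultdict(list)
    for rec in records:
        # Rounding collapses rates that differ only in the last float digits.
        groups[round(rec["sfreq"], ndigits)].append(rec)
    return dict(groups)

records = [
    {"subject": "01", "sfreq": 8.719308035714286},
    {"subject": "02", "sfreq": 8.719308035714288},
    {"subject": "03", "sfreq": 11.625744047619047},
]
print({rate: len(recs) for rate, recs in group_by_sfreq(records).items()})
# → {8.719308: 2, 11.625744: 1}
```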
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007420](https://openneuro.org/datasets/ds007420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007420](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) DOI: [https://doi.org/10.18112/openneuro.ds007420.v1.0.2](https://doi.org/10.18112/openneuro.ds007420.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007420 >>> dataset = DS007420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007420) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS007427: eeg dataset, 44 subjects *Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification* Access recordings and metadata through EEGDash. **Citation:** Verónica Henao Isaza, Carlos Andrés Tobón Quintero, John Fredy Ochoa Gómez (2026). *Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification*. 
[10.18112/openneuro.ds007427.v1.0.1](https://doi.org/10.18112/openneuro.ds007427.v1.0.1) Modality: eeg Subjects: 44 Recordings: 44 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007427 dataset = DS007427(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007427(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007427( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007427, title = {Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification}, author = {Verónica Henao Isaza and Carlos Andrés Tobón Quintero and John Fredy Ochoa Gómez}, doi = {10.18112/openneuro.ds007427.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007427.v1.0.1}, } ``` ## About This Dataset **References** Henao Isaza, V., Aguillon, D., Tobón Quintero, C. A., Lopera, F., & Ochoa Gómez, J. F. (2026). Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification. PLOS ONE. 
[https://doi.org/10.1371/journal.pone.0343722](https://doi.org/10.1371/journal.pone.0343722) ## Dataset Information | Dataset ID | `DS007427` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification | | Author (year) | `Isaza2026_Comprehensive` | | Canonical | `HenaoIsaza2026` | | Importable as | `DS007427`, `Isaza2026_Comprehensive`, `HenaoIsaza2026` | | Year | 2026 | | Authors | Verónica Henao Isaza, Carlos Andrés Tobón Quintero, John Fredy Ochoa Gómez | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007427.v1.0.1](https://doi.org/10.18112/openneuro.ds007427.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007427) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) | [Source URL](https://openneuro.org/datasets/ds007427) | ### Copy-paste BibTeX ```bibtex @dataset{ds007427, title = {Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification}, author = {Verónica Henao Isaza and Carlos Andrés Tobón Quintero and John Fredy Ochoa Gómez}, doi = {10.18112/openneuro.ds007427.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007427.v1.0.1}, } ``` ## Technical Details - Subjects: 44 - Recordings: 44 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 1000.0 - Duration (hours): 3.911543333333333 - Pathology: Dementia - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 3.1 GB - File count: 44 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007427.v1.0.1 - Source: openneuro - OpenNeuro: [ds007427](https://openneuro.org/datasets/ds007427) - NeMAR: [ds007427](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) ## API Reference Use the `DS007427` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS007427(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification * **Study:** `ds007427` (OpenNeuro) * **Author (year):** `Isaza2026_Comprehensive` * **Canonical:** `HenaoIsaza2026` Also importable as: `DS007427`, `Isaza2026_Comprehensive`, `HenaoIsaza2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 44; recordings: 44; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
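The Technical Details above (60 channels, 1000 Hz, roughly 3.91 hours total) imply an uncompressed in-memory footprint larger than the 3.1 GB on-disk size. A back-of-envelope sketch, assuming 8-byte float64 samples once loaded (stored files typically use smaller sample widths, hence the gap):

```python
def estimated_ram_gb(n_channels, sfreq_hz, hours, bytes_per_sample=8):
    """Rough RAM needed to hold all samples uncompressed in memory."""
    n_samples = sfreq_hz * hours * 3600  # samples per channel
    return n_channels * n_samples * bytes_per_sample / 1e9

# Figures taken from the Technical Details section of this page.
print(f"{estimated_ram_gb(60, 1000.0, 3.911543333333333):.2f} GB")
# → 6.76 GB
```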
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007427](https://openneuro.org/datasets/ds007427) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007427](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) DOI: [https://doi.org/10.18112/openneuro.ds007427.v1.0.1](https://doi.org/10.18112/openneuro.ds007427.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007427 >>> dataset = DS007427(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007427) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007431: eeg dataset, 47 subjects *Diffuse predictions stabilize and reshape neural code during memory encoding* Access recordings and metadata through EEGDash. **Citation:** Nursena Ataseven, Sahcan Ozdemir, Wouter Kruijne, Daniel Schneider, Elkan G. Akyurek (2026). *Diffuse predictions stabilize and reshape neural code during memory encoding*. 
[10.18112/openneuro.ds007431.v1.0.0](https://doi.org/10.18112/openneuro.ds007431.v1.0.0) Modality: eeg Subjects: 47 Recordings: 47 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007431 dataset = DS007431(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007431(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007431( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007431, title = {Diffuse predictions stabilize and reshape neural code during memory encoding}, author = {Nursena Ataseven and Sahcan Ozdemir and Wouter Kruijne and Daniel Schneider and Elkan G. Akyurek}, doi = {10.18112/openneuro.ds007431.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007431.v1.0.0}, } ``` ## About This Dataset Experimental task: participants judged whether a probe grating was rotated clockwise or counterclockwise relative to a memorized orientation, which was either predictable or unpredictable. Each memory item was preceded by a central color cue (red, green, or blue). In half of the trials, two of these colors (predictive) cued two non-overlapping 90° segments of orientations that the grating was sampled from. Thus, participants knew the range of possible orientations of these items, but not their exact orientation. 
In the other half of the trials, a third (non-predictive) color was presented that signaled the item could have any possible orientation. The preprocessing and analysis scripts can be found on OSF: [https://osf.io/8evwh/](https://osf.io/8evwh/) ## Dataset Information | Dataset ID | `DS007431` | |----------------|------------| | Title | Diffuse predictions stabilize and reshape neural code during memory encoding | | Author (year) | `Ataseven2026` | | Canonical | `Ataseven2024` | | Importable as | `DS007431`, `Ataseven2026`, `Ataseven2024` | | Year | 2026 | | Authors | Nursena Ataseven, Sahcan Ozdemir, Wouter Kruijne, Daniel Schneider, Elkan G. Akyurek | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007431.v1.0.0](https://doi.org/10.18112/openneuro.ds007431.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007431) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) | [Source URL](https://openneuro.org/datasets/ds007431) | ### Copy-paste BibTeX ```bibtex @dataset{ds007431, title = {Diffuse predictions stabilize and reshape neural code during memory encoding}, author = {Nursena Ataseven and Sahcan Ozdemir and Wouter Kruijne and Daniel Schneider and Elkan G. 
Akyurek}, doi = {10.18112/openneuro.ds007431.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007431.v1.0.0}, } ``` ## Technical Details - Subjects: 47 - Recordings: 47 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 160.1695947222222 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 144.6 GB - File count: 47 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007431.v1.0.0 - Source: openneuro - OpenNeuro: [ds007431](https://openneuro.org/datasets/ds007431) - NeMAR: [ds007431](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) ## API Reference Use the `DS007431` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007431(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Diffuse predictions stabilize and reshape neural code during memory encoding * **Study:** `ds007431` (OpenNeuro) * **Author (year):** `Ataseven2026` * **Canonical:** `Ataseven2024` Also importable as: `DS007431`, `Ataseven2026`, `Ataseven2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007431](https://openneuro.org/datasets/ds007431) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007431](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) DOI: [https://doi.org/10.18112/openneuro.ds007431.v1.0.0](https://doi.org/10.18112/openneuro.ds007431.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007431 >>> dataset = DS007431(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007431) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007445: ieeg dataset, 19 subjects *Thalamocortical ictal iEEG dataset* Access recordings and metadata through EEGDash. 
**Citation:** Saarang Panchavati, Atsuro Daida, Sotaro Kanai, Shingo Oana, Hiroya Ono, Masaki Izumi, Kikuko Kaneko, Aria Fallah, Joe X Qiao, Noriko Salamon, Raman Sankar, Corey Arnold, William Speier, Hiroki Nariai (2026). *Thalamocortical ictal iEEG dataset*. [10.18112/openneuro.ds007445.v1.0.2](https://doi.org/10.18112/openneuro.ds007445.v1.0.2) Modality: ieeg Subjects: 19 Recordings: 66 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007445 dataset = DS007445(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007445(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007445( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007445, title = {Thalamocortical ictal iEEG dataset}, author = {Saarang Panchavati and Atsuro Daida and Sotaro Kanai and Shingo Oana and Hiroya Ono and Masaki Izumi and Kikuko Kaneko and Aria Fallah and Joe X Qiao and Noriko Salamon and Raman Sankar and Corey Arnold and William Speier and Hiroki Nariai}, doi = {10.18112/openneuro.ds007445.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds007445.v1.0.2}, } ``` ## About This Dataset We investigated thalamocortical network dynamics using intracranial EEG (iEEG) recordings with thalamic sampling from 19 patients with focal epilepsy (1). The iEEG dataset analyzed in this study is publicly shared here. BIDS conversion was performed according to references (2) and (3).
References (1) Panchavati S, Daida A, Kanai S, Oana S, Ono H, Izumi M, Kaneko K, Fallah A, Qiao JX, Salamon N, Sankar R, Arnold C, Speier W, Nariai H (2026). Distinct Spectral and Directional Thalamocortical Network Dynamics Define Focal Seizure Evolution. medRxiv, 2026 Feb 4:2026.02.03.26345480. doi: 10.64898/2026.02.03.26345480. (2) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896 (3) Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `DS007445` | |----------------|-----------------| | Title | Thalamocortical ictal iEEG dataset | | Author (year) | `Panchavati2026` | | Canonical | — | | Importable as | `DS007445`, `Panchavati2026` | | Year | 2026 | | Authors | Saarang Panchavati, Atsuro Daida, Sotaro Kanai, Shingo Oana, Hiroya Ono, Masaki Izumi, Kikuko Kaneko, Aria Fallah, Joe X Qiao, Noriko Salamon, Raman Sankar, Corey Arnold, William Speier, Hiroki Nariai | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007445.v1.0.2](https://doi.org/10.18112/openneuro.ds007445.v1.0.2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007445) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) | [Source
URL](https://openneuro.org/datasets/ds007445) | ### Copy-paste BibTeX ```bibtex @dataset{ds007445, title = {Thalamocortical ictal iEEG dataset}, author = {Saarang Panchavati and Atsuro Daida and Sotaro Kanai and Shingo Oana and Hiroya Ono and Masaki Izumi and Kikuko Kaneko and Aria Fallah and Joe X Qiao and Noriko Salamon and Raman Sankar and Corey Arnold and William Speier and Hiroki Nariai}, doi = {10.18112/openneuro.ds007445.v1.0.2}, url = {https://doi.org/10.18112/openneuro.ds007445.v1.0.2}, } ``` ## Technical Details - Subjects: 19 - Recordings: 66 - Tasks: 1 - Channels: 140 (10), 138 (10), 83 (6), 265 (6), 202 (5), 216 (5), 162 (4), 203 (3), 112 (3), 68 (2), 49 (2), 81 (2), 263, 120, 201, 139, 111, 124, 261, 137 - Sampling rate (Hz): 200.0 (48), 2000.0 (18) - Duration (hours): 73.70927875000001 - Pathology: Epilepsy - Modality: Other - Type: Clinical/Intervention - Size on disk: 50.5 GB - File count: 66 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007445.v1.0.2 - Source: openneuro - OpenNeuro: [ds007445](https://openneuro.org/datasets/ds007445) - NeMAR: [ds007445](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) ## API Reference Use the `DS007445` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007445(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Thalamocortical ictal iEEG dataset * **Study:** `ds007445` (OpenNeuro) * **Author (year):** `Panchavati2026` * **Canonical:** — Also importable as: `DS007445`, `Panchavati2026`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 19; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection.
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007445](https://openneuro.org/datasets/ds007445) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007445](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) DOI: [https://doi.org/10.18112/openneuro.ds007445.v1.0.2](https://doi.org/10.18112/openneuro.ds007445.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007445 >>> dataset = DS007445(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007445) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # DS007454: eeg dataset, 42 subjects *A common neural mechanism underlies experiences of passage of time* Access recordings and metadata through EEGDash. **Citation:** [Unspecified] (2026). *A common neural mechanism underlies experiences of passage of time*. [10.18112/openneuro.ds007454.v1.0.1](https://doi.org/10.18112/openneuro.ds007454.v1.0.1) Modality: eeg Subjects: 42 Recordings: 42 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007454 dataset = DS007454(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007454(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007454( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007454, title = {A common neural mechanism underlies experiences of passage of time}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds007454.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007454.v1.0.1}, } ``` ## About This Dataset **Raw data for the study ‘A common neural mechanism underlies experiences of passage of time’** This repository contains the BIDS-formatted dataset generated from EEG and behavioral data. **Dataset Structure**

```text
bids_dataset
├── sub-XXX
│   ├── eeg
│   └── sub-XXX_scans.tsv
├── sourcedata
│   └── sub-XXX
├── dataset_description.json
├── participants.json
├── participants.tsv
├── README.md
└── CHANGES.txt
```

References: Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `DS007454` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A common neural mechanism underlies experiences of passage of time | | Author (year) | `DS7454_TimePerception` | | Canonical | — | | Importable as | `DS007454`, `DS7454_TimePerception` | | Year | 2026 | | Authors | [Unspecified] | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007454.v1.0.1](https://doi.org/10.18112/openneuro.ds007454.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007454) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) | [Source URL](https://openneuro.org/datasets/ds007454) | ### Copy-paste BibTeX ```bibtex @dataset{ds007454, title = {A common neural mechanism underlies experiences of passage of time}, author = {[Unspecified]}, doi = {10.18112/openneuro.ds007454.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007454.v1.0.1}, } ``` ## Technical Details - Subjects: 42 - Recordings: 42 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 37.15316055555555 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 29.6 GB - File count: 42 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007454.v1.0.1 - Source: openneuro - OpenNeuro: [ds007454](https://openneuro.org/datasets/ds007454) - NeMAR: [ds007454](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) ## API Reference Use the `DS007454` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007454(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A common neural mechanism underlies experiences of passage of time * **Study:** `ds007454` (OpenNeuro) * **Author (year):** `DS7454_TimePerception` * **Canonical:** — Also importable as: `DS007454`, `DS7454_TimePerception`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
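The query-merging behavior described in the Notes can be sketched in plain Python. This is an illustrative simplification, not the library's actual implementation; `merge_query` is a hypothetical helper:

```python
def merge_query(dataset_id, user_query=None):
    """Sketch of the documented query merge (hypothetical helper, not library code)."""
    user_query = dict(user_query or {})
    # Per the class docs, the user query must not contain the key "dataset" ...
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # ... because the merged query always pins the dataset filter first,
    # then ANDs in the additional MongoDB-style filters.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds007454", {"subject": {"$in": ["01", "02"]}})
print(merged)
# → {'dataset': 'ds007454', 'subject': {'$in': ['01', '02']}}
```

The resulting dict has the same shape as the `query` attribute documented above ("Merged query with the dataset filter applied").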
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007454](https://openneuro.org/datasets/ds007454) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007454](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) DOI: [https://doi.org/10.18112/openneuro.ds007454.v1.0.1](https://doi.org/10.18112/openneuro.ds007454.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007454 >>> dataset = DS007454(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007454) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007463: fnirs dataset, 8 subjects *Very-High-Density Diffuse Optical Tomography System Validation Dataset* Access recordings and metadata through EEGDash. **Citation:** Morgan Fogarty, Sean M. Rafferty, Zachary E. Markow, Anthony C. O’Sullivan, Calamity F. Svoboda, Tessa George, Kelsey King, Dana Wilhelm, Kalyan Tripathy, Emily M. Mugler, Stephanie Naufel, Allen Yin, Jason W. Trobaugh, Adam T. Eggebrecht, Edward J. Richter, Joseph P. Culver (2026). *Very-High-Density Diffuse Optical Tomography System Validation Dataset*. 
[10.18112/openneuro.ds007463.v1.1.1](https://doi.org/10.18112/openneuro.ds007463.v1.1.1) Modality: fnirs Subjects: 8 Recordings: 88 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007463 dataset = DS007463(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007463(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007463( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007463, title = {Very-High-Density Diffuse Optical Tomography System Validation Dataset}, author = {Morgan Fogarty and Sean M. Rafferty and Zachary E. Markow and Anthony C. O’Sullivan and Calamity F. Svoboda and Tessa George and Kelsey King and Dana Wilhelm and Kalyan Tripathy and Emily M. Mugler and Stephanie Naufel and Allen Yin and Jason W. Trobaugh and Adam T. Eggebrecht and Edward J. Richter and Joseph P. Culver}, doi = {10.18112/openneuro.ds007463.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds007463.v1.1.1}, } ``` ## About This Dataset This dataset consists of 8 participants completing functional localizer and movie-viewing tasks in both Very High Density Diffuse Optical Tomography (VHD-DOT) and fMRI. Sessions 1 and 2 for each subject include the VHD-DOT data in SNIRF format while sessions 3 or more include the fMRI data in NIFTI format. Preprocessed fMRI data used for comparisons to VHD-DOT are in the /derivatives folder and are in NIFTI format. More information on this data can be found here: Morgan Fogarty, Sean M. Rafferty, Zachary E. 
Markow, Anthony C. O’Sullivan, Calamity F. Svoboda, Tessa George, Kelsey King, Dana Wilhelm, Kalyan Tripathy, Emily M. Mugler, Stephanie Naufel, Allen Yin, Jason W. Trobaugh, Adam T. Eggebrecht, Edward J. Richter, Joseph P. Culver; Functional brain mapping using whole-head very high-density diffuse optical tomography. Imaging Neuroscience 2025; 3 IMAG.a.54. doi: [https://doi.org/10.1162/IMAG.a.54](https://doi.org/10.1162/IMAG.a.54) ## Dataset Information | Dataset ID | `DS007463` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Very-High-Density Diffuse Optical Tomography System Validation Dataset | | Author (year) | `Fogarty2026_Very` | | Canonical | `Fogarty2025` | | Importable as | `DS007463`, `Fogarty2026_Very`, `Fogarty2025` | | Year | 2026 | | Authors | Morgan Fogarty, Sean M. Rafferty, Zachary E. Markow, Anthony C. O’Sullivan, Calamity F. Svoboda, Tessa George, Kelsey King, Dana Wilhelm, Kalyan Tripathy, Emily M. Mugler, Stephanie Naufel, Allen Yin, Jason W. Trobaugh, Adam T. Eggebrecht, Edward J. Richter, Joseph P. Culver | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007463.v1.1.1](https://doi.org/10.18112/openneuro.ds007463.v1.1.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007463) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) | [Source URL](https://openneuro.org/datasets/ds007463) | ### Copy-paste BibTeX ```bibtex @dataset{ds007463, title = {Very-High-Density Diffuse Optical Tomography System Validation Dataset}, author = {Morgan Fogarty and Sean M. Rafferty and Zachary E. Markow and Anthony C. O’Sullivan and Calamity F. Svoboda and Tessa George and Kelsey King and Dana Wilhelm and Kalyan Tripathy and Emily M. 
Mugler and Stephanie Naufel and Allen Yin and Jason W. Trobaugh and Adam T. Eggebrecht and Edward J. Richter and Joseph P. Culver}, doi = {10.18112/openneuro.ds007463.v1.1.1}, url = {https://doi.org/10.18112/openneuro.ds007463.v1.1.1}, } ``` ## Technical Details - Subjects: 8 - Recordings: 88 - Tasks: 14 - Channels: 19086 (14), 19426 (11), 21518 (11), 19620 (11), 19528 (11), 19908 (10), 20218 (10), 20874 (10) - Sampling rate (Hz): 7.8125 - Duration (hours): 18.898382222222224 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 69.3 GB - File count: 88 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007463.v1.1.1 - Source: openneuro - OpenNeuro: [ds007463](https://openneuro.org/datasets/ds007463) - NeMAR: [ds007463](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) ## API Reference Use the `DS007463` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007463(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Very-High-Density Diffuse Optical Tomography System Validation Dataset * **Study:** `ds007463` (OpenNeuro) * **Author (year):** `Fogarty2026_Very` * **Canonical:** `Fogarty2025` Also importable as: `DS007463`, `Fogarty2026_Very`, `Fogarty2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 14. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007463](https://openneuro.org/datasets/ds007463) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007463](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) DOI: [https://doi.org/10.18112/openneuro.ds007463.v1.1.1](https://doi.org/10.18112/openneuro.ds007463.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007463 >>> dataset = DS007463(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007463) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS007471: eeg dataset, 31 subjects *Joint agency EEG dataset* Access recordings and metadata through EEGDash. 
**Citation:** Zijun Zhou, Anna Zamm, Justin Christensen, Vinesh Rao, Janeen Loehr (2026). *Joint agency EEG dataset*. [10.18112/openneuro.ds007471.v1.0.0](https://doi.org/10.18112/openneuro.ds007471.v1.0.0) Modality: eeg Subjects: 31 Recordings: 31 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007471 dataset = DS007471(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007471(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007471( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007471, title = {Joint agency EEG dataset}, author = {Zijun Zhou and Anna Zamm and Justin Christensen and Vinesh Rao and Janeen Loehr}, doi = {10.18112/openneuro.ds007471.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007471.v1.0.0}, } ``` ## About This Dataset **Behavioural and EEG data from an EEG hyperscanning study examining cognitive and neural signals underlying the sense of joint agency during a musical joint action task** **Dataset Structure** The primary folder includes a separate folder for each pair: `sub-##` Each pair folder contains: **Behavioural Data** Located in: `sub-##/beh/` File: `sub-##_task-jointaction_beh.tsv` **EEG Data** Located in: `sub-##/eeg/` Files (BrainVision format): `sub-##_task-jointaction_eeg.eeg` `sub-##_task-jointaction_eeg.vhdr` `sub-##_task-jointaction_eeg.vmrk` **Derivatives Folder** The `derivatives/` folder contains: - `behavioural_all.tsv` > Compiled behavioural data across all pairs. - `32chanElectrodePositions.elp` > Electrode positions used for EEG data acquisition and analysis. **Behavioural Data Description** The following column descriptions apply to both: - `behavioural_all.tsv` - `sub-##_task-jointaction_beh.tsv` **Pair Number** Values: 1–32 **Participant Number** - The first one or two digits represent the pair number.
- The last digit represents seating position: - `1` = left participant - `2` = right participant Examples: - `11` = left participant in pair 1 - `202` = right participant in pair 20 **Block Number** Test block number for a given trial (1–8). **Trial Number** Each pair performed: - 8 tone sequences > - 4 musical duets > - 4 constant pitch sequences - 5 joint trials per sequence Total: - 40 test trials per pair - Trial numbers range from 1–40 **Experimental Condition** - `0` = constant pitch sequences - `1` = musical duets **Part Performed** Indicates which part of the tone sequence the participant performed: - `0` = higher-pitch part (for constant pitch sequences) or melody part (for musical duets) - `1` = lower-pitch part (for constant pitch sequences) or accompaniment part (for musical duets) **Tone Sequence** 1. Twinkle Twinkle Little Star 2. Hush Little Baby 3. B.I.N.G.O. 4. Yankee Doodle 5. Constant pitch sequence with A4 as higher-pitch part 6. Constant pitch sequence with C5 as higher-pitch part 7. Constant pitch sequence with E♭5 as higher-pitch part 8. Constant pitch sequence with F♯5 as higher-pitch part **Joint Agency Ratings** Self-reported rating scale: 1–7 **Mean Synchronization Performance** The mean synchronization performance for each trial was calculated as follows. First, we calculated the absolute asynchrony between the two participants’ note onsets at each beat. Then, we converted each asynchrony to a proportion of the inter-onset interval (IOI) from the preceding note onset to the current note onset, which we averaged across the two participants and across all beats in the sequence. **Standard Deviation (SD) of Synchronization Performance** The SD of synchronization performance was defined as the standard deviation of the asynchronies across all beats in each trial.
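The two synchronization measures described above can be sketched as a small function. This is a hypothetical re-implementation from the prose description, not the authors' analysis code; onset times are assumed to be in seconds, one per beat, and the SD is taken over the raw asynchronies as the text states:

```python
def sync_performance(onsets_left, onsets_right):
    """Mean proportional asynchrony and SD of asynchronies (sketch from the text)."""
    # Absolute asynchrony between the two participants' note onsets at each beat.
    asyn = [abs(a - b) for a, b in zip(onsets_left, onsets_right)]
    # Convert each asynchrony to a proportion of the preceding inter-onset
    # interval (IOI), per participant; the first beat has no preceding IOI.
    props = []
    for onsets in (onsets_left, onsets_right):
        iois = [b - a for a, b in zip(onsets, onsets[1:])]
        props += [a / ioi for a, ioi in zip(asyn[1:], iois)]
    mean_sync = sum(props) / len(props)  # averaged across participants and beats
    # SD of the asynchronies across all beats in the trial.
    mu = sum(asyn) / len(asyn)
    sd_sync = (sum((a - mu) ** 2 for a in asyn) / len(asyn)) ** 0.5
    return mean_sync, sd_sync

print(sync_performance([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # → (0.0, 0.0)
```

Perfectly synchronized onsets yield a mean of 0; larger values indicate asynchronies that are large relative to the tempo.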
**EEG Data Description** For each EEG dataset within each pair’s folder: - Channels 1–32: left participant EEG - Channels 33–64: right participant EEG Data are stored in BrainVision format. **Event Codes (Test Section)** The following event markers are present during the test section (see Figure 1 for schematic reference): - **S1** – the beginning of the test trials portion of the experiment - **S10** – a condition marker indicating the beginning of a block of musical duets - **S11** – a condition marker indicating the beginning of a block of constant pitch sequences - **S105** – the start of each trial, triggered by pressing the space bar - **S128** – The first five S128s mark the metronome tone onsets. Remaining S128s mark the tone onsets from the left participant’s e-music box. - **S4** – tone onsets from the right participant’s e-music box - **S2** – the end of the left participant’s performance, marked one beat after the last of their 16-beat tone sequence - **S3** – the end of the right participant’s performance, marked one beat after the last of their 16-beat tone sequence - **S106** – the end of each trial after the rating scales were completed - **S107** – the end of each block **Figure** [Illustration of the event codes occurring over time in the dataset.](derivatives/figures/Figure1_EventCodes.png) **Notes** - Data are organized in BIDS format. - BrainVision files (.eeg, .vhdr, .vmrk) contain raw hyperscanning EEG data. - Behavioural data are provided per pair and as a compiled dataset in the derivatives folder.
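The S128 convention in the event-code list above (first five markers = metronome, remainder = left participant's e-music box) can be sketched as a small splitting function. The `(marker, time)` pair structure is a hypothetical simplification; real markers would come from the BrainVision `.vmrk` file or MNE annotations:

```python
def split_s128_events(events):
    """Split 'S128' markers per the convention above: first five are metronome
    tone onsets, the rest are the left participant's e-music box tone onsets.

    `events` is a list of (marker, time_in_seconds) pairs (assumed structure).
    """
    s128_times = [t for marker, t in events if marker == "S128"]
    return s128_times[:5], s128_times[5:]

metronome, left_tones = split_s128_events(
    [("S105", 0.0), ("S128", 0.5), ("S128", 1.0), ("S128", 1.5),
     ("S128", 2.0), ("S128", 2.5), ("S128", 3.0), ("S4", 3.1)]
)
print(metronome)   # → [0.5, 1.0, 1.5, 2.0, 2.5]
print(left_tones)  # → [3.0]
```

Right-participant tone onsets need no such split, since they carry their own marker (`S4`).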
## Dataset Information | Dataset ID | `DS007471` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Joint agency EEG dataset | | Author (year) | `Zhou2026` | | Canonical | `Zhou2024` | | Importable as | `DS007471`, `Zhou2026`, `Zhou2024` | | Year | 2026 | | Authors | Zijun Zhou, Anna Zamm, Justin Christensen, Vinesh Rao, Janeen Loehr | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007471.v1.0.0](https://doi.org/10.18112/openneuro.ds007471.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007471) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) | [Source URL](https://openneuro.org/datasets/ds007471) | ### Copy-paste BibTeX ```bibtex @dataset{ds007471, title = {Joint agency EEG dataset}, author = {Zijun Zhou and Anna Zamm and Justin Christensen and Vinesh Rao and Janeen Loehr}, doi = {10.18112/openneuro.ds007471.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007471.v1.0.0}, } ``` ## Technical Details - Subjects: 31 - Recordings: 31 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 18.78229444444444 - Pathology: Healthy - Modality: Auditory - Type: Other - Size on disk: 8.1 GB - File count: 31 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007471.v1.0.0 - Source: openneuro - OpenNeuro: [ds007471](https://openneuro.org/datasets/ds007471) - NeMAR: [ds007471](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) ## API Reference Use the `DS007471` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007471(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Joint agency EEG dataset * **Study:** `ds007471` (OpenNeuro) * **Author (year):** `Zhou2026` * **Canonical:** `Zhou2024` Also importable as: `DS007471`, `Zhou2026`, `Zhou2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
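The Notes above say `query` takes MongoDB-style filters that are AND-ed with the dataset filter. The matcher below is a self-contained illustration of those semantics for equality and `$in` only — it is a sketch of how such filters behave, not EEGDash's actual query engine (which evaluates server-side):

```python
# Illustrative MongoDB-style matcher (equality and "$in" only). This mimics
# the filter semantics described above; it is not EEGDash's implementation.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds007471", "subject": "01"},
    {"dataset": "ds007471", "subject": "05"},
]
# Dataset filter AND user filter, merged into one query dict.
query = {"dataset": "ds007471", "subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```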
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007471](https://openneuro.org/datasets/ds007471) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007471](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) DOI: [https://doi.org/10.18112/openneuro.ds007471.v1.0.0](https://doi.org/10.18112/openneuro.ds007471.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007471 >>> dataset = DS007471(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007471) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007473: fnirs dataset, 5 subjects *High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset* Access recordings and metadata through EEGDash. **Citation:** Morgan Fogarty, Kalyan Tripathy, Alexandra M Svoboda, Mariel L Schroeder, Sean M Rafferty, Edward J Richter, Christopher Tracy, Patricia K Mansfield, Madison Booth, Andrew K Fishell, Arefeh Sherafati, Zachary E Markow, Muriah D Wheelock, Ana Maria Arbelaez, Bradley L Schlaggar, Christopher D Smyser, Adam T Eggebrecht, Joseph P Culver (2026). *High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset*. 
[10.18112/openneuro.ds007473.v1.0.0](https://doi.org/10.18112/openneuro.ds007473.v1.0.0) Modality: fnirs Subjects: 5 Recordings: 189 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007473 dataset = DS007473(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007473(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007473( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007473, title = {High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset}, author = {Morgan Fogarty and Kalyan Tripathy and Alexandra M Svoboda and Mariel L Schroeder and Sean M Rafferty and Edward J Richter and Christopher Tracy and Patricia K Mansfield and Madison Booth and Andrew K Fishell and Arefeh Sherafati and Zachary E Markow and Muriah D Wheelock and Ana Maria Arbelaez and Bradley L Schlaggar and Christopher D Smyser and Adam T Eggebrecht and Joseph P Culver}, doi = {10.18112/openneuro.ds007473.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007473.v1.0.0}, } ``` ## About This Dataset This dataset consists of 5 participants completing functional localizer and movie viewing tasks. These data are stored in SNIRF format with optode and landmark locations associated with subject specific head models. 
See the corresponding publication for more information about this dataset: Tripathy K, Fogarty M, et al., “Mapping brain function in adults and young children during naturalistic viewing with high-density diffuse optical tomography.” Human Brain Mapping. 2024 May;45(7):e26684. doi: 10.1002/hbm.26684. PMID: 38703090; PMCID: PMC11069306. Kalyan Tripathy, Zachary E. Markow, Morgan Fogarty, Mariel L. Schroeder, Alexa M. Svoboda, Adam T. Eggebrecht, Bradley L. Schlaggar, Jason W. Trobaugh, Joseph P. Culver, “Multisensory naturalistic decoding with high-density diffuse optical tomography,” Neurophoton. 12(1) 015002 (23 January 2025) [https://doi.org/10.1117/1.NPh.12.1.015002](https://doi.org/10.1117/1.NPh.12.1.015002) ## Dataset Information | Dataset ID | `DS007473` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset | | Author (year) | `Fogarty2026_High` | | Canonical | `Tripathy2024` | | Importable as | `DS007473`, `Fogarty2026_High`, `Tripathy2024` | | Year | 2026 | | Authors | Morgan Fogarty, Kalyan Tripathy, Alexandra M Svoboda, Mariel L Schroeder, Sean M Rafferty, Edward J Richter, Christopher Tracy, Patricia K Mansfield, Madison Booth, Andrew K Fishell, Arefeh Sherafati, Zachary E Markow, Muriah D Wheelock, Ana Maria Arbelaez, Bradley L Schlaggar, Christopher D Smyser, Adam T Eggebrecht, Joseph P Culver | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007473.v1.0.0](https://doi.org/10.18112/openneuro.ds007473.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007473) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) | [Source 
URL](https://openneuro.org/datasets/ds007473) | ### Copy-paste BibTeX ```bibtex @dataset{ds007473, title = {High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset}, author = {Morgan Fogarty and Kalyan Tripathy and Alexandra M Svoboda and Mariel L Schroeder and Sean M Rafferty and Edward J Richter and Christopher Tracy and Patricia K Mansfield and Madison Booth and Andrew K Fishell and Arefeh Sherafati and Zachary E Markow and Muriah D Wheelock and Ana Maria Arbelaez and Bradley L Schlaggar and Christopher D Smyser and Adam T Eggebrecht and Joseph P Culver}, doi = {10.18112/openneuro.ds007473.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007473.v1.0.0}, } ``` ## Technical Details - Subjects: 5 - Recordings: 189 - Tasks: 19 - Channels: 6782 (71), 6928 (43), 6880 (38), 6750 (31), 7030 (6) - Sampling rate (Hz): 10.41666666666667 - Duration (hours): 51.12245333333332 - Pathology: Healthy - Modality: Multisensory - Type: Perception - Size on disk: 36.3 GB - File count: 189 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007473.v1.0.0 - Source: openneuro - OpenNeuro: [ds007473](https://openneuro.org/datasets/ds007473) - NeMAR: [ds007473](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) ## API Reference Use the `DS007473` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset * **Study:** `ds007473` (OpenNeuro) * **Author (year):** `Fogarty2026_High` * **Canonical:** `Tripathy2024` Also importable as: `DS007473`, `Fogarty2026_High`, `Tripathy2024`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 189; tasks: 19. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007473](https://openneuro.org/datasets/ds007473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007473](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) DOI: [https://doi.org/10.18112/openneuro.ds007473.v1.0.0](https://doi.org/10.18112/openneuro.ds007473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007473 >>> dataset = DS007473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007473) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS007477: fnirs dataset, 18 subjects *TimeSeries BIDS converted* Access recordings and metadata through EEGDash. **Citation:** Niu,Haijing, Zheng, Sha, Yuan, Haodong (2026). *TimeSeries BIDS converted*. [10.18112/openneuro.ds007477.v1.0.1](https://doi.org/10.18112/openneuro.ds007477.v1.0.1) Modality: fnirs Subjects: 18 Recordings: 36 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007477 dataset = DS007477(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007477(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007477( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007477, title = {TimeSeries BIDS converted}, author = {Niu,Haijing and Zheng, Sha and Yuan, Haodong}, doi = {10.18112/openneuro.ds007477.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007477.v1.0.1}, } ``` ## About This Dataset This dataset was converted from TimeSeriesHbORT_18sub_twoSessionICAdenoise(1).mat using `convert_mat_to_bids.py`. Notes: - Review and confirm `*_nirs.json` (SamplingFrequency, NIRSChannelCount, source/detector mapping) before public release. - This README is a placeholder to satisfy BIDS recommendations; replace with dataset-specific information as needed. ## Dataset Information | Dataset ID | `DS007477` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | TimeSeries BIDS converted | | Author (year) | `Niu2026` | | Canonical | — | | Importable as | `DS007477`, `Niu2026` | | Year | 2026 | | Authors | Niu,Haijing, Zheng, Sha, Yuan, Haodong | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007477.v1.0.1](https://doi.org/10.18112/openneuro.ds007477.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007477) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) | [Source URL](https://openneuro.org/datasets/ds007477) | ### Copy-paste BibTeX ```bibtex @dataset{ds007477, title = {TimeSeries BIDS converted}, author = {Niu,Haijing and Zheng, Sha and Yuan, Haodong}, doi = {10.18112/openneuro.ds007477.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007477.v1.0.1}, } ``` ## Technical Details - Subjects: 18 - Recordings: 36 - Tasks: 1 - Channels: 1 - Sampling rate (Hz): 10.0 - Duration (hours): Not calculated - Pathology: Not specified - Modality: Other - Type: — - Size on disk: 9.2 KB - File count: 36 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007477.v1.0.1 - Source: openneuro - OpenNeuro: 
[ds007477](https://openneuro.org/datasets/ds007477) - NeMAR: [ds007477](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) ## API Reference Use the `DS007477` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TimeSeries BIDS converted * **Study:** `ds007477` (OpenNeuro) * **Author (year):** `Niu2026` * **Canonical:** — Also importable as: `DS007477`, `Niu2026`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 18; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
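The DS007477 README above asks users to review the `*_nirs.json` sidecar (SamplingFrequency, NIRSChannelCount, source/detector mapping) before public release. A minimal sketch of such a check using only the standard library — the field names follow the BIDS NIRS specification, and the example values are made up for illustration:

```python
import json

# Minimal sanity check for a BIDS *_nirs.json sidecar, prompted by the
# README note above. Field names are from the BIDS NIRS specification;
# the example sidecar values are invented.
REQUIRED = ("SamplingFrequency", "NIRSChannelCount")

def check_nirs_sidecar(text: str) -> list:
    sidecar = json.loads(text)
    problems = [f for f in REQUIRED if f not in sidecar]
    if sidecar.get("SamplingFrequency", 0) <= 0:
        problems.append("SamplingFrequency must be positive")
    return problems

example = json.dumps({"SamplingFrequency": 10.0, "NIRSChannelCount": 1})
print(check_nirs_sidecar(example))  # []
```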
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007477](https://openneuro.org/datasets/ds007477) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007477](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) DOI: [https://doi.org/10.18112/openneuro.ds007477.v1.0.1](https://doi.org/10.18112/openneuro.ds007477.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007477 >>> dataset = DS007477(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007477) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # DS007521: eeg dataset, 23 subjects *The effect of hunger and state preferences on the neural processing of food images* Access recordings and metadata through EEGDash. **Citation:** Moerel, Denise, Chenh, Cecilia, Bowman, Sophie, Carlson, Thomas (2026). *The effect of hunger and state preferences on the neural processing of food images*. 
[10.18112/openneuro.ds007521.v1.0.1](https://doi.org/10.18112/openneuro.ds007521.v1.0.1) Modality: eeg Subjects: 23 Recordings: 46 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007521 dataset = DS007521(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007521(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007521( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007521, title = {The effect of hunger and state preferences on the neural processing of food images}, author = {Moerel, Denise and Chenh, Cecilia and Bowman, Sophie and Carlson, Thomas}, doi = {10.18112/openneuro.ds007521.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007521.v1.0.1}, } ``` ## About This Dataset A preprint of the manuscript can be found on bioRxiv: doi.org/10.1101/2025.09.09.674354 The experiment and analysis code can be found via the Open Science Framework: doi.org/10.17605/OSF.IO/ZFD7P Experiment Details: Human electroencephalography recordings from 23 participants, who did a letter task and calorie categorisation task. In the letter task, participants viewed rapid streams of overlaid food/non-food images and letters, pressing a button whenever they saw a vowel, while ignoring the images. This setup directed attention away from the visual objects, making them task-irrelevant. 
In contrast, the calorie categorisation task required participants to actively evaluate each food image and classify it as higher or lower in calories than bread, by pressing a button. Experiment length: 1 hour ## Dataset Information | Dataset ID | `DS007521` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | The effect of hunger and state preferences on the neural processing of food images | | Author (year) | `Moerel2026` | | Canonical | `Moerel2025` | | Importable as | `DS007521`, `Moerel2026`, `Moerel2025` | | Year | 2026 | | Authors | Moerel, Denise, Chenh, Cecilia, Bowman, Sophie, Carlson, Thomas | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007521.v1.0.1](https://doi.org/10.18112/openneuro.ds007521.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007521) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) | [Source URL](https://openneuro.org/datasets/ds007521) | ### Copy-paste BibTeX ```bibtex @dataset{ds007521, title = {The effect of hunger and state preferences on the neural processing of food images}, author = {Moerel, Denise and Chenh, Cecilia and Bowman, Sophie and Carlson, Thomas}, doi = {10.18112/openneuro.ds007521.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007521.v1.0.1}, } ``` ## Technical Details - Subjects: 23 - Recordings: 46 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 100.0 - Duration (hours): 34.23017222222222 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 29.0 GB - File count: 46 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007521.v1.0.1 - Source: openneuro - OpenNeuro: [ds007521](https://openneuro.org/datasets/ds007521) - NeMAR: [ds007521](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) ## API Reference Use the `DS007521` class to access this dataset 
programmatically. ### *class* eegdash.dataset.DS007521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of hunger and state preferences on the neural processing of food images * **Study:** `ds007521` (OpenNeuro) * **Author (year):** `Moerel2026` * **Canonical:** `Moerel2025` Also importable as: `DS007521`, `Moerel2026`, `Moerel2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007521](https://openneuro.org/datasets/ds007521) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007521](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) DOI: [https://doi.org/10.18112/openneuro.ds007521.v1.0.1](https://doi.org/10.18112/openneuro.ds007521.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007521 >>> dataset = DS007521(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007521) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007523: meg dataset, 58 subjects *LPP MEG Listen* Access recordings and metadata through EEGDash. **Citation:** Corentin Bel, Julie Bonnaire, Christophe Pallier, Jean-Rémi King (2026). *LPP MEG Listen*. 
[10.18112/openneuro.ds007523.v1.0.0](https://doi.org/10.18112/openneuro.ds007523.v1.0.0) Modality: meg Subjects: 58 Recordings: 579 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007523 dataset = DS007523(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007523(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007523( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007523, title = {LPP MEG Listen}, author = {Corentin Bel and Julie Bonnaire and Christophe Pallier and Jean-Rémi King}, doi = {10.18112/openneuro.ds007523.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007523.v1.0.0}, } ``` ## About This Dataset **Summary** This dataset contains magnetoencephalography (MEG) recordings collected while participants listened to the French audiobook of *Le Petit Prince* by Antoine de Saint-Exupéry.
A complementary MEG dataset from the same project, using a reading (RSVP) paradigm, is available on OpenNeuro (accession number: ds007524). This data is analyzed in: d’Ascoli, S., Bel, C., Rapin, J. et al. Towards decoding individual words from non-invasive brain recordings. Nature Communications 16, 10521 (2025). [https://doi.org/10.1038/s41467-025-65499-0](https://doi.org/10.1038/s41467-025-65499-0) **Participants** Fifty-eight healthy adults participated in the listening experiment (17 females; mean age = 27.8 years, SD = 5.5 years). All participants were native French speakers, right-handed, and reported no history of neurological disorders. Written informed consent was obtained prior to participation. The study was approved by the relevant local ethics committee. **Stimuli** The auditory stimulus consisted of the French audiobook version of *Le Petit Prince*. - Language: French - Format: Continuous audiobook - Segmentation: 9 parts - Mean duration per part: 10min50s - Standard deviation: 55s - Minimum duration: 9min40s - Maximum duration: 12min30s The same audiobook version was previously used in a publicly available fMRI dataset (Li et al., 2022). **Experimental Procedure** Participants were seated in the MEG system after informed consent and familiarization with the recording environment. Auditory stimuli were delivered through MEG-compatible earphones. Sound intensity was individually adjusted to a comfortable listening level before the experiment. Participants were instructed to listen attentively and remain as still as possible. The experiment consisted of 9 runs, corresponding to the 9 audiobook segments. Between runs, participants completed 4 multiple-choice comprehension questions presented visually on a screen (not reported here). Short breaks were provided between runs.
Alertness and movement were monitored via camera during recording. **Acquisition** **MEG** MEG data for all three tasks were recorded inside the same magnetically shielded room using a whole-head Elekta Neuromag TRIUX MEG system (Elekta Oy, Helsinki, Finland), equipped with 102 magnetometers and 204 planar gradiometers. Data were recorded continuously with a sampling rate of 1000 Hz and an online low-pass filter at 330 Hz and high-pass filter at 0.1 Hz. Vertical and horizontal electrooculograms (EOG) and an electrocardiogram (ECG) were recorded simultaneously using bipolar electrodes to monitor eye movements and heartbeats. **Anatomical MRI** For each participant, a high-resolution T1-weighted anatomical MRI scan was acquired using a 3T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany). A standard MPRAGE sequence was used. MRI scans were typically acquired right after the MEG recording. Scans were used for coregistration and cortical surface reconstruction for source analysis. **Data Organization** **Raw Data** The root directory includes: - `dataset_description.json` - `participants.tsv` and `participants.json` - `task-listen_events.json` - `sub-01` to `sub-58` - `sourcedata/` Each subject directory (`sub-XX`) contains one session (`ses-01`) with: - `anat/`: T1-weighted MRI (`sub-XX_ses-01_T1w.nii.gz`) and corresponding JSON sidecar - `meg/`: 9 MEG runs (`task-listen_run-01` to `run-09`), each including: - continuous MEG data (`*_meg.fif`) - sidecar JSON files - `events.tsv` and `channels.tsv` files - coordinate system file (`*_coordsystem.json`) - calibration and crosstalk files - `sub-XX_ses-01_scans.tsv`: scan-level metadata Each run corresponds to one audiobook segment. Acquisition parameters are provided in the corresponding sidecar JSON files. **References** Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R.
N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data*, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) Li, Jixing, et al. “Le Petit Prince Multilingual Naturalistic fMRI Corpus.” Scientific Data, vol. 9, no. 1, Aug. 2022, p. 530. www.nature.com, [https://doi.org/10.1038/s41597-022-01625-7](https://doi.org/10.1038/s41597-022-01625-7). ## Dataset Information | Dataset ID | `DS007523` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | LPP MEG Listen | | Author (year) | `Bel2026` | | Canonical | `Dascoli2025` | | Importable as | `DS007523`, `Bel2026`, `Dascoli2025` | | Year | 2026 | | Authors | Corentin Bel, Julie Bonnaire, Christophe Pallier, Jean-Rémi King | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007523.v1.0.0](https://doi.org/10.18112/openneuro.ds007523.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007523) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) | [Source URL](https://openneuro.org/datasets/ds007523) | ### Copy-paste BibTeX ```bibtex @dataset{ds007523, title = {LPP MEG Listen}, author = {Corentin Bel and Julie Bonnaire and Christophe Pallier and Jean-Rémi King}, doi = {10.18112/openneuro.ds007523.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007523.v1.0.0}, } ``` ## Technical Details - Subjects: 58 - Recordings: 579 - Tasks: 1 - Channels: 346 (484), 404 (9), 400 (9), 329 (9), 343 (9), 321 - Sampling rate (Hz): 1000.0 - Duration (hours): 94.80763305555556 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 444.8 GB - File count: 579 - Format: BIDS - License: CC0 - DOI: 
doi:10.18112/openneuro.ds007523.v1.0.0 - Source: openneuro - OpenNeuro: [ds007523](https://openneuro.org/datasets/ds007523) - NeMAR: [ds007523](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) ## API Reference Use the `DS007523` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LPP MEG Listen * **Study:** `ds007523` (OpenNeuro) * **Author (year):** `Bel2026` * **Canonical:** `Dascoli2025` Also importable as: `DS007523`, `Bel2026`, `Dascoli2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 58; recordings: 579; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
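As a concrete illustration of the query-merging behaviour described in the Notes above — the user query is ANDed with the fixed dataset filter and must not contain the key `dataset` — here is a minimal pure-Python sketch. This is a hypothetical helper for illustration, not the actual eegdash internals:

```python
def merge_query(dataset_id, user_query=None):
    """Sketch: AND a MongoDB-style user filter with the fixed dataset
    selection. Hypothetical helper, not the eegdash implementation."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        # The Notes state the user query must not contain this key.
        raise ValueError("query must not contain the key 'dataset'")
    # The dataset filter is always applied; other fields are ANDed with it.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds007523", {"subject": {"$in": ["01", "02"]}, "run": "01"})
print(merged)
```

The resulting dict is the shape of filter that MongoDB-style backends evaluate as an implicit AND over its top-level keys.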
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007523](https://openneuro.org/datasets/ds007523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007523](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) DOI: [https://doi.org/10.18112/openneuro.ds007523.v1.0.0](https://doi.org/10.18112/openneuro.ds007523.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007523 >>> dataset = DS007523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007523) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS007524: meg dataset, 50 subjects *LittlePrince_MEG_French_Read_Pallier2025* Access recordings and metadata through EEGDash. **Citation:** Corentin Bel, Julie Bonnaire, Jean-Rémi King, Christophe Pallier (2026). *LittlePrince_MEG_French_Read_Pallier2025*. 
[10.18112/openneuro.ds007524.v1.0.1](https://doi.org/10.18112/openneuro.ds007524.v1.0.1) Modality: meg Subjects: 50 Recordings: 500 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007524 dataset = DS007524(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007524(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007524( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007524, title = {LittlePrince_MEG_French_Read_Pallier2025}, author = {Corentin Bel and Julie Bonnaire and Jean-Rémi King and Christophe Pallier}, doi = {10.18112/openneuro.ds007524.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007524.v1.0.1}, } ``` ## About This Dataset **Summary** This dataset contains magnetoencephalography (MEG) recordings collected while participants read the French text of *Le Petit Prince* presented using a rapid serial visual presentation (RSVP) paradigm. A separate dataset containing MEG recordings from the auditory listening paradigm is available on OpenNeuro (accession number: ds007523). This data is analyzed in: d’Ascoli, S., Bel, C., Rapin, J. et al. Towards decoding individual words from non-invasive brain recordings. Nature Communications 16, 10521 (2025). 
[https://doi.org/10.1038/s41467-025-65499-0](https://doi.org/10.1038/s41467-025-65499-0) **Participants** Fifty healthy adults participated in the reading experiment (10 females; mean age = 28.4 years, SD = 5.7 years). All participants were native French speakers, right-handed, and reported no history of neurological disorders. Written informed consent was obtained prior to participation. The study was approved by the relevant local ethics committee. **Stimuli** The stimulus consisted of the French text of *Le Petit Prince*. The text was presented using a rapid serial visual presentation (RSVP) paradigm: - Words were displayed individually in white font on a black background - Word duration: 225 ms - Inter-word interval: 50 ms (black screen) - Sentence-final pause: 500 ms Timing parameters were selected based on pilot testing to maintain attention and reading fluency. The text was segmented into 9 parts corresponding to the 9 experimental runs. - Mean run duration: 8min10s - SD: 40s - Range: 7min10s to 9min **Experimental Procedure** After informed consent and familiarization with the MEG environment, participants were seated in the MEG chair inside a magnetically shielded room facing a projection screen. Viewing distance was fixed at 100 cm. Words appeared sequentially at the center of the screen.
Participants were instructed to maintain fixation and read attentively while minimizing movement. The experiment consisted of 9 runs. Short breaks were provided between runs. After each run, participants completed 4 multiple-choice comprehension questions to assess engagement (behavioral responses are not included in this release). **Acquisition** **MEG** MEG data for all three tasks were recorded inside the same magnetically shielded room using a whole-head Elekta Neuromag TRIUX MEG system (Elekta Oy, Helsinki, Finland), equipped with 102 magnetometers and 204 planar gradiometers. Data were recorded continuously with a sampling rate of 1000 Hz and an online low-pass filter at 330 Hz and high-pass filter at 0.1 Hz. Vertical and horizontal electrooculograms (EOG) and an electrocardiogram (ECG) were recorded simultaneously using bipolar electrodes to monitor eye movements and heartbeats. **Anatomical MRI** For each participant, a high-resolution T1-weighted anatomical MRI scan was acquired using a 3T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany). A standard MPRAGE sequence was used. MRI scans were typically acquired right after the MEG recording. Scans were used for coregistration and cortical surface reconstruction for source analysis. 
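A quick back-of-envelope check of the acquisition figures stated above (1000 Hz sampling, 102 magnetometers + 204 planar gradiometers, mean run length 8 min 10 s) gives the approximate raw size of one run. Assumes float32 storage of the MEG sensor channels only; actual `.fif` files differ (headers, EOG/ECG/stim channels, storage format):

```python
sfreq = 1000.0                       # Hz, as stated; Nyquist 500 Hz > 330 Hz low-pass
n_meg = 102 + 204                    # magnetometers + planar gradiometers
run_seconds = 8 * 60 + 10            # mean run duration, 8 min 10 s
samples = int(sfreq * run_seconds)   # samples per channel per run
size_mb = samples * n_meg * 4 / 1e6  # float32 bytes -> MB
print(samples, round(size_mb, 1))    # 490000 samples, ~600 MB per run
```

Nine such runs per subject is consistent with the multi-gigabyte per-subject footprint implied by the dataset's total size on disk.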
**Data Organization** **Raw Data** The root directory includes: - `dataset_description.json` - `participants.tsv` and `participants.json` - `task-read_events.json` - `sub-01` to `sub-50` - `sourcedata/` - `derivatives/` Each subject directory (`sub-XX`) contains one session (`ses-01`) with: - `anat/`: T1-weighted MRI (`sub-XX_ses-01_T1w.nii.gz`) and corresponding JSON sidecar - `meg/`: 9 MEG runs (`task-read_run-01` to `run-09`), each including: - continuous MEG data (`*_meg.fif`) - sidecar JSON files - `events.tsv` and `channels.tsv` files - coordinate system file (`*_coordsystem.json`) - calibration and crosstalk files - `sub-XX_ses-01_scans.tsv`: scan-level metadata Each run corresponds to one text segment. Acquisition parameters are provided in the corresponding sidecar JSON files. **Derivatives** The `derivatives/` directory contains: - `freesurfer/`: subject-specific FreeSurfer reconstructions and morph maps - `preprocessed_data/`: preprocessed MEG data (including SSS-processed files), forward and inverse solutions, noise covariance matrices, source spaces, transformation files, evoked data, and source time courses. **Reference** Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. *Scientific Data*, 5, 180110.
[https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110) ## Dataset Information | Dataset ID | `DS007524` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | LittlePrince_MEG_French_Read_Pallier2025 | | Author (year) | `Pallier2025` | | Canonical | `LittlePrince` | | Importable as | `DS007524`, `Pallier2025`, `LittlePrince` | | Year | 2026 | | Authors | Corentin Bel, Julie Bonnaire, Jean-Rémi King, Christophe Pallier | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007524.v1.0.1](https://doi.org/10.18112/openneuro.ds007524.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007524) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) | [Source URL](https://openneuro.org/datasets/ds007524) | ### Copy-paste BibTeX ```bibtex @dataset{ds007524, title = {LittlePrince_MEG_French_Read_Pallier2025}, author = {Corentin Bel and Julie Bonnaire and Jean-Rémi King and Christophe Pallier}, doi = {10.18112/openneuro.ds007524.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007524.v1.0.1}, } ``` ## Technical Details - Subjects: 50 - Recordings: 500 - Tasks: 1 - Channels: 346 (414), 339 (27), 338 (9) - Sampling rate (Hz): 1000.0 - Duration (hours): 63.99154166666667 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 298.6 GB - File count: 500 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007524.v1.0.1 - Source: openneuro - OpenNeuro: [ds007524](https://openneuro.org/datasets/ds007524) - NeMAR: [ds007524](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) ## API Reference Use the `DS007524` class to access this dataset programmatically. 
### *class* eegdash.dataset.DS007524(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LittlePrince_MEG_French_Read_Pallier2025 * **Study:** `ds007524` (OpenNeuro) * **Author (year):** `Pallier2025` * **Canonical:** `LittlePrince` Also importable as: `DS007524`, `Pallier2025`, `LittlePrince`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 50; recordings: 500; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007524](https://openneuro.org/datasets/ds007524) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007524](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) DOI: [https://doi.org/10.18112/openneuro.ds007524.v1.0.1](https://doi.org/10.18112/openneuro.ds007524.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007524 >>> dataset = DS007524(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007524) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) * [eegdash.dataset.DS000117](eegdash.dataset.DS000117.md) * [eegdash.dataset.DS000246](eegdash.dataset.DS000246.md) * [eegdash.dataset.DS000247](eegdash.dataset.DS000247.md) * [eegdash.dataset.DS000248](eegdash.dataset.DS000248.md) * [eegdash.dataset.DS002001](eegdash.dataset.DS002001.md) # DS007526: eeg dataset, 144 subjects *PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease* Access recordings and metadata through EEGDash. **Citation:** Zoya Katzir, Daniel Vered, Inbal Maidan ([inbalm@tlvmc.gov.il](mailto:inbalm@tlvmc.gov.il)) (2026). *PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease*. 
[10.18112/openneuro.ds007526.v1.0.0](https://doi.org/10.18112/openneuro.ds007526.v1.0.0) Modality: eeg Subjects: 144 Recordings: 277 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007526 dataset = DS007526(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007526(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007526( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007526, title = {PD-EEG: Resting-State & Walking EEG in Parkinson's Disease}, author = {Zoya Katzir and Daniel Vered and Inbal Maidan (inbalm@tlvmc.gov.il)}, doi = {10.18112/openneuro.ds007526.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007526.v1.0.0}, } ``` ## About This Dataset **PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease** **Overview** This dataset contains EEG recordings from Parkinson’s disease (PD) patients and healthy controls (HC), collected under two behavioral conditions: resting state (sitting) and walking. The dataset was acquired at the Neurology Institute, Tel Aviv Sourasky Medical Center.
**Participants** - *Parkinson’s disease (PD):* 116 participants - *Healthy controls (HC):* 28 participants **Inclusion criteria (PD):** - Age 40–90 - Hoehn & Yahr stage ≤ 3 - MoCA ≥ 21 - Able to walk independently **Exclusion criteria:** - History of stroke or major neurological disorder - Brain surgery - Significant head injury - Inability to walk independently All participants provided informed consent. The study was approved by the local ethics committee and conducted in accordance with the Declaration of Helsinki. **Experimental Design** Each participant underwent EEG recording under two conditions: 1. **Resting State** (144 recordings) - Sitting - Eyes open - Duration: ~4 minutes 2. **Walking** (133 recordings) - Walking on a treadmill at a comfortable speed while holding the handrails. - Duration: ~4 minutes Additional clinical data were collected, including: - **Demographic data** - **LEDD** (Levodopa Equivalent Daily Dose) - a measure of anti-parkinsonian medication dosage. - **MoCA** (Montreal Cognitive Assessment) - a global measure of cognitive function. - **MDS-UPDRS** - Movement Disorder Society Unified Parkinson’s Disease Rating Scale - the gold standard clinical rating scale for Parkinson’s Disease. - **CTT** - Color Trails Test - a measure of executive function and processing speed. **EEG Acquisition** - *System:* 64-channel Geodesic EEG System 400 (EGI system) - *Montage:* International 10–20 system **Data Organization** This dataset follows the **Brain Imaging Data Structure (BIDS)** specification.
Typical structure: ```text sub-001/ eeg/ sub-001_task-rest_eeg.* sub-001_task-walk_eeg.* participants.tsv participants.json dataset_description.json ``` **Inbal Maidan, PhD** Tel Aviv Sourasky Medical Center Email: [inbalm@tlvmc.gov.il](mailto:inbalm@tlvmc.gov.il) ## Dataset Information | Dataset ID | `DS007526` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease | | Author (year) | `Katzir2026` | | Canonical | `PD_EEG`, `PDEEG` | | Importable as | `DS007526`, `Katzir2026`, `PD_EEG`, `PDEEG` | | Year | 2026 | | Authors | Zoya Katzir, Daniel Vered, Inbal Maidan ([inbalm@tlvmc.gov.il](mailto:inbalm@tlvmc.gov.il)) | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007526.v1.0.0](https://doi.org/10.18112/openneuro.ds007526.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007526) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) | [Source URL](https://openneuro.org/datasets/ds007526) | ### Copy-paste BibTeX ```bibtex @dataset{ds007526, title = {PD-EEG: Resting-State & Walking EEG in Parkinson's Disease}, author = {Zoya Katzir and Daniel Vered and Inbal Maidan (inbalm@tlvmc.gov.il)}, doi = {10.18112/openneuro.ds007526.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007526.v1.0.0}, } ``` ## Technical Details - Subjects: 144 - Recordings: 277 - Tasks: 2 - Channels: 65 - Sampling rate (Hz): 250.0 - Duration (hours): 19.48837555555556 - Pathology: Parkinson’s - Modality: Motor - Type: Clinical/Intervention - Size on disk: 4.3 GB - File count: 277 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007526.v1.0.0 - Source: openneuro - OpenNeuro: [ds007526](https://openneuro.org/datasets/ds007526) - NeMAR: [ds007526](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) ## API Reference
Use the `DS007526` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007526(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease * **Study:** `ds007526` (OpenNeuro) * **Author (year):** `Katzir2026` * **Canonical:** `PD_EEG`, `PDEEG` Also importable as: `DS007526`, `Katzir2026`, `PD_EEG`, `PDEEG`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 144; recordings: 277; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
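Given the BIDS layout shown in the *Typical structure* listing earlier on this page, the rest and walk recordings can be separated by parsing the entity labels out of the filenames. A minimal sketch (the filenames below are illustrative; real extensions depend on the recording export format):

```python
import re

# Pull subject and task entities out of BIDS-style EEG filenames,
# grouping subjects by task ("rest" vs. "walk").
pattern = re.compile(r"sub-(?P<subject>[^_]+)_task-(?P<task>[^_]+)_eeg")

files = ["sub-001_task-rest_eeg.set", "sub-001_task-walk_eeg.set"]  # illustrative
by_task = {}
for name in files:
    m = pattern.search(name)
    if m:
        by_task.setdefault(m.group("task"), []).append(m.group("subject"))
print(by_task)  # {'rest': ['001'], 'walk': ['001']}
```

In practice the same split can be requested server-side with a query such as `{"task": "walk"}`, rather than by parsing paths locally.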
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007526](https://openneuro.org/datasets/ds007526) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007526](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) DOI: [https://doi.org/10.18112/openneuro.ds007526.v1.0.0](https://doi.org/10.18112/openneuro.ds007526.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007526 >>> dataset = DS007526(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007526) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007554: eeg, fnirs dataset, 30 subjects *Multimodal dataset from the CMx7-MM Experiment* Access recordings and metadata through EEGDash. **Citation:** Zaineb Ajra, Grégoire Vergotte, Stéphane Perrey, Lilian Evra, Simon Pla, Gérard Dray, Jacky Montmain, Binbin Xu (2026). *Multimodal dataset from the CMx7-MM Experiment*. 
[10.18112/openneuro.ds007554.v1.0.0](https://doi.org/10.18112/openneuro.ds007554.v1.0.0) Modality: eeg, fnirs Subjects: 30 Recordings: 1034 License: CC0 Source: openneuro Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007554 dataset = DS007554(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007554(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007554( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007554, title = {Multimodal dataset from the CMx7-MM Experiment}, author = {Zaineb Ajra and Grégoire Vergotte and Stéphane Perrey and Lilian Evra and Simon Pla and Gérard Dray and Jacky Montmain and Binbin Xu}, doi = {10.18112/openneuro.ds007554.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007554.v1.0.0}, } ``` ## About This Dataset No README content is available for this dataset. 
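Note that the per-recording sampling rates recorded in this dataset's metadata (listed under Technical Details below) cluster around two nominal values, 10 Hz and 250 Hz, differing only by tiny measured clock-drift offsets. A sketch of snapping measured rates to their nominal value before grouping recordings (the nominal set is an assumption read off the metadata, not an eegdash API):

```python
NOMINAL = (10.0, 250.0)  # assumed nominal rates for this dataset

def nominal_rate(measured):
    # Snap a measured (clock-drifted) rate to the closest nominal rate.
    return min(NOMINAL, key=lambda n: abs(n - measured))

print(nominal_rate(250.002243692634))  # drifted clock -> 250.0
print(nominal_rate(10.0))              # -> 10.0
```

Grouping on the snapped value avoids treating each drifted float as a distinct sampling rate.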
## Dataset Information | Dataset ID | `DS007554` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multimodal dataset from the CMx7-MM Experiment | | Author (year) | `Ajra2026` | | Canonical | — | | Importable as | `DS007554`, `Ajra2026` | | Year | 2026 | | Authors | Zaineb Ajra, Grégoire Vergotte, Stéphane Perrey, Lilian Evra, Simon Pla, Gérard Dray, Jacky Montmain, Binbin Xu | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007554.v1.0.0](https://doi.org/10.18112/openneuro.ds007554.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007554) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) | [Source URL](https://openneuro.org/datasets/ds007554) | ### Copy-paste BibTeX ```bibtex @dataset{ds007554, title = {Multimodal dataset from the CMx7-MM Experiment}, author = {Zaineb Ajra and Grégoire Vergotte and Stéphane Perrey and Lilian Evra and Simon Pla and Gérard Dray and Jacky Montmain and Binbin Xu}, doi = {10.18112/openneuro.ds007554.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007554.v1.0.0}, } ``` ## Technical Details - Subjects: 30 - Recordings: 1034 - Tasks: 7 - Channels: 32 - Sampling rate (Hz): 10.0 (519 recordings); nominal 250.0 (515 recordings, with per-recording measured rates between ≈249.9997 and ≈250.0031 Hz due to clock drift) - Duration (hours): 61.88401802488944 - Pathology: Healthy - Modality: — - Type: Other - Size on disk: 4.2 GB - File count: 1034 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007554.v1.0.0 - Source: openneuro - OpenNeuro: [ds007554](https://openneuro.org/datasets/ds007554) - NeMAR: [ds007554](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) ## API Reference Use the `DS007554` class to access this dataset programmatically.
### *class* eegdash.dataset.DS007554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal dataset from the CMx7-MM Experiment * **Study:** `ds007554` (OpenNeuro) * **Author (year):** `Ajra2026` * **Canonical:** — Also importable as: `DS007554`, `Ajra2026`. Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 1034; tasks: 7. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007554](https://openneuro.org/datasets/ds007554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007554](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) DOI: [https://doi.org/10.18112/openneuro.ds007554.v1.0.0](https://doi.org/10.18112/openneuro.ds007554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007554 >>> dataset = DS007554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007554) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) * [eegdash.dataset.DS004514](eegdash.dataset.DS004514.md) * [eegdash.dataset.DS004541](eegdash.dataset.DS004541.md) # DS007558: eeg dataset, 67 subjects *EEG Pre/Post Intervention Dataset* Access recordings and metadata through EEGDash. **Citation:** Mengsha Qi (2026). *EEG Pre/Post Intervention Dataset*. 
[10.18112/openneuro.ds007558.v1.0.0](https://doi.org/10.18112/openneuro.ds007558.v1.0.0) Modality: eeg Subjects: 67 Recordings: 121 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007558 dataset = DS007558(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007558(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007558( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007558, title = {EEG Pre/Post Intervention Dataset}, author = {Mengsha Qi}, doi = {10.18112/openneuro.ds007558.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007558.v1.0.0}, } ``` ## About This Dataset **Dataset Description** **Overview** This dataset contains EEG recordings from a study investigating neural activity changes before and after an intervention. The data are organized following the Brain Imaging Data Structure (BIDS) specification.
The dataset includes multiple participant groups and timepoints: - Group 1, Group 2, Group 3 - Pre-intervention (pre) and Post-intervention (post) **Participants** Participants are labeled using anonymized IDs (e.g., sub-001, sub-002, etc.). Demographic and session-related information are provided in the corresponding TSV files where applicable. **Data Structure** The dataset follows the BIDS format: - `sub-XXX/` > - `ses-pre/` or `ses-post/` > - `eeg/` > > - EEG recordings (.edf) > > - Metadata files (.json) > > - Events files (.tsv) Each subject contains EEG recordings organized by session (pre/post). **Experimental Design** The study compares neural activity before and after an intervention. Participants are divided into different groups to evaluate potential differences in outcomes. **Data Acquisition** EEG data were recorded using standard acquisition systems. Detailed acquisition parameters are stored in the accompanying JSON sidecar files. **Data Processing** The dataset has been reorganized into BIDS format. File naming, metadata, and structure have been standardized to ensure compatibility with BIDS-compliant tools. **Known Issues** - Some warnings may appear during BIDS validation but do not affect data usability. - All critical validation errors have been resolved. **Usage Notes** This dataset can be used for: - EEG signal analysis - Functional connectivity studies - Pre/post intervention comparisons **License** Please refer to the dataset repository for licensing information. **Acknowledgements** We thank all participants and researchers involved in data collection and processing. 
## Dataset Information | Dataset ID | `DS007558` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG Pre/Post Intervention Dataset | | Author (year) | `Qi2026` | | Canonical | — | | Importable as | `DS007558`, `Qi2026` | | Year | 2026 | | Authors | Mengsha Qi | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007558.v1.0.0](https://doi.org/10.18112/openneuro.ds007558.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007558) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) | [Source URL](https://openneuro.org/datasets/ds007558) | ### Copy-paste BibTeX ```bibtex @dataset{ds007558, title = {EEG Pre/Post Intervention Dataset}, author = {Mengsha Qi}, doi = {10.18112/openneuro.ds007558.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007558.v1.0.0}, } ``` ## Technical Details - Subjects: 67 - Recordings: 121 - Tasks: 1 - Channels: 19 (106), 21 (13), 20 (2) - Sampling rate (Hz): 200.0 - Duration (hours): 25.98491805555556 - Pathology: Not specified - Modality: Resting State - Type: Clinical/Intervention - Size on disk: 686.4 MB - File count: 121 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007558.v1.0.0 - Source: openneuro - OpenNeuro: [ds007558](https://openneuro.org/datasets/ds007558) - NeMAR: [ds007558](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) ## API Reference Use the `DS007558` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Pre/Post Intervention Dataset * **Study:** `ds007558` (OpenNeuro) * **Author (year):** `Qi2026` * **Canonical:** — Also importable as: `DS007558`, `Qi2026`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 67; recordings: 121; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007558](https://openneuro.org/datasets/ds007558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007558](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) DOI: [https://doi.org/10.18112/openneuro.ds007558.v1.0.0](https://doi.org/10.18112/openneuro.ds007558.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007558 >>> dataset = DS007558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007558) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007591: eeg dataset, 3 subjects *Delineating neural contributions to EEG-based speech decoding* Access recordings and metadata through EEGDash. **Citation:** Motoshige Sato, Yasuo Kabe, Sensho Nobe, Akito Yoshida, Masakazu Inoue, Mayumi Shimizu, Kenichi Tomeoka, Shuntaro Sasai (2026). *Delineating neural contributions to EEG-based speech decoding*. [10.18112/openneuro.ds007591.v1.0.1](https://doi.org/10.18112/openneuro.ds007591.v1.0.1) Modality: eeg Subjects: 3 Recordings: 21 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007591 dataset = DS007591(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007591(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007591( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007591, title = {Delineating neural contributions to EEG-based speech decoding}, author = {Motoshige Sato and Yasuo Kabe and Sensho Nobe and Akito Yoshida and Masakazu Inoue and Mayumi Shimizu and Kenichi Tomeoka and Shuntaro Sasai}, doi = {10.18112/openneuro.ds007591.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007591.v1.0.1}, } ``` ## About This Dataset **Delineating neural contributions to EEG-based speech decoding** **Overview** 128-channel EEG recordings during speech production tasks. Participants produced one of 5 color words (green, magenta, orange, violet, yellow) under three speech conditions: overt, minimally overt, and covert. Each trial consists of 5 repetitions of the same word (1.25 sec per repetition). ### View full README **Delineating neural contributions to EEG-based speech decoding** **Overview** 128-channel EEG recordings during speech production tasks. Participants produced one of 5 color words (green, magenta, orange, violet, yellow) under three speech conditions: overt, minimally overt, and covert. Each trial consists of 5 repetitions of the same word (1.25 sec per repetition). **Channel layout (139 channels total)** - Channels 1-128: EEG - Channels 129-130: DISPLAY (bipolar pair, misc) - Channels 131-132: MIC (bipolar pair, misc) - Channels 133-134: EOG (bipolar pair) - Channels 135-136: EMG upper orbicularis oris (bipolar pair) - Channels 137-138: EMG lower orbicularis oris (bipolar pair) - Channel 139: TRIGGER (marks trial onsets) **Session types** - calibration: Offline data collection for decoder training - online: Real-time decoding with trained decoder **Preprocessing note** The EEG channels were recorded with a 10x preamp gain. Raw values have been converted to Volts (×1e-6). 
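The channel layout above can be turned into an MNE-style channel-type map before analysis. A sketch under the assumption that channels arrive in the documented 1-based order — verify against `raw.ch_names` and the `*_channels.tsv` sidecar before applying:

```python
def channel_type(idx: int) -> str:
    """Return an MNE channel type for a 1-based channel index."""
    if 1 <= idx <= 128:
        return "eeg"
    if idx in (133, 134):
        return "eog"   # EOG bipolar pair
    if 135 <= idx <= 138:
        return "emg"   # upper/lower orbicularis oris bipolar pairs
    if idx == 139:
        return "stim"  # TRIGGER channel marking trial onsets
    return "misc"      # DISPLAY (129-130) and MIC (131-132) pairs

# With a loaded Raw object, apply via:
# raw.set_channel_types({name: channel_type(i + 1)
#                        for i, name in enumerate(raw.ch_names)})
```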
**Code** Code for data loading, preprocessing, and decoding models is available at: [https://github.com/arayabrain/uhd-gmail-public](https://github.com/arayabrain/uhd-gmail-public) ## Dataset Information | Dataset ID | `DS007591` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Delineating neural contributions to EEG-based speech decoding | | Author (year) | `Sato2026_Delineating` | | Canonical | `Sato2025` | | Importable as | `DS007591`, `Sato2026_Delineating`, `Sato2025` | | Year | 2026 | | Authors | Motoshige Sato, Yasuo Kabe, Sensho Nobe, Akito Yoshida, Masakazu Inoue, Mayumi Shimizu, Kenichi Tomeoka, Shuntaro Sasai | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007591.v1.0.1](https://doi.org/10.18112/openneuro.ds007591.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007591) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) | [Source URL](https://openneuro.org/datasets/ds007591) | ### Copy-paste BibTeX ```bibtex @dataset{ds007591, title = {Delineating neural contributions to EEG-based speech decoding}, author = {Motoshige Sato and Yasuo Kabe and Sensho Nobe and Akito Yoshida and Masakazu Inoue and Mayumi Shimizu and Kenichi Tomeoka and Shuntaro Sasai}, doi = {10.18112/openneuro.ds007591.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007591.v1.0.1}, } ``` ## Technical Details - Subjects: 3 - Recordings: 21 - Tasks: 3 - Channels: 139 - Sampling rate (Hz): 256.0 - Duration (hours): 6.775833333333333 - Pathology: Healthy - Modality: — - Type: Motor - Size on disk: 1.6 GB - File count: 21 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007591.v1.0.1 - Source: openneuro - OpenNeuro: [ds007591](https://openneuro.org/datasets/ds007591) - NeMAR: [ds007591](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) ## API Reference 
Use the `DS007591` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007591(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delineating neural contributions to EEG-based speech decoding * **Study:** `ds007591` (OpenNeuro) * **Author (year):** `Sato2026_Delineating` * **Canonical:** `Sato2025` Also importable as: `DS007591`, `Sato2026_Delineating`, `Sato2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 21; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
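The constraint noted above — user filters are ANDed with the dataset selection and must not contain the key `dataset` — can be checked with a small guard before constructing the dataset. This helper is hypothetical illustration, not part of the EEGDash API, and the `"task"` value is a placeholder:

```python
def validate_user_query(query: dict) -> dict:
    """Reject filters that would clash with the dataset selection.

    The "dataset" key is reserved: the dataset class injects its own
    dataset filter and combines it with the user query.
    """
    if "dataset" in query:
        raise ValueError("'dataset' is reserved; it is set by the class")
    return query

safe = validate_user_query({"subject": {"$in": ["01", "02"]}, "task": "rest"})
```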
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007591](https://openneuro.org/datasets/ds007591) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007591](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) DOI: [https://doi.org/10.18112/openneuro.ds007591.v1.0.1](https://doi.org/10.18112/openneuro.ds007591.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007591 >>> dataset = DS007591(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007591) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007602: eeg dataset, 3 subjects *EEG-Speech Brain Decoding Dataset* Access recordings and metadata through EEGDash. **Citation:** Motoshige Sato, Masakazu Inoue, Kenichi Tomeoka, Ilya Horiguchi, Eri Hatakeyama, Yuya Kita, Atsushi Yamamoto, Ippei Fujisawa, Shuntaro Sasai (2026). *EEG-Speech Brain Decoding Dataset*. 
[10.18112/openneuro.ds007602.v1.0.1](https://doi.org/10.18112/openneuro.ds007602.v1.0.1) Modality: eeg Subjects: 3 Recordings: 113 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007602 dataset = DS007602(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007602(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007602( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007602, title = {EEG-Speech Brain Decoding Dataset}, author = {Motoshige Sato and Masakazu Inoue and Kenichi Tomeoka and Ilya Horiguchi and Eri Hatakeyama and Yuya Kita and Atsushi Yamamoto and Ippei Fujisawa and Shuntaro Sasai}, doi = {10.18112/openneuro.ds007602.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007602.v1.0.1}, } ``` ## About This Dataset **EEG-Speech Brain Decoding Dataset** **Overview** This dataset contains EEG recordings and audio data. **Sessions** ### View full README **EEG-Speech Brain Decoding Dataset** **Overview** This dataset contains EEG recordings and audio data. **Sessions** Sessions are labeled by recording date in YYYYMMDD format. 
- Example: `ses-20240401` = recorded on April 1, 2024 Multiple recordings on the same day are distinguished by run numbers: - `run-N`: Nth recording of the day **Tasks** - **speechopen**: Overt speech production task - Participants vocalize visually presented text **File Format Notes** **EEG Data** Raw EEG data is stored: - **Path**: `sub-*/ses-*/eeg/*_eeg.edf` - **Note**: EDF format is not officially part of BIDS-EEG specification - Files are excluded in `.bidsignore` but documented here for reference - Future releases may include EDF conversions for full BIDS compliance **Behavioral Data (Audio)** Vocal recordings are stored in `beh/` directories: - **Path**: `sub-*/ses-*/beh/*_recording-vocal_beh.wav` - **Note**: Not officially part of BIDS-EEG spec, but included for analysis convenience - Excluded in `.bidsignore` **Directory Structure** ```text dataset_root/ ``` ```text ├── README (this file) ├── CHANGES (version history) ├── dataset_description.json (dataset metadata) ├── participants.tsv (participant information) ├── participants.json (participant column descriptions) ├── task-speechopen_eeg.json (task-level EEG metadata) ├── task-speechopen_events.json (events column descriptions) ├── .bidsignore (files to ignore in validation) │ ├── code/ (analysis and preprocessing code) │ ├── preprocessing/ (EEG and audio preprocessing) │ ├── training/ (model training scripts) │ ├── evaluation/ (evaluation metrics) │ └── bids/ (BIDS conversion scripts) │ ├── sub-01/ (participant data) │ └── ses-YYYYMMDD/ (session by date) │ ├── eeg/ (EEG recordings) │ └── beh/ (behavioral/audio data) │ └── derivatives/ (processed data) └── pipeline-standard/ (standard preprocessing) ``` ## Dataset Information | Dataset ID | `DS007602` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG-Speech Brain Decoding Dataset | | Author 
(year) | `Sato2026_Speech` | | Canonical | `Sato2024` | | Importable as | `DS007602`, `Sato2026_Speech`, `Sato2024` | | Year | 2026 | | Authors | Motoshige Sato, Masakazu Inoue, Kenichi Tomeoka, Ilya Horiguchi, Eri Hatakeyama, Yuya Kita, Atsushi Yamamoto, Ippei Fujisawa, Shuntaro Sasai | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007602.v1.0.1](https://doi.org/10.18112/openneuro.ds007602.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007602) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) | [Source URL](https://openneuro.org/datasets/ds007602) | ### Copy-paste BibTeX ```bibtex @dataset{ds007602, title = {EEG-Speech Brain Decoding Dataset}, author = {Motoshige Sato and Masakazu Inoue and Kenichi Tomeoka and Ilya Horiguchi and Eri Hatakeyama and Yuya Kita and Atsushi Yamamoto and Ippei Fujisawa and Shuntaro Sasai}, doi = {10.18112/openneuro.ds007602.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds007602.v1.0.1}, } ``` ## Technical Details - Subjects: 3 - Recordings: 113 - Tasks: 1 - Channels: 134 - Sampling rate (Hz): 1200.0 - Duration (hours): 44.18638888888889 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 49.6 GB - File count: 113 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007602.v1.0.1 - Source: openneuro - OpenNeuro: [ds007602](https://openneuro.org/datasets/ds007602) - NeMAR: [ds007602](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) ## API Reference Use the `DS007602` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Speech Brain Decoding Dataset * **Study:** `ds007602` (OpenNeuro) * **Author (year):** `Sato2026_Speech` * **Canonical:** `Sato2024` Also importable as: `DS007602`, `Sato2026_Speech`, `Sato2024`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 113; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007602](https://openneuro.org/datasets/ds007602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007602](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) DOI: [https://doi.org/10.18112/openneuro.ds007602.v1.0.1](https://doi.org/10.18112/openneuro.ds007602.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007602 >>> dataset = DS007602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007602) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007609: eeg dataset, 51 subjects *Resting-State EEG and Trait Anxiety* Access recordings and metadata through EEGDash. **Citation:** Tamari Shalamberidze, Kyle Nash, Jeremy B. Caplan (2026). *Resting-State EEG and Trait Anxiety*. [10.18112/openneuro.ds007609.v1.0.0](https://doi.org/10.18112/openneuro.ds007609.v1.0.0) Modality: eeg Subjects: 51 Recordings: 51 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007609 dataset = DS007609(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007609(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007609( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ds007609, title = {Resting-State EEG and Trait Anxiety}, author = {Tamari Shalamberidze and Kyle Nash and Jeremy B. 
Caplan}, doi = {10.18112/openneuro.ds007609.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007609.v1.0.0}, } ``` ## About This Dataset **Resting-State EEG and Trait Anxiety** This dataset contains resting-state EEG recordings from 51 participants, collected as part of a study examining the relationship between resting-state EEG alpha/theta power, oscillatory dynamics, and trait anxiety. **Participants** 51 right-handed undergraduate students (25 female) from the University of ### View full README **Resting-State EEG and Trait Anxiety** This dataset contains resting-state EEG recordings from 51 participants, collected as part of a study examining the relationship between resting-state EEG alpha/theta power, oscillatory dynamics, and trait anxiety. **Participants** 51 right-handed undergraduate students (25 female) from the University of Alberta, aged 17-51 years (mean = 20.4, SD = 4.9), participated for course credit. **Authors** Tamari Shalamberidze (a), Kyle Nash (a,b), Jeremy B. Caplan (a,b) (a) Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada (b) Department of Psychology, University of Alberta, Edmonton, AB, Canada Corresponding Author: Tamari Shalamberidze ([shalambe@ualberta.ca](mailto:shalambe@ualberta.ca)) **Related Publication** Shalamberidze, T., Nash, K., & Caplan, J.B. (2025). Resting-state EEG and trait anxiety. Imaging Neuroscience. [https://doi.org/10.1162/IMAG.a.44](https://doi.org/10.1162/IMAG.a.44) **Recording** EEG was recorded using a 256-channel EGI HydroCel Geodesic Sensor Net with Net Amps amplifier. The original sampling rate was 500 Hz. Online reference was Cz. **Paradigm** Participants completed a resting-state protocol consisting of alternating 1-minute eyes-open (EO) and 1-minute eyes-closed (EC) blocks, repeated twice (EO-EC-EO-EC), for a total of 4 minutes. Transitions between blocks were signaled by an auditory beep. 
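The alternating protocol above implies fixed nominal block boundaries that can be precomputed for cropping. A sketch assuming exact 1-minute blocks — in practice the transitions should be read from the events file rather than assumed:

```python
BLOCK_S = 60.0  # nominal 1-minute blocks: EO-EC-EO-EC

def rest_blocks() -> list[tuple[str, float, float]]:
    """Return (condition, start_s, stop_s) for the 4-minute protocol."""
    labels = ["EO", "EC", "EO", "EC"]
    return [(lab, i * BLOCK_S, (i + 1) * BLOCK_S)
            for i, lab in enumerate(labels)]

# e.g. extract the eyes-closed segments of a loaded Raw object:
# ec = [raw.copy().crop(tmin=t0, tmax=t1)
#       for lab, t0, t1 in rest_blocks() if lab == "EC"]
```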
**Preprocessing** Data were preprocessed in EEGLAB (MATLAB) with the following steps: - Bandpass filter: 0.1-50 Hz - Line noise removal: CleanLine at 60 Hz and 120 Hz - Channel rejection: kurtosis-based (2x threshold), applied twice - Re-referencing to the average - ICA decomposition (runica, extended) - Artifact component removal via ICLabel (>0.8 probability threshold) + visual inspection - Spherical interpolation of removed channels **Phenotype Data** The phenotype/ directory contains anxiety and personality questionnaire scores: - STAI: State-Trait Anxiety Inventory (Spielberger et al., 1983) - TIPI: Ten-Item Personality Inventory, emotional stability subscale (Gosling et al., 2003) - BIS/FFFS: Behavioural Inhibition Scale and Fight-Flight-Freeze System > from the RST-PQ (Corr & Cooper, 2016), with Heym and Jackson factor structures. > BIS data are unavailable for the first 5 participants. **Ethics** This study received ethics approval from the University of Alberta Research Ethics Board. Project Name: “Physiological Bases of Human Memory”, No. Pro00113334. **Funding** Partly supported by the Social Sciences and Humanities Research Council in Canada (SSHRC), and the Natural Sciences and Engineering Research Council of Canada (NSERC). **License** This dataset is made available under the Creative Commons Attribution 4.0 International License (CC BY 4.0). ## Dataset Information | Dataset ID | `DS007609` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Resting-State EEG and Trait Anxiety | | Author (year) | `Shalamberidze2026` | | Canonical | `Shalamberidze2025` | | Importable as | `DS007609`, `Shalamberidze2026`, `Shalamberidze2025` | | Year | 2026 | | Authors | Tamari Shalamberidze, Kyle Nash, Jeremy B. 
Caplan | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007609.v1.0.0](https://doi.org/10.18112/openneuro.ds007609.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007609) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) | [Source URL](https://openneuro.org/datasets/ds007609) | ### Copy-paste BibTeX ```bibtex @dataset{ds007609, title = {Resting-State EEG and Trait Anxiety}, author = {Tamari Shalamberidze and Kyle Nash and Jeremy B. Caplan}, doi = {10.18112/openneuro.ds007609.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007609.v1.0.0}, } ``` ## Technical Details - Subjects: 51 - Recordings: 51 - Tasks: 1 - Channels: 256 - Sampling rate (Hz): 500.0 - Duration (hours): 4.057284444444445 - Pathology: Healthy - Modality: Resting State - Type: Affect - Size on disk: 7.0 GB - File count: 51 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007609.v1.0.0 - Source: openneuro - OpenNeuro: [ds007609](https://openneuro.org/datasets/ds007609) - NeMAR: [ds007609](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) ## API Reference Use the `DS007609` class to access this dataset programmatically. ### *class* eegdash.dataset.DS007609(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-State EEG and Trait Anxiety * **Study:** `ds007609` (OpenNeuro) * **Author (year):** `Shalamberidze2026` * **Canonical:** `Shalamberidze2025` Also importable as: `DS007609`, `Shalamberidze2026`, `Shalamberidze2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007609](https://openneuro.org/datasets/ds007609) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007609](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) DOI: [https://doi.org/10.18112/openneuro.ds007609.v1.0.0](https://doi.org/10.18112/openneuro.ds007609.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007609 >>> dataset = DS007609(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007609) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # DS007615: eeg dataset, 69 subjects *LDAEP and resting-state EEG in healthy women* Access recordings and metadata through EEGDash. **Citation:** Henrik Normannseth, Stein Andersson, Christoffer Hatlestad-Hall (2026). *LDAEP and resting-state EEG in healthy women*. [10.18112/openneuro.ds007615.v1.0.0](https://doi.org/10.18112/openneuro.ds007615.v1.0.0) Modality: eeg Subjects: 69 Recordings: 192 License: CC0 Source: openneuro Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import DS007615 dataset = DS007615(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = DS007615(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = DS007615( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{ds007615, title = {LDAEP and resting-state EEG in healthy women}, author = {Henrik Normannseth and Stein Andersson and Christoffer Hatlestad-Hall}, doi = {10.18112/openneuro.ds007615.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007615.v1.0.0}, } ``` ## About This Dataset **LDAEP and resting-state EEG in healthy women** **The dataset at a glance** - 69 participants, all female. - Age range: 19-40 years, mean age 25.3 years (SD 4.2). - A single recording session comprising two paradigms: LDAEP (54 participants) and resting-state (eyes-open and eyes-closed; 69 participants). - 64 EEG channels and 4 auxiliary oculogram channels. ### View full README **LDAEP and resting-state EEG in healthy women** **The dataset at a glance** - 69 participants, all female. - Age range: 19-40 years, mean age 25.3 years (SD 4.2). - A single recording session comprising two paradigms: LDAEP (54 participants) and resting-state (eyes-open and eyes-closed; 69 participants). - 64 EEG channels and 4 auxiliary oculogram channels. - Signal sampling rate: 2048 Hz. - Additional data include hormonal contraceptive use, menstrual cycle phase, depressive symptoms (BDI-II), impulsivity (UPPS-P), and lifestyle factors. **Introduction** This dataset contains EEG recordings from 69 healthy women, acquired at the Department of Psychology, University of Oslo, Norway. The data were collected as part of a study investigating the relationship between hormonal contraceptive use and central serotonergic activity indexed by the loudness dependence of auditory evoked potentials (LDAEP). Two paradigms were recorded per participant: a resting-state recording (four minutes eyes closed followed by four minutes eyes open) and an LDAEP paradigm (1000 Hz tones at five intensity levels: 55, 65, 75, 85, and 95 dB SPL; 80 trials per level). The resting-state recording was always conducted first to avoid auditory stimulus contamination.
Resting-state data are available for all 69 participants. Due to a technical issue with the auditory stimulation equipment, LDAEP data are available for 54 of the 69 participants. The data were recorded with a BioSemi ActiveTwo system, using 64 Ag-AgCl electrodes positioned according to the extended 10-20 system (10-10), at a sampling rate of 2048 Hz. Raw data are stored in BrainVision format (triplet of `*.eeg`, `*.vhdr`, `*.vmrk`). Alongside the EEG data, the dataset includes questionnaire data on hormonal contraceptive use, menstrual cycle, depressive symptoms (BDI-II), impulsivity (UPPS-P), and lifestyle factors. **Disclaimer** The dataset is provided “as is”. The authors take no responsibility with regard to data quality. The user is solely responsible for ascertaining that the data used for publications or in other contexts fulfil the required quality criteria. **The data** **Raw data files** Each participant’s EEG data are stored in the `sub-##/eeg/` directory in BrainVision format. Up to two tasks are included per participant: - `task-rest`: Eyes-closed (`acq-ec`) and eyes-open (`acq-eo`) resting-state (4 minutes each). Available for all 69 participants. - `task-ldaep`: LDAEP auditory stimulation paradigm. Available for 54 participants only. Each `task`/`acq` combination comprises three data files (`*.eeg`, `*.vhdr`, `*.vmrk`) and is accompanied by a sidecar metadata file (`*.json`), a channels information file (`*_channels.tsv`), and an events file (`*_events.tsv` with `*_events.json`). The data signals are unfiltered, except for a standard software anti-aliasing filter (recorded in Norway; the line noise frequency is 50 Hz). The EOG channels – HOG1, HOG2, VOG1, VOG2 – are positioned near the outer canthi (1 = left, 2 = right) and above (1) and below (2) the right eye. Please note that the data does not come with any pre-defined quality assessment. Whilst most data files are of high quality, individual files may vary. 
It is the user’s responsibility to verify the quality of each data file. The dataset does not include quality assessment or preprocessing code. For an example LDAEP pipeline, please refer to the paper referenced below. **Participant and phenotype data** Core demographic variables are provided in participants.tsv at the root level (age, sex, and hormonal contraceptive user group). Additional participant-level data are organised in the `phenotype/` directory: - `hc_usage.tsv`: Hormonal contraceptive usage details and menstrual cycle data (26 variables). Includes contraceptive type, progestin type, duration of use, self-reported mood changes and side effects (coded yes/no), prior HC use history, pregnancy history, and menstrual cycle phase. - `lifestyle.tsv`: Medication use, nicotine use and type, alcohol consumption, and recreational drug use and frequency (7 variables). - `bdi.tsv`: Beck Depression Inventory-II total score, cognitive-affective subscale score, and somatic subscale score (4 variables). Subscales follow the female-specific factor structure of Dozois et al. (1998). - `upps.tsv`: UPPS-P Impulsive Behavior Scale subscale scores: lack of perseverance, urgency, lack of premeditation, and sensation seeking (5 variables). Each phenotype file is accompanied by a JSON sidecar with variable descriptions and coding schemes. To protect participant privacy, free-text questionnaire responses were excluded from the dataset; only coded categorical and numeric variables are provided. **How to cite** All use of this dataset in a publication context requires the following paper to be cited: Normannseth, H., Hatlestad-Hall, C., Rygvold, T. W., Hadzic, A., & Andersson, S. (2025). Hormonal contraceptive use is associated with reduced central serotonergic activity indexed by the loudness dependence of auditory evoked potentials. Frontiers in Human Neuroscience, 19, 1647425. 
[https://doi.org/10.3389/fnhum.2025.1647425](https://doi.org/10.3389/fnhum.2025.1647425) A dataset descriptor article is currently in the works. **Contact** Questions regarding the dataset may be addressed to the corresponding author, Christoffer Hatlestad-Hall, Department of Neurology, Oslo University Hospital, Norway (chrihat (at) ous-research.no). **References** The dataset was standardised and organised in accordance with BIDS using [MNE-BIDS](https://mne.tools/mne-bids/): Appelhoff, S., Sanderson, M., Brooks, T., Van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) The relevant [BIDS specification](https://bids-specification.readthedocs.io/en/stable/) publications: Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nichols, B. N., Nichols, T. E., Pellman, J., … Poldrack, R. A. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3(1), 160044. [https://doi.org/10.1038/sdata.2016.44](https://doi.org/10.1038/sdata.2016.44) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6(1), 103–108. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Other articles: Dozois, D.J., Dobson, K.S., & Ahnberg, J.L. (1998). A psychometric evaluation of the Beck Depression Inventory-II.
Psychological Assessment, 10, 83-89. [https://doi.org/10.1037/1040-3590.10.2.83](https://doi.org/10.1037/1040-3590.10.2.83) ## Dataset Information | Dataset ID | `DS007615` | |----------------|----------------| | Title | LDAEP and resting-state EEG in healthy women | | Author (year) | `Normannseth2026` | | Canonical | — | | Importable as | `DS007615`, `Normannseth2026` | | Year | 2026 | | Authors | Henrik Normannseth, Stein Andersson, Christoffer Hatlestad-Hall | | License | CC0 | | Citation / DOI | [doi:10.18112/openneuro.ds007615.v1.0.0](https://doi.org/10.18112/openneuro.ds007615.v1.0.0) | | Source links | [OpenNeuro](https://openneuro.org/datasets/ds007615) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) | [Source URL](https://openneuro.org/datasets/ds007615) | ### Copy-paste BibTeX ```bibtex @dataset{ds007615, title = {LDAEP and resting-state EEG in healthy women}, author = {Henrik Normannseth and Stein Andersson and Christoffer Hatlestad-Hall}, doi = {10.18112/openneuro.ds007615.v1.0.0}, url = {https://doi.org/10.18112/openneuro.ds007615.v1.0.0}, } ``` ## Technical Details - Subjects: 69 - Recordings: 192 - Tasks: 2 - Channels: 68 - Sampling rate (Hz): 2048.0 - Duration (hours): 18.55 - Pathology: Healthy - Modality: Auditory - Type: Perception - Size on disk: 34.6 GB - File count: 192 - Format: BIDS - License: CC0 - DOI: doi:10.18112/openneuro.ds007615.v1.0.0 - Source: openneuro - OpenNeuro: [ds007615](https://openneuro.org/datasets/ds007615) - NeMAR: [ds007615](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) ## API Reference Use the `DS007615` class to access this dataset programmatically.
### *class* eegdash.dataset.DS007615(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LDAEP and resting-state EEG in healthy women * **Study:** `ds007615` (OpenNeuro) * **Author (year):** `Normannseth2026` * **Canonical:** — Also importable as: `DS007615`, `Normannseth2026`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 69; recordings: 192; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
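The notes above describe `query` as a MongoDB-style filter that is ANDed with the fixed dataset selection. The following is a minimal sketch of that composition in plain Python; the field names `subject` and `task` are assumptions (consult `ALLOWED_QUERY_FIELDS` for the authoritative list), and the commented-out `DS007615` call requires eegdash and network access:

```python
# Sketch of how a user query combines with the dataset filter.
# "subject" and "task" are assumed to be allowed query fields;
# the class itself performs the real merge internally.
dataset_filter = {"dataset": "ds007615"}  # fixed by the DS007615 class

user_query = {
    "subject": {"$in": ["01", "02"]},  # restrict to two participants
    "task": "rest",                    # resting-state recordings only
}

# The user query must not contain the key "dataset" itself.
assert "dataset" not in user_query

# Merging the two dicts expresses the AND of both filters.
merged = {**dataset_filter, **user_query}
print(merged)

# Hypothetical usage (requires eegdash installed and network access):
# from eegdash.dataset import DS007615
# dataset = DS007615(cache_dir="./data", query=user_query)
```

The merge shown here only illustrates the documented AND semantics; in practice you pass `query=user_query` and the class applies the dataset filter for you.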
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007615](https://openneuro.org/datasets/ds007615) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007615](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) DOI: [https://doi.org/10.18112/openneuro.ds007615.v1.0.0](https://doi.org/10.18112/openneuro.ds007615.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007615 >>> dataset = DS007615(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ds007615) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # Dascoli2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Dascoli2025 dataset = Dascoli2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Dascoli2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Dascoli2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{dascoli2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DASCOLI2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `DASCOLI2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/dascoli2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=dascoli2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [dascoli2025](https://openneuro.org/datasets/dascoli2025) - NeMAR: [dascoli2025](https://nemar.org/dataexplorer/detail?dataset_id=dascoli2025) ## API Reference Use the `Dascoli2025` class to access this dataset programmatically. 
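Note that `Dascoli2025` is registered as a module-level alias of a canonical dataset class rather than a separate implementation. A minimal stand-in, using a dummy class instead of the real eegdash one, shows how such an alias behaves:

```python
# Stand-in sketch of the alias pattern (dummy class, not the real
# eegdash.dataset.DS007523).
class DS007523:
    """Placeholder for the canonical dataset class."""

# A module-level alias is just a second name bound to the same object.
Dascoli2025 = DS007523

# Both names refer to one class, so instances are interchangeable.
assert Dascoli2025 is DS007523
assert isinstance(Dascoli2025(), DS007523)
```

In the real package, either name can be imported from `eegdash.dataset` and resolves to the same class.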
### eegdash.dataset.Dascoli2025 alias of [`DS007523`](eegdash.dataset.DS007523.md#eegdash.dataset.DS007523) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/dascoli2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=dascoli2025) # Delorme: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Delorme dataset = Delorme(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Delorme(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Delorme( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{delorme, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `DELORME` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `DELORME` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/delorme) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=delorme) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [delorme](https://openneuro.org/datasets/delorme) - NeMAR: [delorme](https://nemar.org/dataexplorer/detail?dataset_id=delorme) ## API Reference Use the `Delorme` class to access this dataset programmatically. ### eegdash.dataset.Delorme alias of [`DS003061`](eegdash.dataset.DS003061.md#eegdash.dataset.DS003061) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/delorme) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=delorme) # Dubois2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Dubois2024 dataset = Dubois2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Dubois2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Dubois2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{dubois2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `DUBOIS2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `DUBOIS2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/dubois2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=dubois2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [dubois2024](https://openneuro.org/datasets/dubois2024) - NeMAR: [dubois2024](https://nemar.org/dataexplorer/detail?dataset_id=dubois2024) ## API Reference Use the `Dubois2024` class to access this dataset programmatically. 
### eegdash.dataset.Dubois2024 alias of [`DS006545`](eegdash.dataset.DS006545.md#eegdash.dataset.DS006545) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/dubois2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=dubois2024) # EEG2025R1: eeg dataset, 136 subjects *Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)*. [10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) Modality: eeg Subjects: 136 Recordings: 1342 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R1 dataset = EEG2025R1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r1, title = {Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 1** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 1** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.
**Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R1` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) | | Author (year) | `Shirazi2024_R1_bdf` | | Canonical | `HBN_r1_bdf` | | Importable as | `EEG2025R1`, `Shirazi2024_R1_bdf`, `HBN_r1_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r1) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r1, title = {Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## Technical Details - Subjects: 136 - Recordings: 1342 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 20.6 GB - File count: 1342 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005505.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r1](https://openneuro.org/datasets/eeg2025r1) - NeMAR: [eeg2025r1](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1) ## API Reference Use the `EEG2025R1` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R1(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf` * **Canonical:** `HBN_r1_bdf` Also importable as: `EEG2025R1`, `Shirazi2024_R1_bdf`, `HBN_r1_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1](https://openneuro.org/datasets/EEG2025r1) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1 >>> dataset = EEG2025R1(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R10: eeg dataset, 533 subjects *Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2025). *Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)*. Modality: eeg Subjects: 533 Recordings: 2516 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R10 dataset = EEG2025R10(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R10(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R10( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r10, title = {Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. 
**Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R10` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) | | Author (year) | `Shirazi2025_R10_bdf` | | Canonical | `HBN_r10_bdf` | | Importable as | `EEG2025R10`, `Shirazi2025_R10_bdf`, `HBN_r10_bdf` | | Year | 2025 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r10) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r10) | ## Technical Details - Subjects: 533 - Recordings: 2516 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 32.1 GB - File count: 2516 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: — - Source: nemar - OpenNeuro: [eeg2025r10](https://openneuro.org/datasets/eeg2025r10) - NeMAR: [eeg2025r10](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10) ## API Reference Use the `EEG2025R10` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R10(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf` * **Canonical:** `HBN_r10_bdf` Also importable as: `EEG2025R10`, `Shirazi2025_R10_bdf`, `HBN_r10_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10](https://openneuro.org/datasets/EEG2025r10) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10 >>> dataset = EEG2025R10(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r10) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R10MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2025). *Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)*. Modality: eeg Subjects: 20 Recordings: 220 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R10MINI dataset = EEG2025R10MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R10MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R10MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r10mini, title = {Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). 
**Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 10** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. 
**Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. 
**Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R10MINI` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) | | Author (year) | `Shirazi2025_R10_bdf_mini` | | Canonical | `HBN_r10_bdf_mini` | | Importable as | `EEG2025R10MINI`, `Shirazi2025_R10_bdf_mini`, `HBN_r10_bdf_mini` | | Year | 2025 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r10mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r10mini) | ## Technical Details - Subjects: 20 - Recordings: 220 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 2.8 GB - File count: 220 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: — - Source: nemar - OpenNeuro: [eeg2025r10mini](https://openneuro.org/datasets/eeg2025r10mini) - NeMAR: [eeg2025r10mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10mini) ## API Reference Use the `EEG2025R10MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R10MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10mini` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf_mini` * **Canonical:** `HBN_r10_bdf_mini` Also importable as: `EEG2025R10MINI`, `Shirazi2025_R10_bdf_mini`, `HBN_r10_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
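Because filters follow MongoDB query semantics, an operator like `$in` is just per-field set membership. A small local sketch of how such a filter selects metadata records (record keys here are illustrative, not the service's schema; the real matching happens server-side):

```python
# Evaluate a flat MongoDB-style filter ({"field": value} or
# {"field": {"$in": [...]}}) against metadata records.
# Purely illustrative of the query semantics.

def matches(record: dict, flt: dict) -> bool:
    for field, cond in flt.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

records = [
    {"subject": "01", "task": "RestingState"},
    {"subject": "02", "task": "SymbolSearch"},
    {"subject": "03", "task": "RestingState"},
]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in kept])  # ['01', '02']
```

Only fields listed in `ALLOWED_QUERY_FIELDS` are accepted by the real API.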
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10mini](https://openneuro.org/datasets/EEG2025r10mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10MINI >>> dataset = EEG2025R10MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r10mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r10mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R11: eeg dataset, 430 subjects *Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2025). *Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)*. Modality: eeg Subjects: 430 Recordings: 3397 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R11 dataset = EEG2025R11(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R11(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R11( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r11, title = {Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 11** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 11** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. 
**Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. 
**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total 
subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R11` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) | | Author (year) | `Shirazi2025_R11_bdf` | | Canonical | `HBN_r11_bdf` | | Importable as | `EEG2025R11`, `Shirazi2025_R11_bdf`, `HBN_r11_bdf` | | Year | 2025 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r11) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r11) | ## Technical Details - Subjects: 430 - Recordings: 3397 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 43.8 GB - File count: 3397 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: — - Source: nemar - OpenNeuro: [eeg2025r11](https://openneuro.org/datasets/eeg2025r11) - NeMAR: [eeg2025r11](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11) ## API Reference Use the `EEG2025R11` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R11(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf` * **Canonical:** `HBN_r11_bdf` Also importable as: `EEG2025R11`, `Shirazi2025_R11_bdf`, `HBN_r11_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 430; recordings: 3397; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11](https://openneuro.org/datasets/EEG2025r11) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11 >>> dataset = EEG2025R11(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r11) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R11MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2025). *Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)*. Modality: eeg Subjects: 20 Recordings: 220 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R11MINI dataset = EEG2025R11MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R11MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R11MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r11mini, title = {Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 11** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). 
**Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 11** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. 
**Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. 
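Because the behavioral responses described above are merged into the BIDS `events.tsv` files, they can be inspected with the Python standard library alone. A minimal sketch, assuming a tab-separated file with `onset`, `duration`, `trial_type`, and `response_time` columns (the sample rows below are invented for illustration; real HBN-EEG files also carry HED tags and task-specific fields):

```python
import csv
import io

# Invented sample rows standing in for a real HBN-EEG events.tsv.
EVENTS_TSV = (
    "onset\tduration\ttrial_type\tresponse_time\n"
    "1.00\t0.0\tstimulus\tn/a\n"
    "1.45\t0.0\tresponse\t0.45\n"
    "3.20\t0.0\tstimulus\tn/a\n"
)

def read_events(text: str) -> list[dict]:
    """Parse BIDS-style events.tsv text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

events = read_events(EVENTS_TSV)
stimuli = [row for row in events if row["trial_type"] == "stimulus"]
print(f"{len(events)} events, {len(stimuli)} stimuli")
```

With real data, pass the downloaded file through `open(path, newline="")` instead of `io.StringIO`; MNE-Python users would more typically read the same information from the annotations attached to the raw object.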
**Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
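As a quick sanity check, the per-release subject counts listed above can be tallied against the ">3000 participants" figure quoted in the data description (counts transcribed from this README; the NC release is distributed outside OpenNeuro/NEMAR):

```python
# Subject counts per HBN-EEG release, transcribed from the list above.
release_subjects = {
    "R1": 136, "R2": 152, "R3": 183, "R4": 324, "R5": 330, "R6": 134,
    "R7": 381, "R8": 257, "R9": 295, "R10": 533, "R11": 430,
}
nc_subjects = 458  # Not-for-Commercial-Use release, hosted separately

public_total = sum(release_subjects.values())
print(public_total)                # 3155 across the 11 public releases
print(public_total + nc_subjects)  # 3613 including the NC release
```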
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R11MINI` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) | | Author (year) | `Shirazi2025_R11_bdf_mini` | | Canonical | `HBN_r11_bdf_mini` | | Importable as | `EEG2025R11MINI`, `Shirazi2025_R11_bdf_mini`, `HBN_r11_bdf_mini` | | Year | 2025 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r11mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r11mini) | ## Technical Details - Subjects: 20 - Recordings: 220 - Tasks: 8 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 2.8 GB - File count: 220 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: — - Source: nemar - OpenNeuro: [eeg2025r11mini](https://openneuro.org/datasets/eeg2025r11mini) - NeMAR: [eeg2025r11mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11mini) ## API Reference Use the `EEG2025R11MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R11MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11mini` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf_mini` * **Canonical:** `HBN_r11_bdf_mini` Also importable as: `EEG2025R11MINI`, `Shirazi2025_R11_bdf_mini`, `HBN_r11_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11mini](https://openneuro.org/datasets/EEG2025r11mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11MINI >>> dataset = EEG2025R11MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r11mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r11mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R1MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)*. [10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) Modality: eeg Subjects: 20 Recordings: 239 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R1MINI dataset = EEG2025R1MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R1MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R1MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r1mini, title = {Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 1** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 1** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. 
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. 
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). 
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R1MINI` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) | | Author (year) | `Shirazi2024_R1_bdf_mini` | | Canonical | `HBN_r1_bdf_mini` | | Importable as | `EEG2025R1MINI`, `Shirazi2024_R1_bdf_mini`, `HBN_r1_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r1mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r1mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r1mini, title = {Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005505.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005505.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 239 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.7 GB - File count: 239 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005505.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r1mini](https://openneuro.org/datasets/eeg2025r1mini) - NeMAR: [eeg2025r1mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1mini) ## API Reference Use the `EEG2025R1MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R1MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1mini` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf_mini` * **Canonical:** `HBN_r1_bdf_mini` Also importable as: `EEG2025R1MINI`, `Shirazi2024_R1_bdf_mini`, `HBN_r1_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1mini](https://openneuro.org/datasets/EEG2025r1mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1MINI >>> dataset = EEG2025R1MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r1mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r1mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R2: eeg dataset, 150 subjects *Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)*. [10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) Modality: eeg Subjects: 150 Recordings: 1405 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R2 dataset = EEG2025R2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r2, title = {Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 2** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). 
**Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2.
**Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.
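Behavioral responses and HED-annotated events are delivered in each recording's `events.tsv` file. A minimal sketch of reading such a BIDS tab-separated events file with the standard library; the column names and values here are illustrative stand-ins, not the dataset's exact schema:

```python
import csv
import io

# Illustrative events.tsv content -- real HBN-EEG files carry
# dataset-specific columns (including HED tags); this schema is a stand-in.
tsv = (
    "onset\tduration\ttrial_type\tresponse_time\n"
    "0.5\t0.0\tfixation\tn/a\n"
    "2.0\t0.0\ttarget\t0.43\n"
)

# BIDS .tsv files are plain tab-delimited text with a header row.
events = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
onsets = [float(row["onset"]) for row in events]
print(onsets)  # [0.5, 2.0]
```

In practice the same parsing applies to the `events.tsv` files that EEGDash caches alongside each recording.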
**Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset.
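The releases listed above share one S3 prefix and differ only in the trailing directory name. A small sketch of that mapping (bucket layout copied from the listing above, not independently verified):

```python
# Base prefix shared by all HBN-EEG releases, per the listing above.
BASE = "s3://fcp-indi/data/Projects/HBN/BIDS_EEG"

# Releases 1-11 follow the cmi_bids_R<n> pattern; NC is the
# not-for-commercial-use release.
release_uris = {f"R{n}": f"{BASE}/cmi_bids_R{n}" for n in range(1, 12)}
release_uris["NC"] = f"{BASE}/cmi_bids_NC"

print(release_uris["R2"])
# s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2
```

Such a mapping is convenient when pointing the dataset classes' `s3_bucket` argument at a specific release.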
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R2` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) | | Author (year) | `Shirazi2024_R2_bdf` | | Canonical | `HBN_r2_bdf` | | Importable as | `EEG2025R2`, `Shirazi2024_R2_bdf`, `HBN_r2_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r2) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r2, title = {Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## Technical Details - Subjects: 150 - Recordings: 1405 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 22.4 GB - File count: 1405 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005506.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r2](https://openneuro.org/datasets/eeg2025r2) - NeMAR: [eeg2025r2](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2) ## API Reference Use the `EEG2025R2` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R2(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf` * **Canonical:** `HBN_r2_bdf` Also importable as: `EEG2025R2`, `Shirazi2024_R2_bdf`, `HBN_r2_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2](https://openneuro.org/datasets/EEG2025r2) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2 >>> dataset = EEG2025R2(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R2MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)*. [10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) Modality: eeg Subjects: 20 Recordings: 240 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R2MINI dataset = EEG2025R2MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R2MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R2MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r2mini, title = {Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 2** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3.
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.
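The QC flags recorded in `participants.tsv` pair naturally with the MongoDB-style `query` argument of the dataset classes. A sketch of building such a filter (the subject labels are hypothetical):

```python
# Subjects that passed a hypothetical QC screen of participants.tsv.
usable_subjects = ["01", "02", "05"]

# MongoDB-style filter accepted by the dataset classes' `query` argument;
# it is ANDed with the dataset selection.
query = {"subject": {"$in": usable_subjects}}
print(query)

# With network access this could then drive the download, e.g.:
# dataset = EEG2025R2MINI(cache_dir="./data", query=query)
```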
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R2MINI` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) | | Author (year) | `Shirazi2024_R2_bdf_mini` | | Canonical | `HBN_r2_bdf_mini` | | Importable as | `EEG2025R2MINI`, `Shirazi2024_R2_bdf_mini`, `HBN_r2_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r2mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r2mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r2mini, title = {Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005506.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005506.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 240 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.8 GB - File count: 240 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005506.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r2mini](https://openneuro.org/datasets/eeg2025r2mini) - NeMAR: [eeg2025r2mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2mini) ## API Reference Use the `EEG2025R2MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R2MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2mini` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf_mini` * **Canonical:** `HBN_r2_bdf_mini` Also importable as: `EEG2025R2MINI`, `Shirazi2024_R2_bdf_mini`, `HBN_r2_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2mini](https://openneuro.org/datasets/EEG2025r2mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2MINI >>> dataset = EEG2025R2MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r2mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r2mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R3: eeg dataset, 184 subjects *Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)*. [10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) Modality: eeg Subjects: 184 Recordings: 1812 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R3 dataset = EEG2025R3(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R3(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R3( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r3, title = {Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 3** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants.
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). 
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
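The release list above follows a single S3 naming pattern, so the per-release URIs can be generated programmatically. A minimal sketch (the `RELEASES` mapping and the `release_uri` helper are illustrative, transcribed from the list above, not part of the EEGDash API):

```python
# Build the public S3 URIs for the HBN-EEG releases listed above.
# Bucket and prefix are taken from the release table; suffixes follow
# the cmi_bids_R<n> pattern (plus cmi_bids_NC for the NC release).
S3_PREFIX = "s3://fcp-indi/data/Projects/HBN/BIDS_EEG"

# Release label -> (BIDS directory suffix, total subjects).
RELEASES = {
    "R1": ("cmi_bids_R1", 136),
    "R2": ("cmi_bids_R2", 152),
    "R3": ("cmi_bids_R3", 183),
    "R4": ("cmi_bids_R4", 324),
    "R5": ("cmi_bids_R5", 330),
    "R6": ("cmi_bids_R6", 134),
    "R7": ("cmi_bids_R7", 381),
    "R8": ("cmi_bids_R8", 257),
    "R9": ("cmi_bids_R9", 295),
    "R10": ("cmi_bids_R10", 533),
    "R11": ("cmi_bids_R11", 430),
    "NC": ("cmi_bids_NC", 458),
}

def release_uri(label: str) -> str:
    """Return the S3 URI for a given HBN-EEG release label."""
    suffix, _ = RELEASES[label]
    return f"{S3_PREFIX}/{suffix}"

print(release_uri("R3"))  # s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3
print(sum(n for _, n in RELEASES.values()))  # total subjects across all releases
```

The subject counts summed across releases are consistent with the ">3000 participants" figure in the dataset description.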
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R3` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) | | Author (year) | `Shirazi2024_R3_bdf` | | Canonical | `HBN_r3_bdf` | | Importable as | `EEG2025R3`, `Shirazi2024_R3_bdf`, `HBN_r3_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r3) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r3) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r3, title = {Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## Technical Details - Subjects: 184 - Recordings: 1812 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 27.9 GB - File count: 1812 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005507.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r3](https://openneuro.org/datasets/eeg2025r3) - NeMAR: [eeg2025r3](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3) ## API Reference Use the `EEG2025R3` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R3(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf` * **Canonical:** `HBN_r3_bdf` Also importable as: `EEG2025R3`, `Shirazi2024_R3_bdf`, `HBN_r3_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3](https://openneuro.org/datasets/EEG2025r3) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3 >>> dataset = EEG2025R3(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r3) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R3MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)*. [10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) Modality: eeg Subjects: 20 Recordings: 240 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R3MINI dataset = EEG2025R3MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R3MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R3MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r3mini, title = {Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 3** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. 
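Because `participants.tsv` is a standard BIDS tab-separated file, the quality-control results it contains can be inspected with pandas once the dataset is cached locally. A minimal sketch — the inline table and its column names are illustrative placeholders, not the dataset's actual schema:

```python
import io

import pandas as pd

# Illustrative stand-in for a cached BIDS participants.tsv; the real file
# ships with the dataset and its exact columns may differ.
tsv = io.StringIO(
    "participant_id\tage\tsex\n"
    "sub-01\t9.4\tM\n"
    "sub-02\t12.1\tF\n"
)

# BIDS .tsv files are tab-separated; "n/a" is the BIDS missing-value marker.
participants = pd.read_csv(tsv, sep="\t", na_values="n/a")
print(participants.shape)  # (2, 3)
```

In practice you would pass the path to the cached file (e.g. under the dataset's `cache_dir`) instead of the `io.StringIO` stand-in.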
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR 
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R3MINI` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) | | Author (year) | `Shirazi2024_R3_bdf_mini` | | Canonical | `HBN_r3_bdf_mini` | | Importable as | `EEG2025R3MINI`, `Shirazi2024_R3_bdf_mini`, `HBN_r3_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r3mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r3mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r3mini, title = {Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005507.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005507.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 240 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.7 GB - File count: 240 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005507.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r3mini](https://openneuro.org/datasets/eeg2025r3mini) - NeMAR: [eeg2025r3mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3mini) ## API Reference Use the `EEG2025R3MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R3MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3mini` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf_mini` * **Canonical:** `HBN_r3_bdf_mini` Also importable as: `EEG2025R3MINI`, `Shirazi2024_R3_bdf_mini`, `HBN_r3_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3mini](https://openneuro.org/datasets/EEG2025r3mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3MINI >>> dataset = EEG2025R3MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r3mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r3mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R4: eeg dataset, 324 subjects *Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)*. [10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) Modality: eeg Subjects: 324 Recordings: 3342 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R4 dataset = EEG2025R4(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R4(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R4( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r4, title = {Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 4** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. 
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. 
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). 
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R4` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) | | Author (year) | `Shirazi2024_R4_bdf` | | Canonical | `HBN_r4_bdf` | | Importable as | `EEG2025R4`, `Shirazi2024_R4_bdf`, `HBN_r4_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r4) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r4) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r4, title = {Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## Technical Details - Subjects: 324 - Recordings: 3342 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 46.0 GB - File count: 3342 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005508.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r4](https://openneuro.org/datasets/eeg2025r4) - NeMAR: [eeg2025r4](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4) ## API Reference Use the `EEG2025R4` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R4(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf` * **Canonical:** `HBN_r4_bdf` Also importable as: `EEG2025R4`, `Shirazi2024_R4_bdf`, `HBN_r4_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 324; recordings: 3342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4](https://openneuro.org/datasets/EEG2025r4) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4 >>> dataset = EEG2025R4(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r4) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R4MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)*. [10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) Modality: eeg Subjects: 20 Recordings: 240 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R4MINI dataset = EEG2025R4MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R4MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R4MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r4mini, title = {Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 4** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 4** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. These data were originally recorded within the behavior directory of the HBN data and are now included with the EEG data in the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, that each task had its necessary events, and that the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. 
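Since the behavioral responses are merged into the BIDS `events.tsv` files, they can be inspected with ordinary TSV tooling. A minimal sketch using only the Python standard library: the `onset`/`duration` columns are required by BIDS and `trial_type`/`response_time` are common optional columns, but the rows here are invented and the actual HBN column set may differ.

```python
import csv
import io

# Toy events.tsv content (invented rows; "n/a" marks missing values per BIDS).
events_tsv = """onset\tduration\ttrial_type\tresponse_time
12.5\t0.0\ttarget\t0.431
15.2\t0.0\tnontarget\tn/a
18.9\t0.0\ttarget\t0.512
"""

with io.StringIO(events_tsv) as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

# Keep only events that have a recorded reaction time.
timed = [r for r in rows if r["response_time"] != "n/a"]
mean_rt = sum(float(r["response_time"]) for r in timed) / len(timed)
print(f"{len(timed)} responses, mean RT {mean_rt:.3f} s")
```

The same pattern applies to a real `events.tsv` by replacing the in-memory string with an open file handle.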
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR 
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R4MINI` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) | | Author (year) | `Shirazi2024_R4_bdf_mini` | | Canonical | `HBN_r4_bdf_mini` | | Importable as | `EEG2025R4MINI`, `Shirazi2024_R4_bdf_mini`, `HBN_r4_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r4mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r4mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r4mini, title = {Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005508.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005508.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 240 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.3 GB - File count: 240 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005508.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r4mini](https://openneuro.org/datasets/eeg2025r4mini) - NeMAR: [eeg2025r4mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4mini) ## API Reference Use the `EEG2025R4MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R4MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4mini` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf_mini` * **Canonical:** `HBN_r4_bdf_mini` Also importable as: `EEG2025R4MINI`, `Shirazi2024_R4_bdf_mini`, `HBN_r4_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4mini](https://openneuro.org/datasets/EEG2025r4mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4MINI >>> dataset = EEG2025R4MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r4mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r4mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R5: eeg dataset, 330 subjects *Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)*. [10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) Modality: eeg Subjects: 330 Recordings: 3326 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R5 dataset = EEG2025R5(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R5(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R5( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r5, title = {Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 5** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 5** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. 
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. These data were originally recorded within the behavior directory of the HBN data and are now included with the EEG data in the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. 
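HED annotations are comma-separated strings of hierarchical tags, each tag a "/"-delimited path from general to specific. A minimal sketch of that structure (the annotation string below is an invented illustration, not taken from this dataset):

```python
# A hypothetical HED-style annotation: comma-separated tags,
# each tag a "/"-delimited path from general to specific.
annotation = "Sensory-event, Visual-presentation, Property/Sensory-property/Sensory-attribute"

# Split the annotation into individual tags.
tags = [tag.strip() for tag in annotation.split(",")]

# The last path component of each tag is its most specific term.
leaves = [tag.split("/")[-1] for tag in tags]
print(leaves)
```

Real analyses would validate tags against the HED schema with dedicated tooling rather than plain string splitting; this only illustrates the tag format.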
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data were not corrupted, that each task had its necessary events, and that the data were ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). 
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
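The numbered releases above share one S3 path pattern, so their URIs can be generated programmatically rather than copied by hand. A minimal sketch (the `release_uri` helper is our own illustration; note that the OpenNeuro accession numbers are not perfectly sequential, e.g. DS005513 is skipped, and that actually listing or downloading the data still requires an S3 client with anonymous access):

```python
# Build the S3 URI for a numbered HBN-EEG release from the pattern listed above.
BASE = "s3://fcp-indi/data/Projects/HBN/BIDS_EEG"

def release_uri(release: int) -> str:
    """Return the S3 URI for HBN-EEG releases 1-11 (hypothetical helper)."""
    if not 1 <= release <= 11:
        raise ValueError("HBN-EEG has numbered releases 1-11")
    return f"{BASE}/cmi_bids_R{release}"

for r in (1, 4, 11):
    print(release_uri(r))
```

With the AWS CLI, a public bucket like this is typically browsable via `aws s3 ls --no-sign-request <uri>`.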
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R5` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) | | Author (year) | `Shirazi2024_R5_bdf` | | Canonical | `HBN_r5_bdf` | | Importable as | `EEG2025R5`, `Shirazi2024_R5_bdf`, `HBN_r5_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r5) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r5) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r5, title = {Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## Technical Details - Subjects: 330 - Recordings: 3326 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 44.8 GB - File count: 3326 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005509.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r5](https://openneuro.org/datasets/eeg2025r5) - NeMAR: [eeg2025r5](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5) ## API Reference Use the `EEG2025R5` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R5(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf` * **Canonical:** `HBN_r5_bdf` Also importable as: `EEG2025R5`, `Shirazi2024_R5_bdf`, `HBN_r5_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5](https://openneuro.org/datasets/EEG2025r5) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5 >>> dataset = EEG2025R5(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r5) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R5MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)*. [10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) Modality: eeg Subjects: 20 Recordings: 240 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R5MINI dataset = EEG2025R5MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R5MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R5MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r5mini, title = {Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 5** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 5** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.
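The quality-control flags described above live alongside the demographic columns in `participants.tsv`, which in BIDS is a plain tab-separated file with one row per subject. A minimal parsing sketch using only the standard library; the column names other than `participant_id` are hypothetical stand-ins, so check the actual file for the dataset's real columns:

```python
# Sketch: parse a BIDS participants.tsv (tab-separated, one row per subject).
# "age" and "p_factor" are hypothetical column names used for illustration.
import csv
import io

# A miniature stand-in for a downloaded participants.tsv file.
example_tsv = (
    "participant_id\tage\tp_factor\n"
    "sub-NDARAB123456\t9.5\t0.42\n"
    "sub-NDARCD789012\t14.0\tn/a\n"
)

rows = list(csv.DictReader(io.StringIO(example_tsv), delimiter="\t"))
subjects = [r["participant_id"] for r in rows]
print(subjects)  # ['sub-NDARAB123456', 'sub-NDARCD789012']
```

In BIDS, missing values are conventionally written as `n/a`, so numeric columns should be converted defensively rather than with a bare `float()` over every cell.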
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R5MINI` | |----------------|-----------------| | Title | Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) | | Author (year) | `Shirazi2024_R5_bdf_mini` | | Canonical | `HBN_r5_bdf_mini` | | Importable as | `EEG2025R5MINI`, `Shirazi2024_R5_bdf_mini`, `HBN_r5_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r5mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r5mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r5mini, title = {Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005509.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005509.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 240 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.2 GB - File count: 240 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005509.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r5mini](https://openneuro.org/datasets/eeg2025r5mini) - NeMAR: [eeg2025r5mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5mini) ## API Reference Use the `EEG2025R5MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R5MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5mini` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf_mini` * **Canonical:** `HBN_r5_bdf_mini` Also importable as: `EEG2025R5MINI`, `Shirazi2024_R5_bdf_mini`, `HBN_r5_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5mini](https://openneuro.org/datasets/EEG2025r5mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5MINI >>> dataset = EEG2025R5MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r5mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r5mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R6: eeg dataset, 135 subjects *Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)*. [10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) Modality: eeg Subjects: 135 Recordings: 1227 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R6 dataset = EEG2025R6(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R6(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R6( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r6, title = {Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 6** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 years old) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants.
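The numbered releases mentioned above are mirrored under one predictable prefix of the public `fcp-indi` S3 bucket (the per-release URIs appear under "Other HBN-EEG Datasets" on this page). A tiny sketch of that pattern; `hbn_release_uri` is a hypothetical helper, not part of the eegdash API:

```python
# Sketch: build the S3 URI for a numbered HBN-EEG release. The prefix is
# taken from the release list on this page; the helper name is hypothetical.
def hbn_release_uri(release: int) -> str:
    return f"s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R{release}"

print(hbn_release_uri(6))
# s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6
```

Note that the not-for-commercial-use release uses the suffix `cmi_bids_NC` instead of a number, so it does not fit this pattern.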
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data.
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG).
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset.
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R6` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) | | Author (year) | `Shirazi2024_R6_bdf` | | Canonical | `HBN_r6_bdf` | | Importable as | `EEG2025R6`, `Shirazi2024_R6_bdf`, `HBN_r6_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r6) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r6) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r6, title = {Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## Technical Details - Subjects: 135 - Recordings: 1227 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 18.2 GB - File count: 1227 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005510.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r6](https://openneuro.org/datasets/eeg2025r6) - NeMAR: [eeg2025r6](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6) ## API Reference Use the `EEG2025R6` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R6(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf` * **Canonical:** `HBN_r6_bdf` Also importable as: `EEG2025R6`, `Shirazi2024_R6_bdf`, `HBN_r6_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 135; recordings: 1227; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6](https://openneuro.org/datasets/EEG2025r6) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6 >>> dataset = EEG2025R6(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r6) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R6MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)*. [10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) Modality: eeg Subjects: 20 Recordings: 237 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R6MINI dataset = EEG2025R6MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R6MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R6MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r6mini, title = {Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 6** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format.
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 years old) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3.
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.
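Because the behavioral responses are merged into the BIDS `events.tsv` files, event-level filtering needs no special tooling: each file is tab-separated with `onset` and `duration` columns. A minimal sketch; the `trial_type` values here are hypothetical stand-ins for the dataset's HED-annotated events:

```python
# Sketch: filter BIDS events.tsv rows by event type. events.tsv is
# tab-separated with onset/duration columns; the "trial_type" values
# below are hypothetical, for illustration only.
import csv
import io

# A miniature stand-in for a downloaded events.tsv file.
example_events = (
    "onset\tduration\ttrial_type\n"
    "0.00\t0.5\tstimulus\n"
    "0.80\t0.2\tresponse\n"
    "1.50\t0.5\tstimulus\n"
)

reader = csv.DictReader(io.StringIO(example_events), delimiter="\t")
stim_onsets = [float(r["onset"]) for r in reader if r["trial_type"] == "stimulus"]
print(stim_onsets)  # [0.0, 1.5]
```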
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R6MINI` | |----------------|-----------------| | Title | Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) | | Author (year) | `Shirazi2024_R6_bdf_mini` | | Canonical | `HBN_r6_bdf_mini` | | Importable as | `EEG2025R6MINI`, `Shirazi2024_R6_bdf_mini`, `HBN_r6_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r6mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r6mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r6mini, title = {Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005510.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005510.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 237 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.5 GB - File count: 237 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005510.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r6mini](https://openneuro.org/datasets/eeg2025r6mini) - NeMAR: [eeg2025r6mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6mini) ## API Reference Use the `EEG2025R6MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R6MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6mini` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf_mini` * **Canonical:** `HBN_r6_bdf_mini` Also importable as: `EEG2025R6MINI`, `Shirazi2024_R6_bdf_mini`, `HBN_r6_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6mini](https://openneuro.org/datasets/EEG2025r6mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6MINI >>> dataset = EEG2025R6MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r6mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r6mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R7: eeg dataset, 381 subjects *Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)*. [10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) Modality: eeg Subjects: 381 Recordings: 3100 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R7 dataset = EEG2025R7(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R7(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R7( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r7, title = {Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005511.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005511.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 7** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 7** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. 
**Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.

**Passive Tasks**

1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross.
2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded.
3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.

**Active Tasks**

1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.

**Contents**

**EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks.

**Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files.

**Special Features**

**Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.
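Since the behavioral responses are merged into the BIDS `events.tsv` files, they can be inspected with ordinary tabular tools. A minimal sketch of reading such a file with pandas, using an illustrative in-memory snippet; beyond the BIDS-mandated `onset` and `duration` columns, the column names here (`trial_type`, `response_time`) are assumptions, so check the real file's header first:

```python
import io
import pandas as pd

# Illustrative events.tsv snippet; real HBN-EEG files have more columns,
# and any column beyond onset/duration is an assumption here.
events_tsv = io.StringIO(
    "onset\tduration\ttrial_type\tresponse_time\n"
    "1.50\t0.0\tstimulus\tn/a\n"
    "2.25\t0.0\tresponse\t0.75\n"
)

# BIDS events files are tab-separated and use "n/a" for missing values
events = pd.read_csv(events_tsv, sep="\t", na_values="n/a")
responses = events[events["trial_type"] == "response"]
print(responses["response_time"].mean())  # 0.75
```

The same call works on a file path (e.g. a downloaded `*_events.tsv`) in place of the `StringIO` object.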
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.

**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.

**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including:

> **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns.
> **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data.
> **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.

**Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG).
The links to the individual releases are below:

**Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136
**Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152
**Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183
**Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324
**Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330
**Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134
**Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381
**Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257
**Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295
**Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533
**Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430
**Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458

**Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset.
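The *S3 URI*s listed for each release point into a public bucket. A minimal sketch of splitting such a URI into the bucket/prefix pair an S3 client expects; the helper name is illustrative, and the commented listing step assumes `boto3` is installed (the bucket is public, so unsigned requests suffice):

```python
from urllib.parse import urlparse


def split_s3_uri(uri: str) -> tuple[str, str]:
    """Split an s3:// URI into (bucket, key_prefix)."""
    parsed = urlparse(uri)
    assert parsed.scheme == "s3", "expected an s3:// URI"
    return parsed.netloc, parsed.path.lstrip("/")


bucket, prefix = split_s3_uri(
    "s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7"
)
print(bucket)  # fcp-indi
print(prefix)  # data/Projects/HBN/BIDS_EEG/cmi_bids_R7

# Listing the prefix would then look like (requires boto3):
# import boto3
# from botocore import UNSIGNED
# from botocore.config import Config
# s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
# resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=10)
```

In practice, EEGDash downloads and caches these files for you; direct S3 access is only needed for bulk transfers outside the library.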
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R7` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) | | Author (year) | `Shirazi2024_R7_bdf` | | Canonical | `HBN_r7_bdf` | | Importable as | `EEG2025R7`, `Shirazi2024_R7_bdf`, `HBN_r7_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r7) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r7) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r7, title = {Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005511.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005511.v1.0.1}, } ``` ## Technical Details - Subjects: 381 - Recordings: 3100 - Tasks: 10 - Channels: 129 (3090), 6 (10) - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: — - File count: 3100 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005511.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r7](https://openneuro.org/datasets/eeg2025r7) - NeMAR: [eeg2025r7](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7) ## API Reference Use the `EEG2025R7` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R7(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf` * **Canonical:** `HBN_r7_bdf` Also importable as: `EEG2025R7`, `Shirazi2024_R7_bdf`, `HBN_r7_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 381; recordings: 3100; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7](https://openneuro.org/datasets/EEG2025r7) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7 >>> dataset = EEG2025R7(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r7) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R7MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)*. [10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) Modality: eeg Subjects: 20 Recordings: 239 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R7MINI dataset = EEG2025R7MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R7MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R7MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r7mini, title = {Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005511.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005511.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 7** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 7** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.

**Active Tasks**

1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.

**Contents**

**EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks.

**Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files.

**Special Features**

**Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.

**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.

**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.
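Because both the quality-control results and the CBCL-derived factors live in `participants.tsv`, a common first step is filtering subjects on those columns. A pandas sketch with an illustrative in-memory snippet; every column name besides the BIDS-mandated `participant_id` (here `age`, `sex`, `p_factor`) is an assumption, so inspect the real file's header before filtering:

```python
import io
import pandas as pd

# Illustrative participants.tsv rows; real column names may differ.
participants_tsv = io.StringIO(
    "participant_id\tage\tsex\tp_factor\n"
    "sub-01\t9.5\tM\t0.42\n"
    "sub-02\t12.1\tF\tn/a\n"
    "sub-03\t15.0\tF\t-0.13\n"
)

# BIDS participants files are tab-separated; "n/a" marks missing values
participants = pd.read_csv(participants_tsv, sep="\t", na_values="n/a")

# Keep only subjects with a usable P-Factor score
usable = participants.dropna(subset=["p_factor"])
print(list(usable["participant_id"]))  # ['sub-01', 'sub-03']
```

The resulting subject IDs can then be passed into an EEGDash query (e.g. via a `{"subject": {"$in": [...]}}` filter) to restrict which recordings are downloaded.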
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including:

> **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns.
> **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data.
> **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.

**Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below:

**Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136
**Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152
**Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183
**Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324
**Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330
**Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134
**Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381
**Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257
**Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295
**Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533
**Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430
**Release NC | –NOT FOR
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458

**Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)).

**Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data.

## Dataset Information

| Dataset ID | `EEG2025R7MINI` | |----------------|-----------------| | Title | Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) | | Author (year) | `Shirazi2024_R7_bdf_mini` | | Canonical | `HBN_r7_bdf_mini` | | Importable as | `EEG2025R7MINI`, `Shirazi2024_R7_bdf_mini`, `HBN_r7_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r7mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r7mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r7mini, title = {Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005511.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005511.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 239 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: — - File count: 239 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005511.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r7mini](https://openneuro.org/datasets/eeg2025r7mini) - NeMAR: [eeg2025r7mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7mini) ## API Reference Use the `EEG2025R7MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R7MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7mini` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf_mini` * **Canonical:** `HBN_r7_bdf_mini` Also importable as: `EEG2025R7MINI`, `Shirazi2024_R7_bdf_mini`, `HBN_r7_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7mini](https://openneuro.org/datasets/EEG2025r7mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7MINI >>> dataset = EEG2025R7MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r7mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r7mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R8: eeg dataset, 257 subjects *Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)*. [10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) Modality: eeg Subjects: 257 Recordings: 2320 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R8 dataset = EEG2025R8(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R8(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R8( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r8, title = {Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 8** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 8** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. 
**Tasks**

The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.

**Passive Tasks**

1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross.
2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded.
3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.

**Active Tasks**

1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.

**Contents**

**EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks.

**Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files.

**Special Features**

**Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.

**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.

**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including:

> **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns.
> **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data.
> **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.

**Other HBN-EEG Datasets**

To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG).
The links to the individual releases are below:

**Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136

**Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152

**Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183

**Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324

**Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330

**Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134

**Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381

**Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257

**Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295

**Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533

**Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430

**Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458

**Copyright and License**

The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset.
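The per-release subject counts listed above are easy to sanity-check: the eleven public releases sum to 3155 subjects, and adding the NC release brings the total to 3613, consistent with the >3000 participants quoted in the Data Description. A quick tally (counts transcribed from the list above):

```python
# Subject counts per HBN-EEG release, transcribed from the list above.
subjects = {
    "R1": 136, "R2": 152, "R3": 183, "R4": 324,
    "R5": 330, "R6": 134, "R7": 381, "R8": 257,
    "R9": 295, "R10": 533, "R11": 430, "NC": 458,
}
public_total = sum(n for r, n in subjects.items() if r != "NC")
print(public_total)                   # 3155 across the 11 public releases
print(public_total + subjects["NC"])  # 3613 including the NC release
```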
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)).

**Acknowledgments**

We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data.

## Dataset Information

| Dataset ID | `EEG2025R8` |
|----------------|-------------|
| Title | Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) |
| Author (year) | `Shirazi2024_R8_bdf` |
| Canonical | `HBN_r8_bdf` |
| Importable as | `EEG2025R8`, `Shirazi2024_R8_bdf`, `HBN_r8_bdf` |
| Year | 2024 |
| Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig |
| License | CC-BY-SA 4.0 |
| Citation / DOI | [10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r8) [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8) [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r8) |

### Copy-paste BibTeX ```bibtex @dataset{eeg2025r8, title = {Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B.
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## Technical Details - Subjects: 257 - Recordings: 2320 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 31.4 GB - File count: 2320 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005512.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r8](https://openneuro.org/datasets/eeg2025r8) - NeMAR: [eeg2025r8](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8) ## API Reference Use the `EEG2025R8` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R8(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf` * **Canonical:** `HBN_r8_bdf` Also importable as: `EEG2025R8`, `Shirazi2024_R8_bdf`, `HBN_r8_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8](https://openneuro.org/datasets/EEG2025r8) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8 >>> dataset = EEG2025R8(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r8) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R8MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)*. [10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) Modality: eeg Subjects: 20 Recordings: 238 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R8MINI dataset = EEG2025R8MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R8MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R8MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r8mini, title = {Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 8** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 8** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.

**Active Tasks**

1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.

**Contents**

**EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks.

**Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files.

**Special Features**

**Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.

**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.

**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including:

> **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns.
> **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data.
> **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.

**Other HBN-EEG Datasets**

To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG).

The links to the individual releases are below:

**Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136

**Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152

**Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183

**Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324

**Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330

**Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134

**Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381

**Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257

**Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295

**Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533

**Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430

**Release NC | –NOT FOR
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458

**Copyright and License**

The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset.

Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)).

**Acknowledgments**

We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data.

## Dataset Information | Dataset ID | `EEG2025R8MINI` | |----------------|-----------------| | Title | Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) | | Author (year) | `Shirazi2024_R8_bdf_mini` | | Canonical | `HBN_r8_bdf_mini` | | Importable as | `EEG2025R8MINI`, `Shirazi2024_R8_bdf_mini`, `HBN_r8_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B.
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r8mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r8mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r8mini, title = {Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005512.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005512.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 238 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.2 GB - File count: 238 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005512.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r8mini](https://openneuro.org/datasets/eeg2025r8mini) - NeMAR: [eeg2025r8mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8mini) ## API Reference Use the `EEG2025R8MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R8MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8mini` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf_mini` * **Canonical:** `HBN_r8_bdf_mini` Also importable as: `EEG2025R8MINI`, `Shirazi2024_R8_bdf_mini`, `HBN_r8_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 238; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8mini](https://openneuro.org/datasets/EEG2025r8mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8MINI >>> dataset = EEG2025R8MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r8mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r8mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R9: eeg dataset, 295 subjects *Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)*. [10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) Modality: eeg Subjects: 295 Recordings: 2885 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R9 dataset = EEG2025R9(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R9(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R9( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{eeg2025r9, title = {Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 9** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** ### View full README **The HBN-EEG Dataset** This is **Release 9** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Network Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. 
**Tasks**

The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.

**Passive Tasks**

1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross.
2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded.
3. **Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.

**Active Tasks**

1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.

**Contents**

**EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks.

**Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data. The data is now included with the EEG data within the `events.tsv` files.

**Special Features**

**Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta and mega analysis of the data.
**P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data.

**Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, each task had its necessary events, and was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file.

**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including:

> **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns.
> **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data.
> **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments.

**Other HBN-EEG Datasets**

To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG).
The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. 
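For scripting across releases, the accessions and S3 URIs listed above can be collected into a small lookup table. This is an illustrative sketch, not part of the EEGDash API — the `RELEASES` dict and `s3_uri` helper are hypothetical names that simply restate the list above (note the accession sequence skips ds005513; Release 9 is ds005514):

```python
# OpenNeuro accession and subject count for each HBN-EEG release listed above.
# Release NC (458 subjects, CC-BY-NC-SA-4.0) is excluded: it is not hosted on
# OpenNeuro/NEMAR.
RELEASES = {
    "R1": ("ds005505", 136), "R2": ("ds005506", 152),
    "R3": ("ds005507", 183), "R4": ("ds005508", 324),
    "R5": ("ds005509", 330), "R6": ("ds005510", 134),
    "R7": ("ds005511", 381), "R8": ("ds005512", 257),
    "R9": ("ds005514", 295), "R10": ("ds005515", 533),
    "R11": ("ds005516", 430),
}

def s3_uri(release: str) -> str:
    """Build the fcp-indi S3 URI for a release name such as 'R9'."""
    return f"s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_{release}"

total = sum(n for _, n in RELEASES.values())
print(s3_uri("R9"))  # → s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9
print(total)         # → 3155
```

The summed subject count (3155 across the 11 OpenNeuro releases) is consistent with the ">3000 participants" figure quoted in the dataset description.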
Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R9` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) | | Author (year) | `Shirazi2024_R9_bdf` | | Canonical | `HBN_r9_bdf` | | Importable as | `EEG2025R9`, `Shirazi2024_R9_bdf`, `HBN_r9_bdf` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r9) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r9) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r9, title = {Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. 
Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## Technical Details - Subjects: 295 - Recordings: 2885 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 37.0 GB - File count: 2885 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005514.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r9](https://openneuro.org/datasets/eeg2025r9) - NeMAR: [eeg2025r9](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9) ## API Reference Use the `EEG2025R9` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R9(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf` * **Canonical:** `HBN_r9_bdf` Also importable as: `EEG2025R9`, `Shirazi2024_R9_bdf`, `HBN_r9_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9](https://openneuro.org/datasets/EEG2025r9) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9 >>> dataset = EEG2025R9(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r9) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEG2025R9MINI: eeg dataset, 20 subjects *Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)* Access recordings and metadata through EEGDash. 
**Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (2024). *Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)*. [10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) Modality: eeg Subjects: 20 Recordings: 237 License: CC-BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEG2025R9MINI dataset = EEG2025R9MINI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEG2025R9MINI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEG2025R9MINI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eeg2025r9mini, title = {Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## About This Dataset **The HBN-EEG Dataset** This is **Release 9** of HBN-EEG, the EEG and (soon-released) Eye-Tracking Section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. 
This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (ages 5-21) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants. **Tasks** The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment. **Passive Tasks** 1. **Resting State**: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross. 2. **Surround Suppression**: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded. 3. 
**Movie Watching**: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations. **Active Tasks** 1. **Contrast Change Detection**: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses. 2. **Sequence Learning**: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups. 3. **Symbol Search**: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols. **Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded within the behavior directory of the HBN data and is now included with the EEG data within the `events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from the CBCL questionnaire, these factors provide valuable insights into the psychopathology of the participants, adding a rich layer of interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, that each task had its necessary events, and that it was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. 
**Future Releases:** We are committed to enhancing this dataset with additional, valuable features in its next stages, including: > **Personalized EEG Electrode Locations:** To offer more detailed insights into individual neural activity patterns. > **Personalized Lead Field Matrix:** Enabling better understanding and interpretation of EEG data. > **Eye-Tracking Data:** Providing a window into the visual attention and processing mechanisms during EEG experiments. **Other HBN-EEG Datasets** To access all releases of the HBN-EEG dataset, follow this [link on NEMAR.org](https://nemar.org/dataexplorer/local?search=HBN-EEG). The links to the individual releases are below: **Release 1 | DS005505** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1` - *Total subjects:* 136 **Release 2 | DS005506** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2` - *Total subjects:* 152 **Release 3 | DS005507** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3` - *Total subjects:* 183 **Release 4 | DS005508** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4` - *Total subjects:* 324 **Release 5 | DS005509** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5` - *Total subjects:* 330 **Release 6 | DS005510** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6` - *Total subjects:* 134 **Release 7 | DS005511** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7` - *Total subjects:* 381 **Release 8 | DS005512** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8` - *Total subjects:* 257 **Release 9 | DS005514** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9` - *Total subjects:* 295 **Release 10 | DS005515** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10` - *Total subjects:* 533 **Release 11 | DS005516** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11` - *Total subjects:* 430 **Release NC | –NOT FOR 
COMMERCIAL USE– This dataset is intended for research purposes only under the CC-BY-NC-SA-4.0 License and is not currently hosted on OpenNeuro/NEMAR. Any commercial use is prohibited.** - *S3 URI:* `s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC` - *Total subjects:* 458 **Copyright and License** The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper ([https://doi.org/10.1101/2024.10.03.615261](https://doi.org/10.1101/2024.10.03.615261)) as well as the original HBN publication ([https://dx.doi.org/10.1038/sdata.2017.181](https://dx.doi.org/10.1038/sdata.2017.181)). **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data. ## Dataset Information | Dataset ID | `EEG2025R9MINI` | |----------------|----------------| | Title | Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) | | Author (year) | `Shirazi2024_R9_bdf_mini` | | Canonical | `HBN_r9_bdf_mini` | | Importable as | `EEG2025R9MINI`, `Shirazi2024_R9_bdf_mini`, `HBN_r9_bdf_mini` | | Year | 2024 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. 
Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-SA 4.0 | | Citation / DOI | [10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/eeg2025r9mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9mini) | [Source URL](https://nemar.org/dataexplorer/detail/EEG2025r9mini) | ### Copy-paste BibTeX ```bibtex @dataset{eeg2025r9mini, title = {Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted)}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.18112/openneuro.ds005514.v1.0.1}, url = {https://doi.org/10.18112/openneuro.ds005514.v1.0.1}, } ``` ## Technical Details - Subjects: 20 - Recordings: 237 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 100.0 - Duration (hours): Not calculated - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.0 GB - File count: 237 - Format: BIDS - License: CC-BY-SA 4.0 - DOI: 10.18112/openneuro.ds005514.v1.0.1 - Source: nemar - OpenNeuro: [eeg2025r9mini](https://openneuro.org/datasets/eeg2025r9mini) - NeMAR: [eeg2025r9mini](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9mini) ## API Reference Use the `EEG2025R9MINI` class to access this dataset programmatically. ### *class* eegdash.dataset.EEG2025R9MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9mini` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf_mini` * **Canonical:** `HBN_r9_bdf_mini` Also importable as: `EEG2025R9MINI`, `Shirazi2024_R9_bdf_mini`, `HBN_r9_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9mini](https://openneuro.org/datasets/EEG2025r9mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9MINI >>> dataset = EEG2025R9MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eeg2025r9mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eeg2025r9mini) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # EEGAsymmetries: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEGAsymmetries dataset = EEGAsymmetries(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEGAsymmetries(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEGAsymmetries( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eegasymmetries, } ``` ## About This Dataset No README content is available for this dataset. 
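The "Advanced query" snippets in these quickstarts use MongoDB-style operators such as `$in`. As a mental model, `{"subject": {"$in": ["01", "02"]}}` keeps exactly the records whose `subject` field is one of the listed values. The following is a pure-Python sketch of that matching logic only — `matches_in` is an illustrative helper, not an EEGDash function; the real filter is evaluated against the metadata database:

```python
def matches_in(record: dict, field: str, allowed: list) -> bool:
    """Illustrative equivalent of the MongoDB filter {field: {"$in": allowed}}."""
    return record.get(field) in allowed

# Toy metadata records, standing in for what a query would be matched against.
records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches_in(r, "subject", ["01", "02"])]
print([r["subject"] for r in kept])  # → ['01', '02']
```

A dataset class additionally ANDs such a filter with its own `dataset` selection, which is why the documented constructors reject a `dataset` key inside `query`.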
## Dataset Information | Dataset ID | `EEGASYMMETRIES` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `EEGASYMMETRIES` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eegasymmetries) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eegasymmetries) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [eegasymmetries](https://openneuro.org/datasets/eegasymmetries) - NeMAR: [eegasymmetries](https://nemar.org/dataexplorer/detail?dataset_id=eegasymmetries) ## API Reference Use the `EEGAsymmetries` class to access this dataset programmatically. ### eegdash.dataset.EEGAsymmetries alias of [`DS007172`](eegdash.dataset.DS007172.md#eegdash.dataset.DS007172) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eegasymmetries) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eegasymmetries) # EEGChallengeDataset ### *class* eegdash.dataset.EEGChallengeDataset(release: str, cache_dir: str, mini: bool = True, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset helper for the EEG 2025 Challenge. This class simplifies access to the EEG 2025 Challenge datasets. It is a specialized version of `EEGDashDataset` that is pre-configured for the challenge’s data releases. 
It automatically maps a release name (e.g., “R1”) to the corresponding OpenNeuro dataset and handles the selection of subject subsets (e.g., “mini” release). * **Parameters:** * **release** (*str*) – The name of the challenge release to load. Must be one of the keys in `RELEASE_TO_OPENNEURO_DATASET_MAP` (e.g., “R1”, “R2”, …, “R11”). * **cache_dir** (*str*) – The local directory where the dataset will be downloaded and cached. * **mini** (*bool* *,* *default True*) – If True, the dataset is restricted to the official “mini” subset of subjects for the specified release. If False, all subjects for the release are included. * **query** (*dict* *,* *optional*) – An additional MongoDB-style query to apply as a filter. This query is combined with the release and subject filters using a logical AND. The query must not contain the `dataset` key, as this is determined by the `release` parameter. * **s3_bucket** (*str* *,* *optional*) – The base S3 bucket URI where the challenge data is stored. Defaults to the official challenge bucket. * **\*\*kwargs** – Additional keyword arguments that are passed directly to the `EEGDashDataset` constructor. * **Raises:** **ValueError** – If the specified `release` is unknown, or if the `query` argument contains a `dataset` key. Also raised if `mini` is True and a requested subject is not part of the official mini-release subset. #### SEE ALSO `EEGDashDataset` : The base class for creating datasets from queries. # EEGEYENET: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEGEYENET dataset = EEGEYENET(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEGEYENET(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEGEYENET( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eegeyenet, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `EEGEYENET` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `EEGEYENET` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eegeyenet) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [eegeyenet](https://openneuro.org/datasets/eegeyenet) - NeMAR: [eegeyenet](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) ## API Reference Use the `EEGEYENET` class to access this dataset programmatically. 
### eegdash.dataset.EEGEYENET alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eegeyenet) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) # EEGEyeNet: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEGEyeNet dataset = EEGEyeNet(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEGEyeNet(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEGEyeNet( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eegeyenet, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `EEGEYENET` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `EEGEYENET` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eegeyenet) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [eegeyenet](https://openneuro.org/datasets/eegeyenet) - NeMAR: [eegeyenet](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) ## API Reference Use the `EEGEyeNet` class to access this dataset programmatically. ### eegdash.dataset.EEGEyeNet alias of [`DS005872`](eegdash.dataset.DS005872.md#eegdash.dataset.DS005872) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eegeyenet) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet) # EEGEyeNet_v2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import EEGEyeNet_v2 dataset = EEGEyeNet_v2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = EEGEyeNet_v2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = EEGEyeNet_v2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eegeyenet_v2, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `EEGEYENET_V2` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `EEGEYENET_V2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eegeyenet_v2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet_v2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [eegeyenet_v2](https://openneuro.org/datasets/eegeyenet_v2) - NeMAR: [eegeyenet_v2](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet_v2) ## API Reference Use the `EEGEyeNet_v2` class to access this dataset programmatically. 
### eegdash.dataset.EEGEyeNet_v2

alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eegeyenet_v2)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eegeyenet_v2)

# EEGMotorMovementImagery: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EEGMotorMovementImagery

dataset = EEGMotorMovementImagery(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EEGMotorMovementImagery(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EEGMotorMovementImagery(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{eegmotormovementimagery,
}
```

## About This Dataset

No README content is available for this dataset.
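The advanced-query pattern above passes a MongoDB-style filter document through to the dataset constructor. Such a filter can be assembled programmatically before the dataset is created; the sketch below only builds the dict (the `subjects_query` helper is hypothetical, not part of eegdash), so it runs without downloading anything:

```python
def subjects_query(subjects):
    # Hypothetical helper: MongoDB-style filter matching any of the given subjects
    return {"subject": {"$in": sorted(subjects)}}

q = subjects_query({"02", "01"})
print(q)  # {'subject': {'$in': ['01', '02']}}

# The resulting dict would then be passed as the `query` argument, e.g.:
# dataset = EEGMotorMovementImagery(cache_dir="./data", query=q)
```

Sorting the subject list keeps the query deterministic, which helps when the filter is logged or used as a cache key.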
## Dataset Information

| Dataset ID | `EEGMOTORMOVEMENTIMAGERY` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EEGMOTORMOVEMENTIMAGERY` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eegmotormovementimagery) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eegmotormovementimagery) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [eegmotormovementimagery](https://openneuro.org/datasets/eegmotormovementimagery)
- NeMAR: [eegmotormovementimagery](https://nemar.org/dataexplorer/detail?dataset_id=eegmotormovementimagery)

## API Reference

Use the `EEGMotorMovementImagery` class to access this dataset programmatically.

### eegdash.dataset.EEGMotorMovementImagery

alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eegmotormovementimagery)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eegmotormovementimagery)

# EESM17: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EESM17

dataset = EESM17(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EESM17(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EESM17(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{eesm17,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `EESM17` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EESM17` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eesm17) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eesm17) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [eesm17](https://openneuro.org/datasets/eesm17)
- NeMAR: [eesm17](https://nemar.org/dataexplorer/detail?dataset_id=eesm17)

## API Reference

Use the `EESM17` class to access this dataset programmatically.
### eegdash.dataset.EESM17

alias of [`DS004348`](eegdash.dataset.DS004348.md#eegdash.dataset.DS004348)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eesm17)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eesm17)

# EESM19: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EESM19

dataset = EESM19(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EESM19(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EESM19(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{eesm19,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `EESM19` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EESM19` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eesm19) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eesm19) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [eesm19](https://openneuro.org/datasets/eesm19)
- NeMAR: [eesm19](https://nemar.org/dataexplorer/detail?dataset_id=eesm19)

## API Reference

Use the `EESM19` class to access this dataset programmatically.

### eegdash.dataset.EESM19

alias of [`DS005185`](eegdash.dataset.DS005185.md#eegdash.dataset.DS005185)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eesm19)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eesm19)

# EESM23: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EESM23

dataset = EESM23(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EESM23(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EESM23(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{eesm23,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `EESM23` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EESM23` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eesm23) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eesm23) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [eesm23](https://openneuro.org/datasets/eesm23)
- NeMAR: [eesm23](https://nemar.org/dataexplorer/detail?dataset_id=eesm23)

## API Reference

Use the `EESM23` class to access this dataset programmatically.
### eegdash.dataset.EESM23

alias of [`DS005178`](eegdash.dataset.DS005178.md#eegdash.dataset.DS005178)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eesm23)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eesm23)

# EPFLP300: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EPFLP300

dataset = EPFLP300(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EPFLP300(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EPFLP300(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{epflp300,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `EPFLP300` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EPFLP300` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/epflp300) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=epflp300) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [epflp300](https://openneuro.org/datasets/epflp300)
- NeMAR: [epflp300](https://nemar.org/dataexplorer/detail?dataset_id=epflp300)

## API Reference

Use the `EPFLP300` class to access this dataset programmatically.

### eegdash.dataset.EPFLP300

alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/epflp300)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=epflp300)

# EPFLP300Dataset: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EPFLP300Dataset

dataset = EPFLP300Dataset(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EPFLP300Dataset(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EPFLP300Dataset(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{epflp300dataset,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `EPFLP300DATASET` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EPFLP300DATASET` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/epflp300dataset) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=epflp300dataset) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [epflp300dataset](https://openneuro.org/datasets/epflp300dataset)
- NeMAR: [epflp300dataset](https://nemar.org/dataexplorer/detail?dataset_id=epflp300dataset)

## API Reference

Use the `EPFLP300Dataset` class to access this dataset programmatically.

### eegdash.dataset.EPFLP300Dataset

alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/epflp300dataset)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=epflp300dataset)

# EPFL_P300: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EPFL_P300

dataset = EPFL_P300(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EPFL_P300(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EPFL_P300(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{epfl_p300,
}
```

## About This Dataset

No README content is available for this dataset.
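`EPFLP300`, `EPFLP300Dataset`, and `EPFL_P300` are all documented as aliases of the same canonical class (`NM000231`), so whichever name is imported, the same dataset is resolved and cached. An alias in Python is simply a second module-level binding to the same class object; the self-contained sketch below demonstrates this with a stand-in class rather than eegdash itself:

```python
class NM000231:
    """Stand-in for the canonical dataset class (illustration only)."""

# Module-level aliases, in the style eegdash.dataset uses
EPFLP300 = NM000231
EPFL_P300 = NM000231

print(EPFLP300 is EPFL_P300)  # True: one class, several importable names
```

Because the names are identical objects, instances constructed through any alias compare as the same type and share class-level state.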
## Dataset Information

| Dataset ID | `EPFL_P300` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EPFL_P300` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/epfl_p300) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=epfl_p300) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [epfl_p300](https://openneuro.org/datasets/epfl_p300)
- NeMAR: [epfl_p300](https://nemar.org/dataexplorer/detail?dataset_id=epfl_p300)

## API Reference

Use the `EPFL_P300` class to access this dataset programmatically.

### eegdash.dataset.EPFL_P300

alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/epfl_p300)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=epfl_p300)

# ERDetect: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import ERDetect

dataset = ERDetect(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = ERDetect(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = ERDetect(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{erdetect,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `ERDETECT` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ERDETECT` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/erdetect) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=erdetect) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [erdetect](https://openneuro.org/datasets/erdetect)
- NeMAR: [erdetect](https://nemar.org/dataexplorer/detail?dataset_id=erdetect)

## API Reference

Use the `ERDetect` class to access this dataset programmatically.
### eegdash.dataset.ERDetect

alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/erdetect)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=erdetect)

# ERPCORE: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import ERPCORE

dataset = ERPCORE(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = ERPCORE(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = ERPCORE(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{erpcore,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `ERPCORE` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ERPCORE` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/erpcore) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=erpcore) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [erpcore](https://openneuro.org/datasets/erpcore)
- NeMAR: [erpcore](https://nemar.org/dataexplorer/detail?dataset_id=erpcore)

## API Reference

Use the `ERPCORE` class to access this dataset programmatically.

### eegdash.dataset.ERPCORE

alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/erpcore)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=erpcore)

# ERP_CORE: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import ERP_CORE

dataset = ERP_CORE(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = ERP_CORE(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = ERP_CORE(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{erp_core,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `ERP_CORE` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ERP_CORE` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/erp_core) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=erp_core) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [erp_core](https://openneuro.org/datasets/erp_core)
- NeMAR: [erp_core](https://nemar.org/dataexplorer/detail?dataset_id=erp_core)

## API Reference

Use the `ERP_CORE` class to access this dataset programmatically.
### eegdash.dataset.ERP_CORE

alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/erp_core)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=erp_core)

# ER_Detect: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import ER_Detect

dataset = ER_Detect(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = ER_Detect(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = ER_Detect(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{er_detect,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `ER_DETECT` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ER_DETECT` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/er_detect) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=er_detect) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [er_detect](https://openneuro.org/datasets/er_detect)
- NeMAR: [er_detect](https://nemar.org/dataexplorer/detail?dataset_id=er_detect)

## API Reference

Use the `ER_Detect` class to access this dataset programmatically.

### eegdash.dataset.ER_Detect

alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/er_detect)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=er_detect)

# Edit2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Edit2024

dataset = Edit2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Edit2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Edit2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{edit2024,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `EDIT2024` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `EDIT2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/edit2024) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=edit2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [edit2024](https://openneuro.org/datasets/edit2024)
- NeMAR: [edit2024](https://nemar.org/dataexplorer/detail?dataset_id=edit2024)

## API Reference

Use the `Edit2024` class to access this dataset programmatically.
### eegdash.dataset.Edit2024

alias of [`DS007406`](eegdash.dataset.DS007406.md#eegdash.dataset.DS007406)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/edit2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=edit2024)

# EldBETA: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import EldBETA

dataset = EldBETA(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = EldBETA(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = EldBETA(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{eldbeta,
}
```

## About This Dataset

No README content is available for this dataset.
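The iterate-recordings snippet above prints one line per recording; when a per-subject summary is more useful, the same loop can feed a small accumulator instead. The sketch below stands in plain tuples for recording objects (the data are hypothetical, only the grouping logic is real), so it runs without any download:

```python
from collections import Counter

# Hypothetical stand-ins for dataset recordings: (subject, sampling rate in Hz)
recordings = [("01", 500.0), ("01", 500.0), ("02", 250.0)]

# Count recordings per subject, as one might inside `for rec in dataset: ...`
per_subject = Counter(subject for subject, _ in recordings)
print(dict(per_subject))  # {'01': 2, '02': 1}
```

With a real dataset, `rec.subject` and `rec.raw.info['sfreq']` from the loop above would take the place of the tuple fields.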
## Dataset Information

| Dataset ID | `ELDBETA` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ELDBETA` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/eldbeta) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [eldbeta](https://openneuro.org/datasets/eldbeta)
- NeMAR: [eldbeta](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta)

## API Reference

Use the `EldBETA` class to access this dataset programmatically.

### eegdash.dataset.EldBETA

alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/eldbeta)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta)

# Ester2022: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Ester2022

dataset = Ester2022(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Ester2022(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Ester2022(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ester2022,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `ESTER2022` |
|----------------|------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ESTER2022` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ester2022) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ester2022) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [ester2022](https://openneuro.org/datasets/ester2022)
- NeMAR: [ester2022](https://nemar.org/dataexplorer/detail?dataset_id=ester2022)

## API Reference

Use the `Ester2022` class to access this dataset programmatically.
### eegdash.dataset.Ester2022 alias of [`DS004519`](eegdash.dataset.DS004519.md#eegdash.dataset.DS004519) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ester2022) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ester2022) # Ester2024_E1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ester2024_E1 dataset = Ester2024_E1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ester2024_E1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ester2024_E1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ester2024_e1, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ESTER2024_E1` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ESTER2024_E1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ester2024_e1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ester2024_e1](https://openneuro.org/datasets/ester2024_e1) - NeMAR: [ester2024_e1](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e1) ## API Reference Use the `Ester2024_E1` class to access this dataset programmatically. ### eegdash.dataset.Ester2024_E1 alias of [`DS004521`](eegdash.dataset.DS004521.md#eegdash.dataset.DS004521) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ester2024_e1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e1) # Ester2024_E2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ester2024_E2 dataset = Ester2024_E2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ester2024_E2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ester2024_E2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ester2024_e2, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ESTER2024_E2` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ESTER2024_E2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ester2024_e2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ester2024_e2](https://openneuro.org/datasets/ester2024_e2) - NeMAR: [ester2024_e2](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e2) ## API Reference Use the `Ester2024_E2` class to access this dataset programmatically. 
### eegdash.dataset.Ester2024_E2 alias of [`DS004520`](eegdash.dataset.DS004520.md#eegdash.dataset.DS004520) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ester2024_e2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ester2024_e2) # FACED: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FACED dataset = FACED(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FACED(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FACED( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{faced, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `FACED` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FACED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/faced) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=faced) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [faced](https://openneuro.org/datasets/faced) - NeMAR: [faced](https://nemar.org/dataexplorer/detail?dataset_id=faced) ## API Reference Use the `FACED` class to access this dataset programmatically. ### eegdash.dataset.FACED alias of [`NM000112`](eegdash.dataset.NM000112.md#eegdash.dataset.NM000112) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/faced) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=faced) # FLUX: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FLUX dataset = FLUX(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FLUX(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FLUX( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{flux, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FLUX` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FLUX` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/flux) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=flux) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [flux](https://openneuro.org/datasets/flux) - NeMAR: [flux](https://nemar.org/dataexplorer/detail?dataset_id=flux) ## API Reference Use the `FLUX` class to access this dataset programmatically. 
### eegdash.dataset.FLUX alias of [`DS004346`](eegdash.dataset.DS004346.md#eegdash.dataset.DS004346) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/flux) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=flux) # FRL_DiscreteGestures: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FRL_DiscreteGestures dataset = FRL_DiscreteGestures(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FRL_DiscreteGestures(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FRL_DiscreteGestures( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{frl_discretegestures, } ``` ## About This Dataset No README content is available for this dataset.
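The `query` argument shown in the Advanced query examples uses MongoDB-style operator syntax (e.g. `{"$in": [...]}`). As a minimal sketch of how such filters can be composed before passing them to a dataset class — note that the `build_query` helper is illustrative and not part of the eegdash API, and the availability of a `task` metadata field is an assumption:

```python
# Illustrative helper (NOT part of eegdash): assemble a MongoDB-style
# query dict like the ones passed via `query=` in the examples above.
def build_query(subjects=None, task=None):
    query = {}
    if subjects is not None:
        # Match any of the given subject labels
        query["subject"] = {"$in": list(subjects)}
    if task is not None:
        # Hypothetical metadata field; check your dataset's records
        query["task"] = task
    return query

q = build_query(subjects=["01", "02"], task="rest")
print(q)
# {'subject': {'$in': ['01', '02']}, 'task': 'rest'}
```

The resulting dict could then be passed as `query=q` to any of the dataset constructors above.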
## Dataset Information | Dataset ID | `FRL_DISCRETEGESTURES` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FRL_DISCRETEGESTURES` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/frl_discretegestures) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=frl_discretegestures) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [frl_discretegestures](https://openneuro.org/datasets/frl_discretegestures) - NeMAR: [frl_discretegestures](https://nemar.org/dataexplorer/detail?dataset_id=frl_discretegestures) ## API Reference Use the `FRL_DiscreteGestures` class to access this dataset programmatically. ### eegdash.dataset.FRL_DiscreteGestures alias of [`NM000105`](eegdash.dataset.NM000105.md#eegdash.dataset.NM000105) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/frl_discretegestures) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=frl_discretegestures) # FRL_Handwriting: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FRL_Handwriting dataset = FRL_Handwriting(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FRL_Handwriting(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FRL_Handwriting( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{frl_handwriting, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FRL_HANDWRITING` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FRL_HANDWRITING` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/frl_handwriting) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=frl_handwriting) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [frl_handwriting](https://openneuro.org/datasets/frl_handwriting) - NeMAR: [frl_handwriting](https://nemar.org/dataexplorer/detail?dataset_id=frl_handwriting) ## API Reference Use the `FRL_Handwriting` class to 
access this dataset programmatically. ### eegdash.dataset.FRL_Handwriting alias of [`NM000106`](eegdash.dataset.NM000106.md#eegdash.dataset.NM000106) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/frl_handwriting) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=frl_handwriting) # FRL_WristControl: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FRL_WristControl dataset = FRL_WristControl(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FRL_WristControl(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FRL_WristControl( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{frl_wristcontrol, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `FRL_WRISTCONTROL` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FRL_WRISTCONTROL` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/frl_wristcontrol) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=frl_wristcontrol) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [frl_wristcontrol](https://openneuro.org/datasets/frl_wristcontrol) - NeMAR: [frl_wristcontrol](https://nemar.org/dataexplorer/detail?dataset_id=frl_wristcontrol) ## API Reference Use the `FRL_WristControl` class to access this dataset programmatically. ### eegdash.dataset.FRL_WristControl alias of [`NM000107`](eegdash.dataset.NM000107.md#eegdash.dataset.NM000107) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/frl_wristcontrol) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=frl_wristcontrol) # FernandezRodriguez2023: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FernandezRodriguez2023 dataset = FernandezRodriguez2023(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FernandezRodriguez2023(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FernandezRodriguez2023( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{fernandezrodriguez2023, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FERNANDEZRODRIGUEZ2023` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FERNANDEZRODRIGUEZ2023` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/fernandezrodriguez2023) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=fernandezrodriguez2023) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [fernandezrodriguez2023](https://openneuro.org/datasets/fernandezrodriguez2023) - NeMAR: 
[fernandezrodriguez2023](https://nemar.org/dataexplorer/detail?dataset_id=fernandezrodriguez2023) ## API Reference Use the `FernandezRodriguez2023` class to access this dataset programmatically. ### eegdash.dataset.FernandezRodriguez2023 alias of [`NM000240`](eegdash.dataset.NM000240.md#eegdash.dataset.NM000240) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/fernandezrodriguez2023) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=fernandezrodriguez2023) # Ferron2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ferron2019 dataset = Ferron2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ferron2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ferron2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ferron2019, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `FERRON2019` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FERRON2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ferron2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ferron2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ferron2019](https://openneuro.org/datasets/ferron2019) - NeMAR: [ferron2019](https://nemar.org/dataexplorer/detail?dataset_id=ferron2019) ## API Reference Use the `Ferron2019` class to access this dataset programmatically. ### eegdash.dataset.Ferron2019 alias of [`DS004541`](eegdash.dataset.DS004541.md#eegdash.dataset.DS004541) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ferron2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ferron2019) # Flankers_FAR: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Flankers_FAR dataset = Flankers_FAR(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Flankers_FAR(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Flankers_FAR( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{flankers_far, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FLANKERS_FAR` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FLANKERS_FAR` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/flankers_far) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=flankers_far) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [flankers_far](https://openneuro.org/datasets/flankers_far) - NeMAR: [flankers_far](https://nemar.org/dataexplorer/detail?dataset_id=flankers_far) ## API Reference Use the `Flankers_FAR` class to access this dataset programmatically. 
### eegdash.dataset.Flankers_FAR alias of [`DS005868`](eegdash.dataset.DS005868.md#eegdash.dataset.DS005868) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/flankers_far) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=flankers_far) # Flankers_NEAR: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Flankers_NEAR dataset = Flankers_NEAR(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Flankers_NEAR(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Flankers_NEAR( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{flankers_near, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `FLANKERS_NEAR` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FLANKERS_NEAR` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/flankers_near) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=flankers_near) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [flankers_near](https://openneuro.org/datasets/flankers_near) - NeMAR: [flankers_near](https://nemar.org/dataexplorer/detail?dataset_id=flankers_near) ## API Reference Use the `Flankers_NEAR` class to access this dataset programmatically. ### eegdash.dataset.Flankers_NEAR alias of [`DS005866`](eegdash.dataset.DS005866.md#eegdash.dataset.DS005866) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/flankers_near) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=flankers_near) # Fogarty2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Fogarty2025 dataset = Fogarty2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Fogarty2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Fogarty2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{fogarty2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FOGARTY2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FOGARTY2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/fogarty2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=fogarty2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [fogarty2025](https://openneuro.org/datasets/fogarty2025) - NeMAR: [fogarty2025](https://nemar.org/dataexplorer/detail?dataset_id=fogarty2025) ## API Reference Use the `Fogarty2025` class to access this dataset programmatically. 
### eegdash.dataset.Fogarty2025 alias of [`DS007463`](eegdash.dataset.DS007463.md#eegdash.dataset.DS007463) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/fogarty2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=fogarty2025) # Formica2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Formica2025 dataset = Formica2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Formica2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Formica2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{formica2025, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `FORMICA2025` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FORMICA2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/formica2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=formica2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [formica2025](https://openneuro.org/datasets/formica2025) - NeMAR: [formica2025](https://nemar.org/dataexplorer/detail?dataset_id=formica2025) ## API Reference Use the `Formica2025` class to access this dataset programmatically. ### eegdash.dataset.Formica2025 alias of [`DS005406`](eegdash.dataset.DS005406.md#eegdash.dataset.DS005406) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/formica2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=formica2025) # ForrestGump_MEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ForrestGump_MEG dataset = ForrestGump_MEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ForrestGump_MEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ForrestGump_MEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{forrestgump_meg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `FORRESTGUMP_MEG` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FORRESTGUMP_MEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/forrestgump_meg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=forrestgump_meg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [forrestgump_meg](https://openneuro.org/datasets/forrestgump_meg) - NeMAR: [forrestgump_meg](https://nemar.org/dataexplorer/detail?dataset_id=forrestgump_meg) ## API Reference Use the `ForrestGump_MEG` class to 
access this dataset programmatically. ### eegdash.dataset.ForrestGump_MEG alias of [`DS003633`](eegdash.dataset.DS003633.md#eegdash.dataset.DS003633) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/forrestgump_meg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=forrestgump_meg) # FuentesGuerra2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import FuentesGuerra2024 dataset = FuentesGuerra2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = FuentesGuerra2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = FuentesGuerra2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{fuentesguerra2024, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `FUENTESGUERRA2024` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `FUENTESGUERRA2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/fuentesguerra2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=fuentesguerra2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [fuentesguerra2024](https://openneuro.org/datasets/fuentesguerra2024) - NeMAR: [fuentesguerra2024](https://nemar.org/dataexplorer/detail?dataset_id=fuentesguerra2024) ## API Reference Use the `FuentesGuerra2024` class to access this dataset programmatically. ### eegdash.dataset.FuentesGuerra2024 alias of [`DS007180`](eegdash.dataset.DS007180.md#eegdash.dataset.DS007180) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/fuentesguerra2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=fuentesguerra2024) # Gama2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Gama2019 dataset = Gama2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Gama2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Gama2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{gama2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `GAMA2019` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GAMA2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/gama2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=gama2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [gama2019](https://openneuro.org/datasets/gama2019) - NeMAR: [gama2019](https://nemar.org/dataexplorer/detail?dataset_id=gama2019) ## API Reference Use the `Gama2019` class to access this dataset programmatically. 
### eegdash.dataset.Gama2019 alias of [`DS005420`](eegdash.dataset.DS005420.md#eegdash.dataset.DS005420) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/gama2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=gama2019) # Gao2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Gao2024 dataset = Gao2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Gao2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Gao2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{gao2024, } ``` ## About This Dataset No README content is available for this dataset. 
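The iterate-recordings pattern above prints one line per recording; tallying the values instead gives a quick overview of a dataset. The counting logic is plain Python, so it is sketched here on hand-written `(subject, sfreq)` pairs rather than a live download:

```python
from collections import Counter

def sfreq_summary(pairs):
    """Tally recordings per sampling rate from (subject, sfreq) pairs."""
    return Counter(sfreq for _, sfreq in pairs)

# With a downloaded dataset, the pairs would come from the loop above:
# pairs = [(rec.subject, rec.raw.info['sfreq']) for rec in dataset]
pairs = [("01", 500.0), ("02", 500.0), ("03", 1000.0)]
print(sfreq_summary(pairs))  # Counter({500.0: 2, 1000.0: 1})
```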
## Dataset Information | Dataset ID | `GAO2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GAO2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/gao2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=gao2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [gao2024](https://openneuro.org/datasets/gao2024) - NeMAR: [gao2024](https://nemar.org/dataexplorer/detail?dataset_id=gao2024) ## API Reference Use the `Gao2024` class to access this dataset programmatically. ### eegdash.dataset.Gao2024 alias of [`DS007420`](eegdash.dataset.DS007420.md#eegdash.dataset.DS007420) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/gao2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=gao2024) # Gao2026: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Gao2026 dataset = Gao2026(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Gao2026(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Gao2026( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{gao2026, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `GAO2026` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GAO2026` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/gao2026) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=gao2026) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [gao2026](https://openneuro.org/datasets/gao2026) - NeMAR: [gao2026](https://nemar.org/dataexplorer/detail?dataset_id=gao2026) ## API Reference Use the `Gao2026` class to access this dataset programmatically. 
### eegdash.dataset.Gao2026 alias of [`NM000242`](eegdash.dataset.NM000242.md#eegdash.dataset.NM000242) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/gao2026) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=gao2026) # Ghaffari2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ghaffari2024 dataset = Ghaffari2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ghaffari2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ghaffari2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ghaffari2024, } ``` ## About This Dataset No README content is available for this dataset. 
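In the advanced query above, MongoDB-style `$in` keeps a record when the field's value appears in the given list. As an illustration of those matching semantics only (not of EEGDash internals), a record-matching predicate might look like:

```python
def matches(record, query):
    """Return True if every queried field's value is in its `$in` list."""
    return all(record.get(field) in spec["$in"] for field, spec in query.items())

query = {"subject": {"$in": ["01", "02"]}}
print(matches({"subject": "01"}, query))  # True
print(matches({"subject": "07"}, query))  # False
```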
## Dataset Information | Dataset ID | `GHAFFARI2024` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GHAFFARI2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ghaffari2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ghaffari2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ghaffari2024](https://openneuro.org/datasets/ghaffari2024) - NeMAR: [ghaffari2024](https://nemar.org/dataexplorer/detail?dataset_id=ghaffari2024) ## API Reference Use the `Ghaffari2024` class to access this dataset programmatically. ### eegdash.dataset.Ghaffari2024 alias of [`DS006547`](eegdash.dataset.DS006547.md#eegdash.dataset.DS006547) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ghaffari2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ghaffari2024) # GuttmannFlury2025_ME: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import GuttmannFlury2025_ME dataset = GuttmannFlury2025_ME(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = GuttmannFlury2025_ME(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = GuttmannFlury2025_ME( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{guttmannflury2025_me, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `GUTTMANNFLURY2025_ME` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GUTTMANNFLURY2025_ME` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/guttmannflury2025_me) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_me) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [guttmannflury2025_me](https://openneuro.org/datasets/guttmannflury2025_me) - NeMAR: 
[guttmannflury2025_me](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_me) ## API Reference Use the `GuttmannFlury2025_ME` class to access this dataset programmatically. ### eegdash.dataset.GuttmannFlury2025_ME alias of [`NM000227`](eegdash.dataset.NM000227.md#eegdash.dataset.NM000227) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/guttmannflury2025_me) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_me) # GuttmannFlury2025_MIME: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import GuttmannFlury2025_MIME dataset = GuttmannFlury2025_MIME(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = GuttmannFlury2025_MIME(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = GuttmannFlury2025_MIME( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{guttmannflury2025_mime, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `GUTTMANNFLURY2025_MIME` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `GUTTMANNFLURY2025_MIME` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/guttmannflury2025_mime) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_mime) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [guttmannflury2025_mime](https://openneuro.org/datasets/guttmannflury2025_mime) - NeMAR: [guttmannflury2025_mime](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_mime) ## API Reference Use the `GuttmannFlury2025_MIME` class to access this dataset programmatically. ### eegdash.dataset.GuttmannFlury2025_MIME alias of [`NM000235`](eegdash.dataset.NM000235.md#eegdash.dataset.NM000235) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/guttmannflury2025_mime) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=guttmannflury2025_mime) # HADMEEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HADMEEG dataset = HADMEEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HADMEEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HADMEEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hadmeeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HADMEEG` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HADMEEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hadmeeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hadmeeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hadmeeg](https://openneuro.org/datasets/hadmeeg) - NeMAR: [hadmeeg](https://nemar.org/dataexplorer/detail?dataset_id=hadmeeg) ## API Reference Use the `HADMEEG` class to access this dataset programmatically. 
### eegdash.dataset.HADMEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hadmeeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hadmeeg) # HAD_MEEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HAD_MEEG dataset = HAD_MEEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HAD_MEEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HAD_MEEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{had_meeg, } ``` ## About This Dataset No README content is available for this dataset. 
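After loading a recording as above, `raw.get_data()` (standard MNE-Python) yields a `(channels, samples)` array, which models typically consume as fixed-length windows. A minimal numpy sketch, run on a synthetic array so it works without downloading anything:

```python
import numpy as np

def make_windows(data, win, step):
    """Slice a (channels, samples) array into overlapping fixed-length windows."""
    starts = range(0, data.shape[1] - win + 1, step)
    return np.stack([data[:, s:s + win] for s in starts])

# With a real recording: data = dataset.datasets[0].raw.get_data()
data = np.random.randn(4, 1000)            # 4 channels, 1000 samples
windows = make_windows(data, win=200, step=100)
print(windows.shape)  # (9, 4, 200)
```

In practice, braindecode's windowing utilities cover this for EEGDash datasets; the sketch only shows the underlying array manipulation.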
## Dataset Information | Dataset ID | `HAD_MEEG` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HAD_MEEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/had_meeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=had_meeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [had_meeg](https://openneuro.org/datasets/had_meeg) - NeMAR: [had_meeg](https://nemar.org/dataexplorer/detail?dataset_id=had_meeg) ## API Reference Use the `HAD_MEEG` class to access this dataset programmatically. ### eegdash.dataset.HAD_MEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/had_meeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=had_meeg) # HBN_EEG_NC: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_EEG_NC dataset = HBN_EEG_NC(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_EEG_NC(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_EEG_NC( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_eeg_nc, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_EEG_NC` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_EEG_NC` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_eeg_nc) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_eeg_nc) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_eeg_nc](https://openneuro.org/datasets/hbn_eeg_nc) - NeMAR: [hbn_eeg_nc](https://nemar.org/dataexplorer/detail?dataset_id=hbn_eeg_nc) ## API Reference Use the `HBN_EEG_NC` class to access this dataset programmatically. 
### eegdash.dataset.HBN_EEG_NC alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_eeg_nc) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_eeg_nc) # HBN_NoCommercial: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_NoCommercial dataset = HBN_NoCommercial(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_NoCommercial(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_NoCommercial( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_nocommercial, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HBN_NOCOMMERCIAL` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_NOCOMMERCIAL` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_nocommercial) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_nocommercial) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_nocommercial](https://openneuro.org/datasets/hbn_nocommercial) - NeMAR: [hbn_nocommercial](https://nemar.org/dataexplorer/detail?dataset_id=hbn_nocommercial) ## API Reference Use the `HBN_NoCommercial` class to access this dataset programmatically. ### eegdash.dataset.HBN_NoCommercial alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_nocommercial) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_nocommercial) # HBN_r1: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r1 dataset = HBN_r1(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r1(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r1( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r1, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R1` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R1` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r1) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r1](https://openneuro.org/datasets/hbn_r1) - NeMAR: [hbn_r1](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1) ## API Reference Use the `HBN_r1` class to access this dataset programmatically. 
### eegdash.dataset.HBN_r1 alias of [`DS005505`](eegdash.dataset.DS005505.md#eegdash.dataset.DS005505) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r1) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1) # HBN_r10: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r10 dataset = HBN_r10(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r10(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r10( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r10, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HBN_R10` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R10` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r10) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r10](https://openneuro.org/datasets/hbn_r10) - NeMAR: [hbn_r10](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10) ## API Reference Use the `HBN_r10` class to access this dataset programmatically. ### eegdash.dataset.HBN_r10 alias of [`DS005515`](eegdash.dataset.DS005515.md#eegdash.dataset.DS005515) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r10) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10) # HBN_r10_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r10_bdf dataset = HBN_r10_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r10_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r10_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r10_bdf, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R10_BDF` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R10_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r10_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r10_bdf](https://openneuro.org/datasets/hbn_r10_bdf) - NeMAR: [hbn_r10_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf) ## API Reference Use the `HBN_r10_bdf` class to access this dataset programmatically. 
### eegdash.dataset.HBN_r10_bdf alias of [`EEG2025R10`](eegdash.dataset.EEG2025R10.md#eegdash.dataset.EEG2025R10) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r10_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf) # HBN_r10_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r10_bdf_mini dataset = HBN_r10_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r10_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r10_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r10_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R10_BDF_MINI` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R10_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r10_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r10_bdf_mini](https://openneuro.org/datasets/hbn_r10_bdf_mini) - NeMAR: [hbn_r10_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf_mini) ## API Reference Use the `HBN_r10_bdf_mini` class to access this dataset programmatically. ### eegdash.dataset.HBN_r10_bdf_mini alias of [`EEG2025R10MINI`](eegdash.dataset.EEG2025R10MINI.md#eegdash.dataset.EEG2025R10MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r10_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r10_bdf_mini) # HBN_r11: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r11 dataset = HBN_r11(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r11(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r11( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r11, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R11` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R11` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r11) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r11](https://openneuro.org/datasets/hbn_r11) - NeMAR: [hbn_r11](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11) ## API Reference Use the `HBN_r11` class to access this dataset programmatically. 
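Iterating a dataset yields one object per recording, so subject-level bookkeeping is a plain Python exercise. The sketch below groups recordings by subject with only the standard library; the `MockRecording` class and its values are hypothetical stand-ins, while real recordings expose `rec.subject` as shown in the Quickstart.

```python
from collections import defaultdict

# Hypothetical stand-ins for EEGDash recordings, so the sketch runs
# without downloading data. Real recordings expose `.subject` (and
# `.raw`) as in the quickstart above.
class MockRecording:
    def __init__(self, subject, sfreq):
        self.subject = subject
        self.sfreq = sfreq

dataset = [
    MockRecording("01", 500.0),
    MockRecording("01", 500.0),
    MockRecording("02", 250.0),
]

# Group recordings by subject, e.g. before a per-subject train/test split.
by_subject = defaultdict(list)
for rec in dataset:
    by_subject[rec.subject].append(rec)

for subject, recs in sorted(by_subject.items()):
    print(subject, len(recs))  # → "01 2" then "02 1"
```

Grouping by subject before splitting keeps all recordings from one person on the same side of the split, which avoids subject-identity leakage.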
### eegdash.dataset.HBN_r11 alias of [`DS005516`](eegdash.dataset.DS005516.md#eegdash.dataset.DS005516) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r11) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11) # HBN_r11_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r11_bdf dataset = HBN_r11_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r11_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r11_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r11_bdf, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R11_BDF` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R11_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r11_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r11_bdf](https://openneuro.org/datasets/hbn_r11_bdf) - NeMAR: [hbn_r11_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf) ## API Reference Use the `HBN_r11_bdf` class to access this dataset programmatically. ### eegdash.dataset.HBN_r11_bdf alias of [`EEG2025R11`](eegdash.dataset.EEG2025R11.md#eegdash.dataset.EEG2025R11) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r11_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf) # HBN_r11_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r11_bdf_mini dataset = HBN_r11_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r11_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r11_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r11_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R11_BDF_MINI` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R11_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r11_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r11_bdf_mini](https://openneuro.org/datasets/hbn_r11_bdf_mini) - NeMAR: [hbn_r11_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf_mini) ## API Reference Use the 
`HBN_r11_bdf_mini` class to access this dataset programmatically. ### eegdash.dataset.HBN_r11_bdf_mini alias of [`EEG2025R11MINI`](eegdash.dataset.EEG2025R11MINI.md#eegdash.dataset.EEG2025R11MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r11_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r11_bdf_mini) # HBN_r1_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r1_bdf dataset = HBN_r1_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r1_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r1_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r1_bdf, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R1_BDF` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R1_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r1_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r1_bdf](https://openneuro.org/datasets/hbn_r1_bdf) - NeMAR: [hbn_r1_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf) ## API Reference Use the `HBN_r1_bdf` class to access this dataset programmatically. ### eegdash.dataset.HBN_r1_bdf alias of [`EEG2025R1`](eegdash.dataset.EEG2025R1.md#eegdash.dataset.EEG2025R1) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r1_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf) # HBN_r1_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r1_bdf_mini dataset = HBN_r1_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r1_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r1_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r1_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R1_BDF_MINI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R1_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r1_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r1_bdf_mini](https://openneuro.org/datasets/hbn_r1_bdf_mini) - NeMAR: [hbn_r1_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf_mini) ## API Reference Use the `HBN_r1_bdf_mini` class to 
access this dataset programmatically. ### eegdash.dataset.HBN_r1_bdf_mini alias of [`EEG2025R1MINI`](eegdash.dataset.EEG2025R1MINI.md#eegdash.dataset.EEG2025R1MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r1_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r1_bdf_mini) # HBN_r2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r2 dataset = HBN_r2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r2, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R2` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r2](https://openneuro.org/datasets/hbn_r2) - NeMAR: [hbn_r2](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2) ## API Reference Use the `HBN_r2` class to access this dataset programmatically. ### eegdash.dataset.HBN_r2 alias of [`DS005506`](eegdash.dataset.DS005506.md#eegdash.dataset.DS005506) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2) # HBN_r2_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r2_bdf dataset = HBN_r2_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r2_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r2_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r2_bdf, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R2_BDF` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R2_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r2_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r2_bdf](https://openneuro.org/datasets/hbn_r2_bdf) - NeMAR: [hbn_r2_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf) ## API Reference Use the `HBN_r2_bdf` class to access this dataset programmatically. 
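`cache_dir` holds the downloaded files in BIDS layout ("Format: BIDS" above). The sketch below builds and queries a typical BIDS-style EEG path with `pathlib`; the `ds005506` accession (borrowed from a neighboring page as an example) and the exact nesting under `cache_dir` are assumptions for illustration, not a guarantee of what EEGDash writes.

```python
from pathlib import Path
import tempfile

# Sketch of a BIDS-style cache layout. The sub-<label>/eeg/... structure
# follows the BIDS specification; whether EEGDash nests a dataset-level
# folder directly under cache_dir is an assumption.
with tempfile.TemporaryDirectory() as tmp:
    cache_dir = Path(tmp)
    eeg_file = (
        cache_dir
        / "ds005506"                      # example dataset accession
        / "sub-01"                        # subject label
        / "eeg"                           # modality folder
        / "sub-01_task-rest_eeg.bdf"      # hypothetical recording file
    )
    eeg_file.parent.mkdir(parents=True)
    eeg_file.touch()
    # Find all EEG files for subject 01 across cached datasets:
    found = sorted(cache_dir.glob("*/sub-01/eeg/*_eeg.*"))
    print([p.name for p in found])  # → ['sub-01_task-rest_eeg.bdf']
```

Because the layout is plain BIDS on disk, the same cache can be read directly by MNE-BIDS or other BIDS-aware tools without going through EEGDash.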
### eegdash.dataset.HBN_r2_bdf alias of [`EEG2025R2`](eegdash.dataset.EEG2025R2.md#eegdash.dataset.EEG2025R2) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r2_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf) # HBN_r2_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r2_bdf_mini dataset = HBN_r2_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r2_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r2_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r2_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R2_BDF_MINI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R2_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r2_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r2_bdf_mini](https://openneuro.org/datasets/hbn_r2_bdf_mini) - NeMAR: [hbn_r2_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf_mini) ## API Reference Use the `HBN_r2_bdf_mini` class to access this dataset programmatically. ### eegdash.dataset.HBN_r2_bdf_mini alias of [`EEG2025R2MINI`](eegdash.dataset.EEG2025R2MINI.md#eegdash.dataset.EEG2025R2MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r2_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r2_bdf_mini) # HBN_r3: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r3 dataset = HBN_r3(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r3(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r3( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r3, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R3` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R3` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r3) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r3](https://openneuro.org/datasets/hbn_r3) - NeMAR: [hbn_r3](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3) ## API Reference Use the `HBN_r3` class to access this dataset programmatically. 
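Because the sampling rate is listed as "Varies" in the technical details, it is worth checking that recordings share one rate before windowing them into fixed-size training examples. A stdlib-only sketch follows; the `sfreqs` values are hypothetical placeholders, while real ones come from `rec.raw.info['sfreq']` as in the Quickstart.

```python
# Sketch: verify all recordings share one sampling rate before batching.
# These sfreq values are placeholders for illustration; in practice,
# collect them with [rec.raw.info['sfreq'] for rec in dataset].
sfreqs = [500.0, 500.0, 250.0]

unique = set(sfreqs)
if len(unique) > 1:
    # Mixed rates: pick a common target (here the lowest) and resample
    # each recording to it, e.g. with MNE's Raw.resample, before batching.
    target = min(unique)
    print(f"mixed rates {sorted(unique)}; resample all to {target} Hz")
else:
    print(f"uniform rate: {sfreqs[0]} Hz")
```

Resampling to the lowest common rate avoids inventing information by upsampling, at the cost of discarding the higher-frequency content of the faster recordings.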
### eegdash.dataset.HBN_r3 alias of [`DS005507`](eegdash.dataset.DS005507.md#eegdash.dataset.DS005507) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r3) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3) # HBN_r3_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r3_bdf dataset = HBN_r3_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r3_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r3_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r3_bdf, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HBN_R3_BDF` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R3_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r3_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r3_bdf](https://openneuro.org/datasets/hbn_r3_bdf) - NeMAR: [hbn_r3_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf) ## API Reference Use the `HBN_r3_bdf` class to access this dataset programmatically. ### eegdash.dataset.HBN_r3_bdf alias of [`EEG2025R3`](eegdash.dataset.EEG2025R3.md#eegdash.dataset.EEG2025R3) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r3_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf) # HBN_r3_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r3_bdf_mini dataset = HBN_r3_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r3_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r3_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r3_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R3_BDF_MINI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R3_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r3_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r3_bdf_mini](https://openneuro.org/datasets/hbn_r3_bdf_mini) - NeMAR: [hbn_r3_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf_mini) ## API Reference Use the `HBN_r3_bdf_mini` class to 
access this dataset programmatically.

### eegdash.dataset.HBN_r3_bdf_mini

alias of [`EEG2025R3MINI`](eegdash.dataset.EEG2025R3MINI.md#eegdash.dataset.EEG2025R3MINI)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r3_bdf_mini)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r3_bdf_mini)

# HBN_r4: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r4

dataset = HBN_r4(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r4(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r4(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r4,
}
```

## About This Dataset

No README content is available for this dataset.
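The `query` argument shown in the quickstart uses MongoDB-style operators; `{"subject": {"$in": ["01", "02"]}}` keeps only records whose subject is one of the listed values. As a plain-Python illustration of what the `$in` operator means (a sketch of the matching semantics only, not EEGDash's implementation — `matches_in` and the sample records are hypothetical):

```python
# Illustrative sketch of MongoDB-style "$in" matching.
def matches_in(record: dict, field: str, allowed: list) -> bool:
    """Return True if record[field] is one of the allowed values."""
    return record.get(field) in allowed

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r["subject"] for r in records if matches_in(r, "subject", ["01", "02"])]
print(kept)  # ['01', '02']
```

The same shape generalizes to other operators (e.g. `$eq`, `$nin`) applied field by field.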
## Dataset Information

| Dataset ID     | `HBN_R4` |
|----------------|----------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R4` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r4) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r4](https://openneuro.org/datasets/hbn_r4)
- NeMAR: [hbn_r4](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4)

## API Reference

Use the `HBN_r4` class to access this dataset programmatically.

### eegdash.dataset.HBN_r4

alias of [`DS005508`](eegdash.dataset.DS005508.md#eegdash.dataset.DS005508)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r4)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4)

# HBN_r4_bdf: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r4_bdf

dataset = HBN_r4_bdf(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r4_bdf(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r4_bdf(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r4_bdf,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R4_BDF` |
|----------------|--------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R4_BDF` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r4_bdf) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r4_bdf](https://openneuro.org/datasets/hbn_r4_bdf)
- NeMAR: [hbn_r4_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf)

## API Reference

Use the `HBN_r4_bdf` class to access this dataset programmatically.
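These `HBN_*` names are thin aliases of canonical dataset classes. In Python, such an alias is simply a second module-level name bound to the same class object, so the two names are interchangeable. A minimal sketch with hypothetical names (`CanonicalDataset` and `HBN_alias` stand in for a real pair like `DS005508` / `HBN_r4`):

```python
# Hypothetical illustration of how a dataset alias works.
class CanonicalDataset:
    """Stand-in for a canonical dataset class."""
    pass

# The friendly name is just another binding to the same class object.
HBN_alias = CanonicalDataset

print(HBN_alias is CanonicalDataset)  # True
```

Because both names refer to one class, instances constructed through either name have the same type, and `isinstance` checks against either name succeed.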
### eegdash.dataset.HBN_r4_bdf

alias of [`EEG2025R4`](eegdash.dataset.EEG2025R4.md#eegdash.dataset.EEG2025R4)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r4_bdf)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf)

# HBN_r4_bdf_mini: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r4_bdf_mini

dataset = HBN_r4_bdf_mini(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r4_bdf_mini(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r4_bdf_mini(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r4_bdf_mini,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID     | `HBN_R4_BDF_MINI` |
|----------------|-------------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R4_BDF_MINI` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r4_bdf_mini) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf_mini) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r4_bdf_mini](https://openneuro.org/datasets/hbn_r4_bdf_mini)
- NeMAR: [hbn_r4_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf_mini)

## API Reference

Use the `HBN_r4_bdf_mini` class to access this dataset programmatically.

### eegdash.dataset.HBN_r4_bdf_mini

alias of [`EEG2025R4MINI`](eegdash.dataset.EEG2025R4MINI.md#eegdash.dataset.EEG2025R4MINI)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r4_bdf_mini)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r4_bdf_mini)

# HBN_r5: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r5

dataset = HBN_r5(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r5(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r5(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r5,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R5` |
|----------------|----------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R5` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r5) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r5](https://openneuro.org/datasets/hbn_r5)
- NeMAR: [hbn_r5](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5)

## API Reference

Use the `HBN_r5` class to access this dataset programmatically.
### eegdash.dataset.HBN_r5

alias of [`DS005509`](eegdash.dataset.DS005509.md#eegdash.dataset.DS005509)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r5)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5)

# HBN_r5_bdf: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r5_bdf

dataset = HBN_r5_bdf(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r5_bdf(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r5_bdf(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r5_bdf,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID     | `HBN_R5_BDF` |
|----------------|--------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R5_BDF` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r5_bdf) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r5_bdf](https://openneuro.org/datasets/hbn_r5_bdf)
- NeMAR: [hbn_r5_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf)

## API Reference

Use the `HBN_r5_bdf` class to access this dataset programmatically.

### eegdash.dataset.HBN_r5_bdf

alias of [`EEG2025R5`](eegdash.dataset.EEG2025R5.md#eegdash.dataset.EEG2025R5)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r5_bdf)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf)

# HBN_r5_bdf_mini: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r5_bdf_mini

dataset = HBN_r5_bdf_mini(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r5_bdf_mini(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r5_bdf_mini(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r5_bdf_mini,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R5_BDF_MINI` |
|----------------|-------------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R5_BDF_MINI` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r5_bdf_mini) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf_mini) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r5_bdf_mini](https://openneuro.org/datasets/hbn_r5_bdf_mini)
- NeMAR: [hbn_r5_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf_mini)

## API Reference

Use the `HBN_r5_bdf_mini` class to
access this dataset programmatically.

### eegdash.dataset.HBN_r5_bdf_mini

alias of [`EEG2025R5MINI`](eegdash.dataset.EEG2025R5MINI.md#eegdash.dataset.EEG2025R5MINI)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r5_bdf_mini)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r5_bdf_mini)

# HBN_r6: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r6

dataset = HBN_r6(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r6(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r6(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r6,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID     | `HBN_R6` |
|----------------|----------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R6` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r6) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r6](https://openneuro.org/datasets/hbn_r6)
- NeMAR: [hbn_r6](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6)

## API Reference

Use the `HBN_r6` class to access this dataset programmatically.

### eegdash.dataset.HBN_r6

alias of [`DS005510`](eegdash.dataset.DS005510.md#eegdash.dataset.DS005510)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r6)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6)

# HBN_r6_bdf: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r6_bdf

dataset = HBN_r6_bdf(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r6_bdf(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r6_bdf(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r6_bdf,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R6_BDF` |
|----------------|--------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R6_BDF` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r6_bdf) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r6_bdf](https://openneuro.org/datasets/hbn_r6_bdf)
- NeMAR: [hbn_r6_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf)

## API Reference

Use the `HBN_r6_bdf` class to access this dataset programmatically.
### eegdash.dataset.HBN_r6_bdf

alias of [`EEG2025R6`](eegdash.dataset.EEG2025R6.md#eegdash.dataset.EEG2025R6)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r6_bdf)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf)

# HBN_r6_bdf_mini: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r6_bdf_mini

dataset = HBN_r6_bdf_mini(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r6_bdf_mini(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r6_bdf_mini(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r6_bdf_mini,
}
```

## About This Dataset

No README content is available for this dataset.
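The quickstart's `dataset.datasets[0]` and `for rec in dataset` patterns rely on a concat-style container that exposes one sub-dataset per recording. A self-contained sketch of that container pattern (the classes and attribute values here are hypothetical illustrations, not EEGDash internals):

```python
# Hypothetical sketch of a concat-style dataset container.
class Recording:
    """Stand-in for one recording's sub-dataset."""
    def __init__(self, subject: str, sfreq: float):
        self.subject = subject
        self.sfreq = sfreq

class ConcatDataset:
    """Holds one sub-dataset per recording and iterates over them."""
    def __init__(self, recordings):
        self.datasets = recordings
    def __iter__(self):
        return iter(self.datasets)

ds = ConcatDataset([Recording("01", 500.0), Recording("02", 500.0)])
print(ds.datasets[0].subject)  # 01
subjects = [rec.subject for rec in ds]
```

Indexing `ds.datasets` gives positional access to a single recording, while iteration walks every recording in order.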
## Dataset Information

| Dataset ID     | `HBN_R6_BDF_MINI` |
|----------------|-------------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R6_BDF_MINI` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r6_bdf_mini) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf_mini) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r6_bdf_mini](https://openneuro.org/datasets/hbn_r6_bdf_mini)
- NeMAR: [hbn_r6_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf_mini)

## API Reference

Use the `HBN_r6_bdf_mini` class to access this dataset programmatically.

### eegdash.dataset.HBN_r6_bdf_mini

alias of [`EEG2025R6MINI`](eegdash.dataset.EEG2025R6MINI.md#eegdash.dataset.EEG2025R6MINI)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r6_bdf_mini)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r6_bdf_mini)

# HBN_r7_bdf: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r7_bdf

dataset = HBN_r7_bdf(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r7_bdf(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r7_bdf(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r7_bdf,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R7_BDF` |
|----------------|--------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R7_BDF` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r7_bdf) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r7_bdf](https://openneuro.org/datasets/hbn_r7_bdf)
- NeMAR: [hbn_r7_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf)

## API Reference

Use the `HBN_r7_bdf` class to access this dataset programmatically.
### eegdash.dataset.HBN_r7_bdf

alias of [`EEG2025R7`](eegdash.dataset.EEG2025R7.md#eegdash.dataset.EEG2025R7)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r7_bdf)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf)

# HBN_r7_bdf_mini: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r7_bdf_mini

dataset = HBN_r7_bdf_mini(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r7_bdf_mini(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r7_bdf_mini(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r7_bdf_mini,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID     | `HBN_R7_BDF_MINI` |
|----------------|-------------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R7_BDF_MINI` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r7_bdf_mini) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf_mini) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r7_bdf_mini](https://openneuro.org/datasets/hbn_r7_bdf_mini)
- NeMAR: [hbn_r7_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf_mini)

## API Reference

Use the `HBN_r7_bdf_mini` class to access this dataset programmatically.

### eegdash.dataset.HBN_r7_bdf_mini

alias of [`EEG2025R7MINI`](eegdash.dataset.EEG2025R7MINI.md#eegdash.dataset.EEG2025R7MINI)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r7_bdf_mini)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r7_bdf_mini)

# HBN_r8: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r8

dataset = HBN_r8(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r8(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r8(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r8,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID     | `HBN_R8` |
|----------------|----------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R8` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r8) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r8](https://openneuro.org/datasets/hbn_r8)
- NeMAR: [hbn_r8](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8)

## API Reference

Use the `HBN_r8` class to access this dataset programmatically.
### eegdash.dataset.HBN_r8

alias of [`DS005512`](eegdash.dataset.DS005512.md#eegdash.dataset.DS005512)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r8)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8)

# HBN_r8_bdf: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import HBN_r8_bdf

dataset = HBN_r8_bdf(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = HBN_r8_bdf(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = HBN_r8_bdf(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{hbn_r8_bdf,
}
```

## About This Dataset

No README content is available for this dataset.
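Passing the same `cache_dir` across runs avoids repeated downloads: recordings are fetched on first access and reused afterwards. A minimal sketch of that download-once pattern (illustrative only, not EEGDash's actual cache logic; `fetch_once` and the filename are hypothetical):

```python
import os
import tempfile

def fetch_once(cache_dir: str, filename: str) -> str:
    """Return the cached file path, creating it only on the first call."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, filename)
    if not os.path.exists(path):
        # Stand-in for the real remote download.
        with open(path, "w") as f:
            f.write("raw-bytes")
    return path

cache = tempfile.mkdtemp()
p1 = fetch_once(cache, "sub-01_task-rest_eeg.bdf")
p2 = fetch_once(cache, "sub-01_task-rest_eeg.bdf")
print(p1 == p2)  # True: the second call reuses the cached file
```

Pointing `cache_dir` at shared storage lets several scripts reuse one local copy of the data.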
## Dataset Information

| Dataset ID     | `HBN_R8_BDF` |
|----------------|--------------|
| Title          | — |
| Author (year)  | — |
| Canonical      | — |
| Importable as  | `HBN_R8_BDF` |
| Year           | — |
| Authors        | Unknown |
| License        | — |
| Citation / DOI | Unknown |
| Source links   | [OpenNeuro](https://openneuro.org/datasets/hbn_r8_bdf) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [hbn_r8_bdf](https://openneuro.org/datasets/hbn_r8_bdf)
- NeMAR: [hbn_r8_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf)

## API Reference

Use the `HBN_r8_bdf` class to access this dataset programmatically.

### eegdash.dataset.HBN_r8_bdf

alias of [`EEG2025R8`](eegdash.dataset.EEG2025R8.md#eegdash.dataset.EEG2025R8)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r8_bdf)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf)

# HBN_r8_bdf_mini: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r8_bdf_mini dataset = HBN_r8_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r8_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r8_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r8_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R8_BDF_MINI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R8_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r8_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r8_bdf_mini](https://openneuro.org/datasets/hbn_r8_bdf_mini) - NeMAR: [hbn_r8_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf_mini) ## API Reference Use the `HBN_r8_bdf_mini` class to 
access this dataset programmatically. ### eegdash.dataset.HBN_r8_bdf_mini alias of [`EEG2025R8MINI`](eegdash.dataset.EEG2025R8MINI.md#eegdash.dataset.EEG2025R8MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r8_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r8_bdf_mini) # HBN_r9: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r9 dataset = HBN_r9(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r9(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r9( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r9, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HBN_R9` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R9` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r9) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r9](https://openneuro.org/datasets/hbn_r9) - NeMAR: [hbn_r9](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9) ## API Reference Use the `HBN_r9` class to access this dataset programmatically. ### eegdash.dataset.HBN_r9 alias of [`DS005514`](eegdash.dataset.DS005514.md#eegdash.dataset.DS005514) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r9) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9) # HBN_r9_bdf: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r9_bdf dataset = HBN_r9_bdf(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r9_bdf(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r9_bdf( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r9_bdf, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HBN_R9_BDF` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R9_BDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r9_bdf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r9_bdf](https://openneuro.org/datasets/hbn_r9_bdf) - NeMAR: [hbn_r9_bdf](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf) ## API Reference Use the `HBN_r9_bdf` class to access this dataset programmatically. 
### eegdash.dataset.HBN_r9_bdf alias of [`EEG2025R9`](eegdash.dataset.EEG2025R9.md#eegdash.dataset.EEG2025R9) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r9_bdf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf) # HBN_r9_bdf_mini: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HBN_r9_bdf_mini dataset = HBN_r9_bdf_mini(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HBN_r9_bdf_mini(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HBN_r9_bdf_mini( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hbn_r9_bdf_mini, } ``` ## About This Dataset No README content is available for this dataset. 
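The API references on these pages note, for example, that `HBN_r9_bdf` is an alias of `EEG2025R9`. In Python a module-level alias is simply a second name bound to the same class object, so either name constructs the same dataset. A hedged sketch with stand-in classes (the definitions below are illustrative, not the real `eegdash.dataset` code):

```python
class EEG2025R9:
    """Stand-in for a canonical EEGDash dataset class."""
    def __init__(self, cache_dir):
        self.cache_dir = cache_dir

# An alias is just another binding to the same class object,
# which is how eegdash.dataset exposes friendly dataset names.
HBN_r9_bdf = EEG2025R9

assert HBN_r9_bdf is EEG2025R9       # identical class objects
ds = HBN_r9_bdf(cache_dir="./data")  # constructs an EEG2025R9 instance
print(type(ds).__name__)  # EEG2025R9
```

This is why the friendly name and the canonical `DSxxxxxx`/`EEG2025Rx` class are interchangeable in user code.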
## Dataset Information | Dataset ID | `HBN_R9_BDF_MINI` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HBN_R9_BDF_MINI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hbn_r9_bdf_mini) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf_mini) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hbn_r9_bdf_mini](https://openneuro.org/datasets/hbn_r9_bdf_mini) - NeMAR: [hbn_r9_bdf_mini](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf_mini) ## API Reference Use the `HBN_r9_bdf_mini` class to access this dataset programmatically. ### eegdash.dataset.HBN_r9_bdf_mini alias of [`EEG2025R9MINI`](eegdash.dataset.EEG2025R9MINI.md#eegdash.dataset.EEG2025R9MINI) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hbn_r9_bdf_mini) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hbn_r9_bdf_mini) # HEFMIICH: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HEFMIICH dataset = HEFMIICH(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HEFMIICH(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HEFMIICH( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hefmiich, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HEFMIICH` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HEFMIICH` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hefmiich) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hefmiich) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hefmiich](https://openneuro.org/datasets/hefmiich) - NeMAR: [hefmiich](https://nemar.org/dataexplorer/detail?dataset_id=hefmiich) ## API Reference Use the `HEFMIICH` class to access this dataset programmatically. 
### eegdash.dataset.HEFMIICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hefmiich) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hefmiich) # HEFMI_ICH: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HEFMI_ICH dataset = HEFMI_ICH(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HEFMI_ICH(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HEFMI_ICH( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hefmi_ich, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HEFMI_ICH` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HEFMI_ICH` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hefmi_ich) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hefmi_ich) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hefmi_ich](https://openneuro.org/datasets/hefmi_ich) - NeMAR: [hefmi_ich](https://nemar.org/dataexplorer/detail?dataset_id=hefmi_ich) ## API Reference Use the `HEFMI_ICH` class to access this dataset programmatically. ### eegdash.dataset.HEFMI_ICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hefmi_ich) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hefmi_ich) # HID: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HID dataset = HID(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HID(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HID( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hid, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HID` | |----------------|-----------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HID` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hid) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hid) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hid](https://openneuro.org/datasets/hid) - NeMAR: [hid](https://nemar.org/dataexplorer/detail?dataset_id=hid) ## API Reference Use the `HID` class to access this dataset programmatically. 
### eegdash.dataset.HID alias of [`DS004851`](eegdash.dataset.DS004851.md#eegdash.dataset.DS004851) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hid) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hid) # HUPiEEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HUPiEEG dataset = HUPiEEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HUPiEEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HUPiEEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hupieeg, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HUPIEEG` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HUPIEEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hupieeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hupieeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hupieeg](https://openneuro.org/datasets/hupieeg) - NeMAR: [hupieeg](https://nemar.org/dataexplorer/detail?dataset_id=hupieeg) ## API Reference Use the `HUPiEEG` class to access this dataset programmatically. ### eegdash.dataset.HUPiEEG alias of [`DS004100`](eegdash.dataset.DS004100.md#eegdash.dataset.DS004100) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hupieeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hupieeg) # Hatano: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hatano dataset = Hatano(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hatano(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hatano( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hatano, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HATANO` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HATANO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hatano) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hatano) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hatano](https://openneuro.org/datasets/hatano) - NeMAR: [hatano](https://nemar.org/dataexplorer/detail?dataset_id=hatano) ## API Reference Use the `Hatano` class to access this dataset programmatically. 
### eegdash.dataset.Hatano alias of [`DS007118`](eegdash.dataset.DS007118.md#eegdash.dataset.DS007118) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hatano) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hatano) # Haupt2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Haupt2025 dataset = Haupt2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Haupt2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Haupt2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{haupt2025, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HAUPT2025` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HAUPT2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/haupt2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=haupt2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [haupt2025](https://openneuro.org/datasets/haupt2025) - NeMAR: [haupt2025](https://nemar.org/dataexplorer/detail?dataset_id=haupt2025) ## API Reference Use the `Haupt2025` class to access this dataset programmatically. ### eegdash.dataset.Haupt2025 alias of [`DS004951`](eegdash.dataset.DS004951.md#eegdash.dataset.DS004951) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/haupt2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=haupt2025) # HealthyBrainNetwork: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HealthyBrainNetwork dataset = HealthyBrainNetwork(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HealthyBrainNetwork(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HealthyBrainNetwork( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{healthybrainnetwork, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HEALTHYBRAINNETWORK` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HEALTHYBRAINNETWORK` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/healthybrainnetwork) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=healthybrainnetwork) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [healthybrainnetwork](https://openneuro.org/datasets/healthybrainnetwork) - NeMAR: 
[healthybrainnetwork](https://nemar.org/dataexplorer/detail?dataset_id=healthybrainnetwork) ## API Reference Use the `HealthyBrainNetwork` class to access this dataset programmatically. ### eegdash.dataset.HealthyBrainNetwork alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/healthybrainnetwork) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=healthybrainnetwork) # HeartBEAM: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HeartBEAM dataset = HeartBEAM(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HeartBEAM(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HeartBEAM( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{heartbeam, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HEARTBEAM` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HEARTBEAM` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/heartbeam) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=heartbeam) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [heartbeam](https://openneuro.org/datasets/heartbeam) - NeMAR: [heartbeam](https://nemar.org/dataexplorer/detail?dataset_id=heartbeam) ## API Reference Use the `HeartBEAM` class to access this dataset programmatically. ### eegdash.dataset.HeartBEAM alias of [`DS006466`](eegdash.dataset.DS006466.md#eegdash.dataset.DS006466) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/heartbeam) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=heartbeam) # HenaoIsaza2026: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HenaoIsaza2026 dataset = HenaoIsaza2026(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HenaoIsaza2026(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HenaoIsaza2026( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{henaoisaza2026, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HENAOISAZA2026` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HENAOISAZA2026` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/henaoisaza2026) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=henaoisaza2026) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [henaoisaza2026](https://openneuro.org/datasets/henaoisaza2026) - NeMAR: [henaoisaza2026](https://nemar.org/dataexplorer/detail?dataset_id=henaoisaza2026) ## API Reference Use the `HenaoIsaza2026` class to access this 
dataset programmatically. ### eegdash.dataset.HenaoIsaza2026 alias of [`DS007427`](eegdash.dataset.DS007427.md#eegdash.dataset.DS007427) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/henaoisaza2026) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=henaoisaza2026) # Hermann2021: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hermann2021 dataset = Hermann2021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hermann2021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hermann2021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hermann2021, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `HERMANN2021` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HERMANN2021` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hermann2021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hermann2021) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hermann2021](https://openneuro.org/datasets/hermann2021) - NeMAR: [hermann2021](https://nemar.org/dataexplorer/detail?dataset_id=hermann2021) ## API Reference Use the `Hermann2021` class to access this dataset programmatically. ### eegdash.dataset.Hermann2021 alias of [`DS003352`](eegdash.dataset.DS003352.md#eegdash.dataset.DS003352) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hermann2021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hermann2021) # Hermes2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hermes2024 dataset = Hermes2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hermes2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hermes2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hermes2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HERMES2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HERMES2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hermes2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hermes2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hermes2024](https://openneuro.org/datasets/hermes2024) - NeMAR: [hermes2024](https://nemar.org/dataexplorer/detail?dataset_id=hermes2024) ## API Reference Use the `Hermes2024` class to access this dataset programmatically. 
### eegdash.dataset.Hermes2024 alias of [`DS006392`](eegdash.dataset.DS006392.md#eegdash.dataset.DS006392) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hermes2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hermes2024) # Herrema2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Herrema2024 dataset = Herrema2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Herrema2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Herrema2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{herrema2024, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HERREMA2024` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HERREMA2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/herrema2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=herrema2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [herrema2024](https://openneuro.org/datasets/herrema2024) - NeMAR: [herrema2024](https://nemar.org/dataexplorer/detail?dataset_id=herrema2024) ## API Reference Use the `Herrema2024` class to access this dataset programmatically. ### eegdash.dataset.Herrema2024 alias of [`DS005494`](eegdash.dataset.DS005494.md#eegdash.dataset.DS005494) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/herrema2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=herrema2024) # Hinss2021: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hinss2021 dataset = Hinss2021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hinss2021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hinss2021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hinss2021, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HINSS2021` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HINSS2021` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hinss2021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hinss2021](https://openneuro.org/datasets/hinss2021) - NeMAR: [hinss2021](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021) ## API Reference Use the `Hinss2021` class to access this dataset programmatically. 
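Each human-readable dataset class here is a thin alias of an internally generated dataset-ID class (for `Hinss2021`, that is `NM000206`, as the API Reference entry shows). A plain-Python sketch of what such a module-level alias means — the class body below is a hypothetical stand-in, not the real eegdash class:

```python
# Sketch of dataset-class aliasing as exposed by eegdash.dataset.
# "DatasetID" is an illustrative stand-in for a generated class such as
# NM000206; the alias and the original are the very same class object.
class DatasetID:
    """Stand-in for an auto-generated BIDS dataset class."""

Hinss2021 = DatasetID  # module-level alias

# Importing either name yields identical behavior.
print(Hinss2021 is DatasetID)  # → True
```

Because the two names are the same object, instances constructed through either name compare equal in type and share all behavior.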
### eegdash.dataset.Hinss2021 alias of [`NM000206`](eegdash.dataset.NM000206.md#eegdash.dataset.NM000206) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hinss2021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021) # Hinss2021_v2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hinss2021_v2 dataset = Hinss2021_v2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hinss2021_v2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hinss2021_v2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hinss2021_v2, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HINSS2021_V2` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HINSS2021_V2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hinss2021_v2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021_v2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hinss2021_v2](https://openneuro.org/datasets/hinss2021_v2) - NeMAR: [hinss2021_v2](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021_v2) ## API Reference Use the `Hinss2021_v2` class to access this dataset programmatically. ### eegdash.dataset.Hinss2021_v2 alias of [`NM000343`](eegdash.dataset.NM000343.md#eegdash.dataset.NM000343) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hinss2021_v2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hinss2021_v2) # Huang2022: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Huang2022 dataset = Huang2022(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Huang2022(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Huang2022( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{huang2022, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HUANG2022` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HUANG2022` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/huang2022) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=huang2022) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [huang2022](https://openneuro.org/datasets/huang2022) - NeMAR: [huang2022](https://nemar.org/dataexplorer/detail?dataset_id=huang2022) ## API Reference Use the `Huang2022` class to access this dataset programmatically. 
### eegdash.dataset.Huang2022 alias of [`DS004457`](eegdash.dataset.DS004457.md#eegdash.dataset.DS004457) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/huang2022) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=huang2022) # Huebner2017: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Huebner2017 dataset = Huebner2017(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Huebner2017(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Huebner2017( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{huebner2017, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `HUEBNER2017` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HUEBNER2017` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/huebner2017) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=huebner2017) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [huebner2017](https://openneuro.org/datasets/huebner2017) - NeMAR: [huebner2017](https://nemar.org/dataexplorer/detail?dataset_id=huebner2017) ## API Reference Use the `Huebner2017` class to access this dataset programmatically. ### eegdash.dataset.Huebner2017 alias of [`NM000199`](eegdash.dataset.NM000199.md#eegdash.dataset.NM000199) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/huebner2017) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=huebner2017) # Huebner2018: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Huebner2018 dataset = Huebner2018(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Huebner2018(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Huebner2018( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{huebner2018, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HUEBNER2018` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HUEBNER2018` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/huebner2018) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=huebner2018) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [huebner2018](https://openneuro.org/datasets/huebner2018) - NeMAR: [huebner2018](https://nemar.org/dataexplorer/detail?dataset_id=huebner2018) ## API Reference Use the `Huebner2018` class to access this dataset programmatically. 
### eegdash.dataset.Huebner2018 alias of [`NM000195`](eegdash.dataset.NM000195.md#eegdash.dataset.NM000195) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/huebner2018) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=huebner2018) # HySER: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import HySER dataset = HySER(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = HySER(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = HySER( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hyser, } ``` ## About This Dataset No README content is available for this dataset.
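The "Iterate recordings" snippet above prints one line per recording; a common next step is to group what it prints, e.g. collecting the sampling rates seen per subject. A self-contained sketch of that pattern — the `Rec` class below is a stand-in that only mimics the attributes used in the snippet (`rec.subject`, `rec.raw.info['sfreq']`), not the eegdash recording class:

```python
from collections import defaultdict

# Stand-in recording object mimicking the attributes used in the
# "Iterate recordings" example (not the real eegdash class).
class Rec:
    def __init__(self, subject, sfreq):
        self.subject = subject
        self.raw = type("Raw", (), {"info": {"sfreq": sfreq}})()

dataset = [Rec("01", 500.0), Rec("01", 500.0), Rec("02", 1000.0)]

# Collect the distinct sampling rates observed for each subject.
sfreqs = defaultdict(set)
for rec in dataset:
    sfreqs[rec.subject].add(rec.raw.info["sfreq"])

print(dict(sfreqs))  # → {'01': {500.0}, '02': {1000.0}}
```

With a real `HySER(cache_dir=...)` dataset the same loop applies unchanged, since each item exposes `subject` and a `raw` object with an `info` mapping.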
## Dataset Information | Dataset ID | `HYSER` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HYSER` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hyser) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hyser) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hyser](https://openneuro.org/datasets/hyser) - NeMAR: [hyser](https://nemar.org/dataexplorer/detail?dataset_id=hyser) ## API Reference Use the `HySER` class to access this dataset programmatically. ### eegdash.dataset.HySER alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hyser) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hyser) # Hyser: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Hyser dataset = Hyser(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Hyser(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Hyser( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{hyser, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `HYSER` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `HYSER` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/hyser) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=hyser) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [hyser](https://openneuro.org/datasets/hyser) - NeMAR: [hyser](https://nemar.org/dataexplorer/detail?dataset_id=hyser) ## API Reference Use the `Hyser` class to access this dataset programmatically. 
### eegdash.dataset.Hyser alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/hyser) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=hyser) # IACKD: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import IACKD dataset = IACKD(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = IACKD(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = IACKD( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{iackd, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `IACKD` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `IACKD` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/iackd) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=iackd) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [iackd](https://openneuro.org/datasets/iackd) - NeMAR: [iackd](https://nemar.org/dataexplorer/detail?dataset_id=iackd) ## API Reference Use the `IACKD` class to access this dataset programmatically. ### eegdash.dataset.IACKD alias of [`DS006840`](eegdash.dataset.DS006840.md#eegdash.dataset.DS006840) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/iackd) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=iackd) # Jao2020: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Jao2020 dataset = Jao2020(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Jao2020(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Jao2020( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{jao2020, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `JAO2020` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `JAO2020` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/jao2020) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=jao2020) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [jao2020](https://openneuro.org/datasets/jao2020) - NeMAR: [jao2020](https://nemar.org/dataexplorer/detail?dataset_id=jao2020) ## API Reference Use the `Jao2020` class to access this dataset programmatically. 
### eegdash.dataset.Jao2020 alias of [`NM000249`](eegdash.dataset.NM000249.md#eegdash.dataset.NM000249) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/jao2020) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=jao2020) # Johnson2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Johnson2024 dataset = Johnson2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Johnson2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Johnson2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{johnson2024, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `JOHNSON2024` | |----------------|-----------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `JOHNSON2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/johnson2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=johnson2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [johnson2024](https://openneuro.org/datasets/johnson2024) - NeMAR: [johnson2024](https://nemar.org/dataexplorer/detail?dataset_id=johnson2024) ## API Reference Use the `Johnson2024` class to access this dataset programmatically. ### eegdash.dataset.Johnson2024 alias of [`DS004850`](eegdash.dataset.DS004850.md#eegdash.dataset.DS004850) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/johnson2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=johnson2024) # Johnson2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Johnson2025 dataset = Johnson2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Johnson2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Johnson2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{johnson2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `JOHNSON2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `JOHNSON2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/johnson2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=johnson2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [johnson2025](https://openneuro.org/datasets/johnson2025) - NeMAR: [johnson2025](https://nemar.org/dataexplorer/detail?dataset_id=johnson2025) ## API Reference Use the `Johnson2025` class to access this dataset programmatically. 
### eegdash.dataset.Johnson2025 alias of [`DS004852`](eegdash.dataset.DS004852.md#eegdash.dataset.DS004852) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/johnson2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=johnson2025) # Kajikawa2000: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Kajikawa2000 dataset = Kajikawa2000(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Kajikawa2000(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Kajikawa2000( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{kajikawa2000, } ``` ## About This Dataset No README content is available for this dataset. 
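The `query` argument shown in the Quickstart sections uses MongoDB-style operators such as `$in`. As an illustration only (a minimal hypothetical re-implementation, not EEGDash's actual query engine, and `records` is made-up sample data), the matching semantics can be sketched in plain Python:

```python
# Sketch of how a MongoDB-style {"field": {"$in": [...]}} clause selects
# records. Illustrative only -- EEGDash evaluates real queries against its
# metadata database; `matches` and `records` here are hypothetical.

def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every clause in `query`."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            # Membership clause: value must be one of the listed options.
            if value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain clause: exact equality.
            return False
    return True

records = [
    {"subject": "01", "task": "rest"},
    {"subject": "02", "task": "rest"},
    {"subject": "03", "task": "rest"},
]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

The same `$in` dictionary passed as `query=` in the dataset constructors above expresses exactly this kind of membership filter.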
## Dataset Information

| Dataset ID | `KAJIKAWA2000` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KAJIKAWA2000` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kajikawa2000) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kajikawa2000) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kajikawa2000](https://openneuro.org/datasets/kajikawa2000)
- NeMAR: [kajikawa2000](https://nemar.org/dataexplorer/detail?dataset_id=kajikawa2000)

## API Reference

Use the `Kajikawa2000` class to access this dataset programmatically.

### eegdash.dataset.Kajikawa2000

alias of [`DS007028`](eegdash.dataset.DS007028.md#eegdash.dataset.DS007028)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kajikawa2000)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kajikawa2000)

# Kalenkovich2019: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kalenkovich2019

dataset = Kalenkovich2019(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kalenkovich2019(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kalenkovich2019(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kalenkovich2019,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `KALENKOVICH2019` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KALENKOVICH2019` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kalenkovich2019) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kalenkovich2019) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kalenkovich2019](https://openneuro.org/datasets/kalenkovich2019)
- NeMAR: [kalenkovich2019](https://nemar.org/dataexplorer/detail?dataset_id=kalenkovich2019)

## API Reference

Use the `Kalenkovich2019` class to access this dataset programmatically.

### eegdash.dataset.Kalenkovich2019

alias of [`DS003703`](eegdash.dataset.DS003703.md#eegdash.dataset.DS003703)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kalenkovich2019)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kalenkovich2019)

# Kanno2025: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kanno2025

dataset = Kanno2025(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kanno2025(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kanno2025(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kanno2025,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `KANNO2025` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KANNO2025` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kanno2025) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kanno2025) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kanno2025](https://openneuro.org/datasets/kanno2025)
- NeMAR: [kanno2025](https://nemar.org/dataexplorer/detail?dataset_id=kanno2025)

## API Reference

Use the `Kanno2025` class to access this dataset programmatically.

### eegdash.dataset.Kanno2025

alias of [`DS005545`](eegdash.dataset.DS005545.md#eegdash.dataset.DS005545)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kanno2025)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kanno2025)

# Kekecs2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kekecs2024

dataset = Kekecs2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kekecs2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kekecs2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kekecs2024,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `KEKECS2024` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KEKECS2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kekecs2024) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kekecs2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kekecs2024](https://openneuro.org/datasets/kekecs2024)
- NeMAR: [kekecs2024](https://nemar.org/dataexplorer/detail?dataset_id=kekecs2024)

## API Reference

Use the `Kekecs2024` class to access this dataset programmatically.

### eegdash.dataset.Kekecs2024

alias of [`DS004572`](eegdash.dataset.DS004572.md#eegdash.dataset.DS004572)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kekecs2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kekecs2024)

# Kidder2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kidder2024

dataset = Kidder2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kidder2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kidder2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kidder2024,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `KIDDER2024` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KIDDER2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kidder2024) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kidder2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kidder2024](https://openneuro.org/datasets/kidder2024)
- NeMAR: [kidder2024](https://nemar.org/dataexplorer/detail?dataset_id=kidder2024)

## API Reference

Use the `Kidder2024` class to access this dataset programmatically.

### eegdash.dataset.Kidder2024

alias of [`DS004278`](eegdash.dataset.DS004278.md#eegdash.dataset.DS004278)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kidder2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kidder2024)

# Kim2025: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kim2025

dataset = Kim2025(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kim2025(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kim2025(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kim2025,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `KIM2025` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KIM2025` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kim2025) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kim2025) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kim2025](https://openneuro.org/datasets/kim2025)
- NeMAR: [kim2025](https://nemar.org/dataexplorer/detail?dataset_id=kim2025)

## API Reference

Use the `Kim2025` class to access this dataset programmatically.

### eegdash.dataset.Kim2025

alias of [`NM000127`](eegdash.dataset.NM000127.md#eegdash.dataset.NM000127)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kim2025)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kim2025)

# Kinley2019: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kinley2019

dataset = Kinley2019(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kinley2019(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kinley2019(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kinley2019,
}
```

## About This Dataset

No README content is available for this dataset.
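The "Iterate recordings" snippets above loop over a dataset's recordings and read per-recording attributes such as `rec.subject`. A common next step is summarizing what the loop yields, e.g. counting recordings per subject. The sketch below shows that pattern in a self-contained way; `FakeRec` is a hypothetical stand-in so the example runs without downloading data (a real run would iterate an EEGDash dataset instead):

```python
# Sketch: tallying recordings per subject while iterating, as in the
# "Iterate recordings" snippets. `FakeRec` is a stand-in for a recording
# object; only the `subject` attribute is used here.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FakeRec:
    subject: str

dataset = [FakeRec("01"), FakeRec("01"), FakeRec("02")]

# Count how many recordings each subject contributed.
per_subject = Counter(rec.subject for rec in dataset)
print(dict(per_subject))  # {'01': 2, '02': 1}
```

The same comprehension works unchanged on a real dataset object, since each iterated recording exposes a `subject` attribute.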
## Dataset Information

| Dataset ID | `KINLEY2019` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KINLEY2019` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kinley2019) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kinley2019) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kinley2019](https://openneuro.org/datasets/kinley2019)
- NeMAR: [kinley2019](https://nemar.org/dataexplorer/detail?dataset_id=kinley2019)

## API Reference

Use the `Kinley2019` class to access this dataset programmatically.

### eegdash.dataset.Kinley2019

alias of [`DS006446`](eegdash.dataset.DS006446.md#eegdash.dataset.DS006446)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kinley2019)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kinley2019)

# Kitazawa2025: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kitazawa2025

dataset = Kitazawa2025(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kitazawa2025(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kitazawa2025(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kitazawa2025,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `KITAZAWA2025` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KITAZAWA2025` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kitazawa2025) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kitazawa2025) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kitazawa2025](https://openneuro.org/datasets/kitazawa2025)
- NeMAR: [kitazawa2025](https://nemar.org/dataexplorer/detail?dataset_id=kitazawa2025)

## API Reference

Use the `Kitazawa2025` class to access this dataset programmatically.

### eegdash.dataset.Kitazawa2025

alias of [`DS005007`](eegdash.dataset.DS005007.md#eegdash.dataset.DS005007)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kitazawa2025)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kitazawa2025)

# Kucyi2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kucyi2024

dataset = Kucyi2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kucyi2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kucyi2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kucyi2024,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `KUCYI2024` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KUCYI2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kucyi2024) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kucyi2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kucyi2024](https://openneuro.org/datasets/kucyi2024)
- NeMAR: [kucyi2024](https://nemar.org/dataexplorer/detail?dataset_id=kucyi2024)

## API Reference

Use the `Kucyi2024` class to access this dataset programmatically.

### eegdash.dataset.Kucyi2024

alias of [`DS007216`](eegdash.dataset.DS007216.md#eegdash.dataset.DS007216)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kucyi2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kucyi2024)

# Kuroda2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Kuroda2024

dataset = Kuroda2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Kuroda2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Kuroda2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{kuroda2024,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `KURODA2024` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `KURODA2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/kuroda2024) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=kuroda2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [kuroda2024](https://openneuro.org/datasets/kuroda2024)
- NeMAR: [kuroda2024](https://nemar.org/dataexplorer/detail?dataset_id=kuroda2024)

## API Reference

Use the `Kuroda2024` class to access this dataset programmatically.

### eegdash.dataset.Kuroda2024

alias of [`DS006107`](eegdash.dataset.DS006107.md#eegdash.dataset.DS006107)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/kuroda2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=kuroda2024)

# LEMON: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import LEMON

dataset = LEMON(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = LEMON(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = LEMON(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{lemon,
}
```

## About This Dataset

No README content is available for this dataset.
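The "alias of" entries in the API Reference sections mean the readable dataset name and the accession-ID class are the same object: importing either gives the same class. As a minimal sketch of that pattern (illustrative classes only, not the eegdash source):

```python
# Sketch of the class-alias pattern used for dataset names: a readable name
# is bound directly to the canonical accession-ID class, so both names
# construct the same dataset. Hypothetical classes for illustration only.

class DS006107:
    """Canonical dataset class keyed by its OpenNeuro accession ID."""
    dataset_id = "ds006107"

# The readable name is a plain alias, not a subclass or wrapper:
Kuroda2024 = DS006107

print(Kuroda2024 is DS006107)  # True
print(Kuroda2024.dataset_id)   # ds006107
```

Because the alias is the identical class, isinstance checks, cached downloads, and documentation cross-references all agree regardless of which name was imported.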
## Dataset Information

| Dataset ID | `LEMON` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `LEMON` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/lemon) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=lemon) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [lemon](https://openneuro.org/datasets/lemon)
- NeMAR: [lemon](https://nemar.org/dataexplorer/detail?dataset_id=lemon)

## API Reference

Use the `LEMON` class to access this dataset programmatically.

### eegdash.dataset.LEMON

alias of [`NM000179`](eegdash.dataset.NM000179.md#eegdash.dataset.NM000179)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/lemon)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=lemon)

# LPP: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import LPP

dataset = LPP(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = LPP(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = LPP(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{lpp,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `LPP` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `LPP` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/lpp) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=lpp) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [lpp](https://openneuro.org/datasets/lpp)
- NeMAR: [lpp](https://nemar.org/dataexplorer/detail?dataset_id=lpp)

## API Reference

Use the `LPP` class to access this dataset programmatically.

### eegdash.dataset.LPP

alias of [`DS005345`](eegdash.dataset.DS005345.md#eegdash.dataset.DS005345)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/lpp)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=lpp)

# LeganesFonteneau2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import LeganesFonteneau2024

dataset = LeganesFonteneau2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = LeganesFonteneau2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = LeganesFonteneau2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{leganesfonteneau2024,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information | Dataset ID | `LEGANESFONTENEAU2024` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LEGANESFONTENEAU2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/leganesfonteneau2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=leganesfonteneau2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [leganesfonteneau2024](https://openneuro.org/datasets/leganesfonteneau2024) - NeMAR: [leganesfonteneau2024](https://nemar.org/dataexplorer/detail?dataset_id=leganesfonteneau2024) ## API Reference Use the `LeganesFonteneau2024` class to access this dataset programmatically. ### eegdash.dataset.LeganesFonteneau2024 alias of [`DS006159`](eegdash.dataset.DS006159.md#eegdash.dataset.DS006159) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/leganesfonteneau2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=leganesfonteneau2024) # Lin2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Lin2019 dataset = Lin2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Lin2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Lin2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{lin2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `LIN2019` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LIN2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/lin2019) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=lin2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [lin2019](https://openneuro.org/datasets/lin2019) - NeMAR: [lin2019](https://nemar.org/dataexplorer/detail?dataset_id=lin2019) ## API Reference Use the `Lin2019` class to access this dataset programmatically.
### eegdash.dataset.Lin2019 alias of [`DS006035`](eegdash.dataset.DS006035.md#eegdash.dataset.DS006035) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/lin2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=lin2019) # LittlePrince: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import LittlePrince dataset = LittlePrince(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = LittlePrince(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = LittlePrince( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{littleprince, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `LITTLEPRINCE` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LITTLEPRINCE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/littleprince) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=littleprince) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [littleprince](https://openneuro.org/datasets/littleprince) - NeMAR: [littleprince](https://nemar.org/dataexplorer/detail?dataset_id=littleprince) ## API Reference Use the `LittlePrince` class to access this dataset programmatically. ### eegdash.dataset.LittlePrince alias of [`DS007524`](eegdash.dataset.DS007524.md#eegdash.dataset.DS007524) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/littleprince) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=littleprince) # Liu2022EldBETA: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Liu2022EldBETA dataset = Liu2022EldBETA(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Liu2022EldBETA(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Liu2022EldBETA( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{liu2022eldbeta, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `LIU2022ELDBETA` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LIU2022ELDBETA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/liu2022eldbeta) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=liu2022eldbeta) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [liu2022eldbeta](https://openneuro.org/datasets/liu2022eldbeta) - NeMAR: [liu2022eldbeta](https://nemar.org/dataexplorer/detail?dataset_id=liu2022eldbeta) ## API Reference Use the `Liu2022EldBETA` class to access this
dataset programmatically. ### eegdash.dataset.Liu2022EldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/liu2022eldbeta) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=liu2022eldbeta) # Lowe2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Lowe2025 dataset = Lowe2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Lowe2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Lowe2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{lowe2025, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `LOWE2025` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LOWE2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/lowe2025) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=lowe2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [lowe2025](https://openneuro.org/datasets/lowe2025) - NeMAR: [lowe2025](https://nemar.org/dataexplorer/detail?dataset_id=lowe2025) ## API Reference Use the `Lowe2025` class to access this dataset programmatically. ### eegdash.dataset.Lowe2025 alias of [`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/lowe2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=lowe2025) # Luke2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Luke2019 dataset = Luke2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Luke2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Luke2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{luke2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `LUKE2019` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `LUKE2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/luke2019) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=luke2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [luke2019](https://openneuro.org/datasets/luke2019) - NeMAR: [luke2019](https://nemar.org/dataexplorer/detail?dataset_id=luke2019) ## API Reference Use the `Luke2019` class to access this dataset programmatically.
### eegdash.dataset.Luke2019 alias of [`DS005964`](eegdash.dataset.DS005964.md#eegdash.dataset.DS005964) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/luke2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=luke2019) # MAMEM2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MAMEM2 dataset = MAMEM2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MAMEM2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MAMEM2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mamem2, } ``` ## About This Dataset No README content is available for this dataset.
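To make the `$in` filter in the quickstart concrete, here is a tiny stand-alone illustration of its matching semantics over record-metadata dictionaries. This mimics the query style only; it is not EEGDash's actual implementation, which matches against its metadata database:

```python
# Illustrate MongoDB-style `$in` matching over metadata records.
records = [
    {"subject": "01", "dataset": "mamem2"},
    {"subject": "02", "dataset": "mamem2"},
    {"subject": "03", "dataset": "mamem2"},
]

def matches(record, query):
    """Return True if `record` satisfies a flat query of the form
    {field: value} or {field: {"$in": [...]}}."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

hits = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in hits])  # ['01', '02']
```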
## Dataset Information | Dataset ID | `MAMEM2` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MAMEM2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mamem2) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mamem2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mamem2](https://openneuro.org/datasets/mamem2) - NeMAR: [mamem2](https://nemar.org/dataexplorer/detail?dataset_id=mamem2) ## API Reference Use the `MAMEM2` class to access this dataset programmatically. ### eegdash.dataset.MAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mamem2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mamem2) # MAMEM2_SSVEP: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MAMEM2_SSVEP dataset = MAMEM2_SSVEP(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MAMEM2_SSVEP(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MAMEM2_SSVEP( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mamem2_ssvep, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MAMEM2_SSVEP` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MAMEM2_SSVEP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mamem2_ssvep) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mamem2_ssvep) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mamem2_ssvep](https://openneuro.org/datasets/mamem2_ssvep) - NeMAR: [mamem2_ssvep](https://nemar.org/dataexplorer/detail?dataset_id=mamem2_ssvep) ## API Reference Use the `MAMEM2_SSVEP` class to access this dataset programmatically.
### eegdash.dataset.MAMEM2_SSVEP alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mamem2_ssvep) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mamem2_ssvep) # MAMEM3: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MAMEM3 dataset = MAMEM3(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MAMEM3(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MAMEM3( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mamem3, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MAMEM3` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MAMEM3` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mamem3) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mamem3) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mamem3](https://openneuro.org/datasets/mamem3) - NeMAR: [mamem3](https://nemar.org/dataexplorer/detail?dataset_id=mamem3) ## API Reference Use the `MAMEM3` class to access this dataset programmatically. ### eegdash.dataset.MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mamem3) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mamem3) # MASC_MEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MASC_MEG dataset = MASC_MEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MASC_MEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MASC_MEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{masc_meg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MASC_MEG` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MASC_MEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/masc_meg) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=masc_meg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [masc_meg](https://openneuro.org/datasets/masc_meg) - NeMAR: [masc_meg](https://nemar.org/dataexplorer/detail?dataset_id=masc_meg) ## API Reference Use the `MASC_MEG` class to access this dataset programmatically.
### eegdash.dataset.MASC_MEG alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/masc_meg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=masc_meg) # MAVIS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MAVIS dataset = MAVIS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MAVIS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MAVIS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mavis, } ``` ## About This Dataset No README content is available for this dataset.
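Because these datasets are BIDS-formatted, entities such as subject and task are encoded directly in the filenames downloaded under `cache_dir`. A minimal sketch of decoding them follows; it handles only the simple `key-value` underscore convention, and real projects should prefer a dedicated BIDS library such as `mne-bids`:

```python
# Parse BIDS entities from a filename, e.g. "sub-01_task-rest_eeg.set".
# Minimal illustration of the key-value underscore convention only.

def parse_bids_entities(filename):
    stem = filename.rsplit(".", 1)[0]   # drop the extension
    parts = stem.split("_")
    entities = {}
    for part in parts[:-1]:             # last part is the suffix, e.g. "eeg"
        if "-" in part:
            key, _, value = part.partition("-")
            entities[key] = value
    entities["suffix"] = parts[-1]
    return entities

print(parse_bids_entities("sub-01_task-rest_eeg.set"))
# {'sub': '01', 'task': 'rest', 'suffix': 'eeg'}
```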
## Dataset Information | Dataset ID | `MAVIS` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MAVIS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mavis) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mavis) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mavis](https://openneuro.org/datasets/mavis) - NeMAR: [mavis](https://nemar.org/dataexplorer/detail?dataset_id=mavis) ## API Reference Use the `MAVIS` class to access this dataset programmatically. ### eegdash.dataset.MAVIS alias of [`DS004010`](eegdash.dataset.DS004010.md#eegdash.dataset.DS004010) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mavis) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mavis) # MEGMEM: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MEGMEM dataset = MEGMEM(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MEGMEM(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MEGMEM( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{megmem, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MEGMEM` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MEGMEM` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/megmem) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=megmem) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [megmem](https://openneuro.org/datasets/megmem) - NeMAR: [megmem](https://nemar.org/dataexplorer/detail?dataset_id=megmem) ## API Reference Use the `MEGMEM` class to access this dataset programmatically.
### eegdash.dataset.MEGMEM alias of [`DS003694`](eegdash.dataset.DS003694.md#eegdash.dataset.DS003694) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/megmem) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=megmem) # MEG_MASC: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MEG_MASC dataset = MEG_MASC(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MEG_MASC(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MEG_MASC( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{meg_masc, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MEG_MASC` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MEG_MASC` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/meg_masc) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=meg_masc) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [meg_masc](https://openneuro.org/datasets/meg_masc) - NeMAR: [meg_masc](https://nemar.org/dataexplorer/detail?dataset_id=meg_masc) ## API Reference Use the `MEG_MASC` class to access this dataset programmatically. ### eegdash.dataset.MEG_MASC alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/meg_masc) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=meg_masc) # MEG_SCANS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MEG_SCANS dataset = MEG_SCANS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MEG_SCANS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MEG_SCANS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{meg_scans, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MEG_SCANS` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MEG_SCANS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/meg_scans) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=meg_scans) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [meg_scans](https://openneuro.org/datasets/meg_scans) - NeMAR: [meg_scans](https://nemar.org/dataexplorer/detail?dataset_id=meg_scans) ## API Reference Use the `MEG_SCANS` class to access this dataset programmatically.
### eegdash.dataset.MEG_SCANS alias of [`DS006468`](eegdash.dataset.DS006468.md#eegdash.dataset.DS006468) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/meg_scans) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=meg_scans) # MNESomato: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MNESomato dataset = MNESomato(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MNESomato(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MNESomato( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mnesomato, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MNESOMATO` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MNESOMATO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mnesomato) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mnesomato) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mnesomato](https://openneuro.org/datasets/mnesomato) - NeMAR: [mnesomato](https://nemar.org/dataexplorer/detail?dataset_id=mnesomato) ## API Reference Use the `MNESomato` class to access this dataset programmatically. ### eegdash.dataset.MNESomato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mnesomato) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mnesomato) # MNESomatoData: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MNESomatoData dataset = MNESomatoData(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MNESomatoData(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MNESomatoData( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mnesomatodata, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MNESOMATODATA` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MNESOMATODATA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mnesomatodata) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mnesomatodata) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mnesomatodata](https://openneuro.org/datasets/mnesomatodata) - NeMAR: [mnesomatodata](https://nemar.org/dataexplorer/detail?dataset_id=mnesomatodata) ## API Reference Use the `MNESomatoData` class to access this dataset 
programmatically. ### eegdash.dataset.MNESomatoData alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mnesomatodata) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mnesomatodata) # MNE_Sample_Data: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MNE_Sample_Data dataset = MNE_Sample_Data(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MNE_Sample_Data(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MNE_Sample_Data( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mne_sample_data, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MNE_SAMPLE_DATA` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MNE_SAMPLE_DATA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mne_sample_data) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mne_sample_data) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mne_sample_data](https://openneuro.org/datasets/mne_sample_data) - NeMAR: [mne_sample_data](https://nemar.org/dataexplorer/detail?dataset_id=mne_sample_data) ## API Reference Use the `MNE_Sample_Data` class to access this dataset programmatically. ### eegdash.dataset.MNE_Sample_Data alias of [`DS000248`](eegdash.dataset.DS000248.md#eegdash.dataset.DS000248) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mne_sample_data) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mne_sample_data) # MSSV: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MSSV dataset = MSSV(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MSSV(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MSSV( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mssv, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MSSV` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MSSV` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mssv) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mssv) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mssv](https://openneuro.org/datasets/mssv) - NeMAR: [mssv](https://nemar.org/dataexplorer/detail?dataset_id=mssv) ## API Reference Use the `MSSV` class to access this dataset programmatically. 
### eegdash.dataset.MSSV alias of [`DS006366`](eegdash.dataset.DS006366.md#eegdash.dataset.DS006366) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mssv) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mssv) # MUSING: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MUSING dataset = MUSING(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MUSING(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MUSING( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{musing, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MUSING` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MUSING` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/musing) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=musing) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [musing](https://openneuro.org/datasets/musing) - NeMAR: [musing](https://nemar.org/dataexplorer/detail?dataset_id=musing) ## API Reference Use the `MUSING` class to access this dataset programmatically. ### eegdash.dataset.MUSING alias of [`DS003774`](eegdash.dataset.DS003774.md#eegdash.dataset.DS003774) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/musing) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=musing) # Maestu2021: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Maestu2021 dataset = Maestu2021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Maestu2021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Maestu2021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{maestu2021, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MAESTU2021` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MAESTU2021` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/maestu2021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=maestu2021) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [maestu2021](https://openneuro.org/datasets/maestu2021) - NeMAR: [maestu2021](https://nemar.org/dataexplorer/detail?dataset_id=maestu2021) ## API Reference Use the `Maestu2021` class to access this dataset programmatically. 
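The iterate-recordings pattern above prints each recording's sampling rate; before epoching across recordings it can be useful to check whether those rates actually agree. A small sketch (the rate values below are made up for illustration, since this page lists `Sampling rate (Hz): Varies`):

```python
from collections import Counter

def sfreq_summary(sfreqs):
    """Count recordings per sampling rate, most common first.

    In practice the input would come from
    [rec.raw.info["sfreq"] for rec in dataset]; illustrative
    values are used here.
    """
    return Counter(sfreqs).most_common()

print(sfreq_summary([1000.0, 1000.0, 500.0]))
```

A single entry in the result means every recording shares one rate; multiple entries flag a mixed-rate dataset that needs resampling first.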
### eegdash.dataset.Maestu2021 alias of [`DS003483`](eegdash.dataset.DS003483.md#eegdash.dataset.DS003483) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/maestu2021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=maestu2021) # Martzoukou2024_Post: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Martzoukou2024_Post dataset = Martzoukou2024_Post(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Martzoukou2024_Post(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Martzoukou2024_Post( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{martzoukou2024_post, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MARTZOUKOU2024_POST` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MARTZOUKOU2024_POST` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/martzoukou2024_post) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [martzoukou2024_post](https://openneuro.org/datasets/martzoukou2024_post) - NeMAR: [martzoukou2024_post](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post) ## API Reference Use the `Martzoukou2024_Post` class to access this dataset programmatically. ### eegdash.dataset.Martzoukou2024_Post alias of [`DS007314`](eegdash.dataset.DS007314.md#eegdash.dataset.DS007314) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/martzoukou2024_post) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post) # Martzoukou2024_Post_A: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Martzoukou2024_Post_A dataset = Martzoukou2024_Post_A(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Martzoukou2024_Post_A(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Martzoukou2024_Post_A( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{martzoukou2024_post_a, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MARTZOUKOU2024_POST_A` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MARTZOUKOU2024_POST_A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/martzoukou2024_post_a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post_a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [martzoukou2024_post_a](https://openneuro.org/datasets/martzoukou2024_post_a) - NeMAR: 
[martzoukou2024_post_a](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post_a) ## API Reference Use the `Martzoukou2024_Post_A` class to access this dataset programmatically. ### eegdash.dataset.Martzoukou2024_Post_A alias of [`DS007315`](eegdash.dataset.DS007315.md#eegdash.dataset.DS007315) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/martzoukou2024_post_a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=martzoukou2024_post_a) # Melcon2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Melcon2024 dataset = Melcon2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Melcon2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Melcon2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{melcon2024, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MELCON2024` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MELCON2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/melcon2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=melcon2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [melcon2024](https://openneuro.org/datasets/melcon2024) - NeMAR: [melcon2024](https://nemar.org/dataexplorer/detail?dataset_id=melcon2024) ## API Reference Use the `Melcon2024` class to access this dataset programmatically. ### eegdash.dataset.Melcon2024 alias of [`DS006171`](eegdash.dataset.DS006171.md#eegdash.dataset.DS006171) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/melcon2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=melcon2024) # Mendola2020: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mendola2020 dataset = Mendola2020(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mendola2020(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mendola2020( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mendola2020, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MENDOLA2020` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MENDOLA2020` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mendola2020) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mendola2020) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mendola2020](https://openneuro.org/datasets/mendola2020) - NeMAR: [mendola2020](https://nemar.org/dataexplorer/detail?dataset_id=mendola2020) ## API Reference Use the `Mendola2020` class to access this dataset programmatically. 
### eegdash.dataset.Mendola2020 alias of [`DS002001`](eegdash.dataset.DS002001.md#eegdash.dataset.DS002001) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mendola2020) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mendola2020) # Mesquita2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mesquita2019 dataset = Mesquita2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mesquita2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mesquita2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mesquita2019, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MESQUITA2019` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MESQUITA2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mesquita2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mesquita2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mesquita2019](https://openneuro.org/datasets/mesquita2019) - NeMAR: [mesquita2019](https://nemar.org/dataexplorer/detail?dataset_id=mesquita2019) ## API Reference Use the `Mesquita2019` class to access this dataset programmatically. ### eegdash.dataset.Mesquita2019 alias of [`DS005963`](eegdash.dataset.DS005963.md#eegdash.dataset.DS005963) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mesquita2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mesquita2019) # MetaRDK: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import MetaRDK dataset = MetaRDK(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = MetaRDK(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = MetaRDK( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{metardk, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `METARDK` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `METARDK` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/metardk) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=metardk) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [metardk](https://openneuro.org/datasets/metardk) - NeMAR: [metardk](https://nemar.org/dataexplorer/detail?dataset_id=metardk) ## API Reference Use the `MetaRDK` class to access this dataset programmatically. 
### eegdash.dataset.MetaRDK alias of [`DS006253`](eegdash.dataset.DS006253.md#eegdash.dataset.DS006253) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/metardk) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=metardk) # Mheich2020: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mheich2020 dataset = Mheich2020(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mheich2020(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mheich2020( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mheich2020, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `MHEICH2020` | |---|---| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MHEICH2020` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mheich2020) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mheich2020) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mheich2020](https://openneuro.org/datasets/mheich2020) - NeMAR: [mheich2020](https://nemar.org/dataexplorer/detail?dataset_id=mheich2020) ## API Reference Use the `Mheich2020` class to access this dataset programmatically. ### eegdash.dataset.Mheich2020 alias of [`DS002791`](eegdash.dataset.DS002791.md#eegdash.dataset.DS002791) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mheich2020) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mheich2020) # Mheich2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mheich2024 dataset = Mheich2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mheich2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mheich2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mheich2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MHEICH2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MHEICH2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mheich2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mheich2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mheich2024](https://openneuro.org/datasets/mheich2024) - NeMAR: [mheich2024](https://nemar.org/dataexplorer/detail?dataset_id=mheich2024) ## API Reference Use the `Mheich2024` class to access this dataset programmatically. 
### eegdash.dataset.Mheich2024 alias of [`DS002833`](eegdash.dataset.DS002833.md#eegdash.dataset.DS002833) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mheich2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mheich2024) # Miller2021: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Miller2021 dataset = Miller2021(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Miller2021(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Miller2021( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{miller2021, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MILLER2021` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MILLER2021` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/miller2021) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=miller2021) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [miller2021](https://openneuro.org/datasets/miller2021) - NeMAR: [miller2021](https://nemar.org/dataexplorer/detail?dataset_id=miller2021) ## API Reference Use the `Miller2021` class to access this dataset programmatically. ### eegdash.dataset.Miller2021 alias of [`DS003708`](eegdash.dataset.DS003708.md#eegdash.dataset.DS003708) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/miller2021) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=miller2021) # Mishra2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mishra2024 dataset = Mishra2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mishra2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mishra2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mishra2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MISHRA2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MISHRA2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mishra2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mishra2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mishra2024](https://openneuro.org/datasets/mishra2024) - NeMAR: [mishra2024](https://nemar.org/dataexplorer/detail?dataset_id=mishra2024) ## API Reference Use the `Mishra2024` class to access this dataset programmatically. 
### eegdash.dataset.Mishra2024 alias of [`DS007322`](eegdash.dataset.DS007322.md#eegdash.dataset.DS007322) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mishra2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mishra2024) # Mivalt2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Mivalt2024 dataset = Mivalt2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Mivalt2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Mivalt2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{mivalt2024, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MIVALT2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MIVALT2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/mivalt2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=mivalt2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [mivalt2024](https://openneuro.org/datasets/mivalt2024) - NeMAR: [mivalt2024](https://nemar.org/dataexplorer/detail?dataset_id=mivalt2024) ## API Reference Use the `Mivalt2024` class to access this dataset programmatically. ### eegdash.dataset.Mivalt2024 alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/mivalt2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=mivalt2024) # Moerel2023: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Moerel2023 dataset = Moerel2023(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Moerel2023(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Moerel2023( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{moerel2023, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MOEREL2023` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MOEREL2023` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/moerel2023) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=moerel2023) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [moerel2023](https://openneuro.org/datasets/moerel2023) - NeMAR: [moerel2023](https://nemar.org/dataexplorer/detail?dataset_id=moerel2023) ## API Reference Use the `Moerel2023` class to access this dataset programmatically. 
### eegdash.dataset.Moerel2023 alias of [`DS004995`](eegdash.dataset.DS004995.md#eegdash.dataset.DS004995) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/moerel2023) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=moerel2023) # Moerel2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Moerel2025 dataset = Moerel2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Moerel2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Moerel2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{moerel2025, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MOEREL2025` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MOEREL2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/moerel2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=moerel2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [moerel2025](https://openneuro.org/datasets/moerel2025) - NeMAR: [moerel2025](https://nemar.org/dataexplorer/detail?dataset_id=moerel2025) ## API Reference Use the `Moerel2025` class to access this dataset programmatically. ### eegdash.dataset.Moerel2025 alias of [`DS007521`](eegdash.dataset.DS007521.md#eegdash.dataset.DS007521) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/moerel2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=moerel2025) # Moradi2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Moradi2024 dataset = Moradi2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Moradi2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Moradi2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{moradi2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `MORADI2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MORADI2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/moradi2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=moradi2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [moradi2024](https://openneuro.org/datasets/moradi2024) - NeMAR: [moradi2024](https://nemar.org/dataexplorer/detail?dataset_id=moradi2024) ## API Reference Use the `Moradi2024` class to access this dataset programmatically. 
### eegdash.dataset.Moradi2024 alias of [`DS004598`](eegdash.dataset.DS004598.md#eegdash.dataset.DS004598) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/moradi2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=moradi2024) # Motion_Yucel2014: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Motion_Yucel2014 dataset = Motion_Yucel2014(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Motion_Yucel2014(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Motion_Yucel2014( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{motion_yucel2014, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `MOTION_YUCEL2014` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `MOTION_YUCEL2014` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/motion_yucel2014) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=motion_yucel2014) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [motion_yucel2014](https://openneuro.org/datasets/motion_yucel2014) - NeMAR: [motion_yucel2014](https://nemar.org/dataexplorer/detail?dataset_id=motion_yucel2014) ## API Reference Use the `Motion_Yucel2014` class to access this dataset programmatically. ### eegdash.dataset.Motion_Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/motion_yucel2014) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=motion_yucel2014) # NM000103: eeg dataset, 447 subjects *Healthy Brain Network EEG - Not for Commercial Use* Access recordings and metadata through EEGDash. **Citation:** Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig (20). *Healthy Brain Network EEG - Not for Commercial Use*. 
[10.82901/nemar.nm000103](https://doi.org/10.82901/nemar.nm000103) Modality: eeg Subjects: 447 Recordings: 3522 License: CC-BY-NC-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000103 dataset = NM000103(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000103(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000103( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000103, title = {Healthy Brain Network EEG - Not for Commercial Use}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.82901/nemar.nm000103}, url = {https://doi.org/10.82901/nemar.nm000103}, } ``` ## About This Dataset **Overview** This is the **Not-for-Commercial-Use release** of HBN-EEG, the EEG and (soon-released) eye-tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017). **Data Description** This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from participants involved in the HBN project.
**Contents** **EEG Data:** High-resolution EEG recordings capture a wide range of neural activity during various tasks. **Behavioral Responses:** Participant responses during EEG tasks, including reaction times and accuracy. This data was originally recorded in the behavior directory of the HBN data and is now included with the EEG data in the `_events.tsv` files. **Special Features** **Hierarchical Event Descriptors (HED):** Events, including the original EEG events and the included behavioral events, have clear explanations, including proper HED annotation suitable for systematic meta- and mega-analysis of the data. **P-Factor, Attention, Internalization and Externalization:** Derived from behavioral questionnaires, these factors provide valuable insights into the internalizing and externalizing behaviors of participants, adding a rich layer of psychological interpretation to the EEG and behavioral data. **Data quality and availability:** We performed minimal quality control to ensure that the data was not corrupted, that each task had its necessary events, and that the data was ready for preprocessing. The results of this quality control are available in the `participants.tsv` file. **Copyright and License** This dataset is licensed under the non-commercial version of the Creative Commons Attribution license (CC BY-NC-SA 4.0), based on the participants' consent. Subjects (or their legal guardians) did NOT provide consent for their data to be used for any commercial purposes. **Acknowledgments** We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data.
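The quality-control results mentioned above live in `participants.tsv`. A minimal sketch of parsing such a file with only the standard library — the `p_factor` column name and the sample values here are hypothetical placeholders; inspect the real file header after downloading the dataset:

```python
# Sketch: parsing a BIDS participants.tsv into row dicts.
# Column names beyond "participant_id" are assumptions for illustration.
import csv
import io


def load_participants(tsv_text):
    """Parse participants.tsv content (tab-separated) into a list of dicts."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))


# Minimal illustrative content (hypothetical values):
sample = "participant_id\tp_factor\nsub-01\t0.42\n"
rows = load_participants(sample)
print(rows[0]["participant_id"])  # sub-01
# In practice, read the cached file, e.g.:
# text = (Path("./data") / "nm000103" / "participants.tsv").read_text()
```

In a real workflow you would join these rows against `dataset.description` to select subjects by the derived behavioral factors.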
## Dataset Information | Dataset ID | `NM000103` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Healthy Brain Network EEG - Not for Commercial Use | | Author (year) | `Shirazi2017` | | Canonical | `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial` | | Importable as | `NM000103`, `Shirazi2017`, `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial` | | Year | 20 | | Authors | Seyed Yahya Shirazi, Alexandre Franco, Maurício Scopel Hoffmann, Nathalia B. Esper, Dung Truong, Arnaud Delorme, Michael Milham, Scott Makeig | | License | CC-BY-NC-SA 4.0 | | Citation / DOI | [10.82901/nemar.nm000103](https://doi.org/10.82901/nemar.nm000103) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000103) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) | [Source URL](https://nemar.org/dataexplorer/detail/nm000103) | ### Copy-paste BibTeX ```bibtex @dataset{nm000103, title = {Healthy Brain Network EEG - Not for Commercial Use}, author = {Seyed Yahya Shirazi and Alexandre Franco and Maurício Scopel Hoffmann and Nathalia B. Esper and Dung Truong and Arnaud Delorme and Michael Milham and Scott Makeig}, doi = {10.82901/nemar.nm000103}, url = {https://doi.org/10.82901/nemar.nm000103}, } ``` ## Technical Details - Subjects: 447 - Recordings: 3522 - Tasks: 10 - Channels: 129 - Sampling rate (Hz): 500 - Duration (hours): 285.0150427777777 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 250.3 GB - File count: 3522 - Format: BIDS - License: CC-BY-NC-SA 4.0 - DOI: 10.82901/nemar.nm000103 - Source: nemar - OpenNeuro: [nm000103](https://openneuro.org/datasets/nm000103) - NeMAR: [nm000103](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) ## API Reference Use the `NM000103` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000103(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network EEG - Not for Commercial Use * **Study:** `nm000103` (NeMAR) * **Author (year):** `Shirazi2017` * **Canonical:** `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial` Also importable as: `NM000103`, `Shirazi2017`, `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial`. Modality: `eeg`. Subjects: 447; recordings: 3522; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
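The note above says the user-supplied `query` must not contain the key `dataset`, because the class ANDs its own dataset filter into the query. A small illustrative helper mirrors that documented behavior — this is a sketch of the merging rule, not the actual eegdash implementation:

```python
# Sketch of the documented query-merging constraint: the dataset filter is
# owned by the class, so user queries may not set "dataset" themselves.
def merge_dataset_filter(user_query, dataset_id):
    """Combine a user MongoDB-style filter with the fixed dataset filter."""
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged


merged = merge_dataset_filter({"subject": {"$in": ["01", "02"]}}, "nm000103")
print(merged)
```

The same `{"$in": [...]}` style shown here is what the class forwards to its metadata backend via the `query` parameter.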
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000103](https://openneuro.org/datasets/nm000103) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000103](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) DOI: [https://doi.org/10.82901/nemar.nm000103](https://doi.org/10.82901/nemar.nm000103) ### Examples ```pycon >>> from eegdash.dataset import NM000103 >>> dataset = NM000103(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init\_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000103) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000104: emg dataset, 108 subjects *emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography* Access recordings and metadata through EEGDash. **Citation:** Viswanath Sivakumar, Jeffrey Seely, Alan Du, Sean R. Bittner, Adam Berenzweig, Anuoluwapo Bolarinwa, Alexandre Gramfort, Michael I. Mandel (2024). *emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography*.
[10.82901/nemar.nm000104](https://doi.org/10.82901/nemar.nm000104) Modality: emg Subjects: 108 Recordings: 1136 License: CC-BY-NC-SA-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000104 dataset = NM000104(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000104(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000104( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000104, title = {emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography}, author = {Viswanath Sivakumar and Jeffrey Seely and Alan Du and Sean R. Bittner and Adam Berenzweig and Anuoluwapo Bolarinwa and Alexandre Gramfort and Michael I. 
Mandel}, doi = {10.82901/nemar.nm000104}, url = {https://doi.org/10.82901/nemar.nm000104}, } ``` ## About This Dataset **emg2qwerty: Touch Typing from Surface Electromyography** **Overview** **Dataset**: emg2qwerty - Touch typing from wrist-based surface electromyography **Task**: Touch typing on QWERTY keyboard **Participants**: 108 subjects **Sessions**: 1,135 total (average 10 per subject, range 1-18) ### View full README **emg2qwerty: Touch Typing from Surface Electromyography** **Overview** **Dataset**: emg2qwerty - Touch typing from wrist-based surface electromyography **Task**: Touch typing on QWERTY keyboard **Participants**: 108 subjects **Sessions**: 1,135 total (average 10 per subject, range 1-18) **Duration**: 346.4 hours total (9.5-47.5 min per session) **Publication**: Sivakumar et al., 2024 - “emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography” **Purpose** This dataset captures wrist-based sEMG signals during touch typing on a physical keyboard. The goal is to enable keyboard-free text input by decoding typing intent directly from neuromuscular activity, with applications in AR/VR, mobile computing, and brain-computer interfaces. 
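To get a rough sense of the temporal resolution this decoding problem involves, the sampling rate (2000 Hz) and typing speeds reported in this README imply a few hundred sEMG samples between consecutive keystrokes. A back-of-the-envelope sketch (the helper is purely illustrative, not part of the emg2qwerty codebase):

```python
# Back-of-the-envelope: sEMG samples elapsing between keystrokes at a
# given typing speed. 2000 Hz and the mean 265 keys/min come from the
# dataset README; the function itself is an illustrative helper.
def samples_per_keystroke(keys_per_min, sfreq_hz=2000):
    """Average number of samples between consecutive keystrokes."""
    keys_per_sec = keys_per_min / 60.0
    return sfreq_hz / keys_per_sec


print(round(samples_per_keystroke(265)))  # ~453 samples at the mean speed
```

At the fastest reported speed (439 keys/min) the gap shrinks to under 300 samples, which is why models reportedly need about one second of context to capture co-articulation across adjacent keystrokes.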
This is the largest public sEMG dataset to date, specifically designed to study: - Cross-user generalization - Cross-session adaptation (domain shift from electrode placement) - Sequence-to-sequence learning (analogous to automatic speech recognition) - High-bandwidth neuromotor interfaces **Dataset Details** **Participants** **Sample size**: 108 participants **Demographics**: Not available (age, sex, handedness marked as n/a) **Screening**: Touch typists with >90% correct finger-to-key mapping **Typing speed**: 130-439 keys/min (mean: 265 keys/min, ~4.4 keys/sec) **Hardware** **Device**: sEMG Research Device (sEMG-RD) **Configuration**: Two wristbands (left and right wrists) **Channels**: 32 total (16 per wrist) **Sampling rate**: 2000 Hz **Bit depth**: 12 bits **Dynamic range**: ±6.6 mV **Bandwidth**: 20-850 Hz **Connectivity**: Bluetooth **Electrode type**: Dry gold-plated differential pairs **Recording Setup** **Keyboard**: Apple Magic Keyboard (US English) **Text prompts**: - Random words from dictionary - Sentences from English Wikipedia - Filtered for offensive terms - Lowercase with basic punctuation only **Ground truth**: Keylogger recording key-down and key-up timestamps (±0.5 ms precision) **Backspace usage**: Allowed (natural typing behavior) **Session Protocol** 1. Participant dons two sEMG-RDs (one per wrist) 2. Types prompted text on physical keyboard 3. Keylogger records all keystrokes with timestamps 4. sEMG signals streamed via Bluetooth 5. 
Between sessions: Bands doffed and re-donned (realistic electrode placement variability) **Session duration**: 9.5-47.5 minutes (depends on typing speed) **Inter-session protocol**: Complete band removal and replacement to simulate real-world usage **Data Contents** **Files per Session** ```text sub-XXXXXXXX/ses-YYYYYYYYYY/emg/ ``` ```text ├── sub-XXXXXXXX_ses-YYYYYYYYYY_task-typing_emg.edf ├── sub-XXXXXXXX_ses-YYYYYYYYYY_task-typing_emg.json ├── sub-XXXXXXXX_ses-YYYYYYYYYY_task-typing_channels.tsv ├── sub-XXXXXXXX_ses-YYYYYYYYYY_task-typing_events.tsv └── sub-XXXXXXXX_ses-YYYYYYYYYY_electrodes.tsv ``` **Channel Configuration** **Total channels**: 32 - EMG0-EMG15: Left wrist - EMG16-EMG31: Right wrist **Channel naming**: Unique across entire dataset (EMG0-EMG31) **Electrode naming**: E0-E15 (reused for left and right wrists) **Reference**: Bipolar (differential sensing) **channels.tsv columns**: - `name`: Channel identifier (EMG0-EMG31) - `type`: EMG - `units`: V - `signal_electrode`: Physical electrode name (E0-E15) - `reference`: bipolar - `group`: left or right (wrist) - `target_muscle`: forearm muscles **electrodes.tsv columns**: - `name`: Electrode identifier (E0-E15) - `x`, `y`, `z`: 3D coordinates (percent units, no decimals) - `coordinate_system`: leftForearm or rightForearm - `group`: left or right **Events** **events.tsv contains**: - **Keystroke events**: Individual key-press and key-release > - `type`: keystroke_X (where X is the key character) > - `latency`: Sample index of keystroke > - `duration`: Samples from press to release > - `key`: Character typed - **Prompt events**: Text prompts shown to participant - `type`: prompt - `prompt_text`: Displayed text **Total keystrokes**: 5,262,671 across all sessions **Coordinate Systems** **Two separate coordinate systems** (space entities): **Left Forearm** (`space-leftForearm_coordsystem.json`): ```text EMGCoordinateSystem: Other EMGCoordinateUnits: percent X: USP → RSP (0-100%) Y: Right-hand rule 
perpendicular (limits: Olecranon Process → Cubital Fossa) Z: Midpoint RSP-USP → Lateral Humeral Epicondyle ``` **Right Forearm** (`space-rightForearm_coordsystem.json`): ```text EMGCoordinateSystem: Other EMGCoordinateUnits: percent X: RSP → USP (0-100%, reversed from left) Y: Right-hand rule perpendicular (limits: Olecranon Process → Cubital Fossa) Z: Midpoint RSP-USP → Lateral Humeral Epicondyle ``` **Anatomical landmarks**: - RSP: Radial Styloid Process - USP: Ulnar Styloid Process - LHE: Lateral Humeral Epicondyle **Note**: Same physical device worn on both wrists with reversed differential polarity **Signal Processing** **Preprocessing Applied** 1. **High-pass filtering**: 40 Hz cutoff (removes DC drift, motion artifacts) 2. **Clock drift correction**: Synchronization between devices and laptop 3. **Temporal alignment**: Left/right wristband sample alignment (±0.5 ms) 4. **Irregular sampling handling**: Resampling applied when deviation >1% **Signal Characteristics** **Typical features**: - Muscle activation precedes keystroke by ~tens of milliseconds - Different muscles activate for different fingers - “Co-articulation” effects: sEMG affected by adjacent keystrokes - Bigram/trigram context important for fast typists **Receptive field**: Models typically need ~1 second context **Baseline Performance** **Published Results (Sivakumar et al., 2024)** **Generic Model** (100 training users): - Validation CER: 52.10 ± 5.54% (with 6-gram LM) - Test CER: 51.78 ± 4.61% (with 6-gram LM) - **Interpretation**: Unusable without personalization **Personalized Model** (finetuned from generic): - Validation CER: 8.31 ± 3.19% (with 6-gram LM) - Test CER: 6.95 ± 3.61% (with 6-gram LM) - **Best user**: 3.16% CER - **Usability threshold**: ~10% CER **Model architecture**: Time Depth Separable ConvNets (TDS) **Loss function**: Connectionist Temporal Classification (CTC) **Language model**: 6-gram modified Kneser-Ney (trained on WikiText-103) **Key Findings** 1. 
**Generalization emerges at scale**: 100+ users needed for meaningful representations 2. **Personalization essential**: Generic model alone has >50% CER 3. **Domain shift is severe**: Cross-user variation much larger than cross-session 4. **No obvious user clusters**: Every user requires individual adaptation **Data Splits** **Benchmark Setup (from paper)** **Training set**: 100 users (all sessions except 2 validation per user) **Validation set**: 2 sessions from each of 100 training users **Test set**: 8 held-out users > - Each test user: Multiple sessions split into train/val/test > - Used for personalization experiments **Note**: This split ensures test users don’t influence generic model hyperparameters **Use Cases** **Machine Learning** - **Sequence-to-sequence learning**: Similar to ASR but with different generative process - **Domain adaptation**: Cross-user, cross-session generalization - **Transfer learning**: Generic models with user-specific fine-tuning - **Few-shot learning**: Data-efficient personalization - **Language modeling**: Backspace-aware beam search decoding **Neuroscience** - **Motor control**: Understand muscle coordination during fine motor tasks - **Motor learning**: Track typing skill changes across sessions - **Neuromuscular variability**: Study individual differences in muscle recruitment **Applications** - **Keyboard-free typing**: Text entry without physical keyboard - **AR/VR interfaces**: Text input for head-mounted displays - **Silent communication**: Private text entry in public spaces - **Accessibility**: Alternative input for users with limited mobility **Known Issues and Limitations** **By Design** - **Touch typing required**: Not representative of hunt-and-peck typists - **English only**: Language-specific - **Physical keyboard**: Not actual keyboard-free typing - **Typing style variation**: Individual strategies differ (especially non-fluent typists) - **No demographic data**: Age, sex, handedness not collected **Technical** 
- **Domain shift**: Large variations across users and sessions - **Signal amplitude**: Varies with typing force (not normalized) - **Backspace handling**: More complex than speech (can modify history) - **Hardware unavailable**: sEMG-RD not commercially available **Data Quality** - **Irregular sampling**: Some sessions required resampling (up to 9290% deviation detected) - **Electrode placement**: Intentionally varies across sessions (creates realistic challenge) - **Session length**: Varies by typing speed (9.5-47.5 min) **Access and Contact** **Original data**: [https://github.com/facebookresearch/emg2qwerty](https://github.com/facebookresearch/emg2qwerty) **BIDS conversion**: Custom MATLAB tools using EEGLAB BIDS plugin **Data curator**: Yahya Shirazi, SCCN, INC, UCSD **Contact**: See original publication for corresponding author **License** Non-Commercial, Share Alike CC-BY-NC-SA 4.0 **Citation** ```text Sivakumar, V., Seely, J., Du, A., Bittner, S.R., Berenzweig, A., Bolarinwa, A., Gramfort, A., & Mandel, M.I. (2024). emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography. arXiv:2410.20081. 
https://github.com/facebookresearch/emg2qwerty ``` **Data Curator** **Yahya Shirazi** SCCN (Swartz Center for Computational Neuroscience) INC (Institute for Neural Computation) University of California San Diego **Version History** **v1.0 (2025-10-01): Initial BIDS conversion** **BIDS Version**: 1.11 | **EMG-BIDS**: BEP-042 | **Updated**: Oct 1, 2025 ## Dataset Information | Dataset ID | `NM000104` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography | | Author (year) | `Sivakumar2024` | | Canonical | `emg2qwerty` | | Importable as | `NM000104`, `Sivakumar2024`, `emg2qwerty` | | Year | 2024 | | Authors | Viswanath Sivakumar, Jeffrey Seely, Alan Du, Sean R. Bittner, Adam Berenzweig, Anuoluwapo Bolarinwa, Alexandre Gramfort, Michael I. Mandel | | License | CC-BY-NC-SA-4.0 | | Citation / DOI | [10.82901/nemar.nm000104](https://doi.org/10.82901/nemar.nm000104) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000104) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) | [Source URL](https://nemar.org/dataexplorer/detail/nm000104) | ### Copy-paste BibTeX ```bibtex @dataset{nm000104, title = {emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography}, author = {Viswanath Sivakumar and Jeffrey Seely and Alan Du and Sean R. Bittner and Adam Berenzweig and Anuoluwapo Bolarinwa and Alexandre Gramfort and Michael I. 
Mandel}, doi = {10.82901/nemar.nm000104}, url = {https://doi.org/10.82901/nemar.nm000104}, } ``` ## Technical Details - Subjects: 108 - Recordings: 1136 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 2000 - Duration (hours): 346.3244476388889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 223.3 GB - File count: 1136 - Format: BIDS - License: CC-BY-NC-SA-4.0 - DOI: 10.82901/nemar.nm000104 - Source: nemar - OpenNeuro: [nm000104](https://openneuro.org/datasets/nm000104) - NeMAR: [nm000104](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) ## API Reference Use the `NM000104` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography * **Study:** `nm000104` (NeMAR) * **Author (year):** `Sivakumar2024` * **Canonical:** `emg2qwerty` Also importable as: `NM000104`, `Sivakumar2024`, `emg2qwerty`. Modality: `emg`. Subjects: 108; recordings: 1136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
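The `query` parameter accepts MongoDB-style filters, as noted in the parameter list above. As a rough pure-Python illustration of the matching semantics (a sketch only — EEGDash evaluates these filters against its metadata database, not with this code), a filter such as `{"subject": {"$in": [...]}}` selects records like so:

```python
def matches(record: dict, query: dict) -> bool:
    """Toy MongoDB-style matcher: equality and $in only (illustration, not EEGDash code)."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"subject": "01"}
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

In real use the queryable fields are restricted to `ALLOWED_QUERY_FIELDS`, and the dataset filter is always ANDed in.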
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000104](https://openneuro.org/datasets/nm000104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000104](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) DOI: [https://doi.org/10.82901/nemar.nm000104](https://doi.org/10.82901/nemar.nm000104) ### Examples ```pycon >>> from eegdash.dataset import NM000104 >>> dataset = NM000104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000104) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) * [eegdash.dataset.NM000155](eegdash.dataset.NM000155.md) # NM000105: emg dataset, 100 subjects *FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography* Access recordings and metadata through EEGDash. **Citation:** Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs (2019). *FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography*. 
[10.82901/nemar.nm000105](https://doi.org/10.82901/nemar.nm000105) Modality: emg Subjects: 100 Recordings: 100 License: CC-BY-NC 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000105 dataset = NM000105(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000105(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000105( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000105, title = {FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000105}, url = {https://doi.org/10.82901/nemar.nm000105}, } ``` ## About This Dataset **discrete_gestures: Discrete Hand Gesture Detection from EMG** **Overview** **Dataset**: discrete_gestures - Discrete hand gestures from wrist-based surface electromyography **Task**: Nine discrete hand gestures (pinches and swipes) **Participants**: 100 subjects **Sessions**: 100 total (1 per subject) ### View full README **Publication**: Kaifosh et al., 2025 - “A generic non-invasive neuromotor interface for human-computer interaction” (Nature) **Purpose** This dataset captures wrist-based sEMG signals during prompted discrete hand gestures for navigation and activation tasks. The goal is to enable gesture-based computer control without cameras or visible hand movements, with applications in AR/VR, mobile interfaces, and accessibility. 
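The prompted-gesture design maps naturally onto windowed classification. As a schematic sketch (synthetic signal and made-up prompt latencies — in practice the latencies and `gesture_type` labels come from each session's events.tsv), fixed-length training windows can be cut at each prompt:

```python
import numpy as np

# Schematic sketch only: turn continuous sEMG plus prompt latencies into
# fixed-length training windows for gesture classification.
sfreq = 2000                      # Hz, as recorded by the sEMG-RD
n_channels = 16                   # single wristband, EMG0-EMG15
emg = np.random.randn(n_channels, 60 * sfreq)    # one minute of synthetic signal
prompt_latencies = [4000, 24000, 44000]          # illustrative sample indices of prompts
labels = ["thumb_swipe_left", "index_pinch", "thumb_tap"]

win = int(1.0 * sfreq)            # 1 s window starting at each prompt
X = np.stack([emg[:, t:t + win] for t in prompt_latencies])
y = np.array(labels)
print(X.shape)  # (3, 16, 2000) -> (n_gestures, n_channels, n_samples)
```

The 1 s window length is an assumption for illustration; appropriate context length depends on the gesture dynamics and the model.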
Key research objectives: - Generic models that work across users without calibration - Discrete gesture classification with high accuracy - Real-time gesture detection for interactive systems - Robustness to electrode placement variability **Dataset Details** **Participants** **Sample size**: 100 participants **Demographics**: Not available (age, sex, handedness marked as n/a) **Recording side**: Dominant wrist (assumed right-handed, varies by participant) **Sessions**: 1 session per participant **Hardware** **Device**: sEMG Research Device (sEMG-RD) **Configuration**: Single wristband (dominant wrist) **Channels**: 16 **Sampling rate**: 2000 Hz **Bit depth**: 12 bits **Dynamic range**: ±6.6 mV **Bandwidth**: 20-850 Hz **Connectivity**: Bluetooth **Electrode type**: Dry gold-plated differential pairs **Gestures** **Nine discrete gestures**: **Thumb swipes** (4): - Left swipe - Right swipe - Up swipe - Down swipe **Pinches** (4): - Index-to-thumb pinch - Middle-to-thumb pinch - Ring-to-thumb pinch - Pinky-to-thumb pinch **Activation** (1): - Thumb tap **Recording Protocol** 1. Participant dons sEMG-RD on dominant wrist 2. Gesture prompter displays gesture cue (scrolling left-to-right) 3. Participant performs prompted gesture 4. Randomized order with randomized inter-gesture intervals 5. 
Multiple repetitions of each gesture type **Session duration**: Varies by participant **Total gestures**: 1900 prompted gestures across all participants **Stage boundaries**: 16 recording stages per session **Data Contents** **Files per Session** ```text sub-XXX/ses-XXX/emg/ ``` ```text ├── sub-XXX_ses-XXX_task-discretegestures_emg.edf ├── sub-XXX_ses-XXX_task-discretegestures_emg.json ├── sub-XXX_ses-XXX_task-discretegestures_channels.tsv ├── sub-XXX_ses-XXX_task-discretegestures_events.tsv └── sub-XXX_ses-XXX_electrodes.tsv ``` **Channel Configuration** **Total channels**: 16 (EMG0-EMG15) **Channel naming**: Unique identifiers (EMG0-EMG15) **Electrode naming**: E0-E15 (physical positions) **Reference**: Bipolar (differential sensing) **channels.tsv columns**: - `name`: Channel identifier (EMG0-EMG15) - `type`: EMG - `units`: V - `signal_electrode`: Physical electrode name (E0-E15) - `reference`: bipolar **electrodes.tsv columns**: - `name`: Electrode identifier (E0-E15) - `x`, `y`, `z`: 3D coordinates (percent units, no decimals) **Events** **events.tsv contains**: - **Gesture prompts**: Timestamped prompts for each gesture > - `type`: gesture_X (where X is the gesture name) > - `latency`: Sample index when gesture was prompted > - `gesture_type`: Specific gesture (e.g., “index_pinch”, “thumb_swipe_left”) - **Stage boundaries**: Recording session phases - `type`: stage_boundary - `stage_name`: Stage identifier **Total events**: 1916 (1900 gesture prompts + 16 stage boundaries) **Coordinate System** **Single coordinate system** (no space entity): ```text EMGCoordinateSystem: Other EMGCoordinateUnits: percent X: USP → RSP (0-100%) Y: Right-hand rule perpendicular (0-100%) Z: Radial offset (constant 10%) ``` **Anatomical landmarks**: - RSP: Radial Styloid Process - USP: Ulnar Styloid Process **Note**: Right-handed coordinate system for dominant wrist **Signal Processing** **Preprocessing Applied** 1. **High-pass filtering**: 40 Hz cutoff 2. 
**Clock drift correction**: Time synchronization 3. **Irregular sampling handling**: Resampling when deviation >1% (up to 9290% deviation detected) **Signal Characteristics** **Gesture patterns**: - Patterned activity across channels corresponding to flexor/extensor muscles - Fine differences across gesture instances - Channel activity correlates with muscle positions (Fig. 1 in paper) **Baseline Performance** **Published Results (Kaifosh et al., 2025)** **Offline Classification** (held-out participants): - Accuracy: >90% for gesture classification - False-negative rate improves with more training data - Generic models trained on hundreds of participants **Closed-loop Performance** (n=24 naive test users): - **First-hit probability**: Median improvement from 0.74 (practice) to 0.82 (evaluation block 2) - **Gesture completion rate**: Median 0.88 gestures/second (evaluation block 2) - **Baseline comparison**: Gaming controller achieves 1.45 completions/second **Model architecture**: 1D convolution → LSTM layers **Learning effects**: Participants improve from practice to evaluation blocks **Representation Analysis** **Network learns**: - First layer filters resemble motor unit action potentials (MUAPs) - Deeper layers progressively separate gesture categories - Invariance to nuisance variables (participant ID, electrode placement, signal power) **Confusion Matrix** **Common confusions** (from paper): - Index and middle holds sometimes released too early - Similar gestures (e.g., adjacent finger pinches) occasionally confused - Swipe directions generally well-separated **Note**: Some errors are behavioral (wrong gesture performed) not just decoding errors **Use Cases** **Machine Learning** - **Time series classification**: Discrete event detection - **Generic modeling**: Out-of-the-box cross-user generalization - **Representation learning**: Physiologically-grounded features - **Real-time prediction**: Low-latency gesture detection **Applications** - **Grid 
navigation**: Discrete movement in 2D space - **Menu selection**: Activation gestures for UI elements - **Game control**: Gesture-based game inputs - **AR/VR interfaces**: Hands-free navigation - **Accessibility**: Alternative input modality **Known Issues and Limitations** **By Design** - **Single wrist**: Dominant hand only (not bilateral) - **Handedness unknown**: Assumed right-handed, varies by participant - **Gesture novelty**: Users needed coaching to learn effective gestures - **No demographic data**: Age, sex, handedness not collected **Technical** - **Electrode placement**: Single session per user (less cross-session data than emg2qwerty) - **Signal amplitude**: Varies with gesture force - **Hardware unavailable**: sEMG-RD not commercially available **Data Quality** - **Irregular sampling**: High deviation detected (up to 9290%), resampling applied - **Behavioral errors**: Not all errors are decoder errors (some user mistakes) **Comparison to Baselines** **Nintendo Joy-Con controller**: - Median: 1.45 completions/second - sEMG decoder: 0.88 completions/second (~39% slower) **However**: sEMG doesn’t require hand-encumbering device **BIDS Format** ```text Pernet, C.R., et al. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6(1), 103. ``` **Access and Contact** **Original data**: Part of Meta Reality Labs neuromotor interface research **BIDS conversion**: Custom MATLAB tools using EEGLAB BIDS plugin **Data curator**: Yahya Shirazi, SCCN (Swartz Center for Computational Neuroscience), INC (Institute for Neural Computation), UCSD **Contact**: See Nature paper for corresponding authors **License** Research and educational use. See original publication. **Citation** ```text Kaifosh, P., Reardon, T.R., & CTRL-labs at Reality Labs. (2025). A generic non-invasive neuromotor interface for human-computer interaction. Nature, 645(8081), 702-711. 
https://doi.org/10.1038/s41586-025-09255-w ``` **Data Curator** **Yahya Shirazi** SCCN (Swartz Center for Computational Neuroscience) INC (Institute for Neural Computation) University of California San Diego **Version History** **v1.0 (2025-10-01): Initial BIDS conversion** **BIDS Version**: 1.11 | **EMG-BIDS**: BEP-042 | **Updated**: Oct 1, 2025 ## Dataset Information | Dataset ID | `NM000105` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography | | Author (year) | `Kaifosh2025` | | Canonical | `FRL_DiscreteGestures` | | Importable as | `NM000105`, `Kaifosh2025`, `FRL_DiscreteGestures` | | Year | 2019 | | Authors | Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs | | License | CC-BY-NC 4.0 | | Citation / DOI | [10.82901/nemar.nm000105](https://doi.org/10.82901/nemar.nm000105) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000105) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) | [Source URL](https://nemar.org/dataexplorer/detail/nm000105) | ### Copy-paste BibTeX ```bibtex @dataset{nm000105, title = {FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000105}, url = {https://doi.org/10.82901/nemar.nm000105}, } ``` ## Technical Details - Subjects: 100 - Recordings: 100 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 2000 - Duration (hours): 63.93759180555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 20.6 GB - File count: 100 - Format: BIDS - License: CC-BY-NC 4.0 - DOI: 10.82901/nemar.nm000105 - Source: nemar - OpenNeuro: [nm000105](https://openneuro.org/datasets/nm000105) - NeMAR: [nm000105](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) ## API Reference Use the `NM000105` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography * **Study:** `nm000105` (NeMAR) * **Author (year):** `Kaifosh2025` * **Canonical:** `FRL_DiscreteGestures` Also importable as: `NM000105`, `Kaifosh2025`, `FRL_DiscreteGestures`. Modality: `emg`. Subjects: 100; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000105](https://openneuro.org/datasets/nm000105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000105](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) DOI: [https://doi.org/10.82901/nemar.nm000105](https://doi.org/10.82901/nemar.nm000105) ### Examples ```pycon >>> from eegdash.dataset import NM000105 >>> dataset = NM000105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000105) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) * [eegdash.dataset.NM000155](eegdash.dataset.NM000155.md) # NM000106: emg dataset, 100 subjects *FRL Handwriting: Handwriting Decoding from Surface Electromyography* Access recordings and metadata through EEGDash. **Citation:** Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs (2025). *FRL Handwriting: Handwriting Decoding from Surface Electromyography*. 
[10.82901/nemar.nm000106](https://doi.org/10.82901/nemar.nm000106) Modality: emg Subjects: 100 Recordings: 807 License: CC-BY-NC 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000106 dataset = NM000106(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000106(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000106( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000106, title = {FRL Handwriting: Handwriting Decoding from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000106}, url = {https://doi.org/10.82901/nemar.nm000106}, } ``` ## About This Dataset **handwriting: Handwriting Recognition from EMG** **Overview** **Dataset**: handwriting - Imagined handwriting from wrist-based surface electromyography **Task**: Air-writing (imagined handwriting without pen) **Participants**: 100 subjects **Sessions**: ~700 total (~7 per subject) ### View full README **Publication**: Kaifosh et al., 2025 - “A generic non-invasive neuromotor interface for human-computer interaction” (Nature) **Purpose** This dataset captures wrist-based sEMG signals during imagined handwriting motions for text entry. Participants “write” prompted text with fingers together (as if holding an invisible pen) without any physical writing surface. Applications include AR/VR text input, mobile computing, and hands-free communication. **Dataset Details** **Participants** - **Sample size**: 100 participants - **Demographics**: Not available (marked as n/a) - **Recording side**: Dominant wrist - **Sessions**: Average 7 per participant **Hardware** - **Device**: sEMG-RD (single wristband) - **Channels**: 16 (EMG0-EMG15) - **Sampling rate**: 2000 Hz - **Reference**: Bipolar differential **Recording Protocol** 1. Participant holds fingers together (as if holding pen) 2. Prompted text appears on screen 3. Participant “writes” the text in air 4. Session duration: ~11 minutes 5. 
Prompts per session: 96 phrases **Data Contents** **Files per Session** ```text sub-XXX/ses-XXX/emg/ ``` ```text ├── sub-XXX_ses-XXX_task-handwriting_emg.edf ├── sub-XXX_ses-XXX_task-handwriting_emg.json ├── sub-XXX_ses-XXX_task-handwriting_channels.tsv ├── sub-XXX_ses-XXX_task-handwriting_events.tsv └── sub-XXX_ses-XXX_electrodes.tsv ``` **Events** - **Handwriting prompts**: Text to be written - `prompt_text`: Displayed phrase - **Stage boundaries**: Posture changes (sitting/standing), session phases **Coordinate System** Single coordinate system at root (dominant wrist, percent units, no decimals) **Baseline Performance** **Published Results (Kaifosh et al., 2025)** **Generic Model** (6,527 training participants): - Offline classification accuracy: >90% on held-out participants - Online performance: 20.9 words per minute (WPM) - Online CER: Median improvement from ~35% (practice) to ~25% (evaluation) **Personalized Model** (20 min fine-tuning): - 16% improvement over generic model - Better performance for users with higher generic CER - Diminishing returns with more pretraining data **Comparison**: - Open-loop handwriting (no pen): 25.1 WPM - sEMG handwriting: 20.9 WPM (83% of baseline) - Mobile phone keyboard: 36 WPM **Model architecture**: MPF features + Conformer (attention mechanism) **Use Cases** - **Keyboard-free text entry**: AR/VR, mobile devices - **Silent communication**: Private text input in public spaces - **Personalization research**: Few-shot learning, transfer learning - **Sequence modeling**: Character-level prediction with attention **Known Limitations** - Single wrist (dominant hand only) - Handedness not recorded - Learning curve: Users improve with practice/coaching - Lower WPM than physical writing or typing **Citation** ```text Kaifosh, P., Reardon, T.R., & CTRL-labs at Reality Labs. (2025). A generic non-invasive neuromotor interface for human-computer interaction. Nature, 645(8081), 702-711. 
https://doi.org/10.1038/s41586-025-09255-w ``` **Data Curator** **Yahya Shirazi** SCCN (Swartz Center for Computational Neuroscience) INC (Institute for Neural Computation) University of California San Diego **Version History** **v1.0 (2025-10-01): Initial BIDS conversion** **BIDS Version**: 1.11 | **EMG-BIDS**: BEP-042 | **Updated**: Oct 1, 2025 ## Dataset Information | Dataset ID | `NM000106` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FRL Handwriting: Handwriting Decoding from Surface Electromyography | | Author (year) | `Kaifosh2025_106` | | Canonical | `FRL_Handwriting` | | Importable as | `NM000106`, `Kaifosh2025_106`, `FRL_Handwriting` | | Year | 2025 | | Authors | Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs | | License | CC-BY-NC 4.0 | | Citation / DOI | [10.82901/nemar.nm000106](https://doi.org/10.82901/nemar.nm000106) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000106) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) | [Source URL](https://nemar.org/dataexplorer/detail/nm000106) | ### Copy-paste BibTeX ```bibtex @dataset{nm000106, title = {FRL Handwriting: Handwriting Decoding from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000106}, url = {https://doi.org/10.82901/nemar.nm000106}, } ``` ## Technical Details - Subjects: 100 - Recordings: 807 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 2000 - Duration (hours): 140.70160930555554 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 45.3 GB - File count: 807 - Format: BIDS - License: CC-BY-NC 4.0 - DOI: 10.82901/nemar.nm000106 - Source: nemar - OpenNeuro: [nm000106](https://openneuro.org/datasets/nm000106) - NeMAR: [nm000106](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) ## API Reference Use the `NM000106` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Handwriting: Handwriting Decoding from Surface Electromyography * **Study:** `nm000106` (NeMAR) * **Author (year):** `Kaifosh2025_106` * **Canonical:** `FRL_Handwriting` Also importable as: `NM000106`, `Kaifosh2025_106`, `FRL_Handwriting`. Modality: `emg`. Subjects: 100; recordings: 807; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000106](https://openneuro.org/datasets/nm000106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000106](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) DOI: [https://doi.org/10.82901/nemar.nm000106](https://doi.org/10.82901/nemar.nm000106) ### Examples ```pycon >>> from eegdash.dataset import NM000106 >>> dataset = NM000106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000106) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) * [eegdash.dataset.NM000155](eegdash.dataset.NM000155.md) # NM000107: emg dataset, 100 subjects *FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography* Access recordings and metadata through EEGDash. **Citation:** Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs (2025). *FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography*. 
[10.82901/nemar.nm000107](https://doi.org/10.82901/nemar.nm000107) Modality: emg Subjects: 100 Recordings: 182 License: CC-BY-NC 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000107 dataset = NM000107(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000107(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000107( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000107, title = {FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000107}, url = {https://doi.org/10.82901/nemar.nm000107}, } ``` ## About This Dataset **wrist: Wrist Movement Control from EMG** **Overview** **Dataset**: wrist - Wrist posture and movement from wrist-based surface electromyography **Task**: 1D continuous cursor control via wrist flexion/extension **Participants**: 100 subjects **Sessions**: 100 total (1 per subject) ### View full README **Publication**: Kaifosh et al., 2025 - “A generic non-invasive neuromotor interface for human-computer interaction” (Nature) **Purpose** This dataset captures wrist-based sEMG signals during wrist movements for continuous cursor control. Motion capture provides ground-truth wrist angles. The goal is to enable gesture-free control through wrist posture alone, demonstrating sEMG’s ability to decode motor intent before visible movement occurs. **Dataset Details** **Participants** - **Sample size**: 100 participants - **Demographics**: Not available (marked as n/a) - **Recording side**: Dominant wrist - **Sessions**: 1 per participant **Hardware** - **Device**: sEMG-RD (single wristband) - **Channels**: 16 (EMG0-EMG15) - **Sampling rate**: 2000 Hz - **Reference**: Bipolar differential - **Ground truth**: Motion capture wrist angles **Recording Protocol** 1. Participant wears sEMG-RD on dominant wrist 2. Motion capture tracks wrist angles in real-time 3. Participant controls horizontal cursor position with wrist flexion/extension 4.
Target acquisition task: Navigate to targets and hold for 500ms **Data Contents** **Files per Session** ```text sub-XXX/ses-XXX/emg/ ``` ```text ├── sub-XXX_ses-XXX_task-wrist_emg.edf ├── sub-XXX_ses-XXX_task-wrist_emg.json ├── sub-XXX_ses-XXX_task-wrist_channels.tsv ├── sub-XXX_ses-XXX_task-wrist_events.tsv └── sub-XXX_ses-XXX_electrodes.tsv ``` **Events** - **Stage boundaries**: Task phases and movement trials **Coordinate System** Single coordinate system at root (dominant wrist, percent units, no decimals) **Signal Processing** **Note**: This dataset has significant data quality issues: - Duplicate timestamps found in many sessions (up to 88% duplicates) - Irregular sampling requiring resampling (up to 916% deviation) - Post-processing: Duplicate removal followed by resampling to regular 2000 Hz **Baseline Performance** **Published Results (Kaifosh et al., 2025)** **Offline Evaluation**: - Wrist angle velocity error: <13°/s - Error decreases with more training participants **Closed-loop Performance** (n=17 naive test users): - **Target acquisition time**: Median 1.51s (sEMG decoder) - **Dial-in time**: Time to re-acquire after premature exit - **Learning effects**: Improvement from practice to evaluation blocks **Comparison**: - Motion capture ground truth: 0.96s - MacBook trackpad: 0.68s - sEMG decoder: 1.51s (2.2× slower than trackpad) **Model architecture**: MPF features + LSTM **Key Findings** - **Predictive signals**: sEMG precedes movement by tens of milliseconds - **Generic models work**: Out-of-the-box cross-user generalization - **Continuous control**: Demonstrates feasibility of gesture-free interfaces - **Room for improvement**: Performance gap vs traditional inputs **Use Cases** - **Continuous control**: Cursor/pointer movement - **AR/VR navigation**: Hands-free interface - **Low-effort control**: Minimal visible movement required - **Predictive decoding**: Intent detection before motion completion **Known Limitations** - Single degree of freedom 
(1D control only) - Single wrist (dominant hand) - Duplicate timestamps (data quality issue) - Performance below traditional inputs - Extension to 2D control not demonstrated **Citation** ```text Kaifosh, P., Reardon, T.R., & CTRL-labs at Reality Labs. (2025). A generic non-invasive neuromotor interface for human-computer interaction. Nature, 645(8081), 702-711. https://doi.org/10.1038/s41586-025-09255-w ``` **Data Curator** **Yahya Shirazi** SCCN (Swartz Center for Computational Neuroscience) INC (Institute for Neural Computation) University of California San Diego **Version History** **v1.0 (2025-10-01): Initial BIDS conversion** **BIDS Version**: 1.11 | **EMG-BIDS**: BEP-042 | **Updated**: Oct 1, 2025 ## Dataset Information | Dataset ID | `NM000107` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography | | Author (year) | `Kaifosh2025_107` | | Canonical | `FRL_WristControl` | | Importable as | `NM000107`, `Kaifosh2025_107`, `FRL_WristControl` | | Year | 2025 | | Authors | Patrick Kaifosh, Thomas R. Reardon, CTRL-labs at Reality Labs | | License | CC-BY-NC 4.0 | | Citation / DOI | [10.82901/nemar.nm000107](https://doi.org/10.82901/nemar.nm000107) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000107) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) | [Source URL](https://nemar.org/dataexplorer/detail/nm000107) | ### Copy-paste BibTeX ```bibtex @dataset{nm000107, title = {FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography}, author = {Patrick Kaifosh and Thomas R. 
Reardon and CTRL-labs at Reality Labs}, doi = {10.82901/nemar.nm000107}, url = {https://doi.org/10.82901/nemar.nm000107}, } ``` ## Technical Details - Subjects: 100 - Recordings: 182 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 2000 - Duration (hours): 77.18 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 24.9 GB - File count: 182 - Format: BIDS - License: CC-BY-NC 4.0 - DOI: 10.82901/nemar.nm000107 - Source: nemar - OpenNeuro: [nm000107](https://openneuro.org/datasets/nm000107) - NeMAR: [nm000107](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) ## API Reference Use the `NM000107` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography * **Study:** `nm000107` (NeMAR) * **Author (year):** `Kaifosh2025_107` * **Canonical:** `FRL_WristControl` Also importable as: `NM000107`, `Kaifosh2025_107`, `FRL_WristControl`. Modality: `emg`. Subjects: 100; recordings: 182; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000107](https://openneuro.org/datasets/nm000107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000107](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) DOI: [https://doi.org/10.82901/nemar.nm000107](https://doi.org/10.82901/nemar.nm000107) ### Examples ```pycon >>> from eegdash.dataset import NM000107 >>> dataset = NM000107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000107) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) * [eegdash.dataset.NM000155](eegdash.dataset.NM000155.md) # NM000108: emg dataset, 20 subjects *HySER: High-Density Surface Electromyogram Recordings* Access recordings and metadata through EEGDash. **Citation:** Xinyu Jiang, Chenyun Dai, Xiangyu Liu, Jiahao Fan (2021). *HySER: High-Density Surface Electromyogram Recordings*.
[10.82901/nemar.nm000108](https://doi.org/10.82901/nemar.nm000108) Modality: emg Subjects: 20 Recordings: 1514 License: ODC-By-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000108 dataset = NM000108(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000108(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000108( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000108, title = {HySER: High-Density Surface Electromyogram Recordings}, author = {Xinyu Jiang and Chenyun Dai and Xiangyu Liu and Jiahao Fan}, doi = {10.82901/nemar.nm000108}, url = {https://doi.org/10.82901/nemar.nm000108}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000108) [Paper DOI](https://doi.org/10.1109/TNSRE.2021.3082551) [PhysioNet](https://physionet.org/content/hd-semg/1.0.0/) [License](https://opendatacommons.org/licenses/by/1-0/) **HySER: High-Density Surface Electromyogram Recordings** BIDS-formatted version of the Hyser Surface EMG for Hand Gesture Recognition dataset (Jiang et al., 2021). 20 subjects performed 5 task types across 2 sessions using 256-channel high-density surface EMG (HD-sEMG) with simultaneous 5-finger force recordings. 
### View full README **Subjects** 20 right-handed participants (12M, 8F; age 21-34). Two sessions per subject separated by 3-25 days. Demographics in `participants.tsv`. **Tasks** ```text | Task | BIDS label | Description | Trials | |------|-----------|-------------|--------| | Pattern Recognition | `gesture01`-`gesture34` | 34 discrete hand gestures (Table I in paper) | 2 per gesture, each with 3 dynamic + 1 maintenance | | Maximum Voluntary Contraction | `mvc` | MVC flexion/extension per finger | 2 per finger, 10s each | | Single Finger (1-DOF) | `singlefinger` | Triangle force trajectory, individual fingers | 3 per finger, 25s each | | Multi-Finger (N-DOF) | `multifinger` | Simultaneous multi-finger force tracking | 2 per combination, 25s each | | Random | `random` | Free finger contractions at any force | 5 trials, 25s each | ``` **Equipment** - *EMG system:* Quattrocento (OT Bioelettronica, Torino, Italy), 2048 Hz, gain 150, 16-bit ADC - *Electrodes:* Four 8x8 gelled elliptical arrays (5mm x 2.8mm), 10mm inter-electrode distance - *Placement:* Two arrays on extensors (distal/proximal), two on flexors (distal/proximal) of right forearm - *Reference:* Olecranon (elbow); Ground: head of ulna (right leg drive) - *Hardware filters:* HP 10 Hz (2nd order), LP 500 Hz - *Force sensors:* SAS + HSGA (Huatran, Shenzhen, China), 100 Hz, 5 fingers **File Organization** - `*_emg.bdf` - 256-channel EMG data (BDF format) - `*_physio.tsv.gz`
- 5-finger force data (non-PR tasks only) - `*_channels.tsv` - Channel metadata with electrode mapping (`signal_electrode` column) - `*_electrodes.tsv` - Electrode positions in local grid coordinates (mm) - `*_events.tsv` - Event markers (gesture trials or segment boundaries) Non-PR tasks (MVC, singlefinger, multifinger, random) are merged from multiple original recordings into single files per session, with boundary events marking segment junctions. **Coordinate Systems** Four local grid systems (`space-ed`, `space-ep`, `space-fd`, `space-fp`) in mm, anchored to a parent forearm system (`space-forearm`) in anatomical percent coordinates. See `space-*_coordsystem.json` files. **Missing Data** 6 gesture recordings absent in source dataset (not conversion failures): sub-01/ses-2/gesture25, sub-03/ses-1/gesture04, sub-03/ses-2/gesture04, sub-05/ses-1/gesture34, sub-11/ses-1/gesture08, sub-19/ses-2/gesture11. **Conversion** Converted using EMG-2-BIDS (EEGLAB + bids-matlab-tools). Data integrity verified: mean Pearson correlation >0.9999 between original WFDB and converted BDF across all 1514 recordings. See `dataset_description.json` for generator details. 
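The BIDS task labels above can drive a metadata query following the quickstart pattern shown earlier on this page. The sketch below is illustrative: it assumes `task` is among `ALLOWED_QUERY_FIELDS` (only `subject` filtering is demonstrated in the quickstart), so verify the field name before relying on it.

```python
# Hypothetical sketch: a MongoDB-style filter selecting only the
# force-tracking tasks, using the BIDS labels from the task table.
# Assumes "task" is an allowed query field (check ALLOWED_QUERY_FIELDS).
force_tasks = ["mvc", "singlefinger", "multifinger"]
query = {"task": {"$in": force_tasks}}

# The dataset call itself needs network access and local cache space:
# from eegdash.dataset import NM000108
# dataset = NM000108(cache_dir="./data", query=query)
# for rec in dataset:
#     print(rec.subject)
```

Note the filter deliberately avoids the `dataset` key, which the class injects itself.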
## Dataset Information | Dataset ID | `NM000108` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | HySER: High-Density Surface Electromyogram Recordings | | Author (year) | `Jiang2021` | | Canonical | `HySER`, `Hyser` | | Importable as | `NM000108`, `Jiang2021`, `HySER`, `Hyser` | | Year | 2021 | | Authors | Xinyu Jiang, Chenyun Dai, Xiangyu Liu, Jiahao Fan | | License | ODC-By-1.0 | | Citation / DOI | [10.82901/nemar.nm000108](https://doi.org/10.82901/nemar.nm000108) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000108) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) | [Source URL](https://nemar.org/dataexplorer/detail/nm000108) | ### Copy-paste BibTeX ```bibtex @dataset{nm000108, title = {HySER: High-Density Surface Electromyogram Recordings}, author = {Xinyu Jiang and Chenyun Dai and Xiangyu Liu and Jiahao Fan}, doi = {10.82901/nemar.nm000108}, url = {https://doi.org/10.82901/nemar.nm000108}, } ``` ## Technical Details - Subjects: 20 - Recordings: 1514 - Tasks: 38 - Channels: 256 - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 108.2 GB - File count: 1514 - Format: BIDS - License: ODC-By-1.0 - DOI: 10.82901/nemar.nm000108 - Source: nemar - OpenNeuro: [nm000108](https://openneuro.org/datasets/nm000108) - NeMAR: [nm000108](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) ## API Reference Use the `NM000108` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000108(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HySER: High-Density Surface Electromyogram Recordings * **Study:** `nm000108` (NeMAR) * **Author (year):** `Jiang2021` * **Canonical:** `HySER`, `Hyser` Also importable as: `NM000108`, `Jiang2021`, `HySER`, `Hyser`. Modality: `emg`. Subjects: 20; recordings: 1514; tasks: 38. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
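The `query` parameter documented above must not contain the key `dataset`, since the class injects that filter itself and ANDs it with the user query. That constraint can be made explicit with a small guard before constructing the dataset; this helper is a sketch mirroring the documented rule, not part of the eegdash API:

```python
def check_user_query(query):
    """Reject user filters that set "dataset".

    Sketch only: mirrors the documented constraint on the `query`
    parameter; the real validation lives inside EEGDashDataset.
    """
    if query is not None and "dataset" in query:
        raise ValueError("user query must not contain the key 'dataset'")
    return query

# Safe: filters on subject only, as in the quickstart.
safe = check_user_query({"subject": {"$in": ["01", "02"]}})
# check_user_query({"dataset": "nm000108"})  # would raise ValueError
```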
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000108](https://openneuro.org/datasets/nm000108) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000108](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) DOI: [https://doi.org/10.82901/nemar.nm000108](https://doi.org/10.82901/nemar.nm000108) ### Examples ```pycon >>> from eegdash.dataset import NM000108 >>> dataset = NM000108(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000108) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000155](eegdash.dataset.NM000155.md) # NM000109: eeg dataset, 36 subjects *EEG During Mental Arithmetic Tasks* Access recordings and metadata through EEGDash. **Citation:** Igor Zyma, Sergii Tukaev, Ivan Seleznov, Ken Kiyono, Anton Popov, Mariia Chernykh, Oleksii Shpenkov (2019). *EEG During Mental Arithmetic Tasks*.
[10.82901/nemar.nm000109](https://doi.org/10.82901/nemar.nm000109) Modality: eeg Subjects: 36 Recordings: 72 License: ODC-By-1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000109 dataset = NM000109(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000109(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000109( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000109, title = {EEG During Mental Arithmetic Tasks}, author = {Igor Zyma and Sergii Tukaev and Ivan Seleznov and Ken Kiyono and Anton Popov and Mariia Chernykh and Oleksii Shpenkov}, doi = {10.82901/nemar.nm000109}, url = {https://doi.org/10.82901/nemar.nm000109}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000109) **EEG During Mental Arithmetic Tasks** **Introduction** This dataset contains scalp EEG recordings from 36 healthy university students (9 male, 27 female; ages 18-26 years) during mental arithmetic tasks and resting-state periods. The study was designed to investigate EEG correlates of cognitive activity during intensive mental workload involving serial subtraction. The dataset provides brain electrical activity measurements for studying the neural mechanisms of mathematical cognition and cognitive stress responses, with potential applications in cognitive neuroscience research, mental workload assessment, and brain-computer interface development. 
### View full README **Overview of the experiment** Participants were recorded during two conditions: (1) resting-state with eyes closed, and (2) mental arithmetic task involving serial subtraction. During the resting state, participants sat comfortably in a dark, soundproof chamber and were instructed to relax. After a 3-minute adaptation period, a 3-minute resting-state EEG recording was made with eyes closed. Participants then performed a 4-minute mental arithmetic task during which they were presented with a 4-digit minuend and 2-digit subtrahend (e.g., 3141 - 42) and performed serial subtractions mentally. They were instructed to count accurately and quickly in their self-determined rhythm without speaking or using finger movements. The dataset stores the last 3 minutes of the rest period (180 seconds) and the first minute of mental arithmetic performance (60 seconds) for each participant. EEG was recorded using a Neurocom 23-channel monopolar system sampled at 500 Hz with electrodes placed according to the International 10/20 system and referenced to interconnected ear electrodes. Filters included a high-pass filter (0.5 Hz cut-off), low-pass filter (45 Hz cut-off), and power line notch filter (50 Hz).
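The stored segment lengths translate directly into sample counts at the 500 Hz rate; a quick arithmetic check on the figures above:

```python
# Per participant (from the README): 180 s of eyes-closed rest and
# 60 s of mental arithmetic, both sampled at 500 Hz.
SFREQ_HZ = 500
rest_samples = 180 * SFREQ_HZ        # samples in the stored rest segment
arithmetic_samples = 60 * SFREQ_HZ   # samples in the stored arithmetic segment
per_subject_seconds = 180 + 60       # 240 s of stored data per participant
```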
Participants were divided post-hoc into two performance groups based on the number of completed arithmetic operations: “good counters” (Group G, n=24, mean operations=21, SD=7.4) and “bad counters” (Group B, n=12, mean operations=7, SD=3.6). **Description of the preprocessing if any** All recordings included only artifact-free EEG segments, with 30 of 66 initially recorded participants excluded due to excessive oculographic and myographic artifacts. Channel names have been standardized to match the International 10-20 nomenclature. The raw EDF files have been converted to BIDS format with proper channel type assignments (EEG for brain signals). Subject birth years were calculated from age and recording year. Recording dates have been set to January 1st of the recording year due to privacy considerations in the original dataset. Impedance checks confirmed all electrodes were below 5 kΩ prior to recording. **Description of the event values if any** No events.tsv files are provided. The “task” field in the BIDS filenames indicates the experimental condition: - “rest”: resting-state condition - “mentalArithmetic”: mental arithmetic task condition **Citation** When using this dataset, please cite: 1. Zyma I, Tukaev S, Seleznov I, Kiyono K, Popov A, Chernykh M, Shpenkov O. Electroencephalograms during Mental Arithmetic Task Performance. Data. 2019; 4(1):14. [https://doi.org/10.3390/data4010014](https://doi.org/10.3390/data4010014) 2. PhysioNet database: [https://doi.org/10.13026/C2JQ1P](https://doi.org/10.13026/C2JQ1P) 3. Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., … & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220. 
*Data curators:* Pierre Guetschel (BIDS conversion) Original data collection team: - Igor Zyma, PhD (National Technical University of Ukraine) - Sergii Tukaev (National Technical University of Ukraine) - Ivan Seleznov (National Technical University of Ukraine) - Ken Kiyono, PhD - Anton Popov (National Technical University of Ukraine) - Mariia Chernykh (National Technical University of Ukraine) - Oleksii Shpenkov (National Technical University of Ukraine) **Automatic report** *Report automatically generated by `mne_bids.make_report()`.* > The EEG During Mental Arithmetic Tasks dataset was created by Igor Zyma, Sergii Tukaev, Ivan Seleznov, Ken Kiyono, Anton Popov, Mariia Chernykh, and Oleksii Shpenkov and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 36 participants (comprised of 9 male and 27 female participants; handedness were all unknown; ages ranged from 16.0 to 26.0 (mean = 18.25, std = 2.14)). Data was recorded using an EEG system sampled at 500.0 Hz with line noise at n/a Hz. There were 72 scans in total. Recording durations ranged from 62.0 to 188.0 seconds (mean = 120.5, std = 59.71), for a total of 8675.86 seconds of data recorded over all scans. For each dataset, there were on average 21.0 (std = 0.0) recording channels per scan, out of which 21.0 (std = 0.0) were used in analysis (0.0 +/- 0.0 were removed from analysis).
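The catalogue figures can be cross-checked against the README: 36 participants each contribute one rest run and one arithmetic run, and the nominal 240 s of stored data per participant lands close to the 8675.86 s total in the automatic report (actual run durations vary slightly around the nominal lengths):

```python
# Nominal totals implied by the protocol: 36 subjects, two runs each
# (rest + mentalArithmetic), 180 s + 60 s of stored data per subject.
N_SUBJECTS = 36
expected_recordings = N_SUBJECTS * 2             # 72, matching the listing
nominal_total_seconds = N_SUBJECTS * (180 + 60)  # 8640 s vs 8675.86 s reported
```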
## Dataset Information | Dataset ID | `NM000109` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | EEG During Mental Arithmetic Tasks | | Author (year) | `Zyma2019` | | Canonical | — | | Importable as | `NM000109`, `Zyma2019` | | Year | 2019 | | Authors | Igor Zyma, Sergii Tukaev, Ivan Seleznov, Ken Kiyono, Anton Popov, Mariia Chernykh, Oleksii Shpenkov | | License | ODC-By-1.0 | | Citation / DOI | [10.82901/nemar.nm000109](https://doi.org/10.82901/nemar.nm000109) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000109) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) | [Source URL](https://nemar.org/dataexplorer/detail/nm000109) | ### Copy-paste BibTeX ```bibtex @dataset{nm000109, title = {EEG During Mental Arithmetic Tasks}, author = {Igor Zyma and Sergii Tukaev and Ivan Seleznov and Ken Kiyono and Anton Popov and Mariia Chernykh and Oleksii Shpenkov}, doi = {10.82901/nemar.nm000109}, url = {https://doi.org/10.82901/nemar.nm000109}, } ``` ## Technical Details - Subjects: 36 - Recordings: 72 - Tasks: 2 - Channels: 21 - Sampling rate (Hz): 500 - Duration (hours): 2.41 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 174.5 MB - File count: 72 - Format: BIDS - License: ODC-By-1.0 - DOI: 10.82901/nemar.nm000109 - Source: nemar - OpenNeuro: [nm000109](https://openneuro.org/datasets/nm000109) - NeMAR: [nm000109](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) ## API Reference Use the `NM000109` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000109(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG During Mental Arithmetic Tasks * **Study:** `nm000109` (NeMAR) * **Author (year):** `Zyma2019` * **Canonical:** — Also importable as: `NM000109`, `Zyma2019`. Modality: `eeg`. Subjects: 36; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
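The `query` attribute documented above holds the user filter merged with the implicit dataset filter. For disjoint top-level keys that merge behaves like a dict union; the sketch below illustrates the documented behavior and is not the library's actual implementation:

```python
def merged_query(dataset_id, user_query=None):
    """Sketch: AND the implicit dataset filter with a user filter.

    For disjoint top-level keys, a plain dict merge is equivalent to
    a MongoDB $and of the two filters.
    """
    merged = {"dataset": dataset_id}
    if user_query:
        merged.update(user_query)
    return merged

q = merged_query("nm000109", {"subject": {"$in": ["01", "02"]}})
# q combines the dataset filter with the subject filter
```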
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000109](https://openneuro.org/datasets/nm000109) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000109](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) DOI: [https://doi.org/10.82901/nemar.nm000109](https://doi.org/10.82901/nemar.nm000109) ### Examples ```pycon >>> from eegdash.dataset import NM000109 >>> dataset = NM000109(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000109) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000110: eeg dataset, 24 subjects *CHB-MIT* Access recordings and metadata through EEGDash. **Citation:** Jack Connolly, Herman Edwards, Blaise Bourgeois, S. Ted Treves, Ali Shoeb, John Guttag (2010). *CHB-MIT*. 
[10.82901/nemar.nm000110](https://doi.org/10.82901/nemar.nm000110) Modality: eeg Subjects: 24 Recordings: 686 License: ODC-By-1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000110 dataset = NM000110(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000110(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000110( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000110, title = {CHB-MIT}, author = {Jack Connolly and Herman Edwards and Blaise Bourgeois and S. Ted Treves and Ali Shoeb and John Guttag}, doi = {10.82901/nemar.nm000110}, url = {https://doi.org/10.82901/nemar.nm000110}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000110) **CHB-MIT** **Introduction** The CHB-MIT Scalp EEG Database consists of EEG recordings from pediatric subjects with intractable seizures. This dataset was collected at the Children’s Hospital Boston and includes recordings from 22 subjects (5 males, ages 3-22; and 17 females, ages 1.5-19) with epilepsy. The recordings contain 198 annotated seizures and were originally collected to characterize seizures and assess patients’ candidacy for surgical intervention. **Overview of the experiment** ### View full README [DOI](https://doi.org/10.82901/nemar.nm000110) **CHB-MIT** **Introduction** The CHB-MIT Scalp EEG Database consists of EEG recordings from pediatric subjects with intractable seizures. 
This dataset was collected at the Children’s Hospital Boston and includes recordings from 22 subjects (5 males, ages 3-22; and 17 females, ages 1.5-19) with epilepsy. The recordings contain 198 annotated seizures and were originally collected to characterize seizures and assess patients’ candidacy for surgical intervention. **Overview of the experiment** Subjects were monitored for up to several days following withdrawal of anti-seizure medication in a controlled hospital environment. The purpose was to capture and characterize their seizure patterns using continuous scalp EEG monitoring. Each case (subject) contains between 9 and 42 continuous EEG recording files. All signals were sampled at 256 samples per second with 16-bit resolution. Most files contain 23 EEG signals recorded using the International 10-20 system of EEG electrode positions and nomenclature. The recordings use bipolar montages, where each channel represents the potential difference between two electrode sites. Hardware limitations resulted in gaps between consecutively-numbered files, typically 10 seconds or less, during which signals were not recorded. Most recording files contain exactly one hour of digitized EEG signals, though some cases contain two-hour or four-hour recordings. Additional signals such as ECG and vagal nerve stimulus (VNS) were recorded in some cases. **Description of the preprocessing if any** The original .edf files from PhysioNet have been converted to BIDS format. Channel names have been standardized to match the standard 10-05 montage naming convention. Bipolar channel pairs are represented in the format “Electrode1-Electrode2” (e.g., “FP1-F7”). Non-EEG channels such as ECG are preserved with appropriate BIDS channel types. Channels that did not match expected formats or could not be mapped to the standard montage were marked as “misc” type. All protected health information (PHI) in the original files has been replaced with surrogate information. 
Dates have been replaced with surrogate dates while preserving time relationships between files. Subject birthdates are calculated based on age at recording time when available. **Description of the event values if any** The events.tsv files contain seizure onset and offset annotations. Each seizure event has: - onset: Time in seconds from the beginning of the recording when the seizure starts - duration: Duration of the seizure in seconds - value: “seizure” - indicating a seizure event - sample: Sample number at onset The seizure annotations were originally marked with ‘[’ for onset and ‘]’ for offset in the .seizures annotation files and have been converted to BIDS-compliant event format. In total, the dataset contains 198 seizure events across all subjects (182 in the original 23 cases, plus 16 additional seizures from case chb24 added in December 2010). **Citation** When using this dataset, please cite: 1. Ali Shoeb. Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment. PhD Thesis, Massachusetts Institute of Technology, September 2009. [http://hdl.handle.net/1721.1/54669](http://hdl.handle.net/1721.1/54669) 2. Guttag, J. (2010). CHB-MIT Scalp EEG Database (version 1.0.0). PhysioNet. [https://doi.org/10.13026/C2K01R](https://doi.org/10.13026/C2K01R) 3. Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., … & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220. *Data curators:* Pierre Guetschel (BIDS conversion) Original data collection team: - Jack Connolly, REEGT (Children’s Hospital Boston) - Herman Edwards, REEGT (Children’s Hospital Boston) - Blaise Bourgeois, MD (Children’s Hospital Boston) - S. 
Ted Treves, MD (Children’s Hospital Boston) - Ali Shoeb, PhD (Massachusetts Institute of Technology) **- Professor John Guttag (Massachusetts Institute of Technology)** **Automatic report** *Report automatically generated by \`\`mne_bids.make_report()\`\`.* > The CHB-MIT dataset was created by Jack Connolly, Herman Edwards, Blaise Bourgeois, S. Ted Treves, Ali Shoeb, and John Guttag and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 24 participants (comprised of 5 male and 18 female participants; handedness were all unknown; ages ranged from 71.0 to 91.0 (mean = 79.04, std = 5.51; 1 with unknown age)) . Data was recorded using an EEG system sampled at 256.0 Hz with line noise at n/a Hz. There were 686 scans in total. Recording durations ranged from 600.0 to 14427.0 seconds (mean = 5158.26, std = 3657.58), for a total of 3538564.32 seconds of data recorded over all scans. For each dataset, there were on average 26.03 (std = 3.81) recording channels per scan, out of which 26.03 (std = 3.81) were used in analysis (0.0 +/- 0.0 were removed from analysis). ## Dataset Information | Dataset ID | `NM000110` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CHB-MIT | | Author (year) | `Connolly2010` | | Canonical | `CHBMIT`, `CHB_MIT` | | Importable as | `NM000110`, `Connolly2010`, `CHBMIT`, `CHB_MIT` | | Year | 2010 | | Authors | Jack Connolly, Herman Edwards, Blaise Bourgeois, S. 
Ted Treves, Ali Shoeb, John Guttag | | License | ODC-By-1.0 | | Citation / DOI | [10.82901/nemar.nm000110](https://doi.org/10.82901/nemar.nm000110) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000110) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) | [Source URL](https://nemar.org/dataexplorer/detail/nm000110) | ### Copy-paste BibTeX ```bibtex @dataset{nm000110, title = {CHB-MIT}, author = {Jack Connolly and Herman Edwards and Blaise Bourgeois and S. Ted Treves and Ali Shoeb and John Guttag}, doi = {10.82901/nemar.nm000110}, url = {https://doi.org/10.82901/nemar.nm000110}, } ``` ## Technical Details - Subjects: 24 - Recordings: 686 - Tasks: 1 - Channels: 23 (306), 28 (259), 38 (39), 22 (36), 24 (30), 29 (14), 25, 31 - Sampling rate (Hz): 256 - Duration (hours): 982.9345334201388 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 42.6 GB - File count: 686 - Format: BIDS - License: ODC-By-1.0 - DOI: 10.82901/nemar.nm000110 - Source: nemar - OpenNeuro: [nm000110](https://openneuro.org/datasets/nm000110) - NeMAR: [nm000110](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) ## API Reference Use the `NM000110` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000110(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CHB-MIT * **Study:** `nm000110` (NeMAR) * **Author (year):** `Connolly2010` * **Canonical:** `CHBMIT`, `CHB_MIT` Also importable as: `NM000110`, `Connolly2010`, `CHBMIT`, `CHB_MIT`. Modality: `eeg`. Subjects: 24; recordings: 686; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000110](https://openneuro.org/datasets/nm000110) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000110](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) DOI: [https://doi.org/10.82901/nemar.nm000110](https://doi.org/10.82901/nemar.nm000110) ### Examples ```pycon >>> from eegdash.dataset import NM000110 >>> dataset = NM000110(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000110) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000112: eeg dataset, 123 subjects *FACED - Finer-grained Affective Computing EEG Dataset* Access recordings and metadata through EEGDash. **Citation:** Yisi Liu, Olga Sourina, Minh Khoa Nguyen (2023). *FACED - Finer-grained Affective Computing EEG Dataset*. [10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112) Modality: eeg Subjects: 123 Recordings: 123 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000112 dataset = NM000112(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000112(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000112( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000112, title = {FACED - Finer-grained Affective Computing EEG Dataset}, author = {Yisi Liu and Olga Sourina and Minh Khoa Nguyen}, doi = {10.82901/nemar.nm000112}, url = {https://doi.org/10.82901/nemar.nm000112}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000112) **FACED - Finer-grained Affective Computing EEG Dataset** **Introduction** The Finer-grained Affective Computing EEG Dataset (FACED) contains scalp EEG recordings from 123 healthy participants who watched 28 emotion-eliciting video clips designed to evoke nine different emotion categories. The dataset includes four negative emotions (anger, fear, disgust, sadness) from Ekman’s basic emotions and four positive emotions (amusement, inspiration, joy, tenderness) selected based on recent psychological and neuroscience progress and application needs. Participants provided detailed self-reported emotion ratings on 12 dimensions: eight emotions, arousal, valence, liking, and familiarity. The dataset is designed to facilitate cross-subject affective computing research and development of EEG-based emotion recognition algorithms for real-world applications. **Overview of the experiment** ### View full README [DOI](https://doi.org/10.82901/nemar.nm000112) **FACED - Finer-grained Affective Computing EEG Dataset** **Introduction** The Finer-grained Affective Computing EEG Dataset (FACED) contains scalp EEG recordings from 123 healthy participants who watched 28 emotion-eliciting video clips designed to evoke nine different emotion categories. The dataset includes four negative emotions (anger, fear, disgust, sadness) from Ekman’s basic emotions and four positive emotions (amusement, inspiration, joy, tenderness) selected based on recent psychological and neuroscience progress and application needs. Participants provided detailed self-reported emotion ratings on 12 dimensions: eight emotions, arousal, valence, liking, and familiarity. 
The dataset is designed to facilitate cross-subject affective computing research and development of EEG-based emotion recognition algorithms for real-world applications. **Overview of the experiment** Participants (123 subjects, 75 female, ages 17-38, mean=23.2 years) were seated 60 cm from a 22-inch LCD monitor in a regular office environment. Each trial consisted of: (1) a 5-second fixation cross, (2) a video clip of varying length (typically 30-60 seconds), and (3) subjective emotional rating on 12 items (anger, fear, disgust, sadness, amusement, inspiration, joy, tenderness, valence, arousal, liking, familiarity) on a continuous 0-7 scale, followed by at least 30 seconds rest. Video clips were presented in blocks: three positive blocks, three negative blocks, and one neutral block, with 20 arithmetic problems between blocks to minimize carryover effects. The 28 video clips were designed to target nine emotion categories, with randomized presentation order across participants. EEG was recorded using a 32-channel biosignal recording system sampled at either 1000 Hz (92 subjects) or 250 Hz (31 subjects), with channels positioned according to the International 10-20 system. Signal units were recorded in either Volts or microVolts depending on the hardware configuration used. 
*Video stimulus information:* The dataset includes 28 video clips designed to elicit nine emotion categories (Trigger values 1–28): - Anger (Videos 1-3): Durations 73-81 seconds, negative valence - Disgust (Videos 4-6): Durations 69-91 seconds, negative valence - Fear (Videos 7-9): Durations 56-106 seconds, negative valence - Sadness (Videos 10-12): Durations 45-82 seconds, negative valence - Neutral (Videos 13-16): Durations 35-43 seconds, neutral valence - Amusement (Videos 17-19): Durations 56-73 seconds, positive valence - Inspiration (Videos 20-22): Durations 76-129 seconds, positive valence - Joy (Videos 23-25): Durations 34-68 seconds, positive valence - Tenderness (Videos 26-28): Durations 54-77 seconds, positive valence Metadata for each video (duration, source film, source database, valence, targeted emotion) is read from Stimuli_info.xlsx. *Event markers (from evt.bdf annotations):* - 100: Task/block start - 101: Video onset - 102: Video offset - 1–28: Video index (appears just before 101, used to link to stimulus metadata) - 201/202: Block boundary markers - “Start Impedance” / “Stop Impedance”: Technical markers (ignored) The conversion script reads evt.bdf annotations for each subject, parses video presentation spans (from video index + 101 to 102), and creates MNE Annotations with the source film title (video_title) as description. These annotations are exported to BIDS events.tsv with extra columns: - emotion_label: targeted emotion category (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy, Tenderness) - binary_label: positive/negative/neutral classification - video_index: 1–28 - Self-reported ratings (Joy, Tenderness, Inspiration, Amusement, Anger, Disgust, Fear, Sadness, Arousal, Valence, Familiarity, Liking) **Description of the preprocessing if any** Raw BDF files from the biosignal recording system have been converted to BIDS format. Channel names are standardized to match the International 10-20 nomenclature. 
Subjects have been assigned numeric IDs (sub-000 through sub-122) corresponding to their original subject designations in the dataset. Recording dates have been set to a default value (2023-01-01) due to privacy considerations, while time relationships between files are preserved. Subject demographic information (age, sex) has been extracted from the Recording_info.csv file and properly formatted for BIDS. Stimulus timing information from the evt.bdf event files has been parsed and enriched with metadata from Stimuli_info.xlsx. Each video presentation is annotated with the targeted emotion category (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy, Tenderness) and includes self-reported ratings from After_remarks.mat when available. **Citation** When using this dataset, please cite: 1. Liu, Y., Sourina, O., & Nguyen, M. K. (2023). Finer-grained Affective Computing EEG Dataset. Scientific Data, 10(1), 809. [https://doi.org/10.1038/s41597-023-02650-w](https://doi.org/10.1038/s41597-023-02650-w) 2. Synapse Platform: [https://www.synapse.org/#!Synapse:syn50614194](https://www.synapse.org/#!Synapse:syn50614194) 3. The dataset is available at the Synapse platform repository. *Data curators:* Pierre Guetschel (BIDS conversion) Original data collection team: - Yisi Liu (Nanyang Technological University) - Olga Sourina (Nanyang Technological University) **- Minh Khoa Nguyen (Nanyang Technological University)** **Automatic report** *Report automatically generated by \`\`mne_bids.make_report()\`\`.* > The FACED - Finer-grained Affective Computing EEG Dataset dataset was created by Yisi Liu, Olga Sourina, and Minh Khoa Nguyen and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 123 participants (comprised of 48 male and 75 female participants; handedness were all unknown; ages ranged from 17.0 to 38.0 (mean = 22.94, std = 4.66)) . 
Data was recorded using an EEG system (Biosemi) sampled at 1000.0, and 250.0 Hz with line noise at n/a Hz. There were 123 scans in total. Recording durations ranged from 3468.0 to 6743.0 seconds (mean = 4544.83, std = 647.24), for a total of 559013.71 seconds of data recorded over all scans. For each dataset, there were on average 32.0 (std = 0.0) recording channels per scan, out of which 32.0 (std = 0.0) were used in analysis (0.0 +/- 0.0 were removed from analysis). ## Dataset Information | Dataset ID | `NM000112` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | FACED - Finer-grained Affective Computing EEG Dataset | | Author (year) | `Liu2024_112` | | Canonical | `FACED` | | Importable as | `NM000112`, `Liu2024_112`, `FACED` | | Year | 2023 | | Authors | Yisi Liu, Olga Sourina, Minh Khoa Nguyen | | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000112) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) | [Source URL](https://nemar.org/dataexplorer/detail/nm000112) | ### Copy-paste BibTeX ```bibtex @dataset{nm000112, title = {FACED - Finer-grained Affective Computing EEG Dataset}, author = {Yisi Liu and Olga Sourina and Minh Khoa Nguyen}, doi = {10.82901/nemar.nm000112}, url = {https://doi.org/10.82901/nemar.nm000112}, } ``` ## Technical Details - Subjects: 123 - Recordings: 123 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000 (68), 250 (55) - Duration (hours): 155.28158666666664 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 31.4 GB - File count: 123 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000112 - Source: nemar - OpenNeuro: [nm000112](https://openneuro.org/datasets/nm000112) - NeMAR: 
[nm000112](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) ## API Reference Use the `NM000112` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000112(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FACED - Finer-grained Affective Computing EEG Dataset * **Study:** `nm000112` (NeMAR) * **Author (year):** `Liu2024_112` * **Canonical:** `FACED` Also importable as: `NM000112`, `Liu2024_112`, `FACED`. Modality: `eeg`. Subjects: 123; recordings: 123; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
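To make the MongoDB-style operator syntax used by `query` concrete, here is a toy matcher for a subset of operators. This is an illustration under stated assumptions — `matches` is not a library function, eegdash evaluates queries against its metadata database, and the record fields below are invented to mimic the mixed 1000/250 Hz sampling rates listed in the technical details:

```python
# Toy evaluator for a subset of MongoDB-style operators ($in, $gte).
# Illustration only -- eegdash does not evaluate queries with this function.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$in": [...]}
            for op, arg in cond.items():
                if op == "$in" and value not in arg:
                    return False
                if op == "$gte" and (value is None or value < arg):
                    return False
        elif value != cond:  # plain value means equality
            return False
    return True

# Invented records mimicking the two sampling-rate groups noted above.
records = [
    {"subject": "01", "sfreq": 1000},
    {"subject": "02", "sfreq": 250},
]
print([r["subject"] for r in records if matches(r, {"sfreq": 250})])
# ['02']
```

A plain value is an equality test, while a nested dict applies operators; both forms can be mixed in one query, as in the "Advanced query" snippet above.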
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000112](https://openneuro.org/datasets/nm000112) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000112](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) DOI: [https://doi.org/10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112) ### Examples ```pycon >>> from eegdash.dataset import NM000112 >>> dataset = NM000112(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000112) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000113: eeg dataset, 15 subjects *2020 BCI competition, track 3* Access recordings and metadata through EEGDash. **Citation:** Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán (2020). *2020 BCI competition, track 3*. 
[10.82901/nemar.nm000113](https://doi.org/10.82901/nemar.nm000113) Modality: eeg Subjects: 15 Recordings: 45 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000113 dataset = NM000113(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000113(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000113( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000113, title = {2020 BCI competition, track 3}, author = {Seong-Whan Lee and Klaus-Robert Müller and José del R. Millán}, doi = {10.82901/nemar.nm000113}, url = {https://doi.org/10.82901/nemar.nm000113}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000113) **2020 BCI competition, track 3** **Introduction** The 2020 BCI Competition Track 3 dataset contains EEG recordings from participants performing imagined speech tasks. This dataset was designed for brain-computer interface research focused on decoding imagined speech from brain signals. The dataset includes recordings from 15 subjects performing five different imagined speech commands: “Hello”, “Help me”, “Stop”, “Thank you”, and “Yes”. The data is divided into training, validation, and test sets to facilitate machine learning approaches to imagined speech classification. 
**Overview of the experiment** ### View full README [DOI](https://doi.org/10.82901/nemar.nm000113) **2020 BCI competition, track 3** **Introduction** The 2020 BCI Competition Track 3 dataset contains EEG recordings from participants performing imagined speech tasks. This dataset was designed for brain-computer interface research focused on decoding imagined speech from brain signals. The dataset includes recordings from 15 subjects performing five different imagined speech commands: “Hello”, “Help me”, “Stop”, “Thank you”, and “Yes”. The data is divided into training, validation, and test sets to facilitate machine learning approaches to imagined speech classification. **Overview of the experiment** Participants performed imagined speech tasks where they were instructed to mentally articulate five different phrases without producing any audible speech or overt mouth movements. The five imagined speech commands were: “Hello”, “Help me”, “Stop”, “Thank you”, and “Yes”. EEG signals were recorded during these mental articulation tasks. The dataset is split into three sets: Training Set (run-00), Validation Set (run-01), and Test Set (run-02). Each recording session contains multiple trials of imagined speech, with each trial corresponding to one of the five command categories. The EEG data was recorded using a multi-channel EEG system, and the exact number of channels and their montage are preserved in the BIDS format. **Description of the preprocessing if any** The original MATLAB (.mat) files from the BCI Competition have been converted to BIDS-compliant EDF format. For training and validation sets, the data was stored in structured MATLAB arrays with fields for EEG data (‘x’), labels (‘y’), sampling frequency (‘fs’), and channel labels (‘clab’). For the test set, the data was stored in HDF5 format and labels were extracted from the Track3_Answer Sheet_Test.xlsx file. The EEG data has been scaled from the original units to Volts (multiplied by 1e-6). 
The epoched data structure from the original dataset has been concatenated into continuous recordings for BIDS compliance, with annotations marking the onset and duration of each imagined speech trial. Channel names and montage information from the original ‘mnt’ (montage) structure have been preserved in the BIDS format. **Description of the event values if any** The events.tsv files contain annotations for each imagined speech trial. Each event has: - onset: Time in seconds from the beginning of the recording when the imagined speech trial begins - duration: Duration of the trial in seconds (calculated as the number of samples in the epoch divided by the sampling frequency) - value: The imagined speech command label, one of: “Hello”, “Help me”, “Stop”, “Thank you”, or “Yes” - trial_type: Corresponds to the value field These annotations enable temporal segmentation of the continuous EEG data by imagined speech command type. The labels for the training and validation sets were extracted from the ‘y’ field in the original MATLAB structures (one-hot encoded vectors converted to class indices). For the test set, labels were obtained from the Track3_Answer Sheet_Test.xlsx file provided with the competition data. **Citation** When using this dataset, please cite: 1. The 2020 BCI Competition Track 3: [https://osf.io/pq7vb/overview](https://osf.io/pq7vb/overview) 2. Original competition organizers and data collectors (please refer to the competition website for complete citation information) *Data curators:* Pierre Guetschel (BIDS conversion) **Competition co-chairs: Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán** **Automatic report** *Report automatically generated by \`\`mne_bids.make_report()\`\`.* > The 2020 BCI competition, track 3 dataset was created by Seong-Whan Lee, Klaus-Robert Müller, and José del R. Millán and conforms to BIDS version 1.7.0. 
This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 15 participants (sex, handedness, and age were all unknown). Data was recorded using an EEG system sampled at 256.0 Hz with line noise at n/a Hz. There were 45 scans in total. Recording durations ranged from 155.27 to 931.64 seconds (mean = 414.06, std = 365.98), for a total of 18632.64 seconds of data recorded over all scans. For each dataset, there were on average 64.0 (std = 0.0) recording channels per scan, out of which 64.0 (std = 0.0) were used in analysis (0.0 +/- 0.0 were removed from analysis). ## Dataset Information | Dataset ID | `NM000113` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | 2020 BCI competition, track 3 | | Author (year) | `Lee2020` | | Canonical | — | | Importable as | `NM000113`, `Lee2020` | | Year | 2020 | | Authors | Seong-Whan Lee, Klaus-Robert Müller, José del R. Millán | | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000113](https://doi.org/10.82901/nemar.nm000113) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000113) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) | [Source URL](https://nemar.org/dataexplorer/detail/nm000113) | ### Copy-paste BibTeX ```bibtex @dataset{nm000113, title = {2020 BCI competition, track 3}, author = {Seong-Whan Lee and Klaus-Robert Müller and José del R. 
Millán}, doi = {10.82901/nemar.nm000113}, url = {https://doi.org/10.82901/nemar.nm000113}, } ``` ## Technical Details - Subjects: 15 - Recordings: 45 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 256 - Duration (hours): 5.175732421875 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 585.2 MB - File count: 45 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000113 - Source: nemar - OpenNeuro: [nm000113](https://openneuro.org/datasets/nm000113) - NeMAR: [nm000113](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) ## API Reference Use the `NM000113` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000113(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 2020 BCI competition, track 3 * **Study:** `nm000113` (NeMAR) * **Author (year):** `Lee2020` * **Canonical:** — Also importable as: `NM000113`, `Lee2020`. Modality: `eeg`. Subjects: 15; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000113](https://openneuro.org/datasets/nm000113) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000113](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) DOI: [https://doi.org/10.82901/nemar.nm000113](https://doi.org/10.82901/nemar.nm000113) ### Examples ```pycon >>> from eegdash.dataset import NM000113 >>> dataset = NM000113(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000113) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000114: eeg dataset, 64 subjects *MDD Patients and Healthy Controls EEG Data* Access recordings and metadata through EEGDash. **Citation:** Wajid Mumtaz, Likun Xia, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin, Mazhar Hussain, Aamir Saeed Malik (2017). *MDD Patients and Healthy Controls EEG Data*. 
[10.82901/nemar.nm000114](https://doi.org/10.82901/nemar.nm000114) Modality: eeg Subjects: 64 Recordings: 181 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000114 dataset = NM000114(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000114(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000114( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000114, title = {MDD Patients and Healthy Controls EEG Data}, author = {Wajid Mumtaz and Likun Xia and Syed Saad Azhar Ali and Mohd Azhar Mohd Yasin and Mazhar Hussain and Aamir Saeed Malik}, doi = {10.82901/nemar.nm000114}, url = {https://doi.org/10.82901/nemar.nm000114}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000114) **MDD Patients and Healthy Controls EEG Data** **Introduction** This dataset contains resting-state and task-based EEG recordings from patients diagnosed with Major Depressive Disorder (MDD) and healthy control participants (H). The data was collected to investigate differences in brain electrical activity between MDD patients and healthy individuals across different mental states. The dataset includes 34 participants (19 healthy controls and 15 MDD patients) with recordings during eyes-closed rest, eyes-open rest, and an auditory oddball P300 task. 
This dataset enables research on neurophysiological biomarkers of depression, comparative studies of brain activity patterns between clinical and healthy populations, and investigation of attentional processing differences in MDD. ### View full README [DOI](https://doi.org/10.82901/nemar.nm000114) **Overview of the experiment** Participants underwent three recording conditions: (1) eyes-closed resting state, (2) eyes-open resting state, and (3) an auditory oddball P300 task. During the resting-state conditions, participants were instructed to sit quietly with either their eyes closed (EC) or eyes open (EO) for the duration of the recording. In the P300 task, participants were presented with auditory stimuli consisting of frequent standard tones (80% probability) and infrequent target tones (20% probability), and were required to mentally count the target tones. EEG was recorded using a 19-channel monopolar EEG system with electrodes positioned according to the International 10-20 system, referenced to linked ears (A1+A2). The sampling rate was 256 Hz. 
Hardware filters included a high-pass filter at 0.5 Hz and a low-pass filter at 70 Hz, with a 50 Hz notch filter to remove power line noise. All electrode impedances were maintained below 5 kΩ. The recordings were conducted in a controlled environment to minimize external artifacts. One participant (MDD S15) had two separate recording sessions, resulting in duplicate recordings for this subject. **Description of the preprocessing if any** The original EDF files have been converted to BIDS format. Channel names have been standardized by extracting the electrode names from the original “EEG <electrode>-<reference>” format. Channels originally referenced to the left ear (LE) are now labeled with just the electrode name, while other bipolar derivations (e.g., A2-A1, 23A-23R, 24A-24R) retain their bipolar notation in the “<channel>-<channel>” format. The dataset includes 19 standard EEG channels plus three additional bipolar channels. Subject IDs have been prefixed with their diagnostic group (“H” for healthy controls, “MDD” for Major Depressive Disorder patients) to facilitate group comparisons. All recordings were artifact-free segments selected from longer recording sessions, with epochs containing excessive oculographic or myographic artifacts excluded during initial data collection. **Description of the event values if any** No events.tsv files are provided as the recordings represent continuous resting-state or task conditions without discrete trial markers. The experimental condition for each recording is indicated by the “task” field in the BIDS filename: - “eyesClosed”: eyes-closed resting state - “eyesOpen”: eyes-open resting state - “P300”: auditory oddball task (continuous recording during the entire task block) For the P300 task recordings, while individual stimulus onsets are not marked in events.tsv files, the entire recording represents the period during which participants performed the auditory oddball counting task. **Citation** When using this dataset, please cite: 1. 
Mumtaz, W., Xia, L., Ali, S. S. A., Yasin, M. A. M., Hussain, M., & Malik, A. S. (2017). Electroencephalogram (EEG)-based computer-aided technique to diagnose major depressive disorder (MDD). Biomedical Signal Processing and Control, 31, 108-115. [https://doi.org/10.1016/j.bspc.2016.07.006](https://doi.org/10.1016/j.bspc.2016.07.006) 2. Mumtaz, Wajid (2016). MDD Patients and Healthy Controls EEG Data (New). figshare. Dataset. [https://doi.org/10.6084/m9.figshare.4244171.v2](https://doi.org/10.6084/m9.figshare.4244171.v2) *Data curators:* Pierre Guetschel (BIDS conversion) Original data collection team: - Wajid Mumtaz (Universiti Teknologi PETRONAS) - Likun Xia (Universiti Teknologi PETRONAS) - Syed Saad Azhar Ali (Universiti Teknologi PETRONAS) - Mohd Azhar Mohd Yasin (Universiti Teknologi PETRONAS) - Mazhar Hussain (Universiti Teknologi PETRONAS) - Aamir Saeed Malik (Universiti Teknologi PETRONAS) **Automatic report** *Report automatically generated by `mne_bids.make_report()`.* > The MDD Patients and Healthy Controls EEG Data dataset was created by Wajid Mumtaz, Likun Xia, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin, Mazhar Hussain, and Aamir Saeed Malik and conforms to BIDS version 1.7.0. This report was generated with MNE-BIDS ([https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)). The dataset consists of 64 participants (sex, handedness, and age were all unknown). Data was recorded using an EEG system sampled at 256.0 Hz with line noise at n/a Hz. There were 181 scans in total. Recording durations ranged from 180.0 to 686.0 seconds (mean = 408.84, std = 155.23), for a total of 74000.29 seconds of data recorded over all scans. For each dataset, there were on average 21.24 (std = 0.97) recording channels per scan, out of which 21.24 (std = 0.97) were used in analysis (0.0 +/- 0.0 were removed from analysis). 
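The channel-name standardization described in the preprocessing notes can be sketched as follows. The rule and the example input labels are illustrative assumptions based on the description above (strip an "EEG " prefix, drop a left-ear "LE" reference suffix, keep other bipolar pairs), not the curators' actual conversion code.

```python
# Hypothetical sketch of the channel renaming described above.
def standardize_channel(name):
    if name.startswith("EEG "):
        name = name[4:]                 # "EEG Fp1-LE" -> "Fp1-LE"
    base, sep, ref = name.partition("-")
    if ref == "LE":                     # left-ear reference -> electrode only
        return base
    return name                         # bipolar pairs like "A2-A1" unchanged

names = [standardize_channel(c) for c in ["EEG Fp1-LE", "EEG Cz-LE", "A2-A1"]]
```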
## Dataset Information | Dataset ID | `NM000114` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MDD Patients and Healthy Controls EEG Data | | Author (year) | `Mumtaz2017` | | Canonical | — | | Importable as | `NM000114`, `Mumtaz2017` | | Year | 2017 | | Authors | Wajid Mumtaz, Likun Xia, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin, Mazhar Hussain, Aamir Saeed Malik | | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000114](https://doi.org/10.82901/nemar.nm000114) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000114) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) | [Source URL](https://nemar.org/dataexplorer/detail/nm000114) | ### Copy-paste BibTeX ```bibtex @dataset{nm000114, title = {MDD Patients and Healthy Controls EEG Data}, author = {Wajid Mumtaz and Likun Xia and Syed Saad Azhar Ali and Mohd Azhar Mohd Yasin and Mazhar Hussain and Aamir Saeed Malik}, doi = {10.82901/nemar.nm000114}, url = {https://doi.org/10.82901/nemar.nm000114}, } ``` ## Technical Details - Subjects: 64 - Recordings: 181 - Tasks: 3 - Channels: 22 (112), 20 (69) - Sampling rate (Hz): 256 - Duration (hours): 20.55563693576389 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 812.8 MB - File count: 181 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000114 - Source: nemar - OpenNeuro: [nm000114](https://openneuro.org/datasets/nm000114) - NeMAR: [nm000114](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) ## API Reference Use the `NM000114` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MDD Patients and Healthy Controls EEG Data * **Study:** `nm000114` (NeMAR) * **Author (year):** `Mumtaz2017` * **Canonical:** — Also importable as: `NM000114`, `Mumtaz2017`. Modality: `eeg`. Subjects: 64; recordings: 181; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
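As a minimal sketch of the parameter contract above (user filters are ANDed with a fixed dataset filter, and the `dataset` key is reserved), one might write the following. This mirrors the documented behaviour only; it is not the actual EEGDash implementation.

```python
# Hypothetical sketch: merge a user-supplied MongoDB-style query with the
# fixed dataset filter, rejecting the reserved "dataset" key.
def merge_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

q = merge_query("nm000114", {"subject": {"$in": ["01", "02"]}})
```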
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000114](https://openneuro.org/datasets/nm000114) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000114](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) DOI: [https://doi.org/10.82901/nemar.nm000114](https://doi.org/10.82901/nemar.nm000114) ### Examples ```pycon >>> from eegdash.dataset import NM000114 >>> dataset = NM000114(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000114) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000115: eeg dataset, 4 subjects *Zhou2016* Access recordings and metadata through EEGDash. **Citation:** Bangyan Zhou, Xiaopei Wu, Zongtan Lv, Lei Zhang, Xiaojin Guo (2016). *Zhou2016*. 
[10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) Modality: eeg Subjects: 4 Recordings: 24 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000115 dataset = NM000115(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000115(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000115( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000115, title = {Zhou2016}, author = {Bangyan Zhou and Xiaopei Wu and Zongtan Lv and Lei Zhang and Xiaojin Guo}, doi = {10.82901/nemar.nm000115}, url = {https://doi.org/10.82901/nemar.nm000115}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000115) README **Introduction** This dataset contains EEG recordings from four subjects performing motor imagery tasks (left hand, right hand, and feet), originally published by Zhou et al. (2016). The data was reformatted into BIDS from its Zenodo version ([https://zenodo.org/records/16534752](https://zenodo.org/records/16534752)), which was itself generated by MOABB (Mother of All BCI Benchmarks, [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb)). The original study investigated a fully automated trial selection method for optimization of motor imagery based brain-computer interfaces. **Overview of the experiment** Four participants each completed three recording sessions separated by days to months. Each session contained two consecutive runs with inter-run breaks. 
Each run comprised 75 trials (25 per class: left hand, right hand, and feet imagery), for a total of 450 trials per subject across all sessions. Trials began with an auditory cue, followed by a 5-second visual arrow stimulus indicating the motor imagery task to perform, then a 4-second rest period. EEG was recorded from 14 channels placed according to the extended 10/20 system (Fp1, Fp2, FC3, FCz, FC4, C3, Cz, C4, CP3, CPz, CP4, O1, Oz, O2) at a sampling frequency of 250 Hz with a 50 Hz power line frequency. ### View full README [DOI](https://doi.org/10.82901/nemar.nm000115) 
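The trial bookkeeping above can be sanity-checked with a few lines; the numbers are taken directly from the description (3 sessions x 2 runs x 75 trials, 25 trials per class per run).

```python
# Sanity check of the Zhou2016 trial counts described above.
sessions, runs_per_session, trials_per_run = 3, 2, 75
classes = ["left_hand", "right_hand", "feet"]

trials_per_subject = sessions * runs_per_session * trials_per_run
trials_per_class_per_run = trials_per_run // len(classes)
```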
**Dataset structure** - 4 subjects (sub-1 through sub-4) - 3 sessions per subject (ses-0, ses-1, ses-2) - 2 runs per session (run-0, run-1) - 24 EEG recordings total in EDF format - 14 EEG channels, 250 Hz sampling rate - 3 event types: left_hand (value=2), right_hand (value=3), feet (value=1) - Electrode positions in CapTrak coordinate system **Preprocessing** The data distributed here has undergone minimal preprocessing by MOABB prior to BIDS conversion: - Extraction of the 14 EEG channels from the original recordings - Annotation of motor imagery events (left_hand, right_hand, feet) with 5-second durations - Resampling to 250 Hz - Export to EDF format **Original and related datasets** This dataset was reformatted into BIDS from the Zenodo archive at [https://zenodo.org/records/16534752](https://zenodo.org/records/16534752). That archive was generated by MOABB v1.2.0 from the original data accompanying the publication. The original study and data are described in: Zhou B, Wu X, Lv Z, Zhang L, Guo X (2016). A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface. PLoS ONE 11(9): e0162657. [https://doi.org/10.1371/journal.pone.0162657](https://doi.org/10.1371/journal.pone.0162657) **References** Zhou B, Wu X, Lv Z, Zhang L, Guo X (2016). A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface. PLoS ONE 11(9): e0162657. [https://doi.org/10.1371/journal.pone.0162657](https://doi.org/10.1371/journal.pone.0162657) Appelhoff S, Sanderson M, Brooks T, et al. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet CR, Appelhoff S, Gorgolewski KJ, et al. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Data curator for NEMAR version: Arnaud Delorme (UCSD, La Jolla, CA, USA) ## Dataset Information | Dataset ID | `NM000115` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Zhou2016 | | Author (year) | `Zhou2016` | | Canonical | — | | Importable as | `NM000115`, `Zhou2016` | | Year | 2016 | | Authors | Bangyan Zhou, Xiaopei Wu, Zongtan Lv, Lei Zhang, Xiaojin Guo | | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000115) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) | [Source URL](https://nemar.org/dataexplorer/detail/nm000115) | ### Copy-paste BibTeX ```bibtex @dataset{nm000115, title = {Zhou2016}, author = {Bangyan Zhou and Xiaopei Wu and Zongtan Lv and Lei Zhang and Xiaojin Guo}, doi = {10.82901/nemar.nm000115}, url = {https://doi.org/10.82901/nemar.nm000115}, } ``` ## Technical Details - Subjects: 4 - Recordings: 24 - Tasks: 1 - Channels: 14 - Sampling rate (Hz): 250 - Duration (hours): 6.268284444444444 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 152.1 MB - File count: 24 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000115 - Source: nemar - OpenNeuro: [nm000115](https://openneuro.org/datasets/nm000115) - NeMAR: [nm000115](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) ## API Reference Use the `NM000115` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000115(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000115` (NeMAR) * **Author (year):** `Zhou2016` * **Canonical:** — Also importable as: `NM000115`, `Zhou2016`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000115](https://openneuro.org/datasets/nm000115) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000115](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000115 >>> dataset = NM000115(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000115) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000118: eeg dataset, 9 subjects *Nakanishi2015 – SSVEP Nakanishi 2015 dataset* Access recordings and metadata through EEGDash. **Citation:** Masaki Nakanishi, Yijun Wang, Yu-Te Wang, Tzyy-Ping Jung (2019). *Nakanishi2015 – SSVEP Nakanishi 2015 dataset*. 
Modality: eeg Subjects: 9 Recordings: 9 License: — Source: nemar Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000118 dataset = NM000118(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000118(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000118( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000118, title = {Nakanishi2015 – SSVEP Nakanishi 2015 dataset}, author = {Masaki Nakanishi and Yijun Wang and Yu-Te Wang and Tzyy-Ping Jung}, } ``` ## About This Dataset **SSVEP Nakanishi 2015 dataset** SSVEP Nakanishi 2015 dataset. **Dataset Overview** - **Code**: Nakanishi2015 - **Paradigm**: ssvep - **DOI**: 10.1371/journal.pone.0140703 ### View full README 
**Dataset Overview** - **Code**: Nakanishi2015 - **Paradigm**: ssvep - **DOI**: 10.1371/journal.pone.0140703 - **Subjects**: 9 - **Sessions per subject**: 1 - **Events**: 9.25=1, 11.25=2, 13.25=3, 9.75=4, 11.75=5, 13.75=6, 10.25=7, 12.25=8, 14.25=9, 10.75=10, 12.75=11, 14.75=12 - **Trial interval**: [0.15, 4.3] s - **File format**: mat - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: PO7, PO3, POz, PO4, PO8, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: Biosemi ActiveTwo - **Reference**: CMS/DRL - **Sensor type**: EEG - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 9 - **Health status**: healthy - **Age**: mean=28.0 - **Gender distribution**: male=9, female=1 - **BCI experience**: not specified **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 12 - **Class labels**: 9.25, 11.25, 13.25, 9.75, 11.75, 13.75, 10.25, 12.25, 14.25, 10.75, 12.75, 14.75 - **Trial duration**: 4.0 s - **Study design**: 12-class SSVEP target identification task with joint frequency and phase coding - **Feedback type**: none - **Stimulus type**: flickering - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects were asked to gaze at one of the visual stimuli indicated by the stimulus program in a random order for 4s. At the beginning of each trial, a red square appeared for 1s at the position of the target stimulus. Subjects were asked to shift their gaze to the target within the same 1s duration. After that, all stimuli started to flicker simultaneously for 4s. 
- **Stimulus presentation**: SoftwareName=MATLAB with Psychophysics Toolbox, monitor=ASUS VG278 27-inch LCD, refresh_rate=60Hz, resolution=1280x800 pixels, stimulus_size=6x6 cm each, viewing_distance=60cm, arrangement=4x3 matrix virtual keypad **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Every event value shares the tags: Sensory-event, Experimental-stimulus, Visual-presentation
plus a frequency-specific label:
9.25  → Label/9_25    11.25 → Label/11_25    13.25 → Label/13_25
9.75  → Label/9_75    11.75 → Label/11_75    13.75 → Label/13_75
10.25 → Label/10_25   12.25 → Label/12_25    14.25 → Label/14_25
10.75 → Label/10_75   12.75 → Label/12_75    14.75 → Label/14_75
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [9.25, 9.75, 10.25, 10.75, 11.25, 11.75, 12.25, 12.75, 13.25, 13.75, 14.25, 14.75] Hz - **Frequency resolution**: 0.5 Hz - **Code type**: joint frequency and phase coding - **Number of targets**: 12 **Data Structure** - **Trials**: 180 - **Blocks per session**: 15 - **Trials context**: 15 blocks x 12 trials per block = 180 trials 
total per subject **Preprocessing** - **Preprocessing applied**: True - **Steps**: downsampling, bandpass filtering - **Bandpass filter**: {'low_cutoff_hz': 6.0, 'high_cutoff_hz': 80.0} - **Filter type**: IIR - **Downsampled to**: 256.0 Hz - **Epoch window**: [0.135, 4.135] s - **Notes**: Zero-phase forward and reverse IIR filtering was implemented using the filtfilt() function in MATLAB. Data epochs were extracted with a 135-ms latency to account for the visual-pathway delay. **Signal Processing** - **Classifiers**: CCA, IT-CCA, MwayCCA, L1-MCCA, MsetCCA, CACC, Combination Method - **Feature extraction**: CCA, canonical correlation - **Spatial filters**: CCA **Cross-Validation** - **Method**: leave-one-block-out - **Folds**: 15 - **Evaluation type**: cross_validation **Performance (Original Study)** - **Accuracy**: 92.78% - **ITR**: 91.68 bits/min - **R-squared**: 0.87 - **Combination Method Accuracy (1 s)**: 92.78 - **Combination Method ITR (1 s)**: 91.68 - **Standard CCA Accuracy (1 s)**: 55.0 - **Standard CCA ITR (2 s)**: 50.4 **BCI Application** - **Applications**: communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **Description**: A comparison study of canonical correlation analysis based methods for detecting steady-state visual evoked potentials. This study performed a comparison of existing CCA-based SSVEP detection methods using a 12-class SSVEP dataset recorded from 10 subjects in a simulated online BCI experiment. 
- **DOI**: 10.1371/journal.pone.0140703 - **License**: Unknown - **Investigators**: Masaki Nakanishi, Yijun Wang, Yu-Te Wang, Tzyy-Ping Jung - **Contact**: [wangyj@semi.ac.cn](mailto:wangyj@semi.ac.cn) - **Institution**: University of California San Diego - **Department**: Swartz Center for Computational Neuroscience, Institute for Neural Computation; Center for Advanced Neurological Engineering, Institute of Engineering in Medicine - **Country**: US - **Repository**: GitHub - **Data URL**: [https://github.com/mnakanishi/12JFPM_SSVEP/raw/master/data/](https://github.com/mnakanishi/12JFPM_SSVEP/raw/master/data/) - **Publication year**: 2015 - **Funding**: Swartz Foundation gift fund; U.S. Office of Naval Research (N00014-08-1215); Army Research Office (W911NF-09-1-0510); Army Research Laboratory (W911NF-10-2-0022); DARPA (USDI D11PC20183); UC Proof of Concept Grant Award (269228); NIH Grant (1R21EY025056-01); Recruitment Program for Young Professionals - **Ethics approval**: Human Research Protections Program of the University of California San Diego - **Keywords**: SSVEP, BCI, CCA, canonical correlation analysis, brain-computer interface, steady-state visual evoked potentials **Abstract** Canonical correlation analysis (CCA) has been widely used for detecting steady-state visual evoked potentials (SSVEPs) in brain-computer interfaces (BCIs). The standard CCA method, which uses sinusoidal signals as reference signals, was first proposed for SSVEP detection without calibration. However, the detection performance can be degraded by interference from spontaneous EEG activity. Recently, various extended methods have been developed to incorporate individual EEG calibration data in CCA to improve the detection performance. Although advantages of the extended CCA methods have been demonstrated in separate studies, a comprehensive comparison between these methods is still missing. 
This study performed a comparison of the existing CCA-based SSVEP detection methods using a 12-class SSVEP dataset recorded from 10 subjects in a simulated online BCI experiment. Classification accuracy and information transfer rate (ITR) were used for performance evaluation. The results suggest that individual calibration data can significantly improve the detection performance. Furthermore, the results showed that the combination method based on the standard CCA and the individual template-based CCA (IT-CCA) achieved the highest performance. **Methodology** A simulated online BCI experiment was conducted with 10 subjects. Each subject completed 15 blocks, with each block containing 12 trials (one for each of the 12 targets). Visual stimuli were presented as a 4x3 matrix on a 27-inch LCD monitor at a 60 Hz refresh rate. The 12 targets used joint frequency and phase coding (frequencies: 9.25–14.75 Hz with 0.5 Hz intervals; phases: 0 to 5.5π with 0.5π intervals). Each trial began with a 1 s cue (red square) followed by 4 s of flickering stimulation. EEG was recorded from 8 occipital electrodes at 2048 Hz and downsampled to 256 Hz for analysis. Seven CCA-based methods were compared using leave-one-block-out cross-validation (14 blocks for training, 1 for testing). Performance was evaluated using classification accuracy and ITR. **References** Masaki Nakanishi, Yijun Wang, Yu-Te Wang and Tzyy-Ping Jung, “A Comparison Study of Canonical Correlation Analysis Based Methods for Detecting Steady-State Visual Evoked Potentials,” PLoS ONE, 10(10), e0140703, 2015. [http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140703](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140703) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000118` | |----------------|------------------------------------------------------------| | Title | Nakanishi2015 – SSVEP Nakanishi 2015 dataset | | Author (year) | `Nakanishi2015` | | Canonical | — | | Importable as | `NM000118`, `Nakanishi2015` | | Year | 2019 | | Authors | Masaki Nakanishi, Yijun Wang, Yu-Te Wang, Tzyy-Ping Jung | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000118) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) | [Source URL](https://nemar.org/dataexplorer/detail/nm000118) | ## Technical Details - Subjects: 9 - Recordings: 9 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 256.0 - Duration (hours): 2.13 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 65.4 MB - File count: 9 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: [nm000118](https://openneuro.org/datasets/nm000118) - NeMAR: [nm000118](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) ## API Reference Use the `NM000118` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nakanishi2015 – SSVEP Nakanishi 2015 dataset * **Study:** `nm000118` (NeMAR) * **Author (year):** `Nakanishi2015` * **Canonical:** — Also importable as: `NM000118`, `Nakanishi2015`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000118](https://openneuro.org/datasets/nm000118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000118](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) ### Examples ```pycon >>> from eegdash.dataset import NM000118 >>> dataset = NM000118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000118) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000119: eeg dataset, 11 subjects *Oikonomou2016 – SSVEP MAMEM 1 dataset* Access recordings and metadata through EEGDash. **Citation:** Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris (2016). *Oikonomou2016 – SSVEP MAMEM 1 dataset*. 
Modality: eeg Subjects: 11 Recordings: 47 License: ODC-By-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000119 dataset = NM000119(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000119(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000119( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000119, title = {Oikonomou2016 – SSVEP MAMEM 1 dataset}, author = {Vangelis P. Oikonomou and Georgios Liaros and Kostantinos Georgiadis and Elisavet Chatzilari and Katerina Adam and Spiros Nikolopoulos and Ioannis Kompatsiaris}, } ``` ## About This Dataset **SSVEP MAMEM 1 dataset** SSVEP MAMEM 1 dataset. **Dataset Overview** - **Code**: MAMEM1 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 ### View full README **SSVEP MAMEM 1 dataset** SSVEP MAMEM 1 dataset. 
**Dataset Overview** - **Code**: MAMEM1 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5 - **Trial interval**: [1, 4] s - **File format**: MATLAB .mat **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 256 - **Channel types**: eeg=256 - **Channel names**: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99 - **Montage**: GSN-HydroCel-256 - **Hardware**: EGI 300 Geodesic EEG System (GES 300) - **Line frequency**: 50.0 Hz - **Impedance threshold**: 80.0 kOhm - **Cap manufacturer**: EGI 
- **Cap model**: HydroCel Geodesic Sensor Net (HCGSN) **Participants** - **Number of subjects**: 11 - **Health status**: healthy - **Clinical population**: able-bodied subjects without any known neuro-muscular or mental disorders - **Age**: min=24, max=39 - **Gender distribution**: male=8, female=3 - **Handedness**: right=10, left=1 - **Species**: human **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 5 - **Class labels**: 6.66, 7.50, 8.57, 10.00, 12.00 - **Trial duration**: 5.0 s - **Study design**: Subjects focus attention on a single violet box flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) presented sequentially. Each frequency is presented for 5 seconds (trial) followed by 5 seconds rest, repeated 3 times per frequency, with 30 seconds rest between different frequencies. - **Feedback type**: none - **Stimulus type**: flickering box - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Subjects were instructed to focus attention on the flickering box, limit movements, and avoid swallowing or blinking during visual stimulation - **Stimulus presentation**: SoftwareName=Microsoft Visual Studio 2010 with OpenGL, monitor=22 inch LCD monitor, refresh_rate=60 Hz, resolution=1680x1080 pixels **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text 6.66 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/6_66 7.50 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/7_50 8.57 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/8_57 10.00 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/10_00 12.00 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/12_00 ``` 
**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [6.66, 7.5, 8.57, 10.0, 12.0] Hz - **Number of targets**: 5 - **Number of repetitions**: 3 **Data Structure** - **Trials**: 1104 - **Trials context**: Total 1104 trials across all subjects. Each session includes 23 trials (8 adaptation + 15 main). S001: 3 sessions, S003 and S004: 4 sessions, others: 5 sessions. Some sessions excluded due to technical issues. **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: LDA, SVM, Random Forest, kNN, Naive Bayes, CCA, AdaBoost, Decision Trees - **Feature extraction**: Periodogram, Welch Spectrum, Goertzel algorithm, Yule-AR Spectrum, FFT, PSD, Discrete Wavelet Transform - **Frequency bands**: analyzed=[5.0, 48.0] Hz - **Spatial filters**: CAR, CSP, Minimum Energy **Cross-Validation** - **Method**: leave-one-subject-out - **Evaluation type**: cross_subject **Performance (Original Study)** - **Default Accuracy**: 72.47 - **Optimal Accuracy**: 79.47 **BCI Application** - **Applications**: communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs - **DOI**: 10.6084/m9.figshare.2068677.v1 - **Associated paper DOI**: 10.48550/arXiv.1602.00904 - **License**: ODC-By-1.0 - **Investigators**: Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris - **Senior author**: Ioannis Kompatsiaris - **Institution**: Centre for Research and Technology Hellas (CERTH) - **Country**: GR - **Repository**: Figshare - **Data URL**: [https://dx.doi.org/10.6084/m9.figshare.2068677.v1](https://dx.doi.org/10.6084/m9.figshare.2068677.v1) - **Publication year**: 2016 - **Funding**: H2020-ICT-2014-644780 - **Ethics approval**: Centre for Research and Technology Hellas ethics committee, dated 3/7/2015, grant H2020-ICT-2014-644780 - **Keywords**: SSVEP, BCI, EEG, brain-computer interface, comparative evaluation, state-of-the-art algorithms **Abstract** Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This report focuses on SSVEP-based BCIs and performs a comparative evaluation of the most promising algorithms. A dataset of 256-channel EEG signals from 11 subjects is provided, along with a processing toolbox for reproducing results and supporting further experimentation. **Methodology** Empirical approach where each signal processing parameter (filtering, artifact removal, feature extraction, feature selection, classification) is studied independently by keeping all other parameters fixed. Leave-one-subject-out cross-validation used to evaluate system without subject-specific training. Multiple algorithms compared for each processing stage to obtain state-of-the-art baseline. **References** Oikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904. 
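The comparative pipeline described above includes Welch-spectrum feature extraction sampled at the five stimulation frequencies. A minimal sketch, assuming SciPy is available; `ssvep_band_power` is an illustrative helper, not part of EEGDash or the original MAMEM toolbox:

```python
import numpy as np
from scipy.signal import welch

# Stimulation frequencies (Hz) from the dataset overview.
STIM_FREQS = [6.66, 7.5, 8.57, 10.0, 12.0]

def ssvep_band_power(signal, sfreq, freqs=STIM_FREQS, nfft=512):
    """Welch PSD of one channel, sampled at the stimulation frequencies.

    Returns one power value per candidate frequency; argmax gives a
    simple single-channel class prediction.
    """
    f, pxx = welch(signal, fs=sfreq, nperseg=min(len(signal), nfft), nfft=nfft)
    # Pick the PSD bin closest to each stimulation frequency.
    return np.array([pxx[np.argmin(np.abs(f - fk))] for fk in freqs])
```

For a clean trial, the predicted class is simply `np.argmax(ssvep_band_power(x, 250.0))`; the original study additionally compared filtering, artifact removal, feature selection, and classifier choices around this stage.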
MAMEM Steady State Visually Evoked Potential EEG Database [https://archive.physionet.org/physiobank/database/mssvepdb/](https://archive.physionet.org/physiobank/database/mssvepdb/) S. Nikolopoulos, 2016, DataAcquisitionDetails.pdf [https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_I_256_channels_11_subjects_5_frequencies_/2068677?file=3793738](https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_I_256_channels_11_subjects_5_frequencies_/2068677?file=3793738) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000119` | |----------------|------------------------------------------------------------| | Title | Oikonomou2016 – SSVEP MAMEM 1 dataset | | Author (year) | `Oikonomou2016_MAMEM1` | | Canonical | `Oikonomou2016` | | Importable as | `NM000119`, `Oikonomou2016_MAMEM1`, `Oikonomou2016` | | Year | 2016 | | Authors | Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris | | License | ODC-By-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000119) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) | [Source URL](https://nemar.org/dataexplorer/detail/nm000119) | ## Technical Details - Subjects: 11 - Recordings: 47 - Tasks: 1 - Channels: 256 - Sampling rate (Hz): 250.0 - Duration (hours): 6.22372 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 5.4 GB - File count: 47 - Format: BIDS - License: ODC-By-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000119](https://openneuro.org/datasets/nm000119) - NeMAR: [nm000119](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) ## API Reference Use the `NM000119` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 1 dataset * **Study:** `nm000119` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM1` * **Canonical:** `Oikonomou2016` Also importable as: `NM000119`, `Oikonomou2016_MAMEM1`, `Oikonomou2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000119](https://openneuro.org/datasets/nm000119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000119](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) ### Examples ```pycon >>> from eegdash.dataset import NM000119 >>> dataset = NM000119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000119) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000120: eeg dataset, 11 subjects *Oikonomou2016 – SSVEP MAMEM 2 dataset* Access recordings and metadata through EEGDash. **Citation:** Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris (2016). *Oikonomou2016 – SSVEP MAMEM 2 dataset*. Modality: eeg Subjects: 11 Recordings: 55 License: ODC-By-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000120 dataset = NM000120(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000120(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000120( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000120, title = {Oikonomou2016 – SSVEP MAMEM 2 dataset}, author = {Vangelis P. Oikonomou and Georgios Liaros and Kostantinos Georgiadis and Elisavet Chatzilari and Katerina Adam and Spiros Nikolopoulos and Ioannis Kompatsiaris}, } ``` ## About This Dataset **SSVEP MAMEM 2 dataset** SSVEP MAMEM 2 dataset. **Dataset Overview** - **Code**: MAMEM2 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 ### View full README **SSVEP MAMEM 2 dataset** SSVEP MAMEM 2 dataset. 
**Dataset Overview** - **Code**: MAMEM2 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: 6.66=1, 7.50=2, 8.57=3, 10.00=4, 12.00=5 - **Trial interval**: [1, 4] s - **Runs per session**: 5 - **File format**: MAT **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 256 - **Channel types**: eeg=256 - **Channel names**: E1, E10, E100, E101, E102, E103, E104, E105, E106, E107, E108, E109, E11, E110, E111, E112, E113, E114, E115, E116, E117, E118, E119, E12, E120, E121, E122, E123, E124, E125, E126, E127, E128, E129, E13, E130, E131, E132, E133, E134, E135, E136, E137, E138, E139, E14, E140, E141, E142, E143, E144, E145, E146, E147, E148, E149, E15, E150, E151, E152, E153, E154, E155, E156, E157, E158, E159, E16, E160, E161, E162, E163, E164, E165, E166, E167, E168, E169, E17, E170, E171, E172, E173, E174, E175, E176, E177, E178, E179, E18, E180, E181, E182, E183, E184, E185, E186, E187, E188, E189, E19, E190, E191, E192, E193, E194, E195, E196, E197, E198, E199, E2, E20, E200, E201, E202, E203, E204, E205, E206, E207, E208, E209, E21, E210, E211, E212, E213, E214, E215, E216, E217, E218, E219, E22, E220, E221, E222, E223, E224, E225, E226, E227, E228, E229, E23, E230, E231, E232, E233, E234, E235, E236, E237, E238, E239, E24, E240, E241, E242, E243, E244, E245, E246, E247, E248, E249, E25, E250, E251, E252, E253, E254, E255, E256, E26, E27, E28, E29, E3, E30, E31, E32, E33, E34, E35, E36, E37, E38, E39, E4, E40, E41, E42, E43, E44, E45, E46, E47, E48, E49, E5, E50, E51, E52, E53, E54, E55, E56, E57, E58, E59, E6, E60, E61, E62, E63, E64, E65, E66, E67, E68, E69, E7, E70, E71, E72, E73, E74, E75, E76, E77, E78, E79, E8, E80, E81, E82, E83, E84, E85, E86, E87, E88, E89, E9, E90, E91, E92, E93, E94, E95, E96, E97, E98, E99 - **Montage**: GSN-HydroCel-256 - **Hardware**: EGI 300 Geodesic EEG System (GES 300) - **Reference**: Cz - **Line frequency**: 50.0 Hz - **Impedance threshold**: 
80.0 kOhm - **Cap manufacturer**: EGI - **Cap model**: HydroCel Geodesic Sensor Net (HCGSN) **Participants** - **Number of subjects**: 11 - **Health status**: healthy - **Age**: min=24, max=39 - **Gender distribution**: male=8, female=3 - **Handedness**: right=10, left=1 **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 5 - **Class labels**: 6.66, 7.50, 8.57, 10.00, 12.00 - **Trial duration**: 5.0 s - **Study design**: Subjects focus attention on visual stimuli flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) to select commands. Each stimulus presented for 5 seconds followed by 5 seconds rest. - **Feedback type**: none - **Stimulus type**: flickering box - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Stimulus presentation**: SoftwareName=Microsoft Visual Studio 2010 with OpenGL, device=22 inch LCD monitor, refresh_rate=60 Hz, resolution=1680x1080 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text 6.66 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/6_66 7.50 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/7_50 8.57 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/8_57 10.00 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/10_00 12.00 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/12_00 ``` **Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [6.66, 7.5, 8.57, 10.0, 12.0] Hz - **Number of targets**: 5 - **Number of repetitions**: 3 **Data Structure** - **Trials**: 1104 - **Trials context**: Each session includes 23 trials (8 adaptation trials excluded from analysis). 
5 sessions per subject (with exceptions: S001=3 sessions, S003=4 sessions, S004=4 sessions). Total: 1104 trials of 5 seconds each. **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: LDA, SVM, Random Forest, kNN, Naive Bayes, AdaBoost, Decision Trees, CCA - **Feature extraction**: PWelch, Periodogram, FFT, Goertzel, PYULEAR (Yule-AR), STFT, DWT, PSD, Wavelet, Spectrogram - **Frequency bands**: analyzed=[5.0, 48.0] Hz - **Spatial filters**: CAR, CSP, Minimum Energy **Cross-Validation** - **Method**: leave-one-subject-out - **Evaluation type**: cross_subject **Performance (Original Study)** - **Accuracy**: 74.42% - **Mean Accuracy Default Config**: 72.47 - **Mean Accuracy Optimal Config**: 74.42 - **Processing Time Msec**: 68 **BCI Application** - **Applications**: command_selection - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.48550/arXiv.1602.00904 - **Associated paper DOI**: arXiv:1602.00904v2 - **License**: ODC-By-1.0 - **Investigators**: Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris - **Institution**: Centre for Research and Technology Hellas (CERTH) - **Country**: GR - **Repository**: GitHub - **Data URL**: [https://figshare.com/articles/dataset/3153409](https://figshare.com/articles/dataset/3153409) - **Publication year**: 2016 - **Funding**: H2020-ICT-2014-644780 - **Ethics approval**: Approved by ethics committee of Centre for Research and Technology Hellas, date 3/7/2015, grant H2020-ICT-2014-644780 - **Keywords**: SSVEP, BCI, brain-computer interface, EEG, visual evoked potentials, signal processing, feature extraction, classification **Abstract** Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This study focuses on SSVEP-based BCIs and performs a comparative evaluation of state-of-the-art algorithms for filtering, artifact removal, feature extraction, feature selection and classification. Dataset consists of 256-channel EEG signals from 11 subjects with 5 flickering frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz). **Methodology** Leave-one-subject-out cross-validation was used to evaluate a general-purpose BCI system without subject-specific training. Systematic comparison of algorithms across all signal processing stages: (1) Signal filtering: FIR vs IIR filters; (2) Artifact removal: AMUSE vs FastICA; (3) Feature extraction: PWelch, Periodogram, PYULEAR, DWT, STFT, Goertzel; (4) Feature selection: entropy-based methods and PCA/SVD; (5) Classification: SVM, LDA, KNN, Naive Bayes, Random Forest, AdaBoost. Optimal configuration achieved 74.42% mean accuracy using IIR-Elliptic filter, AMUSE artifact removal, PWelch feature extraction with nfft=512, segment length=350, overlap=0.75, and channel-138. **References** Oikonomou, V. 
P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904. MAMEM Steady State Visually Evoked Potential EEG Database [https://archive.physionet.org/physiobank/database/mssvepdb/](https://archive.physionet.org/physiobank/database/mssvepdb/) S. Nikolopoulos, 2016, DataAcquisitionDetails.pdf [https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_II_256_channels_11_subjects_5_frequencies_presented_simultaneously_/3153409?file=4911931](https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_II_256_channels_11_subjects_5_frequencies_presented_simultaneously_/3153409?file=4911931) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000120` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Oikonomou2016 – SSVEP MAMEM 2 dataset | | Author (year) | `Oikonomou2016_MAMEM2` | | Canonical | `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP` | | Importable as | `NM000120`, `Oikonomou2016_MAMEM2`, `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP` | | Year | 2016 | | Authors | Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris | | License | ODC-By-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000120) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) | [Source URL](https://nemar.org/dataexplorer/detail/nm000120) | ## Technical Details - Subjects: 11 - Recordings: 55 - Tasks: 1 - Channels: 256 - Sampling rate (Hz): 250.0 - Duration (hours): 5.1091766666666665 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 4.4 GB - File count: 55 - Format: BIDS - License: ODC-By-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000120](https://openneuro.org/datasets/nm000120) - NeMAR: [nm000120](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) ## API Reference Use the `NM000120` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 2 dataset * **Study:** `nm000120` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM2` * **Canonical:** `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP` Also importable as: `NM000120`, `Oikonomou2016_MAMEM2`, `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 55; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
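The Notes above describe `query` as a MongoDB-style filter that is ANDed with the dataset selection. As an illustration of what the equality and `$in` forms used throughout these pages express, here is a toy evaluator for such filters (a sketch of the query semantics only, not EEGDash's implementation):

```python
def matches(record: dict, query: dict) -> bool:
    """Toy check of one metadata record against equality / $in filters."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"subject": "01"}
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in kept])  # ['01', '02']
```

All conditions in a query must hold for a record to be kept, matching the "AND with the dataset selection" behavior described above.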
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000120](https://openneuro.org/datasets/nm000120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000120](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) ### Examples ```pycon >>> from eegdash.dataset import NM000120 >>> dataset = NM000120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000120) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000121: eeg dataset, 11 subjects *Oikonomou2016 – SSVEP MAMEM 3 dataset* Access recordings and metadata through EEGDash. **Citation:** Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris (2016). *Oikonomou2016 – SSVEP MAMEM 3 dataset*. 
Modality: eeg Subjects: 11 Recordings: 110 License: ODC-By-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000121 dataset = NM000121(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000121(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000121( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000121, title = {Oikonomou2016 – SSVEP MAMEM 3 dataset}, author = {Vangelis P. Oikonomou and Georgios Liaros and Kostantinos Georgiadis and Elisavet Chatzilari and Katerina Adam and Spiros Nikolopoulos and Ioannis Kompatsiaris}, } ``` ## About This Dataset **SSVEP MAMEM 3 dataset** SSVEP MAMEM 3 dataset. **Dataset Overview** - **Code**: MAMEM3 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 ### View full README **SSVEP MAMEM 3 dataset** SSVEP MAMEM 3 dataset. 
**Dataset Overview** - **Code**: MAMEM3 - **Paradigm**: ssvep - **DOI**: 10.48550/arXiv.1602.00904 - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: 6.66=33029, 7.50=33028, 8.57=33027, 10.00=33026, 12.00=33025 - **Trial interval**: [1, 4] s - **Runs per session**: 10 - **File format**: csv - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 128.0 Hz - **Number of channels**: 14 - **Channel types**: eeg=14 - **Channel names**: AF3, AF4, F3, F4, F7, F8, FC5, FC6, O1, O2, P7, P8, T7, T8 - **Montage**: 10-20 - **Hardware**: EGI 300 Geodesic EEG System (GES 300) - **Software**: Microsoft Visual Studio 2010 with OpenGL - **Reference**: CAR - **Sensor type**: scalp electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 5-48 Hz bandpass, 50 Hz notch - **Impedance threshold**: 80.0 kOhm - **Cap manufacturer**: EGI - **Cap model**: HydroCel Geodesic Sensor Net (HCGSN) - **Electrode type**: wet - **Auxiliary channels**: ecg, gsr, ppg **Participants** - **Number of subjects**: 11 - **Health status**: healthy - **Age**: min=24.0, max=39.0 - **Gender distribution**: male=8, female=3 - **Handedness**: {‘right’: 10, ‘left’: 1} - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 5 - **Class labels**: 6.66, 7.50, 8.57, 10.00, 12.00 - **Trial duration**: 5.0 s - **Study design**: Subjects focus attention on a violet box flickering at different frequencies (6.66, 7.50, 8.57, 10.00, 12.00 Hz) presented at the center of the monitor. Each trial lasts 5 seconds followed by 5 seconds rest. - **Feedback type**: none - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects were instructed to focus attention on the flickering stimulus and minimize artifacts by reducing eye blinks and movements. 
- **Stimulus presentation**: display=22 inch LCD monitor, 60 Hz refresh rate, 1680x1080 resolution, background=black, stimulus=violet box flickering at center of screen, graphics=Nvidia GeForce GTX 860M with vertical synchronization enabled **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
6.66
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/6_66

7.50
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/7_50

8.57
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/8_57

10.00
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/10_00

12.00
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_00
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [6.66, 7.5, 8.57, 10.0, 12.0] Hz - **Number of targets**: 5 **Data Structure** - **Trials**: 1104 - **Trials context**: Total of 1104 trials (5 seconds each) across all subjects and sessions. Subject S001: 3 sessions, S003 and S004: 4 sessions each, all others: 5 sessions. Each session includes 23 trials (8 adaptation + 15 experimental).
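Decoding this dataset amounts to deciding which of the five tagged frequencies dominates a trial. The Goertzel algorithm, one of the feature extractors listed for this dataset, evaluates signal power at a single target frequency. A minimal stdlib sketch on a synthetic 5-second trial at the 128 Hz sampling rate (synthetic data, not the actual recordings):

```python
import math

def goertzel_power(x, fs, freq):
    """Signal power at one target frequency (generalized Goertzel)."""
    w = 2.0 * math.pi * freq / fs
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 128.0                                  # MAMEM 3 sampling rate (Hz)
targets = [6.66, 7.50, 8.57, 10.00, 12.00]  # stimulus frequencies (Hz)
t = [n / fs for n in range(int(5 * fs))]    # one 5-second trial
trial = [math.sin(2 * math.pi * 12.0 * ti) for ti in t]  # fake 12 Hz SSVEP

powers = {f: goertzel_power(trial, fs, f) for f in targets}
detected = max(powers, key=powers.get)
print(detected)  # 12.0
```

On real trials one would compute these powers per occipital channel after the bandpass/notch filtering described below, then feed them to one of the listed classifiers.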
**Preprocessing** - **Preprocessing applied**: True - **Steps**: bandpass filtering (5-48 Hz), notch filtering (50 Hz), artifact removal (AMUSE, ICA), Common Average Reference (CAR) - **Highpass filter**: 5.0 Hz - **Lowpass filter**: 48.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 5.0, ‘high_cutoff_hz’: 48.0} - **Notch filter**: 50.0 Hz - **Filter type**: IIR (Chebyshev, Elliptic) - **Artifact methods**: AMUSE, ICA, FastICA - **Re-reference**: CAR **Signal Processing** - **Classifiers**: LDA, SVM, Random Forest, kNN, Naive Bayes, CCA, ELM, Decision Trees - **Feature extraction**: Periodogram, Welch, Goertzel, Yule-AR, STFT, Discrete Wavelet Transform, PSD, CSP, ICA - **Frequency bands**: analyzed=[5.0, 48.0] Hz - **Spatial filters**: CAR, CSP, Minimum Energy **Cross-Validation** - **Method**: leave-one-subject-out - **Evaluation type**: cross_subject **Performance (Original Study)** - **Accuracy**: 72.47% - **Default Config Accuracy**: 72.47 - **Optimal Config Accuracy**: 79.47 - **Best Electrode Accuracy**: 74.42 - **Execution Time Ms**: 5.0 **BCI Application** - **Applications**: research, comparative_study - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. Dataset includes 256-channel EEG signals from 11 subjects performing SSVEP tasks with 5 different flickering frequencies. - **DOI**: 10.6084/m9.figshare.2068677.v1 - **Associated paper DOI**: arXiv:1602.00904v2 - **License**: ODC-By-1.0 - **Investigators**: Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris - **Senior author**: Ioannis Kompatsiaris - **Institution**: Centre for Research and Technology Hellas (CERTH) - **Country**: Greece - **Repository**: Figshare - **Data URL**: [https://dx.doi.org/10.6084/m9.figshare.2068677.v1](https://dx.doi.org/10.6084/m9.figshare.2068677.v1) - **Publication year**: 2016 - **Ethics approval**: Ethics committee of the Centre for Research and Technology Hellas, approved 3/7/2015 - **Keywords**: SSVEP, BCI, brain-computer interface, EEG, visual evoked potentials, comparative evaluation, signal processing **Abstract** Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. This report focuses on EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and performs a comparative evaluation of state-of-the-art algorithms for filtering, artifact removal, feature extraction, feature selection and classification. The dataset consists of 256-channel EEG signals from 11 subjects, along with a processing toolbox for reproducing results. **Methodology** Comparative evaluation of SSVEP-based BCI algorithms using leave-one-subject-out cross-validation. The study examines filtering methods (IIR, FIR), artifact removal (AMUSE, ICA), feature extraction (Periodogram, Welch, Goertzel, Yule-AR, STFT, DWT), feature selection (Shannon entropy, PCA, ICA), and classification (LDA, SVM, kNN, Naive Bayes, Random Forest, CCA, ELM, Decision Trees). Each parameter is studied independently while keeping others fixed to identify optimal configurations. **References** Oikonomou, V. P., Liaros, G., Georgiadis, K., Chatzilari, E., Adam, K., Nikolopoulos, S., & Kompatsiaris, I. (2016). Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904. 
MAMEM Steady State Visually Evoked Potential EEG Database [https://archive.physionet.org/physiobank/database/mssvepdb/](https://archive.physionet.org/physiobank/database/mssvepdb/) S. Nikolopoulos, 2016, DataAcquisitionDetails.pdf [https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_III_14_channels_11_subjects_5_frequencies_presented_simultaneously_/3413851](https://figshare.com/articles/dataset/MAMEM_EEG_SSVEP_Dataset_III_14_channels_11_subjects_5_frequencies_presented_simultaneously_/3413851) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000121` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Oikonomou2016 – SSVEP MAMEM 3 dataset | | Author (year) | `Oikonomou2016_MAMEM3` | | Canonical | `MAMEM3`, `SSVEP_MAMEM3` | | Importable as | `NM000121`, `Oikonomou2016_MAMEM3`, `MAMEM3`, `SSVEP_MAMEM3` | | Year | 2016 | | Authors | Vangelis P. 
Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, Ioannis Kompatsiaris | | License | ODC-By-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000121) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) | [Source URL](https://nemar.org/dataexplorer/detail/nm000121) | ## Technical Details - Subjects: 11 - Recordings: 110 - Tasks: 1 - Channels: 14 - Sampling rate (Hz): 128.0 - Duration (hours): 4.597261284722222 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 120.2 MB - File count: 110 - Format: BIDS - License: ODC-By-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000121](https://openneuro.org/datasets/nm000121) - NeMAR: [nm000121](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) ## API Reference Use the `NM000121` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 3 dataset * **Study:** `nm000121` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM3` * **Canonical:** `MAMEM3`, `SSVEP_MAMEM3` Also importable as: `NM000121`, `Oikonomou2016_MAMEM3`, `MAMEM3`, `SSVEP_MAMEM3`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 110; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000121](https://openneuro.org/datasets/nm000121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000121](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) ### Examples ```pycon >>> from eegdash.dataset import NM000121 >>> dataset = NM000121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000121) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000122: eeg dataset, 12 subjects *Chen2017 – Single-flicker online SSVEP BCI dataset* Access recordings and metadata through EEGDash. **Citation:** Jingjing Chen, Dan Zhang, Andreas K. Engel, Qin Gong, Alexander Maye (2019). 
*Chen2017 – Single-flicker online SSVEP BCI dataset*. Modality: eeg Subjects: 12 Recordings: 12 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000122 dataset = NM000122(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000122(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000122( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000122, title = {Chen2017 – Single-flicker online SSVEP BCI dataset}, author = {Jingjing Chen and Dan Zhang and Andreas K. Engel and Qin Gong and Alexander Maye}, } ``` ## About This Dataset **Single-flicker online SSVEP BCI dataset** Single-flicker online SSVEP BCI dataset. **Dataset Overview** - **Code**: Chen2017SingleFlicker - **Paradigm**: ssvep - **DOI**: 10.1371/journal.pone.0178385 ### View full README **Single-flicker online SSVEP BCI dataset** Single-flicker online SSVEP BCI dataset. 
**Dataset Overview** - **Code**: Chen2017SingleFlicker - **Paradigm**: ssvep - **DOI**: 10.1371/journal.pone.0178385 - **Subjects**: 12 - **Sessions per subject**: 2 - **Events**: north=1, east=2, west=3, south=4 - **Trial interval**: [0.0, 3.5] s - **File format**: XDF/MAT **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Montage**: biosemi32 - **Hardware**: BioSemi ActiveTwo - **Reference**: CMS/DRL - **Sensor type**: active - **Line frequency**: 50.0 Hz - **Cap manufacturer**: BioSemi - **Electrode material**: sintered Ag/AgCl **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Age**: mean=23.5, min=19, max=32 - **Gender distribution**: male=5, female=7 **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: spatial navigation - **Number of classes**: 4 - **Class labels**: north, east, west, south - **Study design**: Spatial navigation with single 15 Hz flicker - **Feedback type**: visual - **Stimulus type**: single-flicker spatially coded - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
north
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/north

east
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/east

west
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/west

south
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/south
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [15.0] Hz **Signal Processing** - **Classifiers**: LDA - **Feature extraction**: CCA - **Frequency bands**: bandpass=[1.0, 80.0] Hz - **Spatial filters**: CCA
**Cross-Validation** - **Evaluation type**: within_subject **BCI Application** - **Applications**: spatial_navigation - **Environment**: lab - **Online feedback**: True **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1371/journal.pone.0178385 - **License**: CC BY 4.0 - **Investigators**: Jingjing Chen, Dan Zhang, Andreas K. Engel, Qin Gong, Alexander Maye - **Senior author**: Alexander Maye - **Institution**: University Medical Center Hamburg-Eppendorf - **Department**: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf - **Country**: DE - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/580485](https://zenodo.org/records/580485) - **Publication year**: 2017 - **Funding**: DFG TRR169/B1/Z2 Crossmodal Learning; Landesforschungsfoerderung Hamburg CROSS FV25 - **Ethics approval**: Ethics committee of the medical association, Hamburg - **Keywords**: SSVEP, BCI, spatial navigation, single-flicker, online BCI **References** J. Chen, D. Zhang, A. K. Engel, Q. Gong, and A. Maye, “Application of a single-flicker online SSVEP BCI for spatial navigation,” PLoS ONE, vol. 12, no. 5, e0178385, 2017. DOI: 10.1371/journal.pone.0178385 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000122` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Chen2017 – Single-flicker online SSVEP BCI dataset | | Author (year) | `Chen2017` | | Canonical | — | | Importable as | `NM000122`, `Chen2017` | | Year | 2019 | | Authors | Jingjing Chen, Dan Zhang, Andreas K. Engel, Qin Gong, Alexander Maye | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000122) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) | [Source URL](https://nemar.org/dataexplorer/detail/nm000122) | ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 512.0 - Duration (hours): 3.2708430989583333 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 741.9 MB - File count: 12 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000122](https://openneuro.org/datasets/nm000122) - NeMAR: [nm000122](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) ## API Reference Use the `NM000122` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chen2017 – Single-flicker online SSVEP BCI dataset * **Study:** `nm000122` (NeMAR) * **Author (year):** `Chen2017` * **Canonical:** — Also importable as: `NM000122`, `Chen2017`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000122](https://openneuro.org/datasets/nm000122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000122](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) ### Examples ```pycon >>> from eegdash.dataset import NM000122 >>> dataset = NM000122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000122) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000123: eeg dataset, 12 subjects *Kalunga2016 – SSVEP Exo dataset* Access recordings and metadata through EEGDash. **Citation:** Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, Yskandar Hamam (2019). *Kalunga2016 – SSVEP Exo dataset*. Modality: eeg Subjects: 12 Recordings: 30 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000123 dataset = NM000123(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000123(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000123( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000123, title = {Kalunga2016 – SSVEP Exo dataset}, author = {Emmanuel K. Kalunga and Sylvain Chevallier and Quentin Barthélemy and Karim Djouani and Eric Monacelli and Yskandar Hamam}, } ``` ## About This Dataset **SSVEP Exo dataset** SSVEP Exo dataset. 
**Dataset Overview** - **Code**: Kalunga2016 - **Paradigm**: ssvep - **DOI**: 10.1016/j.neucom.2016.01.007 ### View full README **SSVEP Exo dataset** SSVEP Exo dataset. **Dataset Overview** - **Code**: Kalunga2016 - **Paradigm**: ssvep - **DOI**: 10.1016/j.neucom.2016.01.007 - **Subjects**: 12 - **Sessions per subject**: 1 - **Events**: 13=2, 17=4, 21=3, rest=1 - **Trial interval**: [2, 4] s - **File format**: fif **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: Oz, O1, O2, POz, PO3, PO4, PO7, PO8 - **Montage**: standard_1005 - **Hardware**: g.tec MobiLab - **Reference**: right mastoid - **Sensor type**: EEG - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Species**: human **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 4 - **Class labels**: 13, 17, 21, rest - **Trial duration**: 6.0 s - **Study design**: SSVEP - **Feedback type**: none - **Stimulus type**: flickering - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Stimulus presentation**: device=LED stimuli, frequencies=13 Hz, 17 Hz, 21 Hz, note=No phase synchronization required **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
13
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/13

17
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/17

21
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/21

rest
├─ Experiment-structure
└─ Rest
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [13.0, 17.0, 21.0] Hz - **Number of targets**: 3 **Data Structure** - **Trials**: 32 trials per session (8 per visual stimulus, 8 for resting class)
- **Trials context**: per session **Preprocessing** - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: MDRM, CCA - **Feature extraction**: Covariance/Riemannian **Cross-Validation** - **Method**: bootstrap - **Evaluation type**: cross_subject, cross_session **BCI Application** - **Applications**: assistive_robotics - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: Online SSVEP-based BCI using Riemannian geometry for assistive robotics with shared control scheme - **DOI**: 10.1016/j.neucom.2016.01.007 - **License**: CC-BY-4.0 - **Investigators**: Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, Yskandar Hamam - **Senior author**: Sylvain Chevallier - **Institution**: Universite de Versailles Saint-Quentin - **Department**: Laboratoire d’Ingénierie des Systèmes de Versailles - **Address**: 78140 Velizy, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/record/2392979](https://zenodo.org/record/2392979) - **Publication year**: 2016 - **Keywords**: Riemannian geometry, Online, Asynchronous, Brain-Computer Interfaces, Steady State Visually Evoked Potentials **References** Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthelemy. “Online SSVEP-based BCI using Riemannian Geometry”. Neurocomputing, 2016. arXiv report: [https://arxiv.org/abs/1501.03227](https://arxiv.org/abs/1501.03227) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C.
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000123` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Kalunga2016 – SSVEP Exo dataset | | Author (year) | `Kalunga2016` | | Canonical | — | | Importable as | `NM000123`, `Kalunga2016` | | Year | 2019 | | Authors | Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, Yskandar Hamam | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000123) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) | [Source URL](https://nemar.org/dataexplorer/detail/nm000123) | ## Technical Details - Subjects: 12 - Recordings: 30 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 256.0 - Duration (hours): 2.589654947916667 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 78.2 MB - File count: 30 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000123](https://openneuro.org/datasets/nm000123) - NeMAR: [nm000123](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) ## API Reference Use the `NM000123` class to access this dataset programmatically. 
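The signal-processing metadata above lists CCA among the classifiers used on this dataset. As an illustration only (not EEGDash, MOABB, or the original study's code), here is a minimal pure-Python sketch of SSVEP target identification by correlating a trial against sine/cosine references at the three stimulus frequencies, a scalar stand-in for CCA; the `ssvep_score` helper and the synthetic trial are invented for the example:

```python
import math

def ssvep_score(signal, fs, freq, harmonics=2):
    """Sum of squared correlations between a single-channel signal and
    sine/cosine references at `freq` and its harmonics (a scalar
    stand-in for CCA; illustrative only)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    norm_x = math.sqrt(sum(v * v for v in x)) or 1.0
    score = 0.0
    for h in range(1, harmonics + 1):
        for phase_fn in (math.sin, math.cos):
            ref = [phase_fn(2 * math.pi * h * freq * i / fs) for i in range(n)]
            mr = sum(ref) / n
            r = [v - mr for v in ref]
            norm_r = math.sqrt(sum(v * v for v in r)) or 1.0
            dot = sum(a * b for a, b in zip(x, r))
            score += (dot / (norm_x * norm_r)) ** 2
    return score

fs, dur = 256, 2.0  # matches this dataset's 256 Hz sampling rate
t = [i / fs for i in range(int(fs * dur))]
trial = [math.sin(2 * math.pi * 17.0 * ti) for ti in t]  # synthetic 17 Hz SSVEP

targets = [13.0, 17.0, 21.0]  # the dataset's stimulus frequencies
detected = max(targets, key=lambda f: ssvep_score(trial, fs, f))
print(detected)  # 17.0
```

A real pipeline would of course use the occipital channels of the recorded trials and a proper multichannel CCA rather than this single-channel correlation.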
### *class* eegdash.dataset.NM000123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kalunga2016 – SSVEP Exo dataset * **Study:** `nm000123` (NeMAR) * **Author (year):** `Kalunga2016` * **Canonical:** — Also importable as: `NM000123`, `Kalunga2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
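The notes above state that user-supplied `query` filters are ANDed with the fixed dataset filter and must not contain the key `dataset`. A minimal sketch of those merge semantics (the `merge_query` helper is hypothetical, not EEGDash's actual implementation):

```python
def merge_query(dataset_id, user_query):
    """AND a user-supplied MongoDB-style filter with the fixed dataset
    filter, rejecting attempts to override `dataset`.
    (Hypothetical sketch; not EEGDash's actual implementation.)"""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # In MongoDB, sibling keys in one document are implicitly ANDed.
    return {"dataset": dataset_id, **user_query}

q = merge_query("nm000123", {"subject": {"$in": ["01", "02"]}})
print(q)  # {'dataset': 'nm000123', 'subject': {'$in': ['01', '02']}}
```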
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000123](https://openneuro.org/datasets/nm000123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000123](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) ### Examples

```pycon
>>> from eegdash.dataset import NM000123
>>> dataset = NM000123(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000123) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000124: eeg dataset, 24 subjects *Han2024 – SSVEP fatigue dataset with two frequency paradigms* Access recordings and metadata through EEGDash. **Citation:** Yuheng Han, Yufeng Ke, Ruiyan Wang, Tao Wang, Dong Ming (2019). *Han2024 – SSVEP fatigue dataset with two frequency paradigms*.
Modality: eeg Subjects: 24 Recordings: 48 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000124

dataset = NM000124(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter **Filter by subject**

```python
dataset = NM000124(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000124(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX**

```bibtex
@dataset{nm000124,
  title = {Han2024 – SSVEP fatigue dataset with two frequency paradigms},
  author = {Yuheng Han and Yufeng Ke and Ruiyan Wang and Tao Wang and Dong Ming},
}
```

## About This Dataset **SSVEP fatigue dataset with two frequency paradigms** SSVEP fatigue dataset with two frequency paradigms. **Dataset Overview** - **Code**: Han2024Fatigue - **Paradigm**: ssvep - **DOI**: 10.1109/TNSRE.2024.3380635 ### View full README **SSVEP fatigue dataset with two frequency paradigms** SSVEP fatigue dataset with two frequency paradigms.
**Dataset Overview** - **Code**: Han2024Fatigue - **Paradigm**: ssvep - **DOI**: 10.1109/TNSRE.2024.3380635 - **Subjects**: 24 - **Sessions per subject**: 2 - **Events**: 8=1, 8.5=2, 9=3, 9.5=4, 10=5, 10.5=6, 11=7, 11.5=8, 12=9, 12.5=10, 13=11, 13.5=12, 14=13, 14.5=14, 15=15, 15.5=16, 25.5=17, 26=18, 26.5=19, 27=20, 27.5=21, 28=22, 28.5=23, 29=24, 29.5=25, 30=26, 30.5=27, 31=28, 31.5=29, 32=30, 32.5=31, 33=32 - **Trial interval**: [0.14, 2.14] s - **File format**: MAT **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, M1, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, M2, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2 - **Montage**: standard_1005 - **Hardware**: Synamps2 (Neuroscan) - **Reference**: Cz - **Ground**: midway between Fz and FPz - **Line frequency**: 50.0 Hz - **Online filters**: {'bandpass_hz': [0.15, 200.0]} - **Impedance threshold**: 10 kOhm **Participants** - **Number of subjects**: 24 - **Health status**: healthy - **Age**: min=18, max=26 - **Gender distribution**: male=12, female=12 **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: gaze-shifting - **Number of classes**: 32 - **Class labels**: 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 15.5, 25.5, 26, 26.5, 27, 27.5, 28, 28.5, 29, 29.5, 30, 30.5, 31, 31.5, 32, 32.5, 33 - **Trial duration**: 2.0 s - **Feedback type**: none - **Stimulus type**: JFPM visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: True **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) Each of the 32 stimulus-frequency events (`8` through `15.5` and `25.5` through `33`, in 0.5 Hz steps) is annotated as:

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<frequency>
```

where decimal points in labels are rendered as underscores (e.g. `Label/8_5`). **Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 25.5, 26.0, 26.5, 27.0, 27.5, 28.0, 28.5, 29.0, 29.5, 30.0, 30.5, 31.0, 31.5, 32.0, 32.5, 33.0] Hz - **Frequency resolution**: 0.5 Hz **Data Structure** - **Trials**: 960 per frequency band (16 targets x 60 blocks) - **Blocks per session**: 60 - **Trials context**: 6 training + 24 fatigue blocks per frequency condition **Preprocessing** - **Data state**: epoched **Signal Processing** - **Classifiers**: TRCA - **Spatial filters**: TRCA **BCI Application** - **Environment**: lab - **Online feedback**: False **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1109/TNSRE.2024.3380635 - **License**: CC BY 4.0 - **Investigators**: Yuheng Han, Yufeng Ke, Ruiyan Wang, Tao Wang, Dong Ming - **Senior author**: Dong Ming - **Institution**: Tianjin University - **Department**:
Academy of Medical Engineering and Translational Medicine, Tianjin University - **Country**: CN - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/10507229](https://zenodo.org/records/10507229) - **Publication year**: 2024 - **Funding**: National Key Research and Development Program of China (Grant 2021YFF1200603); National Natural Science Foundation of China (Grants 62276184, 61806141) - **Ethics approval**: Research Ethics Committee of Tianjin University - **Keywords**: SSVEP, BCI, fatigue, dynamic stopping, EEG **References** Y. Han, Y. Ke, R. Wang, T. Wang, and D. Ming, “Enhancing SSVEP-BCI Performance Under Fatigue State Using Dynamic Stopping Strategy,” IEEE Trans. Neural Syst. Rehab. Eng., vol. 32, pp. 1407-1415, 2024. DOI: 10.1109/TNSRE.2024.3380635 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000124` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Han2024 – SSVEP fatigue dataset with two frequency paradigms | | Author (year) | `Han2024` | | Canonical | — | | Importable as | `NM000124`, `Han2024` | | Year | 2019 | | Authors | Yuheng Han, Yufeng Ke, Ruiyan Wang, Tao Wang, Dong Ming | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000124) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) | [Source URL](https://nemar.org/dataexplorer/detail/nm000124) | ## Technical Details - Subjects: 24 - Recordings: 48 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 19.839986666666668 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 17.0 GB - File count: 48 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000124](https://openneuro.org/datasets/nm000124) - NeMAR: [nm000124](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) ## API Reference Use the `NM000124` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000124(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Han2024 – SSVEP fatigue dataset with two frequency paradigms * **Study:** `nm000124` (NeMAR) * **Author (year):** `Han2024` * **Canonical:** — Also importable as: `NM000124`, `Han2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 24; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000124](https://openneuro.org/datasets/nm000124) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000124](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) ### Examples

```pycon
>>> from eegdash.dataset import NM000124
>>> dataset = NM000124(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000124) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000125: eeg dataset, 23 subjects *Lee2021 – SSVEP paradigm of the Mobile BCI dataset* Access recordings and metadata through EEGDash. **Citation:** Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee (2019). *Lee2021 – SSVEP paradigm of the Mobile BCI dataset*. Modality: eeg Subjects: 23 Recordings: 85 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000125

dataset = NM000125(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter **Filter by subject**

```python
dataset = NM000125(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000125(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX**

```bibtex
@dataset{nm000125,
  title = {Lee2021 – SSVEP paradigm of the Mobile BCI dataset},
  author = {Young-Eun Lee and Gi-Hwan Shin and Minji Lee and Seong-Whan Lee},
}
```

## About This Dataset **SSVEP paradigm of the Mobile BCI dataset** SSVEP paradigm of the Mobile BCI dataset.
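Each dataset page specifies a trial interval relative to event onset (for this dataset, [0, 5] s at 100 Hz). In practice MNE's epoching handles the cutting, but as a minimal illustration of the bookkeeping, here is a pure-Python sketch; the `epoch` helper and the dummy signal are invented for the example:

```python
def epoch(signal, fs, onsets_s, tmin, tmax):
    """Cut [tmin, tmax) s windows (relative to each onset, in seconds)
    out of a continuous signal; windows that run past either end of
    the recording are skipped. (Illustrative sketch only.)"""
    length = int(round((tmax - tmin) * fs))
    epochs = []
    for onset in onsets_s:
        start = int(round((onset + tmin) * fs))
        if start >= 0 and start + length <= len(signal):
            epochs.append(signal[start:start + length])
    return epochs

fs = 100                        # matches this dataset's 100 Hz sampling rate
signal = list(range(30 * fs))   # 30 s dummy single-channel "recording"
eps = epoch(signal, fs, onsets_s=[2.0, 10.0, 28.0], tmin=0.0, tmax=5.0)
print(len(eps), len(eps[0]))    # 2 500 -- the 28 s onset runs past the end
```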
**Dataset Overview** - **Code**: Lee2021Mobile-SSVEP - **Paradigm**: ssvep - **DOI**: 10.1038/s41597-021-01094-4 ### View full README **SSVEP paradigm of the Mobile BCI dataset** SSVEP paradigm of the Mobile BCI dataset. **Dataset Overview** - **Code**: Lee2021Mobile-SSVEP - **Paradigm**: ssvep - **DOI**: 10.1038/s41597-021-01094-4 - **Subjects**: 23 - **Sessions per subject**: 4 - **Events**: 5.45=11, 8.57=12, 12.0=13 - **Trial interval**: [0, 5] s - **File format**: BrainVision **Acquisition** - **Sampling rate**: 100.0 Hz - **Number of channels**: 73 - **Channel types**: eeg=73 - **Montage**: standard_1005 - **Hardware**: BrainAmp (Brain Products GmbH) - **Reference**: FCz - **Ground**: Fpz - **Sensor type**: Ag/AgCl - **Line frequency**: 60.0 Hz - **Impedance threshold**: 50 kOhm - **Electrode material**: Ag/AgCl - **Auxiliary channels**: EOG (4 ch, vertical, horizontal) **Participants** - **Number of subjects**: 23 - **Health status**: healthy - **Age**: mean=24.5, std=2.9, min=19, max=32 - **Gender distribution**: male=13, female=10 **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 3 - **Class labels**: 5.45, 8.57, 12.0 - **Trial duration**: 5.0 s - **Study design**: BCI during motion (standing/walking/running) - **Stimulus type**: visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) Each stimulus event (`5.45`, `8.57`, `12.0`) is annotated as:

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<frequency>
```

where decimal points in labels are rendered as underscores (e.g. `Label/5_45`). **Signal Processing** - **Classifiers**: rLDA, CCA - **Feature extraction**: power_over_time_intervals, CCA - **Frequency bands**: delta=[0.5,
3.5] Hz; theta=[3.5, 7.5] Hz; alpha=[7.5, 12.5] Hz; beta=[12.5, 30.0] Hz **Cross-Validation** - **Method**: holdout - **Evaluation type**: within_subject **BCI Application** - **Applications**: mobile_BCI - **Environment**: treadmill **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1038/s41597-021-01094-4 - **License**: CC BY 4.0 - **Investigators**: Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee - **Senior author**: Seong-Whan Lee - **Institution**: Korea University - **Country**: KR - **Repository**: OSF - **Data URL**: [https://osf.io/r7s9b/](https://osf.io/r7s9b/) - **Publication year**: 2021 - **Funding**: IITP No. 2017-0-00451; IITP No. 2015-0-00185; IITP No. 2019-0-00079 - **Ethics approval**: Institutional Review Board of Korea University, KUIRB-2019-0194-01 - **Keywords**: SSVEP, ERP, mobile BCI, ear-EEG, locomotion **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000125` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Lee2021 – SSVEP paradigm of the Mobile BCI dataset | | Author (year) | `Lee2021_SSVEP` | | Canonical | — | | Importable as | `NM000125`, `Lee2021_SSVEP` | | Year | 2019 | | Authors | Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000125) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) | [Source URL](https://nemar.org/dataexplorer/detail/nm000125) | ## Technical Details - Subjects: 23 - Recordings: 85 - Tasks: 1 - Channels: 73 (84), 46 - Sampling rate (Hz): 100.0 - Duration (hours): 13.33595 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 1.3 GB - File count: 85 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000125](https://openneuro.org/datasets/nm000125) - NeMAR: [nm000125](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) ## API Reference Use the `NM000125` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000125(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2021 – SSVEP paradigm of the Mobile BCI dataset * **Study:** `nm000125` (NeMAR) * **Author (year):** `Lee2021_SSVEP` * **Canonical:** — Also importable as: `NM000125`, `Lee2021_SSVEP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 23; recordings: 85; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000125](https://openneuro.org/datasets/nm000125) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000125](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) ### Examples

```pycon
>>> from eegdash.dataset import NM000125
>>> dataset = NM000125(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000125) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000126: eeg dataset, 34 subjects *Wang2016 – SSVEP Wang 2016 dataset* Access recordings and metadata through EEGDash. **Citation:** Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao (2016). *Wang2016 – SSVEP Wang 2016 dataset*. Modality: eeg Subjects: 34 Recordings: 34 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000126

dataset = NM000126(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter **Filter by subject**

```python
dataset = NM000126(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000126(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX**

```bibtex
@dataset{nm000126,
  title = {Wang2016 – SSVEP Wang 2016 dataset},
  author = {Yijun Wang and Xiaogang Chen and Xiaorong Gao and Shangkai Gao},
}
```

## About This Dataset **SSVEP Wang 2016 dataset** SSVEP Wang 2016 dataset. **Dataset Overview** - **Code**: Wang2016 - **Paradigm**: ssvep - **DOI**: 10.1109/TNSRE.2016.2627556 ### View full README **SSVEP Wang 2016 dataset** SSVEP Wang 2016 dataset.
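The protocol described below is a cue-guided 40-target speller using joint frequency and phase modulation (JFPM) with sampled sinusoidal stimulation, i.e. per-frame luminance s(i) = 0.5 · (1 + sin(2π·f·i/R + φ)) for frame index i and monitor refresh rate R. A minimal sketch of that waveform (illustrative; the `jfpm_luminance` helper is invented here, and the dataset's actual per-target phase assignments are not reproduced):

```python
import math

def jfpm_luminance(freq_hz, phase_rad, refresh_hz, n_frames):
    """Sampled sinusoidal stimulation: per-frame luminance in [0, 1]
    for a flicker at `freq_hz` with initial phase `phase_rad` on a
    monitor refreshing at `refresh_hz`. (Illustrative sketch.)"""
    return [
        0.5 * (1.0 + math.sin(2.0 * math.pi * freq_hz * i / refresh_hz + phase_rad))
        for i in range(n_frames)
    ]

# The 40 target frequencies: 8 Hz to 15.8 Hz in 0.2 Hz steps
# (the dataset's class labels).
freqs = [round(8.0 + 0.2 * k, 1) for k in range(40)]

frames = jfpm_luminance(freqs[0], 0.0, refresh_hz=60, n_frames=300)  # 5 s at 60 Hz
print(min(frames) >= 0.0 and max(frames) <= 1.0)  # True
```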
**Dataset Overview** - **Code**: Wang2016 - **Paradigm**: ssvep - **DOI**: 10.1109/TNSRE.2016.2627556 - **Subjects**: 34 - **Sessions per subject**: 1 - **Events**: 8=1, 9=2, 10=3, 11=4, 12=5, 13=6, 14=7, 15=8, 8.2=9, 9.2=10, 10.2=11, 11.2=12, 12.2=13, 13.2=14, 14.2=15, 15.2=16, 8.4=17, 9.4=18, 10.4=19, 11.4=20, 12.4=21, 13.4=22, 14.4=23, 15.4=24, 8.6=25, 9.6=26, 10.6=27, 11.6=28, 12.6=29, 13.6=30, 14.6=31, 15.6=32, 8.8=33, 9.8=34, 10.8=35, 11.8=36, 12.8=37, 13.8=38, 14.8=39, 15.8=40 - **Trial interval**: [0.5, 5.5] s - **File format**: MATLAB MAT - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: standard_1005 - **Hardware**: Synamps2 EEG system (Neuroscan, Inc.) 
- **Reference**: Cz - **Line frequency**: 50.0 Hz - **Online filters**: {'bandpass': [0.15, 200], 'notch': 50} - **Impedance threshold**: 10 kOhm **Participants** - **Number of subjects**: 34 - **Health status**: healthy - **Age**: mean=22.0, min=17, max=34 - **Gender distribution**: female=17, male=18 - **BCI experience**: 8 experienced, 27 naïve - **Species**: human **Experimental Protocol** - **Paradigm**: ssvep - **Number of classes**: 40 - **Class labels**: 8, 9, 10, 11, 12, 13, 14, 15, 8.2, 9.2, 10.2, 11.2, 12.2, 13.2, 14.2, 15.2, 8.4, 9.4, 10.4, 11.4, 12.4, 13.4, 14.4, 15.4, 8.6, 9.6, 10.6, 11.6, 12.6, 13.6, 14.6, 15.6, 8.8, 9.8, 10.8, 11.8, 12.8, 13.8, 14.8, 15.8 - **Trial duration**: 6.0 s - **Study design**: Cue-guided target selecting task using a 40-target BCI speller with joint frequency and phase modulation (JFPM) approach - **Stimulus type**: visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Subjects were asked to shift their gaze to the target as soon as possible after cue and avoid eye blinks during the 5-s stimulation duration - **Stimulus presentation**: SoftwareName=MATLAB Psychophysics Toolbox Ver.
3 (PTB-3), display=23.6-in LCD monitor (Acer GD245 HQ, response time: 2 ms), resolution=1920 × 1080 pixels at 60 Hz, viewing_distance=70 cm, stimulus_size=140 × 140 pixels (3.2° × 3.2°), character_size=32 × 32 pixels (0.7° × 0.7°), matrix_size=1510 × 1037 pixels (34° × 24°), matrix_layout=5 × 8 stimulus matrix, inter_stimulus_distance=50 pixels vertical and horizontal, method=sampled sinusoidal stimulation **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) All 40 event codes (8, 9, …, 15, then 8.2, …, 15.8; see the Events list above) share the same annotation pattern, with the code's decimal point rendered as an underscore in the label (Label/8, Label/8_2, …, Label/15_8):

```text
<event code>
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<event code>
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4, 11.6, 11.8, 12.0, 12.2, 12.4, 12.6, 12.8, 13.0, 13.2, 13.4, 13.6, 13.8, 14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8] Hz - **Frequency resolution**: 0.2 Hz - **Number of targets**: 40 - **Number of repetitions**: 6 - **Cue duration**: 0.5 s **Data Structure** - **Trials**: 240 - **Trials per class**: per_target=6 - **Blocks per session**: 6 - **Trials context**: 40 trials per block corresponding to all 40 characters in random order **Preprocessing** - **Data state**: Raw epochs extracted from continuous EEG recordings according to stimulus onsets, downsampled to 250 Hz, no digital filters applied - **Preprocessing applied**: True - **Steps**: Epoch extraction according to stimulus onsets from event channel, Downsampling from 1000 Hz to 250 Hz, No digital filters applied in preprocessing - **Downsampled to**: 250.0 Hz - **Epoch window**: [-0.5, 5.5] s - **Notes**: Data epochs include 0.5 s before stimulus onset, 5 s for stimulation, and 0.5 s after stimulus offset. Upper bound frequency of SSVEP harmonics is around 90 Hz.
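The original study identified the gazed-at target by canonical correlation analysis (CCA) between each epoch and sinusoidal reference signals at the 40 stimulus frequencies. A minimal sketch of that idea, with synthetic data standing in for real recordings (NumPy only; not the benchmark code):

```python
# Hedged sketch of CCA-based SSVEP frequency identification.
# A synthetic 64-channel, 5 s epoch at 250 Hz stands in for a real trial.
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(freq, sfreq, n_samples, n_harmonics=3):
    """Sin/cos reference matrix at freq and its harmonics."""
    t = np.arange(n_samples) / sfreq
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

sfreq = 250.0                              # dataset sampling rate
freqs = np.round(8.0 + 0.2 * np.arange(40), 1)  # the 40 stimulus frequencies
n = int(5 * sfreq)                         # 5 s stimulation window
rng = np.random.default_rng(0)

# Synthetic epoch: unit noise plus a 10.2 Hz SSVEP on 8 occipital-like channels
t = np.arange(n) / sfreq
epoch = rng.standard_normal((n, 64))
epoch[:, :8] += 2.0 * np.sin(2 * np.pi * 10.2 * t)[:, None]

scores = [cca_corr(epoch, reference(f, sfreq, n)) for f in freqs]
print(freqs[int(np.argmax(scores))])  # should print 10.2
```

Over a 5 s window, adjacent frequencies 0.2 Hz apart differ by exactly one cycle and are therefore orthogonal, which is why this spacing remains separable.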
**Signal Processing** - **Classifiers**: CCA, FBCCA - **Feature extraction**: Canonical Correlation Analysis, Filter Bank CCA - **Frequency bands**: analyzed=[7.0, 90.0] Hz **Cross-Validation** - **Method**: leave-one-out (on six blocks) - **Folds**: 6 - **Evaluation type**: within_subject **Performance (Original Study)** - **ITR**: 117.75 bits/min - **Peak ITR (FBCCA, 0.55 s gaze)**: 117.75 bits/min - **Peak ITR (FBCCA, 2 s gaze)**: 68.99 bits/min - **Peak ITR (CCA, 0.55 s gaze)**: 89.89 bits/min - **Peak ITR (CCA, 2 s gaze)**: 56.03 bits/min - **Visual latency**: 136.91 ms (SD 18.4 ms) **BCI Application** - **Applications**: speller - **Environment**: dimly lit soundproof room - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: A benchmark SSVEP dataset acquired with a 40-target BCI speller using joint frequency and phase modulation (JFPM) approach - **DOI**: 10.1109/TNSRE.2016.2627556 - **License**: CC-BY-4.0 - **Investigators**: Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao - **Senior author**: Shangkai Gao - **Contact**: [wangyj@semi.ac.cn](mailto:wangyj@semi.ac.cn); [chenxg@bme.cams.cn](mailto:chenxg@bme.cams.cn); [gxrdea@tsinghua.edu.cn](mailto:gxrdea@tsinghua.edu.cn); [gsk-dea@tsinghua.edu.cn](mailto:gsk-dea@tsinghua.edu.cn) - **Institution**: Tsinghua University - **Department**: Department of Biomedical Engineering, Tsinghua University - **Address**: Beijing, China - **Country**: CN - **Repository**: BNCI Horizon 2020 - **Data URL**: [http://bci.med.tsinghua.edu.cn/download.html](http://bci.med.tsinghua.edu.cn/download.html) - **Publication year**: 2016 - **Funding**: National Natural Science Foundation of China (No. 61431007, No. 91220301, and No. 91320202); National High-tech R&D Program (863) of China (No. 2012AA011601); Recruitment Program for Young Professionals; Young Talents Lift Project of Chinese Association of Science and Technology; PUMC Youth Fund (No.
3332016101) - **Ethics approval**: Research Ethics Committee of Tsinghua University - **Keywords**: Brain–computer interface (BCI), electroencephalogram (EEG), joint frequency and phase modulation (JFPM), public data set, steady-state visual evoked potential (SSVEP) **External Links** - **Source**: [http://bci.med.tsinghua.edu.cn/download.html](http://bci.med.tsinghua.edu.cn/download.html) - **Bnci Horizon**: [https://bnci-horizon-2020.eu/database/data-sets](https://bnci-horizon-2020.eu/database/data-sets) **Abstract** This paper presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain–computer interface (BCI) speller. The dataset consists of 64-channel Electroencephalogram (EEG) data from 35 healthy subjects (8 experienced and 27 naïve) while they performed a cue-guided target selecting task. The virtual keyboard of the speller was composed of 40 visual flickers, which were coded using a joint frequency and phase modulation (JFPM) approach. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The phase difference between two adjacent frequencies was 0.5π. For each subject, the data included six blocks of 40 trials corresponding to all 40 flickers indicated by a visual cue in a random order. The stimulation duration in each trial was five seconds. **Methodology** The study used a cue-guided target selecting task with a 40-target BCI speller. Stimuli were presented on a 23.6-in LCD monitor at 60 Hz using sampled sinusoidal stimulation method. Each trial started with a 0.5-s target cue, followed by 5 s of concurrent flickering of all stimuli, and ended with 0.5 s blank screen. The experiment included six blocks per subject, with 40 trials per block in random order. EEG data were recorded using Synamps2 system at 1000 Hz with 64 electrodes, referenced to Cz. Data were preprocessed by extracting epochs according to stimulus onsets and downsampling to 250 Hz. 
The JFPM approach encoded 40 characters using frequencies from 8 to 15.8 Hz (0.2 Hz interval) and phases from 0 to 19.5π (0.5π interval). Performance was evaluated using CCA and FBCCA methods with leave-one-out cross-validation. **References** Wang, Y., Chen, X., Gao, X., & Gao, S. (2016). A benchmark dataset for SSVEP-based brain–computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), 1746-1752. doi: 10.1109/TNSRE.2016.2627556. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000126` |
|----------------|------------|
| Title | Wang2016 – SSVEP Wang 2016 dataset |
| Author (year) | `Wang2016` |
| Canonical | — |
| Importable as | `NM000126`, `Wang2016` |
| Year | 2016 |
| Authors | Yijun Wang, Xiaogang Chen, Xiaorong Gao, Shangkai Gao |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000126) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) · [Source URL](https://nemar.org/dataexplorer/detail/nm000126) |

## Technical Details - Subjects: 34 - Recordings: 34 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 250.0 - Duration (hours): 14.51 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 3.1 GB - File count: 34 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000126](https://openneuro.org/datasets/nm000126) - NeMAR: [nm000126](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) ## API Reference Use the `NM000126` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2016 – SSVEP Wang 2016 dataset * **Study:** `nm000126` (NeMAR) * **Author (year):** `Wang2016` * **Canonical:** — Also importable as: `NM000126`, `Wang2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000126](https://openneuro.org/datasets/nm000126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000126](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) ### Examples ```pycon >>> from eegdash.dataset import NM000126 >>> dataset = NM000126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000126) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000127: eeg dataset, 40 subjects *Kim2025 – 40-class beta-range SSVEP speller dataset* Access recordings and metadata through EEGDash. **Citation:** Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun (2019). *Kim2025 – 40-class beta-range SSVEP speller dataset*. Modality: eeg Subjects: 40 Recordings: 240 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000127 dataset = NM000127(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000127(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000127( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000127, title = {Kim2025 – 40-class beta-range SSVEP speller dataset}, author = {Heegyu Kim and Kyungho Won and Minkyu Ahn and Sung Chan Jun}, } ``` ## About This Dataset **40-class beta-range SSVEP speller dataset** 40-class beta-range SSVEP speller dataset. 
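The speller's JFPM (joint frequency and phase modulation) coding assigns each of the 40 targets a unique frequency/phase pair. A sketch of such a grid — the 14.0 Hz starting frequency, 0.2 Hz frequency step, and 0.5π phase step are illustrative assumptions here, not values read from the dataset files:

```python
# Illustrative JFPM target grid for a 40-target speller.
# Start frequency, frequency step, and phase step are assumptions
# for illustration, not the dataset's published stimulus table.
import math

f0, df, dphi = 14.0, 0.2, 0.5 * math.pi
targets = [
    {"target": k,
     "freq_hz": round(f0 + df * k, 1),
     "phase_rad": (dphi * k) % (2 * math.pi)}
    for k in range(40)
]
print(targets[0], targets[-1])
```

Each target is thus separable both by its flicker frequency and by its phase offset relative to neighbors.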
**Dataset Overview** - **Code**: Kim2025BetaRange - **Paradigm**: ssvep - **DOI**: 10.1038/s41597-025-06032-2 - **Subjects**: 40 - **Sessions per subject**: 6 - **Events**: 14=1, 15=2, 16=3, 17=4, 18=5, 19=6, 20=7, 21=8, 14.2=9, 15.2=10, 16.2=11, 17.2=12, 18.2=13, 19.2=14, 20.2=15, 21.2=16, 14.4=17, 15.4=18, 16.4=19, 17.4=20, 18.4=21, 19.4=22, 20.4=23, 21.4=24, 14.6=25, 15.6=26, 16.6=27, 17.6=28, 18.6=29, 19.6=30, 20.6=31, 21.6=32, 14.8=33, 15.8=34, 16.8=35, 17.8=36, 18.8=37, 19.8=38, 20.8=39, 21.8=40 - **Trial interval**: [0.0, 5.0] s - **File format**: MAT **Acquisition** - **Sampling rate**: 1024.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31, misc=2 - **Montage**: standard_1005 - **Hardware**: BioSemi ActiveTwo - **Software**: OpenViBE - **Reference**: CMS/DRL - **Ground**: CMS/DRL near Pz - **Sensor type**: active - **Line frequency**: 60.0 Hz - **Impedance threshold**: 5 kOhm - **Cap manufacturer**: BioSemi - **Electrode type**: wet - **Electrode material**: Ag/AgCl **Participants** - **Number of subjects**: 40 - **Health status**: healthy - **Age**: mean=22.8, std=3.34, min=20, max=35 - **Gender distribution**: male=25, female=15 - **BCI experience**: 3 of 40 had prior SSVEP-BCI experience **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: speller - **Number of classes**: 40 - **Class labels**: 14, 15, 16, 17, 18, 19, 20, 21, 14.2, 15.2, 16.2, 17.2, 18.2, 19.2, 20.2, 21.2, 14.4, 15.4, 16.4, 17.4, 18.4, 19.4, 20.4, 21.4, 14.6, 15.6, 16.6, 17.6, 18.6, 19.6, 20.6, 21.6, 14.8, 15.8, 16.8, 17.8, 18.8, 19.8, 20.8, 21.8 - **Trial duration**: 5.0 s - **Feedback type**: none - **Stimulus type**: JFPM visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**:
offline - **Training/test split**: True **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) All 40 event codes (14, 15, …, 21, then 14.2, …, 21.8; see the Events list above) share the same annotation pattern, with the code's decimal point rendered as an underscore in the label (Label/14, Label/14_2, …, Label/21_8):

```text
<event code>
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<event code>
``` **Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [14.0, 14.2, 14.4, 14.6, 14.8, 15.0, 15.2, 15.4, 15.6, 15.8, 16.0, 16.2, 16.4, 16.6, 16.8, 17.0, 17.2, 17.4, 17.6, 17.8, 18.0, 18.2, 18.4, 18.6, 18.8, 19.0, 19.2, 19.4, 19.6, 19.8, 20.0, 20.2, 20.4, 20.6, 20.8, 21.0, 21.2, 21.4, 21.6, 21.8] Hz - **Frequency resolution**: 0.2 Hz **Data Structure** - **Trials**: 240 - **Blocks per session**: 6 **Preprocessing** - **Data state**: epoched **Signal Processing** - **Classifiers**: CCA, FBCCA, ITCCA, TRCA, EEGNet - **Feature extraction**: CCA, FBCCA, TRCA - **Frequency bands**: stimulus_range=[14.0, 22.0] Hz; analysis=[13.0, 89.0] Hz - **Spatial filters**: CCA, TRCA **Cross-Validation** - **Method**: leave-one-subject-out - **Folds**: 6 - **Evaluation type**: within_subject, cross_subject **BCI Application** - **Applications**: speller - **Environment**: lab **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1038/s41597-025-06032-2 - **License**: CC BY 4.0 - **Investigators**: Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun - **Senior author**: Sung Chan Jun - **Institution**: Gwangju Institute of Science and Technology - **Department**: School of Electrical Engineering and Computer Science, GIST - **Country**: KR - **Repository**: Figshare - **Data URL**: [https://doi.org/10.6084/m9.figshare.28806815.v2](https://doi.org/10.6084/m9.figshare.28806815.v2) - **Publication year**: 2025 - **Ethics approval**: GIST IRB, No. 20211201-HR-64-02-04 - **Keywords**: SSVEP, BCI, beta range, visual fatigue, 40-class speller, JFPM, EEG **References** H. Kim, K. Won, M. Ahn, and S. C. Jun, “A 40-class SSVEP speller dataset: beta range stimulation for low-fatigue BCI applications,” Scientific Data, vol. 12, p. 1751, 2025. 
DOI: 10.1038/s41597-025-06032-2 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000127` |
|----------------|------------|
| Title | Kim2025 – 40-class beta-range SSVEP speller dataset |
| Author (year) | `Kim2025_SSVEP` |
| Canonical | `Kim2025` |
| Importable as | `NM000127`, `Kim2025_SSVEP`, `Kim2025` |
| Year | 2019 |
| Authors | Heegyu Kim, Kyungho Won, Minkyu Ahn, Sung Chan Jun |
| License | CC BY 4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000127) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) · [Source URL](https://nemar.org/dataexplorer/detail/nm000127) |

## Technical Details - Subjects: 40 - Recordings: 240 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1024.0 - Duration (hours): 18.93 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 8.1 GB - File count: 240 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source:
nemar - OpenNeuro: [nm000127](https://openneuro.org/datasets/nm000127) - NeMAR: [nm000127](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) ## API Reference Use the `NM000127` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kim2025 – 40-class beta-range SSVEP speller dataset * **Study:** `nm000127` (NeMAR) * **Author (year):** `Kim2025_SSVEP` * **Canonical:** `Kim2025` Also importable as: `NM000127`, `Kim2025_SSVEP`, `Kim2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000127](https://openneuro.org/datasets/nm000127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000127](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) ### Examples ```pycon >>> from eegdash.dataset import NM000127 >>> dataset = NM000127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000127) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000128: eeg dataset, 59 subjects *Dong2023 – 59-subject 40-class SSVEP dataset* Access recordings and metadata through EEGDash. **Citation:** Yue Dong, Sen Tian (2019). *Dong2023 – 59-subject 40-class SSVEP dataset*. 
Modality: eeg Subjects: 59 Recordings: 59 License: CC BY-NC 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000128 dataset = NM000128(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000128(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000128( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000128, title = {Dong2023 – 59-subject 40-class SSVEP dataset}, author = {Yue Dong and Sen Tian}, } ``` ## About This Dataset **59-subject 40-class SSVEP dataset** 59-subject 40-class SSVEP dataset.
**Dataset Overview** - **Code**: Dong2023 - **Paradigm**: ssvep - **DOI**: 10.26599/BSA.2023.9050020 - **Subjects**: 59 - **Sessions per subject**: 1 - **Events**: 8=1, 8.2=2, 8.4=3, 8.6=4, 8.8=5, 9=6, 9.2=7, 9.4=8, 9.6=9, 9.8=10, 10=11, 10.2=12, 10.4=13, 10.6=14, 10.8=15, 11=16, 11.2=17, 11.4=18, 11.6=19, 11.8=20, 12=21, 12.2=22, 12.4=23, 12.6=24, 12.8=25, 13=26, 13.2=27, 13.4=28, 13.6=29, 13.8=30, 14=31, 14.2=32, 14.4=33, 14.6=34, 14.8=35, 15=36, 15.2=37, 15.4=38, 15.6=39, 15.8=40 - **Trial interval**: [0.5, 4.5] s - **File format**: MAT **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: POz, PO3, PO4, PO7, PO8, Oz, O1, O2 - **Montage**: standard_1005 - **Hardware**: NeuSenW (Neuracle) - **Reference**: Fp1 - **Ground**: Fp2 - **Sensor type**: semi-dry (pre-gelled) - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 59 - **Health status**: healthy - **Age**: mean=12.4, min=10, max=16 - **Gender distribution**: male=37, female=22 **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: SSVEP speller - **Number of classes**: 40 - **Class labels**: 8, 8.2, 8.4, 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8 - **Trial duration**: 4.0 s - **Feedback type**: visual - **Stimulus type**: JFPM visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) All 40 event labels (8 through 15.8) share the same annotation; only the `Label/` tag differs, with decimal points written as underscores (e.g. `Label/8_2`, `Label/15_8`):

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<stimulus frequency>
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: 8.0–15.8 Hz in 0.2 Hz steps (40 frequencies) - **Frequency resolution**: 0.2 Hz **Data Structure** - **Trials**: 160 - **Blocks per session**: 4 **Preprocessing** - **Data state**: epoched - **Downsampled to**: 250.0 Hz **Signal Processing** - **Classifiers**: FBCCA, eTRCA, msTRCA - **Spatial filters**: CCA, TRCA **Cross-Validation** - **Method**: leave-one-block-out - **Folds**: 4 - **Evaluation type**: within_subject **BCI Application** - **Environment**: non-shielded - **Online feedback**: True **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.26599/BSA.2023.9050020 - **License**: CC BY-NC 4.0 - **Investigators**: Yue Dong, Sen Tian - **Senior author**: Yue Dong - **Institution**: Jiangsu JITRI Brain Machine Fusion Intelligence Institute - **Country**: CN - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/18847318](https://zenodo.org/records/18847318) - **Publication year**: 2023 **References** Y. Dong and S. Tian, “A large database towards user-friendly SSVEP-based BCI,” Brain Science Advances, vol. 9, no. 4, pp. 297-309, 2023. DOI: 10.26599/BSA.2023.9050020 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Field | Value |
|----------------|-------|
| Dataset ID | `NM000128` |
| Title | Dong2023 – 59-subject 40-class SSVEP dataset |
| Author (year) | `Dong2023` |
| Canonical | — |
| Importable as | `NM000128`, `Dong2023` |
| Year | 2019 |
| Authors | Yue Dong, Sen Tian |
| License | CC BY-NC 4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000128) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) · [Source URL](https://nemar.org/dataexplorer/detail/nm000128) |

## Technical Details - Subjects: 59 - Recordings: 59 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 250.0 - Duration (hours): 14.16 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 397.1 MB - File count: 59 - Format: BIDS - License: CC BY-NC 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000128](https://openneuro.org/datasets/nm000128) - NeMAR: [nm000128](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) ## API Reference Use the `NM000128` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000128(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dong2023 – 59-subject 40-class SSVEP dataset * **Study:** `nm000128` (NeMAR) * **Author (year):** `Dong2023` * **Canonical:** — Also importable as: `NM000128`, `Dong2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 59; recordings: 59; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000128](https://openneuro.org/datasets/nm000128) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000128](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) ### Examples ```pycon >>> from eegdash.dataset import NM000128 >>> dataset = NM000128(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000128) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000129: eeg dataset, 70 subjects *Liu2020 – BETA SSVEP benchmark dataset* Access recordings and metadata through EEGDash. **Citation:** Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao (2019). *Liu2020 – BETA SSVEP benchmark dataset*. Modality: eeg Subjects: 70 Recordings: 70 License: Non-commercial research use Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000129 dataset = NM000129(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000129(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000129( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000129, title = {Liu2020 – BETA SSVEP benchmark dataset}, author = {Bingchuan Liu and Xiaoshan Huang and Yijun Wang and Xiaogang Chen and Xiaorong Gao}, } ``` ## About This Dataset **BETA SSVEP benchmark dataset** BETA SSVEP benchmark dataset. 
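BETA's event table, listed in the overview that follows, wraps around: codes 1–37 cover 8.6–15.8 Hz in 0.2 Hz steps, while codes 38–40 cover 8.0, 8.2 and 8.4 Hz. A minimal sketch of decoding that wrapped ordering — the variable names are illustrative, not part of the EEGDash API:

```python
# Rebuild the BETA code-to-frequency mapping: 8.6-15.8 Hz for codes
# 1-37, then the wrapped-around 8.0, 8.2, 8.4 Hz for codes 38-40.
freqs = [round(8.6 + 0.2 * i, 1) for i in range(37)] + [8.0, 8.2, 8.4]
beta_code_to_freq = {code: f for code, f in enumerate(freqs, start=1)}

print(beta_code_to_freq[1], beta_code_to_freq[37], beta_code_to_freq[38])  # 8.6 15.8 8.0
```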
**Dataset Overview** - **Code**: Liu2020BETA - **Paradigm**: ssvep - **DOI**: 10.3389/fnins.2020.00627 ### View full README **BETA SSVEP benchmark dataset** BETA SSVEP benchmark dataset. **Dataset Overview** - **Code**: Liu2020BETA - **Paradigm**: ssvep - **DOI**: 10.3389/fnins.2020.00627 - **Subjects**: 70 - **Sessions per subject**: 1 - **Events**: 8.6=1, 8.8=2, 9=3, 9.2=4, 9.4=5, 9.6=6, 9.8=7, 10=8, 10.2=9, 10.4=10, 10.6=11, 10.8=12, 11=13, 11.2=14, 11.4=15, 11.6=16, 11.8=17, 12=18, 12.2=19, 12.4=20, 12.6=21, 12.8=22, 13=23, 13.2=24, 13.4=25, 13.6=26, 13.8=27, 14=28, 14.2=29, 14.4=30, 14.6=31, 14.8=32, 15=33, 15.2=34, 15.4=35, 15.6=36, 15.8=37, 8=38, 8.2=39, 8.4=40 - **Trial interval**: [0, 3.0] s - **File format**: MAT **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, M1, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, M2, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2 - **Montage**: standard_1005 - **Hardware**: Synamps2 (Neuroscan) - **Reference**: Cz - **Line frequency**: 50.0 Hz - **Impedance threshold**: 10 kOhm **Participants** - **Number of subjects**: 70 - **Health status**: healthy - **Age**: mean=25.14, std=7.97, min=9, max=64 - **Gender distribution**: male=42, female=28 - **BCI experience**: mixed **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: cued-spelling - **Number of classes**: 40 - **Class labels**: 8.6, 8.8, 9, 9.2, 9.4, 9.6, 9.8, 10, 10.2, 10.4, 10.6, 10.8, 11, 11.2, 11.4, 11.6, 11.8, 12, 12.2, 12.4, 12.6, 12.8, 13, 13.2, 13.4, 13.6, 13.8, 14, 14.2, 14.4, 14.6, 14.8, 15, 15.2, 15.4, 15.6, 15.8, 8, 8.2, 8.4 - **Trial duration**: 3.0 s - **Feedback type**: visual - **Stimulus type**: JFPM visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - 
**Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) All 40 event labels (8.6 through 15.8, then 8, 8.2, 8.4) share the same annotation; only the `Label/` tag differs, with decimal points written as underscores (e.g. `Label/8_6`):

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<stimulus frequency>
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: 8.0–15.8 Hz in 0.2 Hz steps (40 frequencies) - **Frequency resolution**: 0.2 Hz **Data Structure** - **Trials**: 160 - **Blocks per session**: 4 **Preprocessing** - **Data state**: epoched - **Notch filter**: 50 Hz - **Filter type**: zero-phase FIR - **Downsampled to**: 250.0 Hz **Signal Processing** - **Classifiers**: TRCA, msTRCA, FBCCA, CCA - **Feature extraction**: CCA, TRCA, FBCCA - **Frequency bands**: bandpass=[3.0, 100.0] Hz - **Spatial filters**: CCA, TRCA **Cross-Validation** - **Method**: leave-one-block-out - **Folds**: 4 - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: classroom - **Online feedback**: True **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.3389/fnins.2020.00627 - **License**: Non-commercial research use - **Investigators**: Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao - **Senior author**: Xiaorong Gao - **Institution**: Tsinghua University - **Department**: Department of Biomedical Engineering, Tsinghua University - **Country**: CN - **Repository**: Tsinghua BCI Lab - **Data URL**: [http://bci.med.tsinghua.edu.cn/upload/liubingchuan/](http://bci.med.tsinghua.edu.cn/upload/liubingchuan/) - **Publication year**: 2020 - **Funding**: National Key Research and Development Program of China (No. 2017YFB1002505); Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200); Key Research and Development Program of Guangdong Province (No. 2018B030339001); National Natural Science Foundation of China (Grant No. 61431007) - **Ethics approval**: Ethics Committee of Tsinghua University, No. 20190002 - **Keywords**: SSVEP, BCI, EEG, benchmark, JFPM **References** B. Liu, X. Huang, Y. Wang, X. Chen, and X. Gao, “BETA: A Large Benchmark Database Toward SSVEP-BCI Application,” Frontiers in Neuroscience, vol. 14, p. 627, 2020. DOI: 10.3389/fnins.2020.00627 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Field | Value |
|----------------|-------|
| Dataset ID | `NM000129` |
| Title | Liu2020 – BETA SSVEP benchmark dataset |
| Author (year) | `Liu2020` |
| Canonical | `BetaSSVEP`, `BETA_SSVEP`, `BETA` |
| Importable as | `NM000129`, `Liu2020`, `BetaSSVEP`, `BETA_SSVEP`, `BETA` |
| Year | 2019 |
| Authors | Bingchuan Liu, Xiaoshan Huang, Yijun Wang, Xiaogang Chen, Xiaorong Gao |
| License | Non-commercial research use |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000129) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) · [Source URL](https://nemar.org/dataexplorer/detail/nm000129) |

## Technical Details - Subjects: 70 - Recordings: 70 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 250.0 - Duration (hours): 13.02 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 2.8 GB - File count: 70 - Format: BIDS - License: Non-commercial research use - DOI: — - Source: nemar - OpenNeuro: [nm000129](https://openneuro.org/datasets/nm000129) - NeMAR: [nm000129](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) ## API Reference Use the `NM000129` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000129(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2020 – BETA SSVEP benchmark dataset * **Study:** `nm000129` (NeMAR) * **Author (year):** `Liu2020` * **Canonical:** `BetaSSVEP`, `BETA_SSVEP`, `BETA` Also importable as: `NM000129`, `Liu2020`, `BetaSSVEP`, `BETA_SSVEP`, `BETA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 70; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000129](https://openneuro.org/datasets/nm000129) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000129](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) ### Examples ```pycon >>> from eegdash.dataset import NM000129 >>> dataset = NM000129(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000129) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000130: eeg dataset, 100 subjects *Liu2022 – eldBETA SSVEP benchmark dataset for elderly population* Access recordings and metadata through EEGDash. **Citation:** Bingchuan Liu, Yijun Wang, Xiaorong Gao, Xiaogang Chen (2019). *Liu2022 – eldBETA SSVEP benchmark dataset for elderly population*. 
Modality: eeg Subjects: 100 Recordings: 700 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000130 dataset = NM000130(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000130(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000130( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000130, title = {Liu2022 – eldBETA SSVEP benchmark dataset for elderly population}, author = {Bingchuan Liu and Yijun Wang and Xiaorong Gao and Xiaogang Chen}, } ``` ## About This Dataset **eldBETA SSVEP benchmark dataset for elderly population** eldBETA SSVEP benchmark dataset for elderly population. **Dataset Overview** - **Code**: Liu2022EldBETA - **Paradigm**: ssvep - **DOI**: 10.1038/s41597-022-01372-9 ### View full README **eldBETA SSVEP benchmark dataset for elderly population** eldBETA SSVEP benchmark dataset for elderly population. 
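eldBETA uses nine stimulus frequencies (8–12 Hz in 0.5 Hz steps), and its event codes enumerate them in a non-monotonic order (8=1, 9.5=2, 11=3, …), consistent with reading a 3×3 speller grid row by row — the grid interpretation is an inference from the event table, not stated in the metadata. A minimal sketch, with illustrative variable names:

```python
# eldBETA event-code-to-frequency mapping, taken from the Events list
# in the dataset README. Laid out as the inferred 3x3 grid: columns
# step by 1.5 Hz, rows by 0.5 Hz.
eld_code_to_freq = {
    1: 8.0, 2: 9.5, 3: 11.0,
    4: 8.5, 5: 10.0, 6: 11.5,
    7: 9.0, 8: 10.5, 9: 12.0,
}

# Equivalent closed form for code c in 1..9 (grid layout inferred):
#   8.0 + 0.5 * ((c - 1) // 3) + 1.5 * ((c - 1) % 3)
assert all(
    f == round(8.0 + 0.5 * ((c - 1) // 3) + 1.5 * ((c - 1) % 3), 1)
    for c, f in eld_code_to_freq.items()
)
```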
**Dataset Overview** - **Code**: Liu2022EldBETA - **Paradigm**: ssvep - **DOI**: 10.1038/s41597-022-01372-9 - **Subjects**: 100 - **Sessions per subject**: 7 - **Events**: 8=1, 9.5=2, 11=3, 8.5=4, 10=5, 11.5=6, 9=7, 10.5=8, 12=9 - **Trial interval**: [0, 6.0] s - **File format**: GDF (BIDS) **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Montage**: standard_1005 - **Hardware**: Synamps2 (Neuroscan) - **Reference**: Cz - **Line frequency**: 50.0 Hz - **Impedance threshold**: 20 kOhm **Participants** - **Number of subjects**: 100 - **Health status**: healthy - **Age**: mean=63.17, std=6.05, min=51, max=81 - **Gender distribution**: male=33, female=67 **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: 9-target SSVEP speller - **Number of classes**: 9 - **Class labels**: 8, 9.5, 11, 8.5, 10, 11.5, 9, 10.5, 12 - **Trial duration**: 5.0 s - **Feedback type**: visual - **Stimulus type**: JFPM visual flicker - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: False **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) All nine event labels (8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12) share the same annotation; only the `Label/` tag differs, with decimal points written as underscores (e.g. `Label/9_5`):

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/<stimulus frequency>
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0] Hz - **Frequency resolution**: 0.5 Hz **Data Structure** - **Trials**: 63 - **Blocks per session**: 7 **Signal Processing** - **Classifiers**: TDCA, ms-eCCA, ensemble_msTRCA, ensemble_TRCA, Extended_CCA, ITCCA, L1MCCA, FBCCA, CVARS, tMSI, MEC, MSI, CCA - **Feature extraction**: TDCA, CCA, FBCCA, TRCA, ms-eCCA, msTRCA, Extended_CCA, ITCCA, L1MCCA, CVARS, tMSI, MEC, MSI - **Frequency bands**: bandpass=[6.0, 100.0] Hz - **Spatial filters**: TDCA, CCA, TRCA, ms-eCCA, msTRCA, Extended_CCA, ITCCA, L1MCCA, CVARS, MEC, MSI, tMSI **Cross-Validation** - **Method**: leave-one-block-out - **Folds**: 7 - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: lab - **Online feedback**: True **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1038/s41597-022-01372-9 - **License**: CC BY 4.0 - **Investigators**: Bingchuan Liu, Yijun Wang, Xiaorong Gao, Xiaogang Chen - **Senior author**: Xiaogang Chen - **Institution**: Tsinghua University - **Department**: Department of Biomedical Engineering, School of Medicine, Tsinghua University - **Country**: CN - **Repository**: Figshare - **Data URL**: [https://doi.org/10.6084/m9.figshare.18032669](https://doi.org/10.6084/m9.figshare.18032669) - **Publication year**: 2022 - **Funding**: National Natural Science Foundation of China (No. 62171473); Doctoral Brain+X Seed Grant Program of Tsinghua University; Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32040200) - **Ethics approval**: Institutional Review Board of Tsinghua University, No. 20210032 - **Keywords**: SSVEP, BCI, EEG, elderly, aging, benchmark, JFPM **References** B. Liu, Y.
Wang, X. Gao, and X. Chen, “eldBETA: A Large Eldercare-oriented Benchmark Database of SSVEP-BCI for the Aging Population,” Scientific Data, vol. 9, p. 252, 2022. DOI: 10.1038/s41597-022-01372-9 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Field | Value |
|----------------|-------|
| Dataset ID | `NM000130` |
| Title | Liu2022 – eldBETA SSVEP benchmark dataset for elderly population |
| Author (year) | `Liu2022` |
| Canonical | `EldBETA`, `eldBETA`, `Liu2022EldBETA` |
| Importable as | `NM000130`, `Liu2022`, `EldBETA`, `eldBETA`, `Liu2022EldBETA` |
| Year | 2019 |
| Authors | Bingchuan Liu, Yijun Wang, Xiaorong Gao, Xiaogang Chen |
| License | CC BY 4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000130) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) · [Source URL](https://nemar.org/dataexplorer/detail/nm000130) |

## Technical Details - Subjects: 100 - Recordings: 700 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 20.18 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 17.4 GB - File count: 700 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000130](https://openneuro.org/datasets/nm000130) - NeMAR: [nm000130](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) ## API Reference Use the `NM000130` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000130(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2022 – eldBETA SSVEP benchmark dataset for elderly population * **Study:** `nm000130` (NeMAR) * **Author (year):** `Liu2022` * **Canonical:** `EldBETA`, `eldBETA`, `Liu2022EldBETA` Also importable as: `NM000130`, `Liu2022`, `EldBETA`, `eldBETA`, `Liu2022EldBETA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 100; recordings: 700; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter.
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000130](https://openneuro.org/datasets/nm000130) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000130](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) ### Examples ```pycon >>> from eegdash.dataset import NM000130 >>> dataset = NM000130(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000130) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000131: eeg dataset, 8 subjects *Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs* Access recordings and metadata through EEGDash. **Citation:** Lu Wang, Zhenhao Zhang, Dan Han, Zhijun Zhang, Zhifang Liu, Wei Liu (2021). *Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs*.
Modality: eeg Subjects: 8 Recordings: 22 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000131 dataset = NM000131(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000131(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000131( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000131, title = {Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs}, author = {Lu Wang and Zhenhao Zhang and Dan Han and Zhijun Zhang and Zhifang Liu and Wei Liu}, } ``` ## About This Dataset **Combined SSVEP dataset with single stimulus location for two inputs** Combined SSVEP dataset with single stimulus location for two inputs. **Dataset Overview** - **Code**: Wang2021Combined - **Paradigm**: ssvep - **DOI**: 10.1111/ejn.15030 ### View full README **Combined SSVEP dataset with single stimulus location for two inputs** Combined SSVEP dataset with single stimulus location for two inputs. 
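As background on how an SSVEP paradigm like this one is typically decoded, here is a minimal, stdlib-only sketch (independent of EEGDash and of the original authors' pipeline) that scores a trial against the dataset's four candidate stimulation frequencies (14.17, 12.14, 9.44, and 7.73 Hz, with a 1000 Hz sampling rate and 5 s trials, as listed in the dataset metadata) using single-bin Fourier correlation. The function and variable names are illustrative, not part of any library:

```python
import math

# Candidate SSVEP stimulation frequencies (Hz), sampling rate, and trial
# length taken from this dataset's metadata.
FREQS = [14.17, 12.14, 9.44, 7.73]
FS = 1000.0
TRIAL_SECONDS = 5.0

def band_power(signal, freq, fs):
    """Magnitude of the single-bin Fourier correlation of `signal` at `freq`."""
    re = sum(x * math.cos(2 * math.pi * freq * n / fs) for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * n / fs) for n, x in enumerate(signal))
    return math.hypot(re, im)

def detect_frequency(signal, fs=FS):
    """Return the candidate frequency with the largest correlation magnitude."""
    return max(FREQS, key=lambda f: band_power(signal, f, fs))

# Synthetic 5 s "trial": a pure 12.14 Hz oscillation. A real trial would be a
# noisy multi-channel EEG segment, usually scored with CCA or similar methods.
n_samples = int(TRIAL_SECONDS * FS)
trial = [math.sin(2 * math.pi * 12.14 * n / FS) for n in range(n_samples)]
print(detect_frequency(trial))  # -> 12.14
```

Real pipelines typically replace the single-bin correlation with canonical correlation analysis over reference sinusoids and harmonics, but the frequency-matching idea is the same.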
**Dataset Overview** - **Code**: Wang2021Combined - **Paradigm**: ssvep - **DOI**: 10.1111/ejn.15030 - **Subjects**: 8 - **Sessions per subject**: 1 - **Events**: 14.17=1, 12.14=2, 9.44=3, 7.73=4 - **Trial interval**: [0.0, 5.0] s - **File format**: CNT **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31, eog=2 - **Montage**: standard_1005 - **Hardware**: eego mylab (ANT Neuro) - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 8 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: ssvep - **Task type**: covert_attention - **Number of classes**: 4 - **Class labels**: 14.17, 12.14, 9.44, 7.73 - **Trial duration**: 5.0 s - **Study design**: One-to-two combined SSVEP with overlapping stimuli - **Feedback type**: none - **Stimulus type**: overlapping SSVEP arrows (CRT 85 Hz) - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
14.17
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/14_17
12.14
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/12_14
9.44
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/9_44
7.73
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/7_73
```

**Paradigm-Specific Parameters** - **Detected paradigm**: ssvep - **Stimulus frequencies**: [14.17, 12.14, 9.44, 7.73] Hz **Data Structure** - **Blocks per session**: 2 **BCI Application** - **Environment**: lab **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1111/ejn.15030 - **License**: CC BY 4.0 - **Investigators**: Lu Wang, Zhenhao Zhang, Dan Han, Zhijun Zhang, Zhifang Liu, Wei Liu - **Senior
author**: Zhijun Zhang - **Institution**: Shandong University - **Country**: CN - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/18873228](https://zenodo.org/records/18873228) - **Publication year**: 2021 **References** L. Wang, Z. Zhang, D. Han, Z. Zhang, Z. Liu, and W. Liu, “Single stimulus location for two inputs: A combined brain-computer interface based on Steady-State Visual Evoked Potential (SSVEP),” European Journal of Neuroscience, vol. 53, no. 3, pp. 861-875, 2021. DOI: 10.1111/ejn.15030 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000131` | |----------------|------------| | Title | Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs | | Author (year) | `Wang2021` | | Canonical | — | | Importable as | `NM000131`, `Wang2021` | | Year | 2021 | | Authors | Lu Wang, Zhenhao Zhang, Dan Han, Zhijun Zhang, Zhifang Liu, Wei Liu | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000131) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) | [Source URL](https://nemar.org/dataexplorer/detail/nm000131) | ## Technical Details - Subjects: 8 - Recordings: 22 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 6.1615825 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 2.6 GB - File count: 22 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000131](https://openneuro.org/datasets/nm000131) - NeMAR: [nm000131](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) ## API Reference Use the `NM000131` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs * **Study:** `nm000131` (NeMAR) * **Author (year):** `Wang2021` * **Canonical:** — Also importable as: `NM000131`, `Wang2021`.
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000131](https://openneuro.org/datasets/nm000131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000131](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) ### Examples ```pycon >>> from eegdash.dataset import NM000131 >>> dataset = NM000131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000131) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000132: eeg dataset, 40 subjects *ERP CORE* Access recordings and metadata through EEGDash. **Citation:** Emily S. Kappenman, Jaclyn L. Farrens, Wendy Zhang, Andrew X. Stewart, Steven J. Luck (2021). *ERP CORE*. [10.82901/nemar.nm000132](https://doi.org/10.82901/nemar.nm000132) Modality: eeg Subjects: 40 Recordings: 240 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000132 dataset = NM000132(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000132(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000132( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000132, title = {ERP CORE}, author = {Emily S. Kappenman and Jaclyn L. Farrens and Wendy Zhang and Andrew X. Stewart and Steven J.
Luck.}, doi = {10.82901/nemar.nm000132}, url = {https://doi.org/10.82901/nemar.nm000132}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000132) **ERP CORE (Compendium of Open Resources and Experiments)** Emily S. Kappenman(1,2), Jaclyn L. Farrens(1), Wendy Zhang(1,2), Andrew X. Stewart(3), & Steven J. Luck(3) 1. San Diego State University, Department of Psychology, San Diego, CA, 92120 2. SDSU/UC San Diego Joint Doctoral Program in Clinical Psychology, San Diego, CA, 92120 3. University of California, Davis, Center for Mind & Brain and Department of Psychology, Davis, CA, 95616 The ERP CORE ([https://doi.org/10.18115/D5JW4R](https://doi.org/10.18115/D5JW4R)) is a freely available online resource consisting of optimized paradigms, experiment control scripts, example data from 40 neurotypical adults, data processing pipelines and analysis scripts, and a broad set of results for 7 widely used ERP components: N170, mismatch negativity (MMN), N2pc, N400, P3, lateralized readiness potential (LRP), and error-related negativity (ERN). ### View full README [DOI](https://doi.org/10.82901/nemar.nm000132) **ERP CORE (Compendium of Open Resources and Experiments)** Emily S. Kappenman(1,2), Jaclyn L. Farrens(1), Wendy Zhang(1,2), Andrew X. Stewart(3), & Steven J. Luck(3) 1. San Diego State University, Department of Psychology, San Diego, CA, 92120 2. SDSU/UC San Diego Joint Doctoral Program in Clinical Psychology, San Diego, CA, 92120 3. 
University of California, Davis, Center for Mind & Brain and Department of Psychology, Davis, CA, 95616 The ERP CORE ([https://doi.org/10.18115/D5JW4R](https://doi.org/10.18115/D5JW4R)) is a freely available online resource consisting of optimized paradigms, experiment control scripts, example data from 40 neurotypical adults, data processing pipelines and analysis scripts, and a broad set of results for 7 widely used ERP components: N170, mismatch negativity (MMN), N2pc, N400, P3, lateralized readiness potential (LRP), and error-related negativity (ERN). **What’s Included** 1. Raw EEG data for 6 task paradigms (yielding 7 ERP components) from 40 participants, located in subject folders sub-001 through sub-040 2. The event code schemes for all experiment paradigms 3. The task stimuli used for eliciting N170, MMN, and N400, located in the `stimuli/` folder 4. Demographic information for all 40 participants (`participants.tsv` and `participants.json`) **Tasks**

```text
| Task     | ERP Component(s) | Description                                                   |
| -------- | ---------------- | ------------------------------------------------------------- |
| flankers | ERN, LRP         | Eriksen flanker task (compatible/incompatible arrow flankers) |
| MMN      | MMN              | Passive auditory oddball                                      |
| N170     | N170             | Face perception (faces, cars, scrambled)                      |
| N2pc     | N2pc             | Visual search                                                 |
| N400     | N400             | Word pair judgment (related/unrelated)                        |
| P3       | P3               | Active visual oddball                                         |
```

The following more detailed description of the tasks was taken from [Kappenman et al. 2021](https://www.sciencedirect.com/science/article/pii/S1053811920309502) and its supplement. In all tasks the interstimulus image refers to a gray screen with a white central fixation point. The order of the six tasks was randomized across participants, and the task parameters were counterbalanced within participants wherever possible (see individual task descriptions below for details).
Interstimulus intervals (ISIs) were jittered using a rectangular distribution to prevent phase-locking of alpha-band EEG oscillations to the stimulus sequence. Each block of trials began with a sequence of “Ready,” “Set,” “Go” screens. Participant-controlled rest breaks were provided between blocks. **Eriksen flanker task** The lateralized readiness potential (LRP) and the error-related negativity (ERN) were elicited using a variant of the Eriksen flanker task ( Eriksen and Eriksen, 1974 ; see Fig. 1 F). A central arrowhead pointing to the left (<) or right (>) was flanked on both sides by 2 arrowheads that pointed in the same direction (congruent trials) or the opposite direction (incongruent trials). Participants indicated the direction of the central arrowhead on each trial with a left or right-hand buttonpress. The arrowheads were displayed for 200 ms followed by the interstimulus image displayed from 1200-1400 ms. **Passive auditory oddball** The MMN was elicited using a passive auditory oddball task modeled on Näätänen et al. (2004). Standard tones (presented at 80 dB, with p = .8) and deviant tones (presented at 70 dB, with p = .2) were presented over speakers while participants watched a silent video and ignored the tones. The stimuli were 1000 Hz pure auditory tones, 100 ms in duration (including 5 ms rise and fall times), separated by a silent ISI of 450-550 ms. To establish the auditory context for the standard, a stream of 15 standard stimuli was presented at the start of the task; all remaining stimuli were presented in a random order with the specified probabilities, except with the constraint that no two deviant tones could be presented sequentially. A total of 1000 tones were presented (including the initial stream of 15 standards), consisting of 800 standards and 200 deviants. Participants watched a silent movie of [sand drawings](https://www.youtube.com/watch?v=TtBOuPMZdoQ) by artist Kseniya Simonova. 
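The oddball sequence constraints just described (an initial stream of 15 standards, 800 standards and 200 deviants in total, random order, and no two consecutive deviants) are easy to reproduce. The following is an illustrative stdlib-only construction, not the original experiment-control code, and the real randomization procedure may differ:

```python
import random

def make_oddball_sequence(n_standards=800, n_deviants=200, lead_in=15, seed=0):
    """One way to build an MMN-style tone sequence: `lead_in` standards first,
    then deviants placed so that no two deviants are ever adjacent."""
    rng = random.Random(seed)
    body = n_standards - lead_in  # standards after the initial stream
    # Place each deviant in a distinct "gap": before body-standard i, or at
    # the very end (gap == body). Distinct gaps are separated by at least
    # one standard, so no two deviants can end up adjacent.
    chosen = set(rng.sample(range(body + 1), n_deviants))
    seq = ["standard"] * lead_in
    for i in range(body + 1):
        if i in chosen:
            seq.append("deviant")
        if i < body:
            seq.append("standard")
    return seq

seq = make_oddball_sequence()
print(len(seq), seq.count("standard"), seq.count("deviant"))  # 1000 800 200
```

Note this gap-based construction satisfies the constraints but does not sample uniformly from all valid orderings; it is meant only to make the protocol concrete.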
**Face perception** The N170 was elicited in a face perception task modified from Rossion and Caharel (2011) using their stimuli. In this task, an image of a face, car, scrambled face, or scrambled car was presented on each trial in the center of the screen, and participants responded whether the stimulus was an ‘object’ (face or car) or a ‘texture’ (scrambled face or scrambled car). The cropped images were displayed on a gray background for 300 ms followed by the interstimulus image for 1100-1300 ms. Each participant completed a total of 320 trials. Each category included 40 exemplars, each of which was presented twice, yielding 80 total trials of each stimulus category. The stimuli were presented in a randomly shuffled sequence, with the constraint that a given exemplar was presented only once in the first half and once in the second half of the session. The task was divided into blocks of 40 trials. **Visual search** The N2pc was elicited using a simple visual search task based on Luck et al. (2006). Participants were given a target color of pink or blue at the beginning of a trial block, and responded on each trial whether the gap in the target color square was on the top or bottom. On each trial, 12 items were presented in the left visual field and 12 items were presented in the right visual field. Each item was an outlined square with a gap on one side. Eleven of the items in each visual field were black and had either a left-side or right-side gap (randomly and independently determined). In addition, one blue square and one pink square with a gap on either the top or bottom (randomly and independently determined) were presented on each trial. The blue and pink were equally distant from the gray background in the CIE (1976) color space. The pink item and blue item were always presented in opposite visual fields, with the location randomized across trials. Each stimulus array was presented for 500 ms, separated by the interstimulus image of 900-1100 ms. 
The fixation point was always visible. The blue item was the target for half of the task, and the pink item was the target for the other half; the order was counterbalanced across participants. Participants pressed a button to indicate the location of the gap in the target square using the index finger (top gap) and middle finger (bottom gap) of the dominant hand. Because the response mapping was natural (e.g., an upper buttonpress for a gap on the top), and the data were eventually collapsed across top and bottom gap positions, the stimulus-response mapping was held consistent across participants. Each participant completed a total of 320 trials, divided into blocks of 40. **Word pair judgement** The N400 was elicited using a word pair judgment task adapted from Holcomb & Kutas (1990). On each trial, a red prime word was followed by a green target word. Participants responded whether the target word was semantically related or unrelated to the prime word. Each target word was presented twice in the experiment, once preceded by an associated prime word and once preceded by an unassociated prime word. Participants received a total of 60 associated word pairs and 60 unassociated word pairs. The order of word pairs was randomized with the constraint that each target word occurred only once within each half of the experiment. The prime word on each trial was presented for 200 ms in red (x = 0.65, y = 0.33, 5.8 cd/m2 ), followed by an ISI of 900-1100 ms. The target word was then presented for 200 ms in green, followed by an intertrial interval (ITI) of 1400-1600 ms. The prime and target words were presented in different colors to make it easier for participants to determine which word in the pair required a response. Participants were instructed to press one button if the target word was semantically related to the prime word and another button if the words were unrelated. Each participant completed 120 trials, divided into blocks of 20. 
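The N400 trial structure above (200 ms prime, 900-1100 ms ISI, 200 ms target, 1400-1600 ms ITI, with rectangular jitter) can be sketched as a simple timeline generator. This is illustrative stdlib-only code, not the original experiment script:

```python
import random

def n400_trial_timeline(rng):
    """Onsets in ms relative to prime onset for one N400 trial:
    200 ms prime -> 900-1100 ms ISI -> 200 ms target -> 1400-1600 ms ITI."""
    isi = rng.uniform(900, 1100)   # rectangular (uniform) jitter
    iti = rng.uniform(1400, 1600)
    target_onset = 200 + isi       # prime offset plus jittered ISI
    trial_end = target_onset + 200 + iti
    return {"prime_onset": 0.0, "target_onset": target_onset, "trial_end": trial_end}

rng = random.Random(0)
trial = n400_trial_timeline(rng)
# target_onset always falls in [1100, 1300] ms; trial_end in [2700, 3100] ms
```

Given these bounds, total trial length varies between 2.7 and 3.1 s, which matches the jittering rationale described earlier (preventing phase-locking of alpha-band oscillations to the stimulus sequence).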
**Active visual oddball** The P3 was elicited in an active visual oddball task adapted from Luck et al. (2009). The letters A, B, C, D, and E were presented in random order (p = .2 for each letter). One letter was designated the target for a given block of trials, and the other 4 letters were non-targets. Thus, the probability of the target category was .2, but the same physical stimulus served as a target in some blocks and a nontarget in others. Participants pressed one button for targets and another button for non-targets. Each of the five letters served as a target in one block of the experiment and as a non-target in the other four blocks, with the order of blocks randomized across participants. Each letter was presented with equal probability within a block of trials (p = .2), such that the target category was rare (p = .2) and the non-target category was frequent (p = .8). This design eliminates any possible sensory differences between the target and non-target stimuli, including differential sensory adaptation of the individual target and non-target stimuli (see Luck, 2014). **BIDS Restructuring Notes** The original dataset from NEMAR used `ses-*` (session) directories for each ERP paradigm. This was corrected because all 6 paradigms were administered in a single recording session (Kappenman et al., 2021).
The following changes were made: - Removed session-level directories; data is now organized as `sub-XXX/eeg/sub-XXX_task-YYY_*` - Merged `task-ERN` and `task-LRP` into `task-flankers`, since both ERP components come from the same flanker paradigm recording (identical raw data) - Fixed `EEGCoordinateSystem` from invalid `"ARS"` to `"EEGLAB"` (Anterior-Left-Superior orientation) - Fixed `task-N170_events.json`: replaced invalid range-based Levels keys (e.g., `"1-40"`) with a Description field - Fixed `task-MMN_events.json` and `task-N2pc_events.json`: added missing event value definitions - Consolidated duplicate `electrodes.tsv` and `coordsystem.json` files using BIDS inheritance (one per subject instead of one per task) - Fixed `dataset_description.json` name from `"ERP CORE N400"` to `"ERP CORE"` **Contact** Experiment control files, archival copies of the EEGLAB/ERPLAB scripts and accessory files used for analyzing the data, and the processed data can all be found at [https://doi.org/10.18115/D5JW4R](https://doi.org/10.18115/D5JW4R). To download the latest versions of the scripts or to report bugs, visit [https://github.com/lucklab/ERP_CORE](https://github.com/lucklab/ERP_CORE). For other questions about the ERP CORE, contact us at [erpbootcamp@gmail.com](mailto:erpbootcamp@gmail.com). Additional ERP resources are available on the ERP Info website ([https://erpinfo.org](https://erpinfo.org)). ## Dataset Information | Dataset ID | `NM000132` | |----------------|------------| | Title | ERP CORE | | Author (year) | `Kappenman2021` | | Canonical | `ERPCORE`, `ERP_CORE` | | Importable as | `NM000132`, `Kappenman2021`, `ERPCORE`, `ERP_CORE` | | Year | 2021 | | Authors | Emily S. Kappenman, Jaclyn L. Farrens, Wendy Zhang, Andrew X. Stewart, Steven J. Luck
| | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000132](https://doi.org/10.82901/nemar.nm000132) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000132) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) | [Source URL](https://nemar.org/dataexplorer/detail/nm000132) | ### Copy-paste BibTeX ```bibtex @dataset{nm000132, title = {ERP CORE}, author = {Emily S. Kappenman and Jaclyn L. Farrens and Wendy Zhang and Andrew X. Stewart and Steven J. Luck.}, doi = {10.82901/nemar.nm000132}, url = {https://doi.org/10.82901/nemar.nm000132}, } ``` ## Technical Details - Subjects: 40 - Recordings: 240 - Tasks: 6 - Channels: 33 - Sampling rate (Hz): 1024 - Duration (hours): 37.74583333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 17.5 GB - File count: 240 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000132 - Source: nemar - OpenNeuro: [nm000132](https://openneuro.org/datasets/nm000132) - NeMAR: [nm000132](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) ## API Reference Use the `NM000132` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000132(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP CORE * **Study:** `nm000132` (NeMAR) * **Author (year):** `Kappenman2021` * **Canonical:** `ERPCORE`, `ERP_CORE` Also importable as: `NM000132`, `Kappenman2021`, `ERPCORE`, `ERP_CORE`. Modality: `eeg`. Subjects: 40; recordings: 240; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000132](https://openneuro.org/datasets/nm000132) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000132](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) DOI: [https://doi.org/10.82901/nemar.nm000132](https://doi.org/10.82901/nemar.nm000132) ### Examples ```pycon >>> from eegdash.dataset import NM000132 >>> dataset = NM000132(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000132) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000133: eeg dataset, 8 subjects *Alljoined1* Access recordings and metadata through EEGDash. **Citation:** Jonathan Xu, Si Kai Lee, Wangshu Jiang (2024). *Alljoined1*. [10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133) Modality: eeg Subjects: 8 Recordings: 13 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000133 dataset = NM000133(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000133(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000133( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000133, title = {Alljoined1}, author = {Jonathan Xu and Si Kai Lee and Wangshu Jiang}, doi = {10.82901/nemar.nm000133}, url = {https://doi.org/10.82901/nemar.nm000133}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000133) **Alljoined1: EEG Responses to Natural Images** **Overview** Alljoined1 is an EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, designed for EEG-to-image decoding research. Eight healthy right-handed adults (6 male, 2 female; mean age 22 +/- 0.64 years, normal or corrected-to-normal vision) each viewed 10,000 natural images across two recording sessions on separate days. The original data were recorded in BioSemi Data Format (BDF) via a 64-channel BioSemi ActiveTwo system with 24-bit A/D conversion, digitized at 512 Hz. This BIDS-formatted version preserves the BDF format to maintain full 24-bit data fidelity. *Reference:* Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. [https://doi.org/10.48550/arXiv.2404.05553](https://doi.org/10.48550/arXiv.2404.05553) ### View full README [DOI](https://doi.org/10.82901/nemar.nm000133) **Alljoined1: EEG Responses to Natural Images** **Overview** Alljoined1 is an EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, designed for EEG-to-image decoding research. Eight healthy right-handed adults (6 male, 2 female; mean age 22 +/- 0.64 years, normal or corrected-to-normal vision) each viewed 10,000 natural images across two recording sessions on separate days. The original data were recorded in BioSemi Data Format (BDF) via a 64-channel BioSemi ActiveTwo system with 24-bit A/D conversion, digitized at 512 Hz. This BIDS-formatted version preserves the BDF format to maintain full 24-bit data fidelity. *Reference:* Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. 
[https://doi.org/10.48550/arXiv.2404.05553](https://doi.org/10.48550/arXiv.2404.05553) **Recording Setup** - *Equipment:* BioSemi ActiveTwo, 64 Ag/AgCl sintered electrodes - *Montage:* International 10-20 system - *Sampling rate:* 512 Hz - *Reference:* CMS/DRL (BioSemi default); average reference applied in preprocessing - *Electrode offset:* kept below 40 mV - *Power line:* 60 Hz notch filter applied during preprocessing **Task Paradigm** Participants viewed natural images in a rapid serial visual presentation (RSVP) paradigm with an oddball detection task. Each trial consisted of an image presented for 300 ms, followed by 300 ms of black screen, plus 0-50 ms of random jitter. Participants pressed the space bar when two consecutive trials contained the same image (oddball detection). Oddball trials (24 per block) were excluded from analysis. **Stimulus Set** 10,000 natural images per participant drawn from the Natural Scenes Dataset (NSD), which itself is sourced from MS-COCO: - *1,000 shared images:* the first 960 images from the NSD “shared1000” subset, shown to all participants (each image repeated 4 times per participant) - *9,000 unique images:* different for each participant Each image was shown 4 times per participant across blocks and sessions (presented twice per block, with blocks repeated within sessions). **Subjects and Sessions** 8 subjects, 1-2 sessions each (13 sessions total): ```text | Subject | Sessions | Notes | |---------|----------|-------| | sub-01 | ses-01, ses-02 | | | sub-02 | ses-01 | | | sub-03 | ses-01, ses-02 | Epoched file missing for ses-01 | | sub-04 | ses-01, ses-02 | | | sub-05 | ses-01, ses-02 | | | sub-06 | ses-01, ses-02 | | | sub-07 | ses-01 | | | sub-08 | ses-01 | | ``` Total: approximately 46,080 epochs across all participants (approximately 3,839 events per session after oddball exclusion). **Data Format** Raw continuous EEG recordings are stored as BDF files (BioSemi Data Format, 24-bit resolution). 
The original data were distributed as MNE-Python FIF files; conversion to BDF was performed to preserve the native 24-bit precision of the BioSemi ActiveTwo system. Round-trip validation confirmed data integrity to within 1.55e-8 V (sub-nanovolt), and event onsets match exactly (zero timing error). *Per-session files:* ```text | Path | Description | |------|-------------| | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_eeg.bdf` | Raw EEG | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_events.tsv` | Event markers | ``` *Shared sidecar files (root level, BIDS inheritance principle):* ```text | File | Description | |------|-------------| | `task-images_eeg.json` | Recording parameters | | `task-images_channels.tsv` | Channel descriptions (64 EEG channels) | | `task-images_electrodes.tsv` | Electrode positions (standard 10-20, CapTrak) | | `task-images_coordsystem.json` | Coordinate system specification | ``` Event values in the events.tsv files represent image indices (1-960+) corresponding to NSD image identifiers. The `trial_type` column uses the format `image/{index}`. **Derivatives** The `derivatives/epoched/` directory contains preprocessed and epoched data provided by the original authors, stored in MNE-Python FIF format (`.fif`). Preprocessing pipeline applied by the original authors: 1. Band-pass filter: 0.5-125 Hz 2. Notch filter: 60 Hz (power line) 3. Independent Component Analysis (ICA): FastICA, retaining 95% of variance 4. Epoch extraction: -50 ms to 600 ms relative to stimulus onset 5. Artifact rejection: AutoReject algorithm (mean 130.75 epochs dropped per subject, SD 260.44) 6. Baseline correction 7. Average re-referencing These epoched files are derivative products, not raw recordings, and are stored separately per BIDS conventions. Note: the epoched file for sub-03 ses-01 was not available in the source distribution. 
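Since `trial_type` encodes the NSD image index as `image/{index}`, the identifiers can be recovered from an events file with the standard library alone. A minimal sketch; the TSV content below is fabricated for illustration (real files live at `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_events.tsv`):

```python
import csv
import io

# Illustrative events.tsv content; real files follow the BIDS layout
# described above and contain additional columns.
tsv = (
    "onset\tduration\ttrial_type\n"
    "12.304\t0.3\timage/17\n"
    "12.916\t0.3\timage/854\n"
    "13.540\t0.3\timage/17\n"
)

# trial_type uses the format "image/{index}"; split off the numeric index.
rows = csv.DictReader(io.StringIO(tsv), delimiter="\t")
image_indices = [int(r["trial_type"].split("/", 1)[1]) for r in rows]
print(image_indices)  # [17, 854, 17]
```

Reading a real file only requires replacing the `io.StringIO` wrapper with `open(path)`.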
**Code** The `code/` directory contains the original Alljoined1 analysis code, cloned from [https://github.com/Alljoined/alljoined-dataset1](https://github.com/Alljoined/alljoined-dataset1). **BIDS Conversion** Converted to BIDS by Yahya Shirazi (Swartz Center for Computational Neuroscience, UC San Diego) using MNE-Python and custom scripts. - *Source data:* OSF repository [https://osf.io/kqgs8/](https://osf.io/kqgs8/) - Conversion validated with round-trip integrity checks (data, channels, sampling frequency, event count, event values, and event timing) **License and Terms of Use** This dataset is distributed under CC-BY-NC-ND-4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0). The Alljoined team imposes additional terms on their datasets. By using this dataset you agree to all conditions below. 1. Researcher shall use the Dataset only for non-commercial research and educational purposes, in accordance with Alljoined’s [Terms of Use](https://www.alljoined.com/terms-of-use). 2. *No Warranties:* Alljoined makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose. 3. *Full Responsibility:* Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify Alljoined, including their employees, officers and agents, against any and all claims arising from Researcher’s use of the Dataset. 4. *Privacy Compliance:* Researcher shall comply with Alljoined’s [Privacy Policy](https://www.alljoined.com/privacy-policy) and ensure that any use of the Dataset respects the privacy rights of individuals whose data may be included. 5. *Sharing Rights:* Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions. 6. *Termination Rights:* Alljoined reserves the right to terminate Researcher’s access to the Dataset at any time. 7. 
*Commercial Entity Binding:* If Researcher is employed by a for-profit, commercial entity, Researcher’s employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. 8. *Governing Law:* The law of the State of California shall apply to all disputes under this agreement. > *Note:* The original Alljoined1 dataset on OSF ([https://osf.io/kqgs8/](https://osf.io/kqgs8/)) does not specify an explicit license. The terms above are from the Alljoined-1.6M HuggingFace distribution and the Alljoined website; they are included here as the best available guidance. Contact the Alljoined team ([team@alljoined.com](mailto:team@alljoined.com)) for clarification on redistribution rights. - Full terms: [https://www.alljoined.com/terms-of-use](https://www.alljoined.com/terms-of-use) - Privacy policy: [https://www.alljoined.com/privacy-policy](https://www.alljoined.com/privacy-policy) **References** Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. 
[https://doi.org/10.48550/arXiv.2404.05553](https://doi.org/10.48550/arXiv.2404.05553) ## Dataset Information | Dataset ID | `NM000133` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Alljoined1 | | Author (year) | `Xu2024` | | Canonical | `Alljoined1`, `Alljoined` | | Importable as | `NM000133`, `Xu2024`, `Alljoined1`, `Alljoined` | | Year | 2024 | | Authors | Jonathan Xu, Si Kai Lee, Wangshu Jiang | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | [10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000133) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) | [Source URL](https://nemar.org/dataexplorer/detail/nm000133) | ### Copy-paste BibTeX ```bibtex @dataset{nm000133, title = {Alljoined1}, author = {Jonathan Xu and Si Kai Lee and Wangshu Jiang}, doi = {10.82901/nemar.nm000133}, url = {https://doi.org/10.82901/nemar.nm000133}, } ``` ## Technical Details - Subjects: 8 - Recordings: 13 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: 7.6 GB - File count: 13 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: 10.82901/nemar.nm000133 - Source: nemar - OpenNeuro: [nm000133](https://openneuro.org/datasets/nm000133) - NeMAR: [nm000133](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) ## API Reference Use the `NM000133` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000133(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined1 * **Study:** `nm000133` (NeMAR) * **Author (year):** `Xu2024` * **Canonical:** `Alljoined1`, `Alljoined` Also importable as: `NM000133`, `Xu2024`, `Alljoined1`, `Alljoined`. Modality: `eeg`. Subjects: 8; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000133](https://openneuro.org/datasets/nm000133) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000133](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) DOI: [https://doi.org/10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133) ### Examples ```pycon >>> from eegdash.dataset import NM000133 >>> dataset = NM000133(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000133) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000134: eeg dataset, 20 subjects *Alljoined-1.6M* Access recordings and metadata through EEGDash. **Citation:** Jonathan Xu, Ugo Bruzadin Nunes, Wangshu Jiang, Samuel Ryther, Jordan Pringle, Paul S. Scotti, Arnaud Delorme, Reese Kneeland (2025). *Alljoined-1.6M*. 
[10.82901/nemar.nm000134](https://doi.org/10.82901/nemar.nm000134) Modality: eeg Subjects: 20 Recordings: 1525 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000134 dataset = NM000134(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000134(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000134( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000134, title = {Alljoined-1.6M}, author = {Jonathan Xu and Ugo Bruzadin Nunes and Wangshu Jiang and Samuel Ryther and Jordan Pringle and Paul S. Scotti and Arnaud Delorme and Reese Kneeland}, doi = {10.82901/nemar.nm000134}, url = {https://doi.org/10.82901/nemar.nm000134}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000134) **Alljoined-1.6M: Million-Trial EEG Dataset with Consumer-Grade Hardware** **Overview** Alljoined-1.6M is a large-scale EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, recorded using a consumer-grade 32-channel EMOTIV FLEX2 system. Twenty healthy adult participants (ages 23-63; 15 male, 5 female) each completed four recording sessions, generating over 1.6 million visual stimulus trials in total. The dataset was designed to evaluate whether deep neural network-based brain-computer interface (BCI) research and semantic decoding methods can be effectively conducted with affordable consumer-grade EEG systems (approximately $2.2k versus $35-60k for research-grade systems). 
*Reference:* Xu, J., Bruzadin Nunes, U., Jiang, W., Ryther, S., Pringle, J., Scotti, P. S., Delorme, A., & Kneeland, R. (2025). Alljoined-1.6M: A Million-Trial EEG-Image Dataset for Evaluating Affordable Brain-Computer Interfaces. [https://doi.org/10.48550/arXiv.2508.18571](https://doi.org/10.48550/arXiv.2508.18571) ### View full README [DOI](https://doi.org/10.82901/nemar.nm000134) **Alljoined-1.6M: Million-Trial EEG Dataset with Consumer-Grade Hardware** **Overview** Alljoined-1.6M is a large-scale EEG dataset of neural responses to rapid serial visual presentation (RSVP) of natural images, recorded using a consumer-grade 32-channel EMOTIV FLEX2 system. Twenty healthy adult participants (ages 23-63; 15 male, 5 female) each completed four recording sessions, generating over 1.6 million visual stimulus trials in total. The dataset was designed to evaluate whether deep neural network-based brain-computer interface (BCI) research and semantic decoding methods can be effectively conducted with affordable consumer-grade EEG systems (approximately $2.2k versus $35-60k for research-grade systems). *Reference:* Xu, J., Bruzadin Nunes, U., Jiang, W., Ryther, S., Pringle, J., Scotti, P. S., Delorme, A., & Kneeland, R. (2025). Alljoined-1.6M: A Million-Trial EEG-Image Dataset for Evaluating Affordable Brain-Computer Interfaces. 
[https://doi.org/10.48550/arXiv.2508.18571](https://doi.org/10.48550/arXiv.2508.18571) **Recording Setup** - *Equipment:* EMOTIV FLEX2, 32-channel sintered Ag/AgCl gel-based electrodes - *Connectivity:* wireless Bluetooth 5.2 - *Sampling rate:* 256 Hz (resampled to 250 Hz in published analyses) - *Montage:* extended 10-20 system, focused on occipital/visual regions - *Channels:* Cz, Fp1, F7, F3, CP5, CP1, P1, P3, P5, P7, PO9, PO7, PO3, O1, O9, Pz, POz, Oz, O10, O2, PO4, PO8, PO10, P8, P6, P4, P2, CP2, CP6, F4, F8, Fp2 - *Firmware filters:* dual 50/60 Hz notch filter (built into EMOTIV firmware) - *Cost:* approximately $2.2k (approximately 27x cheaper than research-grade systems) **Task Paradigm** Rapid Serial Visual Presentation (RSVP) with orthogonal oddball detection. Each trial consisted of an image presented for 100 ms, followed by 100 ms of blank screen (200 ms total cycle). A small semi-transparent red fixation dot (0.2 x 0.2 degrees, 50% opacity) was present throughout. Oddball detection: participants pressed a button when they detected catch trials featuring a Woody (Toy Story) character, which appeared in approximately 6% of sequences. Detection window was up to 2 seconds post-sequence. This task maintained engagement without biasing perception toward specific image categories. Viewing distance: 60 cm; viewing angle: 7 degrees. **Stimulus Set** 16,740 unique images from the THINGS database (26,000 total images across 1,854 object categories), identical to the THINGS-EEG2 stimulus set for direct comparison. - *Test images:* shown 80 times per participant (4 sessions x 4 test blocks x 5 presentations) - *Training images:* shown 4-5 times per participant - *Randomization:* constrained so no image repeats within 2 intervening items **Subjects, Sessions, and Runs** 20 subjects, 4 sessions each (sub-08 has an additional session `ses-02old`, a retake of session 2). Each session contains 19 RSVP blocks (runs), approximately 5 minutes each. 
The first 4 runs per session present test images; the remaining 15 runs present training images. Total: 83,520 image trials per subject; approximately 1.6 million trials across all 20 participants. ```text | Subject | Sessions | Runs | Notes | |---------|----------|------|-------| | sub-01 | 4 | 76 | | | sub-02 | 4 | 76 | | | sub-03 | 4 | 76 | | | sub-04 | 4 | 76 | | | sub-05 | 4 | 76 | | | sub-06 | 4 | 76 | | | sub-07 | 4 | 76 | | | sub-08 | 5 | 81 | Includes ses-02old (session 2 retake) | | sub-09 | 4 | 76 | | | sub-10 | 4 | 76 | | | sub-11 | 4 | 76 | | | sub-12 | 4 | 76 | | | sub-13 | 4 | 76 | | | sub-14 | 4 | 76 | | | sub-15 | 4 | 76 | | | sub-16 | 4 | 76 | | | sub-17 | 4 | 76 | | | sub-18 | 4 | 76 | | | sub-19 | 4 | 76 | | | sub-20 | 4 | 76 | | ``` Participants were recruited from San Francisco via local platforms (Craigslist 55%, Instawork 35%) and filtered from an initial pool of 48 for high behavioral engagement. Mean oddball detection performance: 88% AUC (+/- 1% SE). **Data Format** Raw continuous EEG recordings are stored as European Data Format (EDF) files, the native export format of the EMOTIV FLEX2 system (16-bit resolution). Only the 32 EEG channels are retained; EMOTIV metadata channels (timestamps, counters, contact quality, motion sensors, etc.) were excluded during conversion. 
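Because the raw files are plain EDF, the fixed-width ASCII header can be inspected without any EEG toolchain (in practice `mne.io.read_raw_edf` is the usual entry point). A hedged, stdlib-only sketch of the first 256 header bytes; the sample header below is fabricated for illustration:

```python
import io

def read_edf_header(fileobj):
    """Parse record/signal counts from the fixed 256-byte EDF header."""
    h = fileobj.read(256)
    return {
        "version": h[0:8].decode("ascii").strip(),
        "n_records": int(h[236:244]),            # number of data records
        "record_duration_s": float(h[244:252]),  # duration of one record (s)
        "n_signals": int(h[252:256]),            # channel count (32 EEG here)
    }

# Fabricated header mimicking a FLEX2 export: 300 one-second records, 32 signals.
fake = (b"0".ljust(8) + b" " * 80 + b" " * 80
        + b"01.01.25".ljust(8) + b"12.00.00".ljust(8)
        + b"8448".ljust(8) + b" " * 44
        + b"300".ljust(8) + b"1".ljust(8) + b"32".ljust(4))
hdr = read_edf_header(io.BytesIO(fake))
print(hdr["n_signals"], hdr["n_records"] * hdr["record_duration_s"])  # 32 300.0
```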
*Per-run files:* ```text | Path | Description | |------|-------------| | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_eeg.edf` | Raw EEG | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_events.tsv` | Events | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_events.json` | Event metadata | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_channels.tsv` | Channels | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_eeg.json` | Recording parameters | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_space-CapTrak_coordsystem.json` | Coordinate system | | `sub-XX/ses-YY/eeg/sub-XX_ses-YY_space-CapTrak_electrodes.tsv` | Electrode positions | ``` Event annotations in the events.tsv files use the following `trial_type` format from the EMOTIV recording system: - `stim_test,{image_id},-1,{trial}` – test image presentation - `oddball,...` – oddball (catch) trial - `behav,...` – behavioral response (button press) **Source Data** The `sourcedata/` directory contains the original EMOTIV JSON metadata files from each recording block. These files include the raw EMOTIV marker data with precise timestamps, UUIDs, and port information as recorded by the EMOTIV software. They are the original, unprocessed recording artifacts from the EMOTIV system, not derived products, and are stored in `sourcedata/` per BIDS conventions. ```text sourcedata/sub-XX/ses-YY/eeg/sub-XX_ses-YY_task-images_run-ZZ_recording.json ``` **Code** The `code/` directory contains the original Alljoined-1.6M analysis code, cloned from [https://github.com/Alljoined/Alljoined-1.6M](https://github.com/Alljoined/Alljoined-1.6M). **BIDS Conversion** Converted to BIDS by Yahya Shirazi (Swartz Center for Computational Neuroscience, UC San Diego) using MNE-BIDS and custom scripts. 
- *Source data:* HuggingFace [https://huggingface.co/datasets/Alljoined/Alljoined-1.6M](https://huggingface.co/datasets/Alljoined/Alljoined-1.6M) - EMOTIV channel `Afz` renamed to `AFz` (standard 10-20 capitalization) - Session label `session_02 old` sanitized to `ses-02old` for BIDS compliance - 95 EMOTIV metadata channels excluded (only 32 EEG channels retained) - Conversion validated with round-trip integrity checks (data amplitude, per-channel correlation, sampling frequency, event count, and event timing) **License and Terms of Use** This dataset is distributed under CC-BY-NC-ND-4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0) with the following additional terms imposed by the Alljoined team. By using this dataset you agree to all conditions below. 1. Researcher shall use the Dataset only for non-commercial research and educational purposes, in accordance with Alljoined’s [Terms of Use](https://www.alljoined.com/terms-of-use). 2. *No Warranties:* Alljoined makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose. 3. *Full Responsibility:* Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify Alljoined, including their employees, officers and agents, against any and all claims arising from Researcher’s use of the Dataset. 4. *Privacy Compliance:* Researcher shall comply with Alljoined’s [Privacy Policy](https://www.alljoined.com/privacy-policy) and ensure that any use of the Dataset respects the privacy rights of individuals whose data may be included. 5. *Sharing Rights:* Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions. 6. *Termination Rights:* Alljoined reserves the right to terminate Researcher’s access to the Dataset at any time. 7. 
*Commercial Entity Binding:* If Researcher is employed by a for-profit, commercial entity, Researcher’s employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. 8. *Governing Law:* The law of the State of California shall apply to all disputes under this agreement. - Full terms: [https://www.alljoined.com/terms-of-use](https://www.alljoined.com/terms-of-use) - Privacy policy: [https://www.alljoined.com/privacy-policy](https://www.alljoined.com/privacy-policy) **References** Xu, J., Bruzadin Nunes, U., Jiang, W., Ryther, S., Pringle, J., Scotti, P. S., Delorme, A., & Kneeland, R. (2025). Alljoined-1.6M: A Million-Trial EEG-Image Dataset for Evaluating Affordable Brain-Computer Interfaces. [https://doi.org/10.48550/arXiv.2508.18571](https://doi.org/10.48550/arXiv.2508.18571) Xu, J., Lee, S. K., & Jiang, W. (2024). Alljoined – A dataset for EEG-to-Image decoding. [https://doi.org/10.48550/arXiv.2404.05553](https://doi.org/10.48550/arXiv.2404.05553) ## Dataset Information | Dataset ID | `NM000134` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Alljoined-1.6M | | Author (year) | `Xu2025` | | Canonical | `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M` | | Importable as | `NM000134`, `Xu2025`, `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M` | | Year | 2025 | | Authors | Jonathan Xu, Ugo Bruzadin Nunes, Wangshu Jiang, Samuel Ryther, Jordan Pringle, Paul S. 
Scotti, Arnaud Delorme, Reese Kneeland | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | [10.82901/nemar.nm000134](https://doi.org/10.82901/nemar.nm000134) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000134) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) | [Source URL](https://nemar.org/dataexplorer/detail/nm000134) | ### Copy-paste BibTeX ```bibtex @dataset{nm000134, title = {Alljoined-1.6M}, author = {Jonathan Xu and Ugo Bruzadin Nunes and Wangshu Jiang and Samuel Ryther and Jordan Pringle and Paul S. Scotti and Arnaud Delorme and Reese Kneeland}, doi = {10.82901/nemar.nm000134}, url = {https://doi.org/10.82901/nemar.nm000134}, } ``` ## Technical Details - Subjects: 20 - Recordings: 1525 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256 - Duration (hours): 129.4950119357639 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.2 GB - File count: 1525 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: 10.82901/nemar.nm000134 - Source: nemar - OpenNeuro: [nm000134](https://openneuro.org/datasets/nm000134) - NeMAR: [nm000134](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) ## API Reference Use the `NM000134` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000134(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined-1.6M * **Study:** `nm000134` (NeMAR) * **Author (year):** `Xu2025` * **Canonical:** `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M` Also importable as: `NM000134`, `Xu2025`, `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M`. Modality: `eeg`. Subjects: 20; recordings: 1525; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000134](https://openneuro.org/datasets/nm000134) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000134](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) DOI: [https://doi.org/10.82901/nemar.nm000134](https://doi.org/10.82901/nemar.nm000134) ### Examples ```pycon >>> from eegdash.dataset import NM000134 >>> dataset = NM000134(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000134) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000135: eeg dataset, 1 subject *BNCI 2014-004 Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** R. Leeb, C. Brunner, G. R. Müller-Putz, A. Schlögl, G. Pfurtscheller, F. Lee, C. Keinrath, R. Scherer, H. Bischof (2019). *BNCI 2014-004 Motor Imagery dataset*. Modality: eeg Subjects: 1 Recordings: 5 License: CC-BY-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000135 dataset = NM000135(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000135(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000135( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000135, title = {BNCI 2014-004 Motor Imagery dataset}, author = {R. Leeb and C. Brunner and G. R. Müller-Putz and A. Schlögl and G. Pfurtscheller and F. Lee and C. Keinrath and R. Scherer and H. Bischof}, } ``` ## About This Dataset **BNCI 2014-004 Motor Imagery dataset** BNCI 2014-004 Motor Imagery dataset. 
**Dataset Overview** - **Code**: BNCI2014-004 - **Paradigm**: imagery - **DOI**: 10.1109/TNSRE.2007.906956 ### View full README **BNCI 2014-004 Motor Imagery dataset** BNCI 2014-004 Motor Imagery dataset. **Dataset Overview** - **Code**: BNCI2014-004 - **Paradigm**: imagery - **DOI**: 10.1109/TNSRE.2007.906956 - **Subjects**: 9 - **Sessions per subject**: 5 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [3, 7.5] s - **Session IDs**: 01T, 02T, 03T, 04E, 05E - **File format**: GDF **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 3 - **Channel types**: eeg=3, eog=3 - **Channel names**: C3, C4, Cz, EOG1, EOG2, EOG3 - **Montage**: standard_1020 - **Hardware**: g.tec - **Software**: rtsBCI (MATLAB/Simulink) - **Reference**: left mastoid - **Ground**: Fz - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Online filters**: 0.5-100 Hz bandpass, 50 Hz notch - **Cap manufacturer**: Easycap - **Electrode material**: Ag/AgCl - **Auxiliary channels**: EOG (3 ch, horizontal, vertical, radial) **Participants** - **Number of subjects**: 9 - **Health status**: healthy - **Age**: mean=24.7, std=3.3 - **Handedness**: right - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: motor_imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 7.5 s - **Tasks**: left_hand_imagery, right_hand_imagery - **Study design**: Two-class motor imagery: left hand and right hand. Screening sessions (01T, 02T) without feedback, feedback sessions (03T, 04E, 05E) with smiley feedback. 
- **Study domain**: brain-computer interface - **Feedback type**: visual - **Stimulus type**: arrow_cue - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: cue_based - **Mode**: both - **Training/test split**: True - **Instructions**: Subjects selected their best motor imagery strategy (e.g., squeezing a ball or pulling a brake) and performed kinesthetic motor imagery of left or right hand movements. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Leftward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` ```text right_hand ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Rightward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 1.25 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: {'screening': 120, 'feedback': 160} - **Trials context**: per session **Preprocessing** - **Data state**: raw with online filtering - **Preprocessing applied**: True - **Steps**: bandpass filtering, notch filtering - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 100.0 Hz - **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 100.0} - **Notch filter**: [50.0] Hz - **Filter type**: analog - **Notes**: Online bandpass (0.5-100 Hz) and notch (50 Hz) filters applied during recording. Artifact trials marked with event type 1023. EOG channels provided for user-applied artifact correction. 
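Putting the paradigm numbers to work: with events left_hand=1 / right_hand=2, a trial interval of [3, 7.5] s, and a 250 Hz sampling rate, every cue-locked epoch spans a fixed number of samples. A minimal sketch of the conversion (with MNE-Python one would instead pass `tmin`/`tmax` to `mne.Epochs`):

```python
SFREQ = 250.0           # sampling rate (Hz), from the acquisition metadata
TMIN, TMAX = 3.0, 7.5   # trial interval (s), from the dataset metadata

def epoch_bounds(onset_sample, tmin=TMIN, tmax=TMAX, sfreq=SFREQ):
    """Sample index range of one cue-locked trial epoch."""
    return (onset_sample + int(round(tmin * sfreq)),
            onset_sample + int(round(tmax * sfreq)))

start, stop = epoch_bounds(10_000)
print(start, stop, stop - start)  # 10750 11875 1125
```

Each epoch is thus 1125 samples (4.5 s) long, regardless of where the cue falls in the recording.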
**Signal Processing** - **Classifiers**: LDA - **Feature extraction**: Bandpower, BP **Cross-Validation** - **Method**: 10x10 cross-validation - **Folds**: 10 - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery **Documentation** - **Description**: BCI Competition 2008 - Graz data set B: Two-class motor imagery dataset (left/right hand) with screening sessions (no feedback) and smiley feedback sessions. 9 subjects, 3 bipolar EEG channels (C3, Cz, C4) + 3 EOG channels, 250 Hz. - **DOI**: 10.1109/TNSRE.2007.906956 - **License**: CC-BY-ND-4.0 - **Investigators**: R. Leeb, C. Brunner, G. R. Müller-Putz, A. Schlögl, G. Pfurtscheller, F. Lee, C. Keinrath, R. Scherer, H. Bischof - **Senior author**: G. Pfurtscheller - **Institution**: Graz University of Technology - **Department**: Institute for Knowledge Discovery - **Country**: AT - **Repository**: BNCI Horizon - **Data URL**: [http://biosig.sourceforge.net/](http://biosig.sourceforge.net/) - **Publication year**: 2007 - **Keywords**: brain-computer interface, BCI, electroencephalogram, EEG, motor imagery, BCI competition, smiley feedback **External Links** - **Source**: [http://biosig.sourceforge.net/](http://biosig.sourceforge.net/) **Abstract** BCI Competition 2008 Graz data set B. EEG data from 9 subjects performing two-class motor imagery (left hand vs right hand). Two screening sessions without feedback (120 trials each) and three feedback sessions with smiley feedback (160 trials each). Three bipolar EEG channels (C3, Cz, C4) and three EOG channels recorded at 250 Hz. **Methodology** Subjects performed kinesthetic motor imagery of left or right hand movements. Two screening sessions (01T, 02T) without feedback: 6 runs x 20 trials = 120 trials per session. 
Three feedback sessions (03T, 04E, 05E) with smiley feedback: 4 runs x 40 trials (20 per class) = 160 trials per session. Screening trials: fixation cross + beep at t=0, arrow cue at ~t=2 for 1.25s, imagery for 4s, break. Feedback trials: smiley at t=0, beep at t=2, cue from t=3 to t=7.5 with continuous smiley feedback. Three bipolar EEG channels (C3, Cz, C4) plus three monopolar EOG channels recorded at 250 Hz with 0.5-100 Hz bandpass and 50 Hz notch filter. EEG ground at Fz, EOG reference at left mastoid. Amplifier: g.tec. Software: rtsBCI (MATLAB/Simulink). **References** Tangermann, M., Muller, K.R., Aertsen, A., Birbaumer, N., Braun, C., Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Mueller-Putz, G. and Nolte, G., 2012. Review of the BCI competition IV. Frontiers in neuroscience, 6, p.55. **Notes** `BNCI2014_004` was previously named `BNCI2014004`; `BNCI2014004` will be removed in version 1.1. (Added in version 0.4.0.) This dataset is commonly referred to as “BCI Competition IV Dataset 2b”. It is widely used for binary motor imagery classification tasks. **See Also** BNCI2014_001 : 4-class motor imagery (Dataset 2a) BNCI2014_002 : 2-class motor imagery with Laplacian derivations Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000135` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2014-004 Motor Imagery dataset | | Author (year) | `Leeb2014` | | Canonical | `BNCI2014004` | | Importable as | `NM000135`, `Leeb2014`, `BNCI2014004` | | Year | 2019 | | Authors | R. Leeb, C. Brunner, G. R. Müller-Putz, A. Schlögl, G. Pfurtscheller, F. Lee, C. Keinrath, R. Scherer, H. Bischof | | License | CC-BY-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000135) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) | [Source URL](https://nemar.org/dataexplorer/detail/nm000135) | ## Technical Details - Subjects: 1 - Recordings: 5 - Tasks: 1 - Channels: 3 - Sampling rate (Hz): 250.0 - Duration (hours): 2.8521533333333333 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 22.6 MB - File count: 5 - Format: BIDS - License: CC-BY-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000135](https://openneuro.org/datasets/nm000135) - NeMAR: [nm000135](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) ## API Reference Use the `NM000135` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000135(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-004 Motor Imagery dataset * **Study:** `nm000135` (NeMAR) * **Author (year):** `Leeb2014` * **Canonical:** `BNCI2014004` Also importable as: `NM000135`, `Leeb2014`, `BNCI2014004`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 1; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000135](https://openneuro.org/datasets/nm000135) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000135](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) ### Examples ```pycon >>> from eegdash.dataset import NM000135 >>> dataset = NM000135(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000135) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000136: eeg dataset, 31 subjects *GuttmannFlury2025-P300* Access recordings and metadata through EEGDash. **Citation:** Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu (2025). *GuttmannFlury2025-P300*. [10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Modality: eeg Subjects: 31 Recordings: 63 License: CC0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000136 dataset = NM000136(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000136(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000136( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000136, title = {GuttmannFlury2025-P300}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## About This Dataset **GuttmannFlury2025-P300** Eye-BCI multimodal P300 speller dataset from Guttmann-Flury et al 2025. 
**Dataset Overview** > Code: GuttmannFlury2025-P300 > Paradigm: p300 > DOI: 10.1038/s41597-025-04861-9 ### View full README **GuttmannFlury2025-P300** Eye-BCI multimodal P300 speller dataset from Guttmann-Flury et al 2025. **Dataset Overview** > Code: GuttmannFlury2025-P300 > Paradigm: p300 > DOI: 10.1038/s41597-025-04861-9 > Subjects: 31 > Sessions per subject: 3 > Events: Target=1, NonTarget=2 > Trial interval: [0, 1] s > File format: BDF **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 66 > Channel types: eeg=64, eog=1, stim=1 > Channel names: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2 > Montage: standard_1005 > Hardware: Neuroscan Quik-Cap 65-ch, SynAmps2 > Reference: right mastoid (M1) > Ground: forehead > Sensor type: Ag/AgCl > Line frequency: 50.0 Hz > Online filters: {'highpass_time_constant_s': 10} **Participants** > Number of subjects: 31 > Health status: healthy > Age: mean=28.3, min=20.0, max=57.0 > Gender distribution: female=11, male=20 > Species: human **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget > Study design: Multi-paradigm BCI (MI/ME/SSVEP/P300). P300: row/column speller with 4L and 5L grid sizes. 
> Feedback type: none > Stimulus type: row-column flash > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Data Structure** > Trials: 2520 > Trials context: 63 sessions x 40 trials = 2520 (P300-4L default) **BCI Application** > Applications: speller, communication > Environment: laboratory **Tags** > Pathology: Healthy > Modality: ERP > Type: Research **Documentation** > DOI: 10.1038/s41597-025-04861-9 > License: CC0 > Investigators: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu > Institution: Shanghai Jiao Tong University > Country: CN > Publication year: 2025 **References** Guttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000136` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | GuttmannFlury2025-P300 | | Author (year) | `GuttmannFlury2025` | | Canonical | — | | Importable as | `NM000136`, `GuttmannFlury2025` | | Year | 2025 | | Authors | Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu | | License | CC0 | | Citation / DOI | [doi:10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000136) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) | [Source URL](https://nemar.org/dataexplorer/detail/nm000136) | ### Copy-paste BibTeX ```bibtex @dataset{nm000136, title = {GuttmannFlury2025-P300}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## Technical Details - Subjects: 31 - Recordings: 63 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 11.223038055555556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 7.3 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: doi:10.1038/s41597-025-04861-9 - Source: nemar - OpenNeuro: [nm000136](https://openneuro.org/datasets/nm000136) - NeMAR: [nm000136](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) ## API Reference Use the `NM000136` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-P300 * **Study:** `nm000136` (NeMAR) * **Author (year):** `GuttmannFlury2025` * **Canonical:** — Also importable as: `NM000136`, `GuttmannFlury2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
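The query-merging contract described in the Notes (the user filter is ANDed with the fixed dataset selector, and the key `dataset` is reserved) can be modeled in a few lines. This is an illustrative sketch of the documented behaviour, not EEGDash's actual implementation, and the `ALLOWED_QUERY_FIELDS` set shown is an assumed subset:

```python
# Model of the documented query contract (hypothetical helper, not the
# real EEGDash code): merge a MongoDB-style user filter with the dataset
# selector, rejecting the reserved "dataset" key and unknown fields.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}  # assumed subset

def merge_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    unknown = set(user_query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    # AND semantics: the dataset selector always applies
    return {"dataset": dataset_id, **user_query}

merged = merge_query("nm000136", {"subject": {"$in": ["01", "02"]}})
```

Nested MongoDB operators such as `{"$in": [...]}` pass through untouched as field values, which is why the advanced-query example in the Quickstart composes cleanly with the dataset filter.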
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000136](https://openneuro.org/datasets/nm000136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000136](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000136 >>> dataset = NM000136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000136) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000137: eeg dataset, 7 subjects *Classical motor imagery dataset with left hand, right hand, and rest* Access recordings and metadata through EEGDash. **Citation:** Murat Kaya, Mustafa Kemal Binli, Erkan Ozbay, Hilmi Yanar, Yuriy Mishchenko (2019). *Classical motor imagery dataset with left hand, right hand, and rest*. 
Modality: eeg Subjects: 7 Recordings: 17 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000137 dataset = NM000137(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000137(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000137( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000137, title = {Classical motor imagery dataset with left hand, right hand, and rest}, author = {Murat Kaya and Mustafa Kemal Binli and Erkan Ozbay and Hilmi Yanar and Yuriy Mishchenko}, } ``` ## About This Dataset **Classical motor imagery dataset with left hand, right hand, and rest** Classical motor imagery dataset with left hand, right hand, and rest. **Dataset Overview** - **Code**: Kaya2018 - **Paradigm**: imagery - **DOI**: 10.1038/sdata.2018.211 ### View full README **Classical motor imagery dataset with left hand, right hand, and rest** Classical motor imagery dataset with left hand, right hand, and rest. 
**Dataset Overview** - **Code**: Kaya2018 - **Paradigm**: imagery - **DOI**: 10.1038/sdata.2018.211 - **Subjects**: 7 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2, passive=3 - **Trial interval**: [0, 1] s - **File format**: MAT **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 19 - **Channel types**: eeg=19 - **Channel names**: Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Fz, Cz, Pz - **Montage**: standard_1020 - **Hardware**: Nihon Kohden EEG-1200 - **Reference**: System 0V (`0.55*(C3+C4)`) - **Ground**: A1, A2 (earlobes) - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 7 - **Health status**: healthy - **Age**: min=20, max=35 - **Gender distribution**: male=5, female=2 **Experimental Protocol** - **Paradigm**: imagery - **Task type**: left_right_hand - **Number of classes**: 3 - **Class labels**: left_hand, right_hand, passive - **Trial duration**: 1.0 s - **Study design**: Classical left/right hand motor imagery with passive rest - **Feedback type**: none - **Stimulus type**: visual arrow cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand passive ``` ```text ├─ Sensory-event └─ Label/passive ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand, passive - **Cue duration**: 1.0 s **Data Structure** - **Trials context**: Variable number of trials per session; 1s cue + 1.5-2.5s ITI **Preprocessing** - **Data state**: raw **Signal
Processing** - **Classifiers**: SVM - **Feature extraction**: fourier_transform_amplitudes - **Frequency bands**: low_pass=[0.0, 5.0] Hz **Cross-Validation** - **Method**: repeated_random_split - **Folds**: 5 - **Evaluation type**: within_subject **BCI Application** - **Environment**: lab - **Online feedback**: False **Tags** - **Pathology**: healthy - **Modality**: motor - **Type**: imagery **Documentation** - **DOI**: 10.1038/sdata.2018.211 - **License**: CC-BY-4.0 - **Investigators**: Murat Kaya, Mustafa Kemal Binli, Erkan Ozbay, Hilmi Yanar, Yuriy Mishchenko - **Senior author**: Yuriy Mishchenko - **Institution**: Mersin University - **Country**: TR - **Repository**: Figshare - **Data URL**: [https://figshare.com/collections/A_large_electroencephalographic_motor_imagery_dataset_for_electroencephalographic_brain_computer_interfaces/3917698](https://figshare.com/collections/A_large_electroencephalographic_motor_imagery_dataset_for_electroencephalographic_brain_computer_interfaces/3917698) - **Publication year**: 2018 - **Keywords**: EEG, motor imagery, brain-computer interface, BCI **References** M. Kaya, M. K. Binli, E. Ozbay, H. Yanar, and Y. Mishchenko, “A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces,” Scientific Data, vol. 5, p. 180211, 2018. DOI: 10.1038/sdata.2018.211 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000137` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Classical motor imagery dataset with left hand, right hand, and rest | | Author (year) | `Kaya2018` | | Canonical | — | | Importable as | `NM000137`, `Kaya2018` | | Year | 2019 | | Authors | Murat Kaya, Mustafa Kemal Binli, Erkan Ozbay, Hilmi Yanar, Yuriy Mishchenko | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000137) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) | [Source URL](https://nemar.org/dataexplorer/detail/nm000137) | ## Technical Details - Subjects: 7 - Recordings: 17 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 200.0 - Duration (hours): 15.696565277777776 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 623.4 MB - File count: 17 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000137](https://openneuro.org/datasets/nm000137) - NeMAR: [nm000137](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) ## API Reference Use the `NM000137` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Classical motor imagery dataset with left hand, right hand, and rest * **Study:** `nm000137` (NeMAR) * **Author (year):** `Kaya2018` * **Canonical:** — Also importable as: `NM000137`, `Kaya2018`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000137](https://openneuro.org/datasets/nm000137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000137](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) ### Examples ```pycon >>> from eegdash.dataset import NM000137 >>> dataset = NM000137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000137) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000138: eeg dataset, 8 subjects *Alex Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** Alexandre Barachant (2019). *Alex Motor Imagery dataset*. Modality: eeg Subjects: 8 Recordings: 8 License: CC-BY-SA-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000138 dataset = NM000138(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000138(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000138( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000138, title = {Alex Motor Imagery dataset}, author = {Alexandre Barachant}, } ``` ## About This Dataset **Alex Motor Imagery dataset** Alex Motor Imagery dataset. **Dataset Overview** - **Code**: AlexandreMotorImagery - **Paradigm**: imagery - **DOI**: 10.5281/zenodo.806022 ### View full README **Alex Motor Imagery dataset** Alex Motor Imagery dataset. 
**Dataset Overview** - **Code**: AlexandreMotorImagery - **Paradigm**: imagery - **DOI**: 10.5281/zenodo.806022 - **Subjects**: 8 - **Sessions per subject**: 1 - **Events**: right_hand=2, feet=3, rest=4 - **Trial interval**: [0, 3] s - **File format**: fif - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Channel names**: Fpz, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8 - **Montage**: standard_1005 - **Hardware**: g.tec g.USBamp - **Software**: Matlab/Simulink - **Reference**: earlobe - **Sensor type**: EEG - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 8 - **Health status**: healthy - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 3 - **Class labels**: right_hand, feet, rest - **Trial duration**: 3.0 s - **Study design**: Cue-based motor imagery paradigm (Step B of Brain Switch campaign) for familiarization and algorithm development - **Feedback type**: none - **Stimulus type**: visual cue - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Cue-based paradigm without feedback. Subjects perform 20 imagined movements per class (right hand, feet, rest) following a visual cue, lasting 3 seconds each. Total duration approximately 10 minutes. 
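The cue-based protocol above (20 trials per class, 3 s imagery windows, 512 Hz) implies simple fixed-window epoching around each event onset. A minimal NumPy sketch on synthetic stand-in data, with event codes mirroring the listing above (right_hand=2, feet=3, rest=4); the array sizes and onsets are illustrative, not taken from the recordings:

```python
import numpy as np

fs = 512.0               # Hz, sampling rate from the Acquisition section
trial_len = int(3 * fs)  # 3 s imagery window per trial

# Synthetic stand-ins for a continuous 16-channel recording and its
# event stream (onset sample indices + class codes, as in the metadata)
data = np.random.randn(16, int(120 * fs))
onsets = np.arange(0, 45000, 5000)[:9]
codes = np.array([2, 3, 4, 2, 3, 4, 2, 3, 4])

def epoch(data, onsets, n_samples):
    """Slice a fixed-length window starting at each event onset."""
    return np.stack([data[:, o:o + n_samples] for o in onsets])

X = epoch(data, onsets, trial_len)  # trials x channels x samples
```

In practice you would get the same trials x channels x samples structure from `mne.Epochs` after loading the recording through EEGDash; the sketch only shows the windowing arithmetic.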
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, feet, rest - **Cue duration**: 1.0 s - **Imagery duration**: 3.0 s **Data Structure** - **Trials**: 60 - **Trials per class**: right_hand=20, feet=20, rest=20 - **Trials context**: 20 trials per class, 3 second duration each **Preprocessing** - **Re-reference**: earlobe **Signal Processing** - **Classifiers**: LDA, SVM, MDM, Riemannian, kNN, Naive Bayes, Logistic Regression - **Feature extraction**: CSP, FBCSP, ERD, ERS, PSD, Covariance/Riemannian, AR, ICA - **Frequency bands**: alpha=[8.0, 12.0] Hz; mu=[8.0, 12.0] Hz - **Spatial filters**: CSP, Geodesic filtering **Cross-Validation** - **Method**: cross-validation - **Evaluation type**: within_session **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **Description**: Motor imagery dataset from the PhD dissertation of A. Barachant. Contains EEG recordings from 8 subjects performing motor imagination tasks (right hand, feet, or rest). Used to validate robust control of an effector via asynchronous EEG-based brain-machine interface. 
- **DOI**: 10.5281/zenodo.806022 - **Associated paper DOI**: tel-01196752v1 - **License**: CC-BY-SA-4.0 - **Investigators**: Alexandre Barachant - **Senior author**: Alexandre Barachant - **Contact**: [alexandre.barachant@gmail.com](mailto:alexandre.barachant@gmail.com) - **Institution**: Université de Grenoble - **Department**: Laboratoire Électronique et système pour la santé CEA-LETI - **Address**: CEA-LETI Grenoble, France - **Country**: France - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/record/806023](https://zenodo.org/record/806023) - **Publication year**: 2012 - **Keywords**: brain-computer interface, motor imagery, EEG, Riemannian geometry, asynchronous BCI, brain-switch, covariance matrices, Common Spatial Pattern **Abstract** Motor imagery dataset from the PhD thesis on robust control of an effector via asynchronous EEG brain-machine interface (Barachant, 2012). This shared dataset corresponds to Step B (cue-based imagery without feedback) of the Brain Switch campaign. Contains recordings from 8 subjects performing 3 motor imagery tasks (right hand, feet, rest) with 20 trials per class. **Methodology** Cue-based paradigm without feedback (Step B of Brain Switch campaign). EEG recorded at 512 Hz with 16 active electrodes using a g.tec g.USBamp amplifier. Reference electrode placed on the ear. Subjects performed imagined movements following visual cues: right hand, feet, and rest, 20 trials per class, 3 seconds each. Recorded in standard office conditions (not shielded laboratory). Software: Matlab/Simulink with g.tec drivers. **References** Barachant, A., 2012. Commande robuste d’un effecteur par une interface cerveau machine EEG asynchrone (Doctoral dissertation, Université de Grenoble). 
[https://tel.archives-ouvertes.fr/tel-01196752](https://tel.archives-ouvertes.fr/tel-01196752) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000138` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Alex Motor Imagery dataset | | Author (year) | `Barachant2012` | | Canonical | `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery` | | Importable as | `NM000138`, `Barachant2012`, `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery` | | Year | 2019 | | Authors | Alexandre Barachant | | License | CC-BY-SA-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000138) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) | [Source URL](https://nemar.org/dataexplorer/detail/nm000138) | ## Technical Details - Subjects: 8 - Recordings: 8 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 1.10 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on
disk: 99.7 MB - File count: 8 - Format: BIDS - License: CC-BY-SA-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000138](https://openneuro.org/datasets/nm000138) - NeMAR: [nm000138](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) ## API Reference Use the `NM000138` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000138(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alex Motor Imagery dataset * **Study:** `nm000138` (NeMAR) * **Author (year):** `Barachant2012` * **Canonical:** `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery` Also importable as: `NM000138`, `Barachant2012`, `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
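The `query` semantics described in the Notes — user filters AND-ed with the fixed dataset filter, with the `dataset` key reserved — can be illustrated with a small sketch. The merge shown here is an assumption about the behavior, not the library's actual implementation:

```python
# Hypothetical illustration of how a user query combines with the
# dataset selection; the real merge lives inside EEGDashDataset.
dataset_filter = {"dataset": "nm000138"}
user_query = {"subject": {"$in": ["1", "2"]}}  # MongoDB-style operator

# The user query must not contain the key `dataset`.
assert "dataset" not in user_query
merged = {**dataset_filter, **user_query}
print(merged)
```

Passing such a `query` restricts the dataset to matching recordings while the dataset filter itself stays fixed.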
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000138](https://openneuro.org/datasets/nm000138) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000138](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) ### Examples ```pycon >>> from eegdash.dataset import NM000138 >>> dataset = NM000138(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000138) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000139: eeg dataset, 9 subjects *BNCI 2014-001 Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** Michael Tangermann, Klaus-Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J. Miller, Gernot R. Müller-Putz, Guido Nolte, Gert Pfurtscheller, Hubert Preissl, Gerwin Schalk, Alois Schlögl, Carmen Vidaurre, Stephan Waldert, Benjamin Blankertz (2019). *BNCI 2014-001 Motor Imagery dataset*. 
Modality: eeg Subjects: 9 Recordings: 108 License: CC-BY-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000139 dataset = NM000139(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000139(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000139( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000139, title = {BNCI 2014-001 Motor Imagery dataset}, author = {Michael Tangermann and Klaus-Robert Müller and Ad Aertsen and Niels Birbaumer and Christoph Braun and Clemens Brunner and Robert Leeb and Carsten Mehring and Kai J. Miller and Gernot R. Müller-Putz and Guido Nolte and Gert Pfurtscheller and Hubert Preissl and Gerwin Schalk and Alois Schlögl and Carmen Vidaurre and Stephan Waldert and Benjamin Blankertz}, } ``` ## About This Dataset **BNCI 2014-001 Motor Imagery dataset** BNCI 2014-001 Motor Imagery dataset. **Dataset Overview** - **Code**: BNCI2014-001 - **Paradigm**: imagery - **DOI**: 10.3389/fnins.2012.00055 ### View full README **BNCI 2014-001 Motor Imagery dataset** BNCI 2014-001 Motor Imagery dataset. 
**Dataset Overview** - **Code**: BNCI2014-001 - **Paradigm**: imagery - **DOI**: 10.3389/fnins.2012.00055 - **Subjects**: 9 - **Sessions per subject**: 2 - **Events**: left_hand=1, right_hand=2, feet=3, tongue=4 - **Trial interval**: [2, 6] s - **Runs per session**: 6 - **File format**: GDF - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 25 - **Channel types**: eeg=22, eog=3 - **Channel names**: C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CPz, Cz, EOG1, EOG2, EOG3, FC1, FC2, FC3, FC4, FCz, Fz, P1, P2, POz, Pz - **Montage**: custom - **Hardware**: BrainAmp MR plus - **Software**: BCI2000 - **Reference**: left mastoid - **Ground**: unknown - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: bandpass 0.05-200 Hz - **Cap manufacturer**: EASYCAP GmbH **Participants** - **Number of subjects**: 9 - **Health status**: healthy - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 4 - **Class labels**: left_hand, right_hand, feet, tongue - **Trial duration**: 4.0 s - **Study design**: Two-class motor imagery (selected from left hand, right hand, and foot) with asynchronous/continuous control periods - **Feedback type**: none - **Stimulus type**: arrow_cue - **Stimulus modalities**: visual, auditory - **Primary modality**: multisensory - **Synchronicity**: asynchronous - **Mode**: offline - **Instructions**: Subjects instructed to perform motor imagery during cued periods **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Leftward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Rightward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand feet ``` ```text 
├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Downward, Arrow └─ Agent-action └─ Imagine, Move, Foot tongue ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Upward, Arrow └─ Agent-action └─ Imagine, Move, Tongue ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand, foot - **Cue duration**: 4.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: {‘training’: 200, ‘test’: 240} - **Blocks per session**: 6 - **Trials context**: per subject (2 training runs + 4 test runs) **Preprocessing** - **Data state**: minimally preprocessed (bandpass and notch filtered) - **Preprocessing applied**: True - **Steps**: bandpass filtering - **Highpass filter**: 0.05 Hz - **Lowpass filter**: 200 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.05, ‘high_cutoff_hz’: 200.0} - **Filter type**: analog - **Re-reference**: none - **Downsampled to**: 100.0 Hz - **Notes**: Data provided in two versions: original at 1000 Hz and downsampled to 100 Hz (with Chebyshev Type II filter order 10, stop band ripple 50 dB, stop band edge 49 Hz) **Signal Processing** - **Classifiers**: LDA, SVM, Neural Network, Naive Bayes, RBF Neural Network - **Feature extraction**: CSP, FBCSP, Bandpower, ERD, ERS - **Frequency bands**: mu=[8, 12] Hz; beta=[16, 24] Hz **Cross-Validation** - **Method**: train-test split - **Evaluation type**: within_session **Performance (Original Study)** - **Mse**: 0.382 **BCI Application** - **Applications**: cursor_control, communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor **Documentation** - **Description**: Review of the BCI competition IV - Data set 1: Asynchronous Motor Imagery - **DOI**: 10.3389/fnins.2012.00055 - **License**: CC-BY-ND-4.0 - **Investigators**: Michael Tangermann, Klaus-Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, 
Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J. Miller, Gernot R. Müller-Putz, Guido Nolte, Gert Pfurtscheller, Hubert Preissl, Gerwin Schalk, Alois Schlögl, Carmen Vidaurre, Stephan Waldert, Benjamin Blankertz - **Senior author**: Michael Tangermann - **Contact**: [michael.tangermann@tu-berlin.de](mailto:michael.tangermann@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Laboratory - **Address**: FR 6-9, Franklinstr. 28/29, 10587 Berlin, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Data URL**: [http://www.bbci.de/competition/iv/](http://www.bbci.de/competition/iv/) - **Publication year**: 2012 - **Keywords**: brain-computer interface, BCI, competition **References** Tangermann, M., Müller, K.R., Aertsen, A., Birbaumer, N., Braun, C., Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Müller-Putz, G. and Nolte, G., 2012. Review of the BCI competition IV. Frontiers in Neuroscience, 6, 55. **Notes** `BNCI2014_001` was previously named `BNCI2014001`; `BNCI2014001` will be removed in MOABB 1.1. (Added in MOABB 0.4.0.) This is one of the most widely used motor imagery datasets in BCI research, commonly referred to as “BCI Competition IV Dataset 2a”. It serves as a standard benchmark for 4-class motor imagery classification algorithms. The dataset is particularly useful for: - Multi-class motor imagery classification (4 classes) - Transfer learning studies (9 subjects, 2 sessions each) - Cross-session variability analysis **See Also** BNCI2014_004 : BCI Competition 2008 2-class motor imagery (Dataset B) BNCI2003_004 : BCI Competition III 2-class motor imagery **Examples**

```pycon
>>> from moabb.datasets import BNCI2014_001
>>> dataset = BNCI2014_001()
>>> dataset.subject_list
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A.
and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000139` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2014-001 Motor Imagery dataset | | Author (year) | `Tangermann2014` | | Canonical | `BNCI2014001`, `BCICIV1`, `BCICompIV1` | | Importable as | `NM000139`, `Tangermann2014`, `BNCI2014001`, `BCICIV1`, `BCICompIV1` | | Year | 2019 | | Authors | Michael Tangermann, Klaus-Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J. Miller, Gernot R. 
Müller-Putz, Guido Nolte, Gert Pfurtscheller, Hubert Preissl, Gerwin Schalk, Alois Schlögl, Carmen Vidaurre, Stephan Waldert, Benjamin Blankertz | | License | CC-BY-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000139) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) | [Source URL](https://nemar.org/dataexplorer/detail/nm000139) | ## Technical Details - Subjects: 9 - Recordings: 108 - Tasks: 1 - Channels: 22 - Sampling rate (Hz): 250.0 - Duration (hours): 11.60808 - Pathology: Healthy - Modality: Multisensory - Type: Motor - Size on disk: 672.8 MB - File count: 108 - Format: BIDS - License: CC-BY-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000139](https://openneuro.org/datasets/nm000139) - NeMAR: [nm000139](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) ## API Reference Use the `NM000139` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-001 Motor Imagery dataset * **Study:** `nm000139` (NeMAR) * **Author (year):** `Tangermann2014` * **Canonical:** `BNCI2014001`, `BCICIV1`, `BCICompIV1` Also importable as: `NM000139`, `Tangermann2014`, `BNCI2014001`, `BCICIV1`, `BCICompIV1`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 9; recordings: 108; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000139](https://openneuro.org/datasets/nm000139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000139](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) ### Examples ```pycon >>> from eegdash.dataset import NM000139 >>> dataset = NM000139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000139) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000140: eeg dataset, 12 subjects *BNCI 2015-001 Motor Imagery dataset* Access recordings and metadata through EEGDash. 
**Citation:** Josef Faller, Carmen Vidaurre, Teodoro Solis-Escalante, Christa Neuper, Reinhold Scherer (2012). *BNCI 2015-001 Motor Imagery dataset*. Modality: eeg Subjects: 12 Recordings: 28 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000140 dataset = NM000140(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000140(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000140( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000140, title = {BNCI 2015-001 Motor Imagery dataset}, author = {Josef Faller and Carmen Vidaurre and Teodoro Solis-Escalante and Christa Neuper and Reinhold Scherer}, } ``` ## About This Dataset **BNCI 2015-001 Motor Imagery dataset** BNCI 2015-001 Motor Imagery dataset. **Dataset Overview** - **Code**: BNCI2015-001 - **Paradigm**: imagery - **DOI**: 10.1109/tnsre.2012.2189584 ### View full README **BNCI 2015-001 Motor Imagery dataset** BNCI 2015-001 Motor Imagery dataset. 
**Dataset Overview** - **Code**: BNCI2015-001 - **Paradigm**: imagery - **DOI**: 10.1109/tnsre.2012.2189584 - **Subjects**: 12 - **Sessions per subject**: 2 - **Events**: right_hand=1, feet=2 - **Trial interval**: [0, 5] s - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 13 - **Channel types**: eeg=13 - **Channel names**: FC3, FCz, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CPz, CP4 - **Montage**: 10-20 - **Hardware**: g.tec - **Software**: Matlab - **Reference**: Car - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: 50 Hz notch - **Cap manufacturer**: g.tec - **Cap model**: g.GAMMAsys - **Auxiliary channels**: gsr **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Age**: mean=24.8 - **Gender distribution**: male=7, female=5 - **Handedness**: all right-handed - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: right_hand, feet - **Trial duration**: 11.0 s - **Study design**: Two-class motor imagery: sustained right hand movement imagery (palmar grip) versus both feet movement imagery (plantar extension) - **Feedback type**: visual - **Stimulus type**: cursor_feedback - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: training - **Instructions**: Relax during reference period (3s), perform sustained kinesthetic movement imagery during activity period. Condition 1 (arrow right): imagine palmar grip with right hand. Condition 2 (arrow down): imagine plantar extension of both feet. 
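The original study's feature extraction (listed under Signal Processing) uses logarithmic bandpower with an LDA classifier. A minimal NumPy sketch of log bandpower on a synthetic trial — the periodogram-based estimate here is an illustrative simplification, not the authors' exact method:

```python
import numpy as np

def log_bandpower(x, sfreq, band):
    """Mean log power of x (..., n_samples) within band (lo, hi) Hz."""
    n = x.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2 / n   # crude periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., mask].mean(axis=-1))

rng = np.random.default_rng(0)
trial = rng.standard_normal((13, 2048))  # 13 channels, 4 s at 512 Hz

# Bands from the dataset metadata: alpha=[10, 13] Hz, beta=[16, 24] Hz
alpha = log_bandpower(trial, 512.0, (10.0, 13.0))
beta = log_bandpower(trial, 512.0, (16.0, 24.0))
features = np.concatenate([alpha, beta])  # one feature vector per trial
print(features.shape)  # (26,)
```

Each trial thus yields a 26-dimensional feature vector (13 channels x 2 bands), which the original study fed to an LDA classifier.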
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand_palmar_grip, both_feet_plantar_extension - **Cue duration**: 1.25 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 200 - **Trials per class**: right_hand=100, feet=100 - **Trials context**: per_session **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: bandpass filter, notch filter - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 100.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.5, ‘high_cutoff_hz’: 100.0} - **Notch filter**: [50.0] Hz - **Re-reference**: car **Signal Processing** - **Classifiers**: LDA - **Feature extraction**: logarithmic bandpower, CSP - **Frequency bands**: alpha=[10, 13] Hz; beta=[16, 24] Hz **Cross-Validation** - **Method**: leave-one-out - **Evaluation type**: cross_session **Performance (Original Study)** - **Accuracy**: 80.0% **BCI Application** - **Applications**: communication, control - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor **Documentation** - **DOI**: 10.1109/tnsre.2012.2189584 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Josef Faller, Carmen Vidaurre, Teodoro Solis-Escalante, Christa Neuper, Reinhold Scherer - **Senior author**: Reinhold Scherer - **Contact**: [josef.faller@tugraz.at](mailto:josef.faller@tugraz.at); [christa.neuper@uni-graz.at](mailto:christa.neuper@uni-graz.at); [carmen.vidaurre@tu-berlin.de](mailto:carmen.vidaurre@tu-berlin.de) - **Institution**: Graz University of Technology - 
**Department**: Institute of Knowledge Discovery - **Address**: 8010 Graz, Austria - **Country**: Austria - **Repository**: BNCI Horizon - **Publication year**: 2012 - **Funding**: FP7 Framework EU Research Project BrainAble (No. 247447) **References** Faller, J., Vidaurre, C., Solis-Escalante, T., Neuper, C., & Scherer, R. (2012). Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(3), 313-319. [https://doi.org/10.1109/tnsre.2012.2189584](https://doi.org/10.1109/tnsre.2012.2189584) **Notes** `BNCI2015_001` was previously named `BNCI2015001`; `BNCI2015001` will be removed in MOABB 1.1. (Added in MOABB 0.4.0.) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000140` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-001 Motor Imagery dataset | | Author (year) | `Faller2015` | | Canonical | `BNCI2015`, `BNCI2015001` | | Importable as | `NM000140`, `Faller2015`, `BNCI2015`, `BNCI2015001` | | Year | 2012 | | Authors | Josef Faller, Carmen Vidaurre, Teodoro Solis-Escalante, Christa Neuper, Reinhold Scherer | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000140) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) | [Source URL](https://nemar.org/dataexplorer/detail/nm000140) | ## Technical Details - Subjects: 12 - Recordings: 28 - Tasks: 1 - Channels: 13 - Sampling rate (Hz): 512.0 - Duration (hours): 16.69 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 1.1 GB - File count: 28 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000140](https://openneuro.org/datasets/nm000140) - NeMAR: [nm000140](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) ## API Reference Use the `NM000140` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000140(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-001 Motor Imagery dataset * **Study:** `nm000140` (NeMAR) * **Author (year):** `Faller2015` * **Canonical:** `BNCI2015`, `BNCI2015001` Also importable as: `NM000140`, `Faller2015`, `BNCI2015`, `BNCI2015001`.
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000140](https://openneuro.org/datasets/nm000140) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000140](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) ### Examples ```pycon >>> from eegdash.dataset import NM000140 >>> dataset = NM000140(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000140) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000141: eeg dataset, 14 subjects *Motor execution dataset from Wairagkar et al 2018* Access recordings and metadata through EEGDash. **Citation:** Maitreyee Wairagkar, Yoshikatsu Hayashi, Slawomir J. Nasuto (2018). *Motor execution dataset from Wairagkar et al 2018*. Modality: eeg Subjects: 14 Recordings: 14 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000141 dataset = NM000141(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000141(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000141( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000141, title = {Motor execution dataset from Wairagkar et al 2018}, author = {Maitreyee Wairagkar and Yoshikatsu Hayashi and Slawomir J. Nasuto}, } ``` ## About This Dataset **Motor execution dataset from Wairagkar et al 2018** Motor execution dataset from Wairagkar et al 2018. 
**Dataset Overview** - **Code**: Wairagkar2018 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0193722 ### View full README **Motor execution dataset from Wairagkar et al 2018** Motor execution dataset from Wairagkar et al 2018. **Dataset Overview** - **Code**: Wairagkar2018 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0193722 - **Subjects**: 14 - **Sessions per subject**: 1 - **Events**: right_hand=1, rest=2, left_hand=3 - **Trial interval**: [0, 3] s - **File format**: MAT - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 1024.0 Hz - **Number of channels**: 19 - **Channel types**: eeg=19 - **Channel names**: Fp1, Fp2, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2 - **Montage**: standard_1020 - **Hardware**: Deymed TruScan 32 - **Reference**: FCz - **Ground**: AFz - **Sensor type**: Ag/AgCl ring - **Line frequency**: 50.0 Hz - **Online filters**: {'highpass': 0.5, 'lowpass': 60, 'notch_hz': 50} **Participants** - **Number of subjects**: 14 - **Health status**: healthy - **Age**: mean=26.0, std=4.0 - **Gender distribution**: female=8, male=6 - **Handedness**: mixed (12 right, 2 left) - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 3 - **Class labels**: right_hand, rest, left_hand - **Trial duration**: 6.0 s - **Study design**: Asynchronous voluntary finger tapping: right tap, left tap, and resting state - **Feedback type**: none - **Stimulus type**: text cues - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: asynchronous - **Mode**: offline - **Instructions**: Participants were asked to tap their index finger at a self-chosen time within a 10-second window after the cue. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
right_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Right, Hand

rest
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Rest

left_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Left, Hand
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, left_hand, rest **Data Structure** - **Trials**: 1665 - **Trials context**: 14 subjects × 120 trials (40 per condition), except subject 2 with 105 trials (35 per condition) **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: DC offset removal, 0.5 Hz high-pass filter, 50 Hz notch filter, 60 Hz low-pass filter, ICA artifact removal (EEGLAB infomax), trial segmentation (-3 to +3 s around movement onset) - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 60.0 Hz - **Notch filter**: 50.0 Hz **Signal Processing** - **Classifiers**: LDA - **Feature extraction**: autocorrelation_relaxation_time, ERD - **Frequency bands**: broadband=[0.5, 30.0] Hz; mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; low=[0.5, 8.0] Hz - **Spatial filters**: bipolar_montage **Cross-Validation** - **Method**: 10x10-fold - **Folds**: 10 - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0193722 - **License**: CC-BY-4.0 - **Investigators**: Maitreyee Wairagkar, Yoshikatsu Hayashi, Slawomir J. Nasuto - **Senior author**: Slawomir J.
Nasuto - **Institution**: University of Reading - **Department**: Brain Embodiment Lab, Biomedical Engineering - **Country**: GB - **Repository**: University of Reading Research Data Archive - **Data URL**: [https://researchdata.reading.ac.uk/117/](https://researchdata.reading.ac.uk/117/) - **Publication year**: 2018 **References** Wairagkar, M., Hayashi, Y., & Nasuto, S. J. (2018). Exploration of neural correlates of movement intention based on characterisation of temporal dependencies in electroencephalography. PLOS ONE, 13(3), e0193722. [https://doi.org/10.1371/journal.pone.0193722](https://doi.org/10.1371/journal.pone.0193722) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000141` |
|----------------|------------|
| Title | Motor execution dataset from Wairagkar et al 2018 |
| Author (year) | `Wairagkar2018` |
| Canonical | — |
| Importable as | `NM000141`, `Wairagkar2018` |
| Year | 2018 |
| Authors | Maitreyee Wairagkar, Yoshikatsu Hayashi, Slawomir J. Nasuto |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000141) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000141) |

## Technical Details - Subjects: 14 - Recordings: 14 - Tasks: 1 - Channels: 19 - Sampling rate (Hz): 1024.0 - Duration (hours): 2.80 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 571.7 MB - File count: 14 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000141](https://openneuro.org/datasets/nm000141) - NeMAR: [nm000141](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) ## API Reference Use the `NM000141` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000141(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor execution dataset from Wairagkar et al 2018 * **Study:** `nm000141` (NeMAR) * **Author (year):** `Wairagkar2018` * **Canonical:** — Also importable as: `NM000141`, `Wairagkar2018`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`.
Subjects: 14; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000141](https://openneuro.org/datasets/nm000141) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000141](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) ### Examples ```pycon >>> from eegdash.dataset import NM000141 >>> dataset = NM000141(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000141) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000142: eeg dataset, 6 subjects *Ear-EEG motor execution dataset from Wu et al 2020* Access recordings and metadata through EEGDash. **Citation:** Xiaoli Wu, Wenhui Zhang, Zhibo Fu, Roy T.H. Cheung, Rosa H.M. Chan (2020). *Ear-EEG motor execution dataset from Wu et al 2020*. Modality: eeg Subjects: 6 Recordings: 13 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000142 dataset = NM000142(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000142(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000142( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000142, title = {Ear-EEG motor execution dataset from Wu et al 2020}, author = {Xiaoli Wu and Wenhui Zhang and Zhibo Fu and Roy T.H. Cheung and Rosa H.M. Chan}, } ``` ## About This Dataset **Ear-EEG motor execution dataset from Wu et al 2020** Ear-EEG motor execution dataset from Wu et al 2020. 
**Dataset Overview** - **Code**: Wu2020 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/abc1b6 ### View full README **Ear-EEG motor execution dataset from Wu et al 2020** Ear-EEG motor execution dataset from Wu et al 2020. **Dataset Overview** - **Code**: Wu2020 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/abc1b6 - **Subjects**: 6 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 4] s - **File format**: Curry **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 122 - **Channel types**: eeg=122, misc=10 - **Montage**: standard_1005 - **Hardware**: Neuroscan SynAmps2 - **Reference**: scalp REF - **Ground**: scalp GRD - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {'bandpass': [0.5, 100]} **Participants** - **Number of subjects**: 6 - **Health status**: healthy - **Age**: mean=25.0, min=22.0, max=28.0 - **Gender distribution**: female=4, male=2 - **Handedness**: right-handed - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 4.0 s - **Study design**: Motor execution (fist clenching) with simultaneous scalp and ear-EEG recording - **Feedback type**: none - **Stimulus type**: arrow cues - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
left_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Left, Hand

right_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Right, Hand
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand **Data
Structure** - **Trials**: 1114 - **Trials context**: S1: 240, S2: 160, S3: 160, S4: 80, S5: 234, S6: 240 = 1114 **Signal Processing** - **Classifiers**: EEGNet **Cross-Validation** - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1088/1741-2552/abc1b6 - **License**: CC-BY-4.0 - **Investigators**: Xiaoli Wu, Wenhui Zhang, Zhibo Fu, Roy T.H. Cheung, Rosa H.M. Chan - **Institution**: City University of Hong Kong - **Country**: HK - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/18961128](https://zenodo.org/records/18961128) - **Publication year**: 2020 **References** Wu, X., Zhang, W., Fu, Z., Cheung, R. T. H., & Chan, R. H. M. (2020). An investigation of in-ear sensing for motor task classification. Journal of Neural Engineering, 17(6), 066029. [https://doi.org/10.1088/1741-2552/abc1b6](https://doi.org/10.1088/1741-2552/abc1b6) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.4.3 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000142` |
|----------------|------------|
| Title | Ear-EEG motor execution dataset from Wu et al 2020 |
| Author (year) | `Wu2020` |
| Canonical | — |
| Importable as | `NM000142`, `Wu2020` |
| Year | 2020 |
| Authors | Xiaoli Wu, Wenhui Zhang, Zhibo Fu, Roy T.H. Cheung, Rosa H.M. Chan |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000142) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000142) |

## Technical Details - Subjects: 6 - Recordings: 13 - Tasks: 1 - Channels: 122 - Sampling rate (Hz): 1000.0 - Duration (hours): 4.01 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 4.9 GB - File count: 13 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000142](https://openneuro.org/datasets/nm000142) - NeMAR: [nm000142](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) ## API Reference Use the `NM000142` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG motor execution dataset from Wu et al 2020 * **Study:** `nm000142` (NeMAR) * **Author (year):** `Wu2020` * **Canonical:** — Also importable as: `NM000142`, `Wu2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 6; recordings: 13; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000142](https://openneuro.org/datasets/nm000142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000142](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) ### Examples ```pycon >>> from eegdash.dataset import NM000142 >>> dataset = NM000142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000142) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000143: eeg dataset, 5 subjects *BNCI2003_IVa Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller (2019). *BNCI2003_IVa Motor Imagery dataset*. Modality: eeg Subjects: 5 Recordings: 5 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000143 dataset = NM000143(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000143(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000143( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000143, title = {BNCI2003_IVa Motor Imagery dataset}, author = {Guido Dornhege and Benjamin Blankertz and Gabriel Curio and Klaus-Robert Müller}, } ``` ## About This Dataset **BNCI2003_IVa Motor Imagery dataset** BNCI2003_IVa Motor Imagery dataset. 
**Dataset Overview** - **Code**: BNCI2003-004 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2004.827088 ### View full README **BNCI2003_IVa Motor Imagery dataset** BNCI2003_IVa Motor Imagery dataset. **Dataset Overview** - **Code**: BNCI2003-004 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2004.827088 - **Subjects**: 5 - **Sessions per subject**: 1 - **Events**: right_hand=0, feet=1 - **Trial interval**: [0, 3.5] s - **File format**: mat - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 100.0 Hz - **Number of channels**: 118 - **Channel types**: eeg=118 - **Channel names**: AF3, AF4, AF7, AF8, AFp1, AFp2, C1, C2, C3, C4, C5, C6, CCP1, CCP2, CCP3, CCP4, CCP5, CCP6, CCP7, CCP8, CFC1, CFC2, CFC3, CFC4, CFC5, CFC6, CFC7, CFC8, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FAF1, FAF2, FAF5, FAF6, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FFC1, FFC2, FFC3, FFC4, FFC5, FFC6, FFC7, FFC8, FT10, FT7, FT8, FT9, Fp1, Fp2, Fpz, Fz, I1, I2, O1, O2, OI1, OI2, OPO1, OPO2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PCP1, PCP2, PCP3, PCP4, PCP5, PCP6, PCP7, PCP8, PO1, PO2, PO3, PO4, PO7, PO8, POz, PPO1, PPO2, PPO5, PPO6, PPO7, PPO8, Pz, T7, T8, TP10, TP7, TP8, TP9 - **Montage**: standard_1005 - **Hardware**: BrainAmp - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Online filters**: {'bandpass': [0.05, 200]} **Participants** - **Number of subjects**: 5 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: right_hand, feet - **Trial duration**: 3.5 s - **Stimulus type**: visual cue - **Mode**: offline - **Instructions**: subjects performed motor imagery (left hand, right hand, or right foot) according to visual cue for 3.5 seconds - **Stimulus presentation**: duration=3.5 s, interval=1.75-2.25 s random, modality=visual **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)
```text
right_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Right, Hand

feet
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine, Move, Foot
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, feet - **Cue duration**: 3.5 s **Data Structure** - **Trials**: 280 - **Trials context**: 280 cues per subject, split into labeled training and unlabeled test sets (varying per subject) **Preprocessing** - **Data state**: downsampled to 100 Hz for offline analysis - **Preprocessing applied**: True - **Steps**: bandpass filtering, downsampling - **Bandpass filter**: {'low_cutoff_hz': 0.05, 'high_cutoff_hz': 200.0} - **Downsampled to**: 100 Hz - **Notes**: Band-pass filtered 0.05-200 Hz during acquisition at 1000 Hz with 16-bit (0.1 µV) accuracy, then downsampled to 100 Hz by picking each 10th sample. Original experiment also recorded EMG and EOG but these are not in the shared data files.
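The decimation described above (1000 Hz acquisition reduced to 100 Hz by picking each 10th sample) is plain stride slicing. A minimal NumPy sketch of that sample-picking step on a synthetic signal (illustration only; in the real pipeline the data were band-pass filtered during acquisition before this step):

```python
import numpy as np

fs_orig, fs_new = 1000, 100          # acquisition rate -> stored rate
step = fs_orig // fs_new             # keep every 10th sample
t = np.arange(0, 1.0, 1.0 / fs_orig)
x = np.sin(2 * np.pi * 8.0 * t)      # synthetic 8 Hz (mu-band) tone at 1000 Hz
x_ds = x[::step]                     # "picking each 10th sample"
print(x.size, x_ds.size)  # 1000 100
```

The kept samples land exactly on the 100 Hz grid, so for band-limited content this matches resampling the same signal at 100 Hz.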
**Signal Processing** - **Classifiers**: LDA, regularized LDA - **Feature extraction**: CSP, SUB (MRP/slow potentials), AR - **Frequency bands**: alpha=[8, 13] Hz; beta=[15, 25] Hz; alpha_beta=[7, 30] Hz - **Spatial filters**: CSP, spatial Laplacian **Cross-Validation** - **Method**: 10x10-fold cross validation - **Folds**: 10 - **Evaluation type**: within-subject **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1109/TBME.2004.827088 - **License**: CC-BY-4.0 - **Investigators**: Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller - **Senior author**: Klaus-Robert Müller - **Contact**: [benjamin.blankertz@tu-berlin.de](mailto:benjamin.blankertz@tu-berlin.de) - **Institution**: Fraunhofer FIRST (IDA); Charité University Medicine Berlin - **Department**: Fraunhofer FIRST (IDA); Department of Neurology, Campus Benjamin Franklin - **Address**: 12489 Berlin, Germany; 12203 Berlin, Germany - **Country**: DE - **Repository**: BBCI - **Publication year**: 2004 - **Funding**: Bundesministerium für Bildung und Forschung (BMBF) under Grants FKZ 01IBB02A and FKZ 01IBB02B - **Keywords**: brain-computer interface, BCI, common spatial patterns, electroencephalogram, EEG, event-related desynchronization, feature combination, movement related potential, multiclass, single-trial analysis **References** Guido Dornhege, Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms. IEEE Trans. Biomed. Eng., 51(6):993-1002, June 2004. **Notes** *Added in MOABB version 0.4.0.* This is one of the earliest and most influential motor imagery BCI datasets, used extensively for benchmarking classification algorithms. The dataset was part of BCI Competition III and has been cited in hundreds of papers.
**See Also** BNCI2014_001: BCI Competition IV 4-class motor imagery dataset; BNCI2014_004: BCI Competition 2008 2-class motor imagery dataset. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000143` |
|----------------|------------|
| Title | BNCI2003_IVa Motor Imagery dataset |
| Author (year) | `BNCI2003` |
| Canonical | `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa` |
| Importable as | `NM000143`, `BNCI2003`, `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa` |
| Year | 2019 |
| Authors | Guido Dornhege, Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000143) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000143) |

## Technical Details - Subjects: 5 - Recordings: 5 - Tasks: 1 - Channels: 118 - Sampling rate (Hz): 100.0 - Duration (hours): 3.98 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 492.7 MB - File count: 5 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000143](https://openneuro.org/datasets/nm000143) - NeMAR: [nm000143](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) ## API Reference Use the `NM000143` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000143(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI2003_IVa Motor Imagery dataset * **Study:** `nm000143` (NeMAR) * **Author (year):** `BNCI2003` * **Canonical:** `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa` Also importable as: `NM000143`, `BNCI2003`, `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
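The Notes above state that a user-supplied `query` is AND-ed with the fixed dataset filter and must not contain the key `dataset`. As a rough illustration of that contract, here is a pure-Python sketch of the merge and of matching a `$in` filter against metadata records (not EEGDash's actual implementation; field names are illustrative):

```python
def merge_query(dataset_id, user_query):
    """Combine the fixed dataset filter with a user query (sketch)."""
    if user_query and "dataset" in user_query:
        raise ValueError("user query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged


def matches(record, query):
    """Evaluate plain equality plus the MongoDB-style `$in` operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True


records = [
    {"dataset": "nm000143", "subject": "01"},
    {"dataset": "nm000143", "subject": "03"},
]
q = merge_query("nm000143", {"subject": {"$in": ["01", "02"]}})
print([r["subject"] for r in records if matches(r, q)])  # ['01']
```

The real query is evaluated server-side by MongoDB; this sketch only shows why the `dataset` key is reserved and how `$in` narrows the subject selection.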
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000143](https://openneuro.org/datasets/nm000143) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000143](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) ### Examples ```pycon >>> from eegdash.dataset import NM000143 >>> dataset = NM000143(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000143) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000144: eeg dataset, 9 subjects *BNCI 2015-004 Mental tasks dataset* Access recordings and metadata through EEGDash. **Citation:** Reinhold Scherer, Josef Faller, Elisabeth V. C. Friedrich, Eloy Opisso, Ursula Costa, Andrea Kübler, Gernot R. Müller-Putz (2017). *BNCI 2015-004 Mental tasks dataset*. 
Modality: eeg Subjects: 9 Recordings: 18 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000144 dataset = NM000144(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000144(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000144( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000144, title = {BNCI 2015-004 Mental tasks dataset}, author = {Reinhold Scherer and Josef Faller and Elisabeth V. C. Friedrich and Eloy Opisso and Ursula Costa and Andrea Kübler and Gernot R. Müller-Putz}, } ``` ## About This Dataset **BNCI 2015-004 Mental tasks dataset** BNCI 2015-004 Mental tasks dataset. **Dataset Overview** - **Code**: BNCI2015-004 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0123727 ### View full README **BNCI 2015-004 Mental tasks dataset** BNCI 2015-004 Mental tasks dataset. 
**Dataset Overview** - **Code**: BNCI2015-004 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0123727 - **Subjects**: 9 - **Sessions per subject**: 2 - **Events**: math=1, letter=2, rotation=3, count=4, baseline=5 - **Trial interval**: [0, 4] s - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 30 - **Channel types**: eeg=30 - **Channel names**: AFz, F7, F3, Fz, F4, F8, FC3, FCz, FC4, T3, C3, Cz, C4, T4, CP3, CPz, CP4, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO3, PO4, O1, O2 - **Montage**: 10-20 - **Hardware**: g.tec - **Reference**: left and right mastoid - **Ground**: left and right mastoid - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: 0.5-100 Hz bandpass, 50 Hz notch - **Cap manufacturer**: g.tec - **Electrode type**: g.LADYbird active electrodes - **Auxiliary channels**: EOG (2 ch, horizontal, vertical) **Participants** - **Number of subjects**: 9 - **Health status**: CNS tissue damage - **Clinical population**: stroke and spinal cord injury - **Age**: mean=38.0, std=10.0, min=20, max=57 - **Gender distribution**: male=2, female=7 - **Handedness**: not specified - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 5 - **Class labels**: math, letter, rotation, count, baseline - **Trial duration**: 11.0 s - **Tasks**: word_association, mental_subtraction, spatial_navigation, right_hand_imagery, feet_imagery - **Study design**: Five mental tasks: word association (WORD), mental subtraction (SUB), spatial navigation (NAV), motor imagery of right hand (HAND), and motor imagery of both feet (FEET). Cue-guided paradigm with 7 seconds of continuous mental imagery per trial. 
- **Feedback type**: none - **Stimulus type**: visual cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: screening - **Instructions**: Participants were asked to continuously perform the specified mental imagery task for 7 seconds. For MI: kinesthetic imagination of movement (e.g., squeezing a rubber ball for hand, dorsiflexion for feet). For WORD: generate words beginning with presented letter. For SUB: successive elementary subtractions. For NAV: spatial navigation. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text math ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Think └─ Label/math letter ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Think └─ Label/letter rotation ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Think └─ Label/rotation count ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Count baseline ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, feet, word_association, mental_subtraction, spatial_navigation - **Cue duration**: 1.0 s - **Imagery duration**: 7.0 s **Data Structure** - **Trials**: 40 - **Blocks per session**: 8 - **Trials context**: per_class_per_day **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: bandpass filter, notch filter, artifact rejection - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 100.0 Hz - **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 100.0} - **Notch filter**: [50] Hz - **Artifact methods**: manual artifact rejection based on EOG -
**Re-reference**: left and right mastoid **Signal Processing** - **Classifiers**: LDA - **Feature extraction**: bandpower, temporal features - **Frequency bands**: mu=[8, 12] Hz; beta=[13, 30] Hz **Cross-Validation** - **Method**: 10-fold cross-validation - **Folds**: 10 - **Evaluation type**: within_session, cross_session **Performance (Original Study)** - **Accuracy**: 77.0% - **Best Task Pair Gmac**: 77.0 - **Sub Vs Feet Gmac**: 77.0 - **Word Vs Hand Gmac**: 70.0 - **Hand Vs Feet Gmac**: 64.0 - **Between Day Word Vs Hand Gmac**: 82.0 **BCI Application** - **Applications**: communication, motor_function_restoration - **Environment**: rehabilitation center - **Online feedback**: False **Tags** - **Pathology**: Stroke, Spinal Cord Injury, CNS Damage - **Modality**: Motor, Cognitive - **Type**: Motor, Cognitive **Documentation** - **DOI**: 10.1371/journal.pone.0123727 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Reinhold Scherer, Josef Faller, Elisabeth V. C. Friedrich, Eloy Opisso, Ursula Costa, Andrea Kübler, Gernot R. Müller-Putz - **Senior author**: Reinhold Scherer - **Contact**: [reinhold.scherer@tugraz.at](mailto:reinhold.scherer@tugraz.at) - **Institution**: Institut Guttmann - **Department**: Institut Universitari de Neurorehabilitació adscrit a la UAB - **Address**: 08916 Badalona, Barcelona, Spain - **Country**: Spain - **Repository**: BNCI Horizon 2020 - **Data URL**: [https://bnci-horizon-2020.eu/database/data-sets](https://bnci-horizon-2020.eu/database/data-sets) - **Publication year**: 2015 - **Funding**: FP7 EU Research Projects BrainAble (No. 247447); ABC (No. 287774); BackHome (No. 288566) - **Ethics approval**: Comitè d’Ètica Assistencial de l’Institut Guttman - **Keywords**: brain-computer interface, motor imagery, mental tasks, EEG, CNS tissue damage, stroke, spinal cord injury, binary classification **References** Zhang, X., Yao, L., Zhang, Q., Kanhere, S., Sheng, M., & Liu, Y. (2017). 
A survey on deep learning based brain computer interface: Recent advances and new frontiers. IEEE Transactions on Cognitive and Developmental Systems, 10(2), 145-163. **Notes**: `BNCI2015_004` was previously named `BNCI2015004`; the old alias will be removed in version 1.1. New in version 0.4.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000144` | |----------------|------------| | Title | BNCI 2015-004 Mental tasks dataset | | Author (year) | `Scherer2015` | | Canonical | `BNCI2015` | | Importable as | `NM000144`, `Scherer2015`, `BNCI2015` | | Year | 2015 | | Authors | Reinhold Scherer, Josef Faller, Elisabeth V. C. Friedrich, Eloy Opisso, Ursula Costa, Andrea Kübler, Gernot R.
Müller-Putz | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000144) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) | [Source URL](https://nemar.org/dataexplorer/detail/nm000144) | ## Technical Details - Subjects: 9 - Recordings: 18 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 256.0 - Duration (hours): 13.75 - Pathology: Other - Modality: Visual - Type: Motor - Size on disk: 1.1 GB - File count: 18 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000144](https://openneuro.org/datasets/nm000144) - NeMAR: [nm000144](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) ## API Reference Use the `NM000144` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000144(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-004 Mental tasks dataset * **Study:** `nm000144` (NeMAR) * **Author (year):** `Scherer2015` * **Canonical:** `BNCI2015` Also importable as: `NM000144`, `Scherer2015`, `BNCI2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000144](https://openneuro.org/datasets/nm000144) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000144](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) ### Examples ```pycon >>> from eegdash.dataset import NM000144 >>> dataset = NM000144(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000144) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000145: eeg dataset, 10 subjects *Munich Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** Moritz Grosse-Wentrup, Christian Liefhold, Klaus Gramann, Martin Buss (2009). *Munich Motor Imagery dataset*. 
Modality: eeg Subjects: 10 Recordings: 10 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000145 dataset = NM000145(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000145(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000145( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000145, title = {Munich Motor Imagery dataset}, author = {Moritz Grosse-Wentrup and Christian Liefhold and Klaus Gramann and Martin Buss}, } ``` ## About This Dataset **Munich Motor Imagery dataset** Munich Motor Imagery dataset. **Dataset Overview** - **Code**: GrosseWentrup2009 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2008.2009768 ### View full README **Munich Motor Imagery dataset** Munich Motor Imagery dataset. 
**Dataset Overview** - **Code**: GrosseWentrup2009 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2008.2009768 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: right_hand=2, left_hand=1 - **Trial interval**: [0, 7] s - **File format**: set - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 128 - **Channel types**: eeg=128 - **Channel names**: 1–128 (numeric labels) - **Montage**: standard_1020 - **Hardware**: BrainAmp - **Reference**: Cz - **Line frequency**: 50.0 Hz - **Online filters**: {'highpass_time_constant_s': 10} - **Impedance threshold**: 10 kOhm **Participants** - **Number of subjects**: 10 - **Health status**: healthy - **Age**: mean=25.6, std=2.5 - **Gender distribution**: male=8, female=2 - **Handedness**: {'right': 8} - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: motor_imagery - **Number of classes**: 2 - **Class labels**: right_hand, left_hand - **Trial duration**: 10 s - **Tasks**: motor_imagery - **Study design**: two-class motor imagery with arrow cues - **Feedback type**: none - **Stimulus type**: arrow_cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Subjects were instructed to perform haptic motor imagery of the left or the right hand during display of the arrow, as indicated by the direction of the arrow **HED Event
Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_hand ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Rightward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand left_hand ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Leftward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 7.0 s - **Imagery duration**: 7.0 s **Data Structure** - **Trials**: 150 - **Trials context**: per_class **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Artifact methods**: none - **Re-reference**: car - **Notes**: No trials were rejected and no artifact correction was performed. Data were re-referenced to common average reference offline. **Signal Processing** - **Classifiers**: Logistic Regression - **Feature extraction**: CSP, Beamforming, Laplacian, Bandpower - **Frequency bands**: analyzed=[7.0, 30.0] Hz - **Spatial filters**: CSP, Beamforming, Laplacian **Cross-Validation** - **Method**: bootstrapping - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control - **Environment**: shielded_room - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor **Documentation** - **DOI**: 10.1109/TBME.2008.2009768 - **License**: CC-BY-4.0 - **Investigators**: Moritz Grosse-Wentrup, Christian Liefhold, Klaus Gramann, Martin Buss - **Senior author**: Martin Buss - **Contact**: [moritzgw@ieee.org](mailto:moritzgw@ieee.org) - **Institution**: Technische Universität München - **Department**: Institute of Automatic Control Engineering (LSR) - **Country**: DE - **Repository**: Zenodo - **Publication year**: 2009 - **Keywords**: Beamforming, brain-computer interfaces, 
common spatial patterns, electroencephalography, motor imagery, spatial filtering **References** Grosse-Wentrup, Moritz, et al. “Beamforming in noninvasive brain–computer interfaces.” IEEE Transactions on Biomedical Engineering 56.4 (2009): 1209-1219. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000145` | |----------------|------------| | Title | Munich Motor Imagery dataset | | Author (year) | `GrosseWentrup2009` | | Canonical | — | | Importable as | `NM000145`, `GrosseWentrup2009` | | Year | 2009 | | Authors | Moritz Grosse-Wentrup, Christian Liefhold, Klaus Gramann, Martin Buss | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000145) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) | [Source URL](https://nemar.org/dataexplorer/detail/nm000145) | ## Technical Details - Subjects: 10 - Recordings: 10 - Tasks: 1 - Channels: 128 - Sampling rate (Hz):
500.0 - Duration (hours): 8.40 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 5.4 GB - File count: 10 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000145](https://openneuro.org/datasets/nm000145) - NeMAR: [nm000145](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) ## API Reference Use the `NM000145` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000145(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Munich Motor Imagery dataset * **Study:** `nm000145` (NeMAR) * **Author (year):** `GrosseWentrup2009` * **Canonical:** — Also importable as: `NM000145`, `GrosseWentrup2009`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
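The MongoDB-style selection semantics described in the Notes can be illustrated without touching the EEGDash server. The following toy matcher is a sketch of how an `$in` clause selects records (equality and `$in` only); it is not EEGDash internals, and the record dicts are illustrative:

```python
def matches(record: dict, query: dict) -> bool:
    """Toy MongoDB-style matcher supporting equality and the `$in` operator.

    The real EEGDash/MongoDB matcher is far more general; this only shows
    the selection semantics used by the query examples above.
    """
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict) and "$in" in condition:
            # `$in` matches when the record's value is in the given list.
            if value not in condition["$in"]:
                return False
        elif value != condition:
            # Plain values are matched by equality.
            return False
    return True

# Illustrative metadata records (hypothetical, not fetched from the server).
records = [{"subject": "01"}, {"subject": "02"}, {"subject": "07"}]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # ['01', '02']
```

Multiple top-level fields in a query are implicitly ANDed, which is why adding fields to `query` only ever narrows the selection.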
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000145](https://openneuro.org/datasets/nm000145) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000145](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) ### Examples ```pycon >>> from eegdash.dataset import NM000145 >>> dataset = NM000145(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000145) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000146: eeg dataset, 10 subjects *Motor Imagery dataset from Weibo et al 2014* Access recordings and metadata through EEGDash. **Citation:** Weibo Yi, Shuang Qiu, Kun Wang, Hongzhi Qi, Lixin Zhang, Peng Zhou, Feng He, Dong Ming (2014). *Motor Imagery dataset from Weibo et al 2014*. 
Modality: eeg Subjects: 10 Recordings: 10 License: CC0-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000146 dataset = NM000146(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000146(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000146( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000146, title = {Motor Imagery dataset from Weibo et al 2014}, author = {Weibo Yi and Shuang Qiu and Kun Wang and Hongzhi Qi and Lixin Zhang and Peng Zhou and Feng He and Dong Ming}, } ``` ## About This Dataset **Motor Imagery dataset from Weibo et al 2014** Motor Imagery dataset from Weibo et al 2014. **Dataset Overview** - **Code**: Weibo2014 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0114853 ### View full README **Motor Imagery dataset from Weibo et al 2014** Motor Imagery dataset from Weibo et al 2014. 
**Dataset Overview** - **Code**: Weibo2014 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0114853 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2, hands=3, feet=4, left_hand_right_foot=5, right_hand_left_foot=6, rest=7 - **Trial interval**: [3, 7] s - **File format**: MAT - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 60 - **Channel types**: eeg=60, eog=2, misc=2 - **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CB1, CB2, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, HEO, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8, VEO - **Montage**: standard_1005 - **Hardware**: Neuroscan SynAmps2 - **Reference**: nose - **Ground**: prefrontal lobe - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {'bandpass': [0.5, 100], 'notch_hz': 50} - **Auxiliary channels**: EOG (2 ch, HEO, VEO) **Participants** - **Number of subjects**: 10 - **Health status**: healthy - **Age**: mean=24.0, min=23.0, max=25.0 - **Gender distribution**: female=7, male=3 - **Handedness**: right-handed - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 7 - **Class labels**: left_hand, right_hand, hands, feet, left_hand_right_foot, right_hand_left_foot, rest - **Trial duration**: 8.0 s - **Study design**: Simple limb motor imagery (left hand, right hand, feet) and compound limb motor imagery (both hands, left hand combined with right foot, right hand combined with left foot) - **Feedback type**: none - **Stimulus type**: text cues - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Participants were asked to perform kinesthetic motor imagery rather than a visual type of imagery
while avoiding any muscle movement **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand hands ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Hand feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot left_hand_right_foot ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action ├─ Imagine │ ├─ Move │ └─ Left, Hand └─ Imagine ├─ Move └─ Right, Foot right_hand_left_foot ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action ├─ Imagine │ ├─ Move │ └─ Right, Hand └─ Imagine ├─ Move └─ Left, Foot rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand, feet, both_hands, left_hand_right_foot, right_hand_left_foot - **Cue duration**: 1.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 560 - **Trials context**: 8 sections with 60 trials each (10 trials per MI task per section) for 6 MI tasks, plus 1 section with 80 trials for rest state **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: bandpass filtering, downsampling - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 50.0 Hz - **Bandpass filter**: {'low_cutoff_hz': 0.5, 'high_cutoff_hz': 50.0} - **Re-reference**: nose - **Downsampled to**: 200.0 Hz **Signal Processing** - **Feature extraction**: Bandpower, ERD, ERS, ERSP, Time-Frequency, AR, DTF, PLV - **Frequency
bands**: theta=[4.0, 5.0] Hz; alpha=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; analyzed=[1.0, 40.0] Hz **BCI Application** - **Applications**: motor_control - **Environment**: laboratory **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0114853 - **License**: CC0-1.0 - **Investigators**: Weibo Yi, Shuang Qiu, Kun Wang, Hongzhi Qi, Lixin Zhang, Peng Zhou, Feng He, Dong Ming - **Senior author**: Dong Ming - **Contact**: [qhz@tju.edu.cn](mailto:qhz@tju.edu.cn); [richardming@tju.edu.cn](mailto:richardming@tju.edu.cn) - **Institution**: Tianjin University - **Department**: Department of Biomedical Engineering - **Country**: CN - **Repository**: Harvard Dataverse Database - **Data URL**: [http://dx.doi.org/10.7910/DVN/27306](http://dx.doi.org/10.7910/DVN/27306) - **Publication year**: 2014 - **Funding**: National Natural Science Foundation of China (No. 81222021, 61172008, 81171423, 51377120, 31271062); National Key Technology R&D Program of the Ministry of Science and Technology of China (No. 2012BAI34B02); Program for New Century Excellent Talents in University of the Ministry of Education of China (No. NCET-10-0618); Natural Science Foundation of Tianjin (No. 13JCQNJC13900) - **Ethics approval**: Ethical committee of Tianjin University - **Keywords**: motor imagery, compound limb motor imagery, EEG oscillatory patterns, cognitive process, effective connectivity, ERD, ERS **References** Yi, Weibo, et al. “Evaluation of EEG oscillatory patterns and cognitive process during simple and compound limb motor imagery.” PloS one 9.12 (2014). [https://doi.org/10.1371/journal.pone.0114853](https://doi.org/10.1371/journal.pone.0114853) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000146` | |----------------|------------| | Title | Motor Imagery dataset from Weibo et al 2014 | | Author (year) | `Yi2014` | | Canonical | `Weibo2014` | | Importable as | `NM000146`, `Yi2014`, `Weibo2014` | | Year | 2014 | | Authors | Weibo Yi, Shuang Qiu, Kun Wang, Hongzhi Qi, Lixin Zhang, Peng Zhou, Feng He, Dong Ming | | License | CC0-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000146) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) | [Source URL](https://nemar.org/dataexplorer/detail/nm000146) | ## Technical Details - Subjects: 10 - Recordings: 10 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 200.0 - Duration (hours): 13.08 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 1.6 GB - File count: 10 - Format: BIDS - License: CC0-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000146](https://openneuro.org/datasets/nm000146) - NeMAR: [nm000146](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) ## API Reference Use the `NM000146` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000146(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Weibo et al 2014 * **Study:** `nm000146` (NeMAR) * **Author (year):** `Yi2014` * **Canonical:** `Weibo2014` Also importable as: `NM000146`, `Yi2014`, `Weibo2014`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000146](https://openneuro.org/datasets/nm000146) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000146](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) ### Examples ```pycon >>> from eegdash.dataset import NM000146 >>> dataset = NM000146(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000146) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000147: eeg dataset, 22 subjects *RomaniBF2025ERP* Access recordings and metadata through EEGDash. **Citation:** Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet (2019). *RomaniBF2025ERP*. 
[10.48550/arXiv.2510.10169](https://doi.org/10.48550/arXiv.2510.10169) Modality: eeg Subjects: 22 Recordings: 120 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000147 dataset = NM000147(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000147(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000147( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000147, title = {RomaniBF2025ERP}, author = {Michele Romani and Devis Zanoni and Elisabetta Farella and Luca Turchet}, doi = {10.48550/arXiv.2510.10169}, url = {https://doi.org/10.48550/arXiv.2510.10169}, } ``` ## About This Dataset **RomaniBF2025ERP** MOABB class for BrainForm event-related potentials (ERP) dataset. **Dataset Overview** > Code: RomaniBF2025ERP > Paradigm: p300 > DOI: 10.48550/arXiv.2510.10169 ### View full README **RomaniBF2025ERP** MOABB class for BrainForm event-related potentials (ERP) dataset. 
**Dataset Overview** > Code: RomaniBF2025ERP > Paradigm: p300 > DOI: 10.48550/arXiv.2510.10169 > Subjects: 22 > Sessions per subject: 2 > Events: Target=1, NonTarget=2 > Trial interval: [-0.1, 1.0] s > File format: EDF > Contributing labs: University of Trento, Fondazione Bruno Kessler **Acquisition** > Sampling rate: 250.0 Hz > Number of channels: 8 > Channel types: eeg=8 > Channel names: Fz, C3, Cz, C4, Pz, PO7, Oz, PO8 > Montage: standard_1020 > Hardware: g.tec Unicorn Hybrid Black > Reference: right mastoid > Ground: left mastoid > Sensor type: EEG > Line frequency: 50.0 Hz > Cap manufacturer: g.tec > Cap model: Unicorn Hybrid Black > Electrode type: conductive gel **Participants** > Number of subjects: 22 > Health status: healthy > Age: mean=21.87, std=3.22 > Gender distribution: female=10, male=12 > BCI experience: naive **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget > Trial duration: 0.9 s > Tasks: Complex Task (5 colored laser beams), Speller Task (10 color targets) > Study design: Within-subject study with two main sessions separated by visual texture swap (counterbalanced). Each session: calibration, tutorial, practice run with Complex Task (5 targets) and Speller Task (10 targets). Optional free-play third session for 16 participants. 
> Study domain: BCI training, serious gaming, skill acquisition > Feedback type: visual > Stimulus type: flickering > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: online > Training/test split: True > Instructions: minimize movement during recording to reduce motion artifacts, focus on flickering targets for calibration and task completion **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 > Number of targets: 10 > Stimulus onset asynchrony: 100.0 ms **Data Structure** > Trials: 600 > Trials context: Per calibration session: 600 total stimulus events (60 target + 540 non-target from 10 unique targets). ~1 minute duration. 
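The paradigm parameters above (10 targets, 100 ms SOA) feed directly into the information-transfer-rate figures reported in the performance section. A generic sketch of Wolpaw's standard ITR formula (the study's exact computation may differ; the 10 s selection time below is a made-up illustration, not a dataset value):

```python
import math

def wolpaw_itr(n_classes, accuracy, selection_time_s):
    """Wolpaw ITR in bits/min for an N-class selection task."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance: no information transferred
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# E.g. 10 targets, the reported median accuracy of 0.833, and a
# hypothetical 10 s time per selection:
print(round(wolpaw_itr(10, 0.833, 10.0), 2))
```

Note that ITR rewards both accuracy and speed: halving the selection time doubles the bit rate at equal accuracy, which is why the Speller Task can report a higher ITR than the Complex Task at the same median accuracy.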
**Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Classifiers: LDA **Cross-Validation** > Method: cross-validation > Evaluation type: within-subject **Performance (Original Study)** > Task Accuracy Complex Median T2A: 0.833 > Task Accuracy Speller Median T3B: 0.833 > Itr Complex Mean T2A: 10.76 > Itr Speller Mean T3B: 21.95 > Calibration Attempts Session1 Mean: 2.64 > Calibration Attempts Session2 Mean: 2.68 **BCI Application** > Applications: speller, gaming > Environment: laboratory > Online feedback: True **Tags** > Pathology: Healthy > Modality: ERP > Type: P300 **Documentation** > Description: BrainForm: a Serious Game for BCI Training and Data Collection - gamified BCI training system designed for scalable data collection using consumer hardware > DOI: 10.48550/arXiv.2510.10169 > License: CC-BY-4.0 > Investigators: Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet > Senior author: Luca Turchet > Institution: University of Trento > Address: 38122, Trento, Italy > Country: IT > Repository: GitHub > Data URL: [https://zenodo.org/records/17225966](https://zenodo.org/records/17225966) > Publication year: 2025 > Keywords: Brain-Computer Interfaces, Event-Related Potentials, Machine Learning, Serious Games, Human factors **Abstract** BrainForm is a gamified Brain-Computer Interface (BCI) training system designed for scalable data collection using consumer hardware and a minimal setup. We investigated (1) how users develop BCI control skills across repeated sessions and (2) perceptual and performance effects of two visual stimulation textures. Game Experience Questionnaire (GEQ) scores for Flow, Positive Affect, Competence and Challenge were strongly positive, indicating sustained engagement. A within-subject study with multiple runs, two task complexities, and post-session questionnaires revealed no significant performance differences between textures but increased ocular irritation over time. 
Online metrics—Task Accuracy, Task Time, and Information Transfer Rate—improved across sessions, confirming learning effects for symbol spelling, even under pressure conditions. Our results highlight the potential of BrainForm as a scalable, user-friendly BCI research tool and offer guidance for sustained engagement and reduced training fatigue. **Methodology** Structured protocol consisting of: (1) introductory tutorial, (2) two practice runs involving calibration and control with up to ten flickering targets, (3) final timed challenge. Two main sessions separated by short break and visual texture swap (counterbalanced). Calibration: 60 trials focusing on single flashing target (~1 minute), repeated until 80%+ accuracy. Tasks: Complex Task (5 colored laser beams, game-oriented) and Speller Task (10 color targets, BCI-oriented symbol spelling). Optional free-play run for 16 participants. Data collection: raw EEG, performance metrics, in-game metadata, and questionnaires (demographic, session questionnaire, GEQ). **References** M. Romani, D. Zanoni, E. Farella, and L. Turchet, “BrainForm: a Serious Game for BCI Training and Data Collection,” Oct. 14, 2025, arXiv: arXiv:2510.10169. doi: 10.48550/arXiv.2510.10169. M. Romani, F. Paissan, A. Fossà, and E. Farella, “Explicit modelling of subject dependency in BCI decoding,” Sept. 27, 2025, arXiv: arXiv:2509.23247. doi: 10.48550/arXiv.2509.23247. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) https://github.com/NeuroTechX/moabb ## Dataset Information

| Dataset ID | `NM000147` |
|----------------|------------|
| Title | RomaniBF2025ERP |
| Author (year) | `RomaniBF2025` |
| Canonical | `Romani2025` |
| Importable as | `NM000147`, `RomaniBF2025`, `Romani2025` |
| Year | 2025 |
| Authors | Michele Romani, Devis Zanoni, Elisabetta Farella, Luca Turchet |
| License | CC-BY-4.0 |
| Citation / DOI | [doi:10.48550/arXiv.2510.10169](https://doi.org/10.48550/arXiv.2510.10169) |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000147) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000147) · [Source URL](https://nemar.org/dataexplorer/detail/nm000147) |

### Copy-paste BibTeX ```bibtex @dataset{nm000147, title = {RomaniBF2025ERP}, author = {Michele Romani and Devis Zanoni and Elisabetta Farella and Luca Turchet}, year = {2025}, doi = {10.48550/arXiv.2510.10169}, url = {https://doi.org/10.48550/arXiv.2510.10169}, } ``` ## Technical Details

- Subjects: 22
- Recordings: 120
- Tasks: 1
- Channels: 8
- Sampling rate (Hz): 250.0
- Duration (hours): 6.28
- Pathology: Healthy
- Modality: Visual
- Type: Learning
- Size on disk: 134.3 MB
- File count: 120
- Format: BIDS
- License: CC-BY-4.0
- DOI: doi:10.48550/arXiv.2510.10169
- Source: nemar
- OpenNeuro: [nm000147](https://openneuro.org/datasets/nm000147)
- NeMAR: [nm000147](https://nemar.org/dataexplorer/detail?dataset_id=nm000147)

## API Reference Use the `NM000147` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RomaniBF2025ERP * **Study:** `nm000147` (NeMAR) * **Author (year):** `RomaniBF2025` * **Canonical:** `Romani2025` Also importable as: `NM000147`, `RomaniBF2025`, `Romani2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 22; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000147](https://openneuro.org/datasets/nm000147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000147](https://nemar.org/dataexplorer/detail?dataset_id=nm000147) DOI: [https://doi.org/10.48550/arXiv.2510.10169](https://doi.org/10.48550/arXiv.2510.10169) ### Examples ```pycon >>> from eegdash.dataset import NM000147 >>> dataset = NM000147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000147) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000147) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000148: eeg dataset, 30 subjects *Motor imagery BCI dataset with pupillometry augmentation* Access recordings and metadata through EEGDash. **Citation:** David Rozado, Andreas Duenser, Ben Howell (2015). *Motor imagery BCI dataset with pupillometry augmentation*.
Modality: eeg Subjects: 30 Recordings: 60 License: CC0 1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000148 dataset = NM000148(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000148(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000148( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000148, title = {Motor imagery BCI dataset with pupillometry augmentation}, author = {David Rozado and Andreas Duenser and Ben Howell}, } ``` ## About This Dataset **Motor imagery BCI dataset with pupillometry augmentation** Motor imagery BCI dataset with pupillometry augmentation. **Dataset Overview** - **Code**: Rozado2015 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0121262 ### View full README **Motor imagery BCI dataset with pupillometry augmentation** Motor imagery BCI dataset with pupillometry augmentation. 
**Dataset Overview** - **Code**: Rozado2015 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0121262 - **Subjects**: 30 - **Sessions per subject**: 1 - **Events**: left_hand=1, rest=2 - **Trial interval**: [0.0, 6.0] s - **Runs per session**: 2 - **File format**: XDF **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Montage**: biosemi32 - **Hardware**: BioSemi ActiveTwo - **Reference**: CMS/DRL - **Sensor type**: active - **Line frequency**: 50.0 Hz - **Cap manufacturer**: BioSemi - **Electrode material**: sintered Ag/AgCl **Participants** - **Number of subjects**: 30 - **Health status**: healthy - **Age**: mean=38.0, std=9.69, min=15, max=61 - **Gender distribution**: male=15, female=15 - **Handedness**: {‘right’: 27, ‘left’: 3} **Experimental Protocol** - **Paradigm**: imagery - **Task type**: left hand grasping imagery vs rest - **Number of classes**: 2 - **Class labels**: left_hand, rest - **Trial duration**: 6.0 s - **Study design**: Motor imagery with pupillometry augmentation - **Feedback type**: none - **Stimulus type**: auditory cue - **Stimulus modalities**: auditory - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left hand grasping, rest - **Imagery duration**: 6.0 s **Data Structure** - **Blocks per session**: 2 - **Block duration**: 300.0 s - **Trials context**: 2 experiments of 25 trials each (50 trials total per subject). Each experiment is stored as one XDF file. 
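The original analysis pipeline for this dataset pairs CSP spatial filtering with LDA. The core of CSP can be sketched as whitening followed by an eigendecomposition of one class covariance; the following is a minimal numpy illustration on hand-built covariances, not the study's actual implementation:

```python
import numpy as np

def csp_filters(cov_a, cov_b):
    """Common Spatial Patterns via whitening + eigendecomposition.

    Returns a matrix of spatial filters (one per row), sorted so the
    first rows maximize class-A variance relative to class B.
    """
    d, U = np.linalg.eigh(cov_a + cov_b)
    white = U @ np.diag(d ** -0.5) @ U.T          # whitening transform
    lam, B = np.linalg.eigh(white @ cov_a @ white.T)
    order = np.argsort(lam)[::-1]                 # descending class-A variance
    return B[:, order].T @ white

# Toy two-class covariances: class A has extra variance on channel 0.
cov_a = np.eye(4); cov_a[0, 0] = 5.0
cov_b = np.eye(4)
W = csp_filters(cov_a, cov_b)
# Class-A variance projected on the first filter vs the last one:
va_first = W[0] @ cov_a @ W[0]
va_last = W[-1] @ cov_a @ W[-1]
print(W.shape, va_first > va_last)
```

In a full pipeline the covariances would come from 8–30 Hz bandpassed epochs of each class, and log-variances of the first and last few filtered signals would be fed to the LDA classifier.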
**Signal Processing** - **Classifiers**: LDA - **Feature extraction**: CSP, pupil_diameter - **Frequency bands**: bandpass=[8.0, 30.0] Hz - **Spatial filters**: CSP **Cross-Validation** - **Method**: 10-fold - **Folds**: 10 - **Evaluation type**: within_subject **BCI Application** - **Environment**: lab - **Online feedback**: False **Tags** - **Pathology**: healthy - **Modality**: auditory - **Type**: motor_imagery **Documentation** - **DOI**: 10.1371/journal.pone.0121262 - **License**: CC0 1.0 - **Investigators**: David Rozado, Andreas Duenser, Ben Howell - **Senior author**: David Rozado - **Institution**: CSIRO - **Department**: Digital Productivity Flagship - **Country**: AU - **Repository**: Harvard Dataverse - **Data URL**: [https://doi.org/10.7910/DVN/28932](https://doi.org/10.7910/DVN/28932) - **Publication year**: 2015 - **Keywords**: motor imagery, BCI, pupillometry, EEG, brain-computer interface **References** D. Rozado, T. Duenser, and B. Gruen, “Improving the performance of an EEG-based motor imagery brain computer interface using task evoked changes in pupil diameter,” PLoS ONE, vol. 10, no. 3, e0121262, 2015. DOI: 10.1371/journal.pone.0121262 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000148` |
|----------------|------------|
| Title | Motor imagery BCI dataset with pupillometry augmentation |
| Author (year) | `Rozado2015` |
| Canonical | — |
| Importable as | `NM000148`, `Rozado2015` |
| Year | 2015 |
| Authors | David Rozado, Andreas Duenser, Ben Howell |
| License | CC0 1.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000148) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000148) · [Source URL](https://nemar.org/dataexplorer/detail/nm000148) |

## Technical Details

- Subjects: 30
- Recordings: 60
- Tasks: 1
- Channels: 32
- Sampling rate (Hz): 512.0
- Duration (hours): 5.70
- Pathology: Healthy
- Modality: Auditory
- Type: Motor
- Size on disk: 975.3 MB
- File count: 60
- Format: BIDS
- License: CC0 1.0
- DOI: —
- Source: nemar
- OpenNeuro: [nm000148](https://openneuro.org/datasets/nm000148)
- NeMAR: [nm000148](https://nemar.org/dataexplorer/detail?dataset_id=nm000148)

## API Reference Use the `NM000148` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery BCI dataset with pupillometry augmentation * **Study:** `nm000148` (NeMAR) * **Author (year):** `Rozado2015` * **Canonical:** — Also importable as: `NM000148`, `Rozado2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 30; recordings: 60; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000148](https://openneuro.org/datasets/nm000148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000148](https://nemar.org/dataexplorer/detail?dataset_id=nm000148) ### Examples ```pycon >>> from eegdash.dataset import NM000148 >>> dataset = NM000148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000148) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000148) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000149: eeg dataset, 10 subjects *BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients* Access recordings and metadata through EEGDash. **Citation:** Patrick Ofner, Andreas Schwarz, Joana Pereira, Daniela Wyss, Renate Wildburger, Gernot R. Müller-Putz (2019). *BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients*. Modality: eeg Subjects: 10 Recordings: 90 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000149 dataset = NM000149(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000149(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000149( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000149, title = {BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients}, author = {Patrick Ofner and Andreas Schwarz and Joana Pereira and Daniela Wyss and Renate Wildburger and Gernot R. 
Müller-Putz}, } ``` ## About This Dataset **BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients** BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients. **Dataset Overview** - **Code**: BNCI2019-001 - **Paradigm**: imagery - **DOI**: 10.1038/s41598-019-43594-9 ### View full README **BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients** BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients. **Dataset Overview** - **Code**: BNCI2019-001 - **Paradigm**: imagery - **DOI**: 10.1038/s41598-019-43594-9 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: supination=776, pronation=777, hand_open=779, palmar_grasp=925, lateral_grasp=926 - **Trial interval**: [2, 5] s - **Runs per session**: 9 - **File format**: GDF - **Contributing labs**: Graz University of Technology Institute of Neural Engineering BCI-Lab, AUVA rehabilitation clinic Tobelbad **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 61 - **Channel types**: eeg=61, eog=3 - **Channel names**: AFz, C1, C2, C3, C4, C5, C6, CCP1h, CCP2h, CCP3h, CCP4h, CCP5h, CCP6h, CP1, CP2, CP3, CP4, CP5, CP6, CPP1h, CPP2h, CPP3h, CPP4h, CPP5h, CPP6h, CPz, Cz, F1, F2, F3, F4, FC1, FC2, FC3, FC4, FC5, FC6, FCC1h, FCC2h, FCC3h, FCC4h, FCC5h, FCC6h, FCz, FFC1h, FFC2h, FFC3h, FFC4h, FFC5h, FFC6h, Fz, P1, P2, P3, P4, P5, P6, POz, PPO1h, PPO2h, Pz, eog-l, eog-m, eog-r - **Montage**: 10-5 - **Hardware**: g.tec - **Software**: EEGlab 14.1.1b - **Reference**: left earlobe - **Ground**: AFF2h - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: 50 Hz notch, 0.01-100 Hz bandpass - **Cap manufacturer**: g.tec medical engineering GmbH - **Cap model**: g.GAMMAsys/g.LADYbird - **Electrode type**: active electrode - **Auxiliary channels**: EOG (3 ch, above nasion, below outer canthi left, below outer canthi right) **Participants** - **Number of subjects**: 10 - **Health status**: patients - **Clinical population**: spinal 
cord injury - **Age**: mean=49.8, min=20, max=78 - **Gender distribution**: male=9, female=1 - **Handedness**: right-handed (all participants originally) - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: attempted movement - **Number of classes**: 5 - **Class labels**: supination, pronation, hand_open, palmar_grasp, lateral_grasp - **Trial duration**: 5.0 s - **Tasks**: hand_open, palmar_grasp, lateral_grasp, pronation, supination - **Study design**: motor imagery and attempted movements - **Feedback type**: visual feedback (online paradigm only - movement icon displayed when movement detected) - **Stimulus type**: visual cue - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: both - **Training/test split**: True - **Instructions**: Participants were instructed to attempt or execute movements based on class cue displayed on screen. They were asked to focus gaze on fixation cross, avoid eye movements, swallowing, and blinking during trial period. 
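The protocol above defines a [2, 5] s analysis window after cue onset at 256 Hz, with the five movement classes keyed by the event codes listed in the dataset overview. A minimal numpy sketch of that epoching step on synthetic data (in practice, loading and epoching go through EEGDash/MNE rather than manual slicing):

```python
import numpy as np

SFREQ = 256.0            # Hz, from the Acquisition section
TMIN, TMAX = 2.0, 5.0    # s, the trial interval after cue onset
CODES = {776: "supination", 777: "pronation", 779: "hand_open",
         925: "palmar_grasp", 926: "lateral_grasp"}

def epoch(data, events, sfreq=SFREQ, tmin=TMIN, tmax=TMAX):
    """Slice (n_channels, n_times) data around (sample, code) events.

    Returns an (n_epochs, n_channels, n_samples) array and class labels.
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    xs = [data[:, s + start : s + stop] for s, _ in events]
    ys = [CODES[code] for _, code in events]
    return np.stack(xs), ys

# Synthetic continuous recording: 61 EEG channels, 60 s.
rng = np.random.default_rng(0)
data = rng.standard_normal((61, int(60 * SFREQ)))
epochs, labels = epoch(data, [(1000, 776), (4000, 925)])
print(epochs.shape, labels)  # 3 s epochs of 768 samples each
```

Each epoch spans exactly the attempted-movement period, so downstream MRCP decoding sees 768 samples (3 s at 256 Hz) per trial.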
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text supination ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Turn ├─ Forearm └─ Label/supination pronation ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Turn ├─ Forearm └─ Label/pronation hand_open ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Open, Hand palmar_grasp ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Grasp, Hand lateral_grasp ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Grasp ├─ Hand └─ Label/lateral ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: hand_open, palmar_grasp, lateral_grasp, pronation, supination - **Cue duration**: 3.0 s - **Imagery duration**: 3.0 s **Data Structure** - **Trials**: 360 - **Trials per class**: hand_open=72, palmar_grasp=72, lateral_grasp=72, pronation=72, supination=72 - **Blocks per session**: 9 - **Trials context**: total per subject (72 trials per class) **Preprocessing** - **Data state**: raw (GDF format) - **Preprocessing applied**: True - **Steps**: bandpass filter, notch filter, ICA, artifact rejection - **Highpass filter**: 0.01 Hz - **Lowpass filter**: 100 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.01, ‘high_cutoff_hz’: 100.0} - **Notch filter**: [50] Hz - **Filter type**: Chebyshev - **Filter order**: 8 - **Artifact methods**: ICA, visual inspection, abnormal joint probability, abnormal kurtosis - **Re-reference**: CAR - **Notes**: Noisy channels were visually inspected and removed. AFz was removed by default as it is sensitive to eye blinks and eye movements. ICA was performed on 0.3-70 Hz filtered signals using extended infomax. 
PCA dimensionality reduction retained 99% variance. Artifact-contaminated ICs (muscle and eye-related) were removed. Trials with values above/below ±100 μV, abnormal joint probabilities, or abnormal kurtosis (5x SD threshold) were rejected. Final analysis used 0.3-3 Hz bandpass filter. **Signal Processing** - **Classifiers**: Shrinkage LDA, sLDA - **Feature extraction**: time-domain low-frequency signals, MRCPs, ICA - **Frequency bands**: analyzed=[0.3, 3.0] Hz - **Spatial filters**: CAR **Cross-Validation** - **Method**: 10x10-fold - **Folds**: 10 - **Evaluation type**: within_subject, cross_validation **Performance (Original Study)** - **Accuracy**: 45.3% - **Peak Accuracy 5Class**: 45.3 - **Peak Latency 5Class S**: 1.1 - **Confidence Interval Lower**: 40.3 - **Confidence Interval Upper**: 50.3 - **Chance Level 5Class**: 20.0 - **Significance Level 5Class**: 22.3 - **Peak Accuracy 3Class Subset**: 53.0 - **Peak Latency 3Class Subset S**: 1.0 - **Online Accuracy 2Class**: 68.4 - **Online Tpr**: 31.75 - **Online Fp Per Min**: 3.4 **BCI Application** - **Applications**: neuroprosthetic, upper_limb_control, hand_grasp_control - **Environment**: indoor - **Online feedback**: True **Tags** - **Pathology**: Spinal Cord Injury - **Modality**: Motor - **Type**: Motor **Documentation** - **Description**: This dataset investigates whether attempted arm and hand movements in persons with spinal cord injury can be decoded from low-frequency EEG signals (MRCPs). The study includes offline 5-class classification and online proof-of-concept for self-paced movement detection. - **DOI**: 10.1038/s41598-019-43594-9 - **Associated paper DOI**: 10.1038/s41598-019-43594-9 - **License**: CC-BY-4.0 - **Investigators**: Patrick Ofner, Andreas Schwarz, Joana Pereira, Daniela Wyss, Renate Wildburger, Gernot R. Müller-Putz - **Senior author**: Gernot R. 
Müller-Putz - **Contact**: [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Graz University of Technology - **Department**: Institute of Neural Engineering, BCI-Lab - **Address**: Graz, Austria - **Country**: Austria - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.2222268](https://doi.org/10.5281/zenodo.2222268) - **Publication year**: 2019 - **Funding**: European ICT Programme Project H2020-643955 ‘MoreGrasp’ - **Ethics approval**: Ethics committee for the hospitals of the Austrian general accident insurance institution AUVA (approval number 3/2017) - **Acknowledgements**: This work is supported by the European ICT Programme Project H2020-643955 ‘MoreGrasp’. **Abstract** We show that persons with spinal cord injury (SCI) retain decodable neural correlates of attempted arm and hand movements. We investigated hand open, palmar grasp, lateral grasp, pronation, and supination in 10 persons with cervical SCI. Discriminative movement information was provided by the time-domain of low-frequency electroencephalography (EEG) signals. Based on these signals, we obtained a maximum average classification accuracy of 45% (chance level was 20%) with respect to the five investigated classes. Pattern analysis indicates central motor areas as the origin of the discriminative signals. Furthermore, we introduce a proof-of-concept to classify movement attempts online in a closed loop, and tested it on a person with cervical SCI. We achieved here a modest classification performance of 68.4% with respect to palmar grasp vs hand open (chance level 50%). **Methodology** 10 participants with cervical SCI were recruited from a rehabilitation center (AUVA rehabilitation clinic, Tobelbad, Austria). Participants were aged 20-78 years with neurological level of injury C1-C7 and AIS scores A-D. They sat in wheelchairs and attempted/executed movements based on visual cues shown on screen. 
Each trial lasted 5 seconds with a fixation cross and beep at start, class cue displayed at 2 seconds. 9 runs with 40 trials per run were recorded (360 trials total, 72 per class). EEG was recorded from 61 electrodes using g.tec g.USBamps and g.GAMMAsys/g.LADYbird active electrode system at 256 Hz with 0.01-100 Hz bandpass and 50 Hz notch filter. Preprocessing included visual inspection, ICA artifact removal, trial rejection, and 0.3-3 Hz bandpass filtering. Classification used shrinkage LDA with 10x10 cross-validation. Online proof-of-concept used a modified training paradigm with ready/go cues and 3-class classifier (hand open, palmar grasp, rest) with pre/post class detection logic. **References** Ofner, P. et al. (2019). Attempted arm and hand movements can be decoded from low-frequency EEG from persons with spinal cord injury. Scientific Reports, 9(1), 7134. [https://doi.org/10.1038/s41598-019-43594-9](https://doi.org/10.1038/s41598-019-43594-9) Notes *Added in version 1.2.0.* Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000149` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients | | Author (year) | `Ofner2019` | | Canonical | — | | Importable as | `NM000149`, `Ofner2019` | | Year | 2019 | | Authors | Patrick Ofner, Andreas Schwarz, Joana Pereira, Daniela Wyss, Renate Wildburger, Gernot R. Müller-Putz | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000149) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) | [Source URL](https://nemar.org/dataexplorer/detail/nm000149) | ## Technical Details - Subjects: 10 - Recordings: 90 - Tasks: 1 - Channels: 61 - Sampling rate (Hz): 256.0 - Duration (hours): 7.536291232638889 - Pathology: Other - Modality: Visual - Type: Motor - Size on disk: 1.2 GB - File count: 90 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000149](https://openneuro.org/datasets/nm000149) - NeMAR: [nm000149](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) ## API Reference Use the `NM000149` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000149(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients * **Study:** `nm000149` (NeMAR) * **Author (year):** `Ofner2019` * **Canonical:** — Also importable as: `NM000149`, `Ofner2019`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000149](https://openneuro.org/datasets/nm000149) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000149](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) ### Examples ```pycon >>> from eegdash.dataset import NM000149 >>> dataset = NM000149(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000149) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000150: eeg dataset *Liu2025 - NEMAR Dataset* Access recordings and metadata through EEGDash. **Citation:** Unknown (—). *Liu2025 - NEMAR Dataset*. Modality: eeg Subjects: — Recordings: — License: — Source: nemar Metadata: Limited (20%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000150 dataset = NM000150(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000150(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000150( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000150, title = {Liu2025 - NEMAR Dataset}, } ``` ## About This Dataset No README content is available for this dataset. 
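The "Advanced query" pattern shown in the quickstart above is a plain MongoDB-style filter dictionary that the dataset class ANDs with its fixed dataset selector, which is why `query` must not contain the key `dataset`. A minimal sketch of that merge behavior, using a hypothetical `merge_query` helper (illustrative logic, not the eegdash implementation):

```python
# Sketch: how a user query is combined with the fixed dataset filter.
# `merge_query` is a hypothetical helper for illustration only.

def merge_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        # The dataset selector is reserved; users cannot override it.
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

q = merge_query("nm000150", {"subject": {"$in": ["01", "02"]}})
print(q)  # {'dataset': 'nm000150', 'subject': {'$in': ['01', '02']}}
```

The merged dictionary is what ultimately filters the metadata records, so any field accepted by `ALLOWED_QUERY_FIELDS` can appear alongside operators such as `$in`.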
## Dataset Information | Dataset ID | `NM000150` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Liu2025 - NEMAR Dataset | | Author (year) | `Liu2025_NEMAR` | | Canonical | — | | Importable as | `NM000150`, `Liu2025_NEMAR` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000150) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) | [Source URL](https://github.com/nemarDatasets/nm000150) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: [nm000150](https://openneuro.org/datasets/nm000150) - NeMAR: [nm000150](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) ## API Reference Use the `NM000150` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000150(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2025 - NEMAR Dataset * **Study:** `nm000150` (NeMAR) * **Author (year):** `Liu2025_NEMAR` * **Canonical:** — Also importable as: `NM000150`, `Liu2025_NEMAR`. Modality: `eeg`. Subjects: 0; recordings: 0; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000150](https://openneuro.org/datasets/nm000150) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000150](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) ### Examples ```pycon >>> from eegdash.dataset import NM000150 >>> dataset = NM000150(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000150) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000151: eeg dataset, 12 subjects *Motor imagery dataset for three imaginary states of the same upper extremity* Access recordings and metadata through EEGDash. **Citation:** Mojgan Tavakolan, Zack Frehlick, Xinyi Yong, Carlo Menon (2019). *Motor imagery dataset for three imaginary states of the same upper extremity*. Modality: eeg Subjects: 12 Recordings: 46 License: CC0-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000151 dataset = NM000151(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000151(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000151( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000151, title = {Motor imagery dataset for three imaginary states of the same upper extremity}, author = {Mojgan Tavakolan and Zack Frehlick and Xinyi Yong and Carlo Menon}, } ``` ## About This Dataset **Motor imagery dataset for three imaginary states of the same upper extremity** Motor imagery dataset for three imaginary states of the same upper extremity. **Dataset Overview** - **Code**: Tavakolan2017 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0174161 - **Subjects**: 12 - **Sessions per subject**: 4 - **Events**: rest=1, right_hand=2, right_elbow_flexion=3 - **Trial interval**: [0, 3] s - **File format**: BCI2000 **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Montage**: GSN-HydroCel-32 - **Hardware**: EGI Geodesic Net Amps 400 series - **Reference**: Cz - **Sensor type**: Ag/AgCl sponge - **Line frequency**: 60.0 Hz - **Online filters**: {‘bandpass’: [0.1, 100]} - **Impedance threshold**: 50 kOhm **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 3 - **Class labels**: rest, right_hand, right_elbow_flexion - **Trial duration**: 3.0 s - **Study design**: Three-class motor imagery of the same upper extremity: rest, grasping (MI-GRASP), and elbow flexion (MI-ELBOW). 20 trials per class per session, 4 sessions per subject. - **Feedback type**: none - **Stimulus type**: visual cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: REST: relax without movement. 
MI-GRASP: imagine opening and closing all fingers to grab an object. MI-ELBOW: imagine moving the forearm up and down. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand right_elbow_flexion ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Flex └─ Right, Elbow ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: rest, right_hand, right_elbow_flexion - **Cue duration**: 3.0 s - **Imagery duration**: 3.0 s **Data Structure** - **Trials**: 2880 - **Trials per class**: rest=20, right_hand=20, right_elbow_flexion=20 - **Trials context**: 12 subjects x 4 sessions x 60 trials (20 per class) **Preprocessing** - **Data state**: continuous **Signal Processing** - **Classifiers**: SVM-RBF - **Feature extraction**: autoregressive_coefficients, waveform_length, root_mean_square - **Frequency bands**: bandpass=[6.0, 35.0] Hz **Cross-Validation** - **Method**: 10x10-fold - **Folds**: 10 - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control, rehabilitation - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0174161 - **License**: CC0-1.0 - **Investigators**: Mojgan Tavakolan, Zack Frehlick, Xinyi Yong, Carlo Menon - **Senior author**: Carlo Menon - **Institution**: Simon Fraser University - **Department**: MENRVA Research Group, Schools of Mechatronic Systems Engineering and Engineering Science - **Country**: CA - **Repository**: Zenodo - **Data URL**: 
[https://zenodo.org/records/18967205](https://zenodo.org/records/18967205) - **Publication year**: 2017 - **Ethics approval**: Simon Fraser University Office of Research Ethics - **Keywords**: motor imagery, EEG, upper extremity, same limb, time-domain features, SVM, BCI **References** M. Tavakolan, Z. Frehlick, X. Yong, and C. Menon, “Classifying three imaginary states of the same upper extremity using time-domain features,” PLoS ONE, vol. 12, no. 3, e0174161, 2017. DOI: 10.1371/journal.pone.0174161 M. Tavakolan, Z. Frehlick, X. Yong, and C. Menon, “Data from: Classifying three imaginary states of the same upper extremity using time-domain features,” Dryad, 2017. DOI: 10.5061/dryad.6qs86 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000151` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Motor imagery dataset for three imaginary states of the same upper extremity | | Author (year) | `Tavakolan2017` | | Canonical | — | | Importable as | `NM000151`, `Tavakolan2017` | | Year | 2019 | | Authors | Mojgan Tavakolan, Zack Frehlick, Xinyi Yong, Carlo Menon | | License | CC0-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000151) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) | [Source URL](https://nemar.org/dataexplorer/detail/nm000151) | ## Technical Details - Subjects: 12 - Recordings: 46 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.901242777777778 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 3.2 GB - File count: 46 - Format: BIDS - License: CC0-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000151](https://openneuro.org/datasets/nm000151) - NeMAR: [nm000151](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) ## API Reference Use the `NM000151` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset for three imaginary states of the same upper extremity * **Study:** `nm000151` (NeMAR) * **Author (year):** `Tavakolan2017` * **Canonical:** — Also importable as: `NM000151`, `Tavakolan2017`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000151](https://openneuro.org/datasets/nm000151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000151](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) ### Examples ```pycon >>> from eegdash.dataset import NM000151 >>> dataset = NM000151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000151) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000152: eeg dataset, 12 subjects *Upper-limb elbow-centered motor imagery dataset (10 classes)* Access recordings and metadata through EEGDash. **Citation:** Xin Zhang, Xinyi Yong, Carlo Menon (2019). *Upper-limb elbow-centered motor imagery dataset (10 classes)*. Modality: eeg Subjects: 12 Recordings: 180 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000152 dataset = NM000152(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000152(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000152( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000152, title = {Upper-limb elbow-centered motor imagery dataset (10 classes)}, author = {Xin Zhang and Xinyi Yong and Carlo Menon}, } ``` ## About This Dataset **Upper-limb elbow-centered motor imagery dataset (10 classes)** Upper-limb elbow-centered motor imagery dataset (10 classes). 
**Dataset Overview** - **Code**: Zhang2017 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0188293 - **Subjects**: 12 - **Sessions per subject**: 1 - **Events**: rest=1, elbow_flexion=2, drawer=3, soup=4, weight_lifting=5, door=6, plate_cleaning=7, combing=8, pizza_cutting=9, pick_and_place=10 - **Trial interval**: [0, 4] s - **Runs per session**: 15 - **File format**: BCI2000 **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 17 - **Channel types**: eeg=17 - **Hardware**: EGI Geodesic Net Amps 400 series (N400) - **Software**: BCI2000 (Stimulus Presentation mode) - **Reference**: Cz - **Ground**: COM - **Sensor type**: Ag/AgCl sponge - **Line frequency**: 60.0 Hz - **Online filters**: {‘bandpass’: [0.1, 40]} **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Age**: min=20, max=33 - **Gender distribution**: male=10, female=2 - **Handedness**: {‘right’: 11, ‘left’: 1} - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 10 - **Class labels**: rest, elbow_flexion, drawer, soup, weight_lifting, door, plate_cleaning, combing, pizza_cutting, pick_and_place - **Trial duration**: 5.0 s - **Study design**: Upper-limb elbow-centered motor imagery with 9 goal-directed tasks plus rest. Each trial: 4-6 s cue (randomized) then 4-6 s rest (randomized). - **Feedback type**: none - **Stimulus type**: picture cues - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Instructions**: Participants were asked to repetitively perform the kinesthetic motor imagery task displayed on the screen without actually moving. 
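Given the [0, 4] s trial interval and 1000 Hz sampling rate listed above, each trial spans 4000 samples per channel. A minimal numpy sketch of that windowing, using a synthetic signal and invented cue times in place of a loaded recording:

```python
import numpy as np

# Sketch: cutting [0, 4] s trial windows out of a continuous recording,
# matching the 1000 Hz sampling rate and 17 channels listed above.
# The data array and cue onsets are synthetic stand-ins.
sfreq = 1000.0
n_channels = 17
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, 120 * int(sfreq)))  # 2 min of fake EEG

event_onsets_s = [5.0, 15.0, 25.0]   # cue times in seconds (synthetic)
tmin, tmax = 0.0, 4.0                # trial interval from the protocol

epochs = np.stack([
    data[:, int((t + tmin) * sfreq): int((t + tmax) * sfreq)]
    for t in event_onsets_s
])
print(epochs.shape)  # (3, 17, 4000): trials x channels x samples
```

With real data, the same slicing is what `mne.Epochs` performs once the cue annotations are converted to events.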
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest elbow_flexion ``` ```text ├─ Sensory-event └─ Label/elbow_flexion drawer ``` ```text ├─ Sensory-event └─ Label/drawer soup ``` ```text ├─ Sensory-event └─ Label/soup weight_lifting ``` ```text ├─ Sensory-event └─ Label/weight_lifting door ``` ```text ├─ Sensory-event └─ Label/door plate_cleaning ``` ```text ├─ Sensory-event └─ Label/plate_cleaning combing ``` ```text ├─ Sensory-event └─ Label/combing pizza_cutting ``` ```text ├─ Sensory-event └─ Label/pizza_cutting pick_and_place ``` ```text ├─ Sensory-event └─ Label/pick_and_place ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: elbow_flexion, drawer, soup, weight_lifting, door, plate_cleaning, combing, pizza_cutting, pick_and_place - **Cue duration**: 5.0 s - **Imagery duration**: 5.0 s **Data Structure** - **Trials**: 330 - **Trials context**: 15 runs of 24 trials each (4 rest + 4 elbow + 2 each of 8 goal tasks). Total: 60 rest + 30 per MI task = 330. 
**Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: LDA, DAL - **Feature extraction**: bandpower, CSP, FBCSP - **Frequency bands**: bandpass=[6.0, 35.0] Hz; mu=[7.0, 13.0] Hz; beta=[13.0, 30.0] Hz - **Spatial filters**: CSP, FBCSP **Cross-Validation** - **Method**: 5x5-fold - **Folds**: 5 - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_control, rehabilitation - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0188293 - **License**: CC BY 4.0 - **Investigators**: Xin Zhang, Xinyi Yong, Carlo Menon - **Senior author**: Carlo Menon - **Institution**: Simon Fraser University - **Department**: School of Engineering Science - **Country**: CA - **Repository**: Figshare - **Data URL**: [https://doi.org/10.6084/m9.figshare.5579461.v1](https://doi.org/10.6084/m9.figshare.5579461.v1) - **Publication year**: 2017 - **Keywords**: motor imagery, upper limb, elbow, BCI, EEG, kinesthetic imagery **References** X. Zhang, X. Yong, and C. Menon, “Evaluating the versatility of EEG models generated from motor imagery tasks: An exploratory investigation on upper-limb elbow-centered motor imagery tasks,” PLoS ONE, vol. 12, no. 11, e0188293, 2017. DOI: 10.1371/journal.pone.0188293 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000152` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Upper-limb elbow-centered motor imagery dataset (10 classes) | | Author (year) | `Zhang2017` | | Canonical | — | | Importable as | `NM000152`, `Zhang2017` | | Year | 2019 | | Authors | Xin Zhang, Xinyi Yong, Carlo Menon | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000152) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) | [Source URL](https://nemar.org/dataexplorer/detail/nm000152) | ## Technical Details - Subjects: 12 - Recordings: 180 - Tasks: 1 - Channels: 17 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.24525 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 1.6 GB - File count: 180 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000152](https://openneuro.org/datasets/nm000152) - NeMAR: [nm000152](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) ## API Reference Use the `NM000152` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Upper-limb elbow-centered motor imagery dataset (10 classes) * **Study:** `nm000152` (NeMAR) * **Author (year):** `Zhang2017` * **Canonical:** — Also importable as: `NM000152`, `Zhang2017`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000152](https://openneuro.org/datasets/nm000152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000152](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) ### Examples ```pycon >>> from eegdash.dataset import NM000152 >>> dataset = NM000152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000152) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000155: emg dataset, 6 subjects *MUniverse Caillet et al 2023* Access recordings and metadata through EEGDash. **Citation:** Arnault H. Caillet, Simon Avrillon, Aritra Kundu, Tianyi Yu, Andrew T. M. Phillips, Luca Modenese, Dario Farina (2023). *MUniverse Caillet et al 2023*. [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) Modality: emg Subjects: 6 Recordings: 11 License: CC0 BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000155 dataset = NM000155(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000155(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000155( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000155, title = {MUniverse Caillet et al 2023}, author = {Arnault H. Caillet and Simon Avrillon and Aritra Kundu and Tianyi Yu and Andrew T. M. 
Phillips and Luca Modenese and Dario Farina}, doi = {10.7910/DVN/F9GWIW}, url = {https://doi.org/10.7910/DVN/F9GWIW}, } ``` ## About This Dataset **Caillet et al 2023: HDsEMG recordings** BIDS-formatted version of the HDsEMG dataset published in *[Caillet et al. 2023](https://doi.org/10.1523/ENEURO.0064-23.2023)*. **Population** Six healthy male subjects (age: 26 +/- 4 years; height: 174 +/- 7 cm; weight: 66 +/- 15 kg). **Protocol description** ### View full README **Caillet et al 2023: HDsEMG recordings** BIDS-formatted version of the HDsEMG dataset published in *[Caillet et al. 2023](https://doi.org/10.1523/ENEURO.0064-23.2023)*. **Population** Six healthy male subjects (age: 26 +/- 4 years; height: 174 +/- 7 cm; weight: 66 +/- 15 kg). **Protocol description** Each participant performed two trapezoidal contractions at 30 percent and 50 percent MVC, with 120 s of rest in between, consisting of linear ramps up and down performed at 5 percent per second and a plateau maintained for 20 and 15 s at 30 percent and 50 percent MVC, respectively. The order of the contractions was randomized. **Electrode placement** First, the skin was shaved, abraded and cleansed with 70 percent ethyl alcohol. Next, four grids (64 channels each) were carefully positioned side-to-side with a 4-mm distance between the electrodes at the edges of adjacent grids. The 256 electrodes were centered on the muscle belly (right tibialis anterior) and laid within the muscle perimeter identified through palpation. Two bands dampened with water were placed around the ankle as ground (R2) and reference (R1) electrodes. **Set-up description** The participant sat on a massage table with the hips flexed at 30 degrees, 0 degrees being the hip neutral position, and their knees fully extended. 
We fixed the foot of the dominant leg (right in all participants) onto the pedal of a commercial dynamometer (OT Bioelettronica) positioned at 30 degrees in the plantarflexion direction, 0 degrees being the foot perpendicular to the shank. The thigh was fixed to the massage table with an inextensible 3-cm-wide Velcro strap. The foot was fixed to the pedal with inextensible straps positioned around the proximal phalanx, metatarsal, and cuneiform. Force signals were recorded with a load cell (CCT Transducer s.a.s.) connected in series to the pedal using the same acquisition system as for the HD-EMG recordings. The dynamometer was positioned according to the participant’s lower limb length and secured to the massage table to avoid any motion during the contractions. **Missing data** There is no 50 percent MVC ramp-and-hold contraction for the second subject. **Coordinate systems** All electrode coordinates (reported in mm) have been converted to a common reference frame corresponding to the first EMG array (*space-grid1*). The positions of the reference and ground electrodes are given in a separate coordinate system (*space-lowerLeg*), expressed as a percentage of the lower leg length. **Conversion** The dataset has been converted semi-automatically using the [*MUniverse*](https://github.com/dfarinagroup/muniverse/tree/main) software. See *dataset_description.json* for further details. ## Dataset Information | Dataset ID | `NM000155` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MUniverse Caillet et al 2023 | | Author (year) | `Caillet2023` | | Canonical | — | | Importable as | `NM000155`, `Caillet2023` | | Year | 2023 | | Authors | Arnault H. Caillet, Simon Avrillon, Aritra Kundu, Tianyi Yu, Andrew T. M. 
Phillips, Luca Modenese, Dario Farina | | License | CC0 BY 4.0 | | Citation / DOI | [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000155) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) | [Source URL](https://nemar.org/dataexplorer/detail/nm000155) | ### Copy-paste BibTeX ```bibtex @dataset{nm000155, title = {MUniverse Caillet et al 2023}, author = {Arnault H. Caillet and Simon Avrillon and Aritra Kundu and Tianyi Yu and Andrew T. M. Phillips and Luca Modenese and Dario Farina}, doi = {10.7910/DVN/F9GWIW}, url = {https://doi.org/10.7910/DVN/F9GWIW}, } ``` ## Technical Details - Subjects: 6 - Recordings: 11 - Tasks: 2 - Channels: 259 - Sampling rate (Hz): 2048 - Duration (hours): 0.1227777777777777 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 448.3 MB - File count: 11 - Format: BIDS - License: CC0 BY 4.0 - DOI: [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) - Source: nemar - OpenNeuro: [nm000155](https://openneuro.org/datasets/nm000155) - NeMAR: [nm000155](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) ## API Reference Use the `NM000155` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000155(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Caillet et al 2023 * **Study:** `nm000155` (NeMAR) * **Author (year):** `Caillet2023` * **Canonical:** — Also importable as: `NM000155`, `Caillet2023`. Modality: `emg`. Subjects: 6; recordings: 11; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000155](https://openneuro.org/datasets/nm000155) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000155](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) DOI: [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) ### Examples ```pycon >>> from eegdash.dataset import NM000155 >>> dataset = NM000155(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000155) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) # NM000157: eeg dataset, 19 subjects *Mainsah2025-B* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2025). *Mainsah2025-B*. [10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 19 Recordings: 544 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000157 dataset = NM000157(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000157(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000157( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000157, title = {Mainsah2025-B}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-B** BigP3BCI Study B — 6x6 checkerboard, multi-session (19 healthy subjects). 
**Dataset Overview** > Code: Mainsah2025-B > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-B** BigP3BCI Study B — 6x6 checkerboard, multi-session (19 healthy subjects). **Dataset Overview** > Code: Mainsah2025-B > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 19 > Sessions per subject: 8 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 19 > Health status: patients > Clinical population: ALS **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` > NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000157` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-B | | Author (year) | `Mainsah2025` | | Canonical | — | | Importable as | `NM000157`, `Mainsah2025` | | Year | 2025 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000157) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) | [Source URL](https://nemar.org/dataexplorer/detail/nm000157) | ### Copy-paste BibTeX ```bibtex @dataset{nm000157, title = {Mainsah2025-B}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 19 - Recordings: 544 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256 (nominal; per-recording estimated rates vary between 256.00000 and 256.00013) - Duration (hours): 28.657737184336803 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.2 GB - File count: 544 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000157](https://openneuro.org/datasets/nm000157) - NeMAR: 
[nm000157](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) ## API Reference Use the `NM000157` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000157(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-B * **Study:** `nm000157` (NeMAR) * **Author (year):** `Mainsah2025` * **Canonical:** — Also importable as: `NM000157`, `Mainsah2025`. Modality: `eeg`. Subjects: 19; recordings: 544; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
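The query semantics described in the notes above (a user-supplied MongoDB-style filter AND-ed with the fixed dataset selector, with the key `dataset` reserved) can be illustrated with a small standalone sketch. `merge_query` is a hypothetical helper written for this page, not part of the EEGDash API:

```python
def merge_query(dataset_id, user_query=None):
    """Combine a fixed dataset filter with an optional MongoDB-style query."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        # The dataset selector is fixed by the class and must not be overridden.
        raise ValueError("query must not contain the key 'dataset'")
    if not user_query:
        return {"dataset": dataset_id}
    # AND the user filter with the dataset selector.
    return {"$and": [{"dataset": dataset_id}, user_query]}

merged = merge_query("nm000157", {"subject": {"$in": ["01", "02"]}})
print(merged)
# → {'$and': [{'dataset': 'nm000157'}, {'subject': {'$in': ['01', '02']}}]}
```

The same merged shape is what a MongoDB-style backend would evaluate when you pass `query={"subject": {"$in": ["01", "02"]}}` to a dataset class.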
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000157](https://openneuro.org/datasets/nm000157) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000157](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000157 >>> dataset = NM000157(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000157) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000158: eeg dataset, 50 subjects *Dataset [1] from the study on motor imagery [2]* Access recordings and metadata through EEGDash. **Citation:** Haijie Liu, Penghu Wei, Haochong Wang, Xiaodong Lv, Wei Duan, Meijie Li, Yan Zhao, Qingmei Wang, Xinyuan Chen, Gaige Shi, Bo Han, Junwei Hao (2022). *Dataset [1] from the study on motor imagery [2]*. 
Modality: eeg Subjects: 50 Recordings: 50 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000158 dataset = NM000158(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000158(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000158( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000158, title = {Dataset [1] from the study on motor imagery [2]}, author = {Haijie Liu and Penghu Wei and Haochong Wang and Xiaodong Lv and Wei Duan and Meijie Li and Yan Zhao and Qingmei Wang and Xinyuan Chen and Gaige Shi and Bo Han and Junwei Hao}, } ``` ## About This Dataset **Dataset [1] from the study on motor imagery [2]** Dataset [1] from the study on motor imagery [2]. **Dataset Overview** - **Code**: Liu2024 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-023-02787-8 ### View full README **Dataset [1] from the study on motor imagery [2]** Dataset [1] from the study on motor imagery [2]. 
**Dataset Overview** - **Code**: Liu2024 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-023-02787-8 - **Subjects**: 50 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: (0, 4) s - **File format**: MAT and EDF - **Data preprocessed**: True - **Contributing labs**: Xuanwu Hospital Capital Medical University **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 29 - **Channel types**: eeg=29, eog=2 - **Channel names**: C3, C4, CP3, CP4, Cz, F3, F4, F7, F8, FC3, FC4, FCz, FP1, FP2, FT7, FT8, Fz, HEOL, O1, O2, Oz, P3, P4, Pz, T3, T4, T5, T6, TP7, TP8, VEOR - **Montage**: 10-10 - **Hardware**: ZhenTec NT1 wireless multichannel EEG acquisition system - **Reference**: CPz - **Ground**: FPz - **Sensor type**: semi-dry Ag/AgCl - **Line frequency**: 50.0 Hz - **Impedance threshold**: 20 kOhm - **Cap manufacturer**: Xi’an ZhenTec Intelligence Technology Co., Ltd. - **Cap model**: ZhenTec NT1 - **Electrode type**: semi-dry - **Electrode material**: Ag/AgCl semi-dry electrodes based on highly absorbable porous sponges dampened with 3% NaCl solution - **Auxiliary channels**: EOG (2 ch, horizontal, vertical) **Participants** - **Number of subjects**: 50 - **Health status**: acute stroke patients - **Clinical population**: acute stroke patients (1-30 days post-stroke) - **Age**: mean=56.7, std=10.57, min=31.0, max=77.0 - **Gender distribution**: male=39, female=11 **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 8.0 s - **Trials per class**: left_hand=20, right_hand=20 - **Study design**: Imagining grasping a spherical object with left or right hand while watching a video of gripping motion. Each trial: instruction stage (prompt), MI stage (4s video-guided imagery), break stage (rest). 
- **Feedback type**: none - **Stimulus type**: video and audio - **Stimulus modalities**: visual, audio - **Synchronicity**: cue-based - **Mode**: offline - **Training/test split**: True - **Instructions**: Subject sat approximately 80 cm from computer screen. Computer played audio instructions. Patients imagined grasping spherical object with prompted hand during 4s video playback. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` right_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 2.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 40 - **Trials per class**: left_hand=20, right_hand=20 - **Trials context**: 40 trials per subject total (20 left-hand, 20 right-hand), alternating. Each trial: 8s total (instruction + 4s MI + break). Training/test split: 60%/40%. **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: baseline removal (mean removal method), FIR filtering (0.5-40 Hz) - **Highpass filter**: 0.5 Hz - **Lowpass filter**: 40.0 Hz - **Bandpass filter**: [0.5, 40.0] - **Filter type**: FIR - **Epoch window**: [0.0, 8.0] - **Notes**: Preprocessed with EEGLAB toolbox in MATLAB R2019b. Filtered data split into trials x channels x time-samples format by marker ‘1’. Some motion artifacts present in subjects 4, 5, 13, 14, 18, 24, 28, 33, 42, 43, 47, 48, 49. 
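The trial structure above (8 s trials at 500 Hz, with a 4 s motor-imagery window) maps directly onto array slicing. A minimal NumPy sketch, using synthetic data and hypothetical event onsets rather than the EEGDash API:

```python
import numpy as np

fs = 500.0            # sampling rate (Hz), per this dataset
tmin, tmax = 0.0, 4.0  # motor-imagery window relative to each trial onset

def extract_epochs(data, onsets_s, fs, tmin, tmax):
    """Slice (n_channels, n_times) data into (n_trials, n_channels, n_samples)."""
    n = int(round((tmax - tmin) * fs))
    starts = [int(round((o + tmin) * fs)) for o in onsets_s]
    return np.stack([data[:, s:s + n] for s in starts])

# Hypothetical continuous recording: 29 EEG channels, 60 s of noise.
rng = np.random.default_rng(0)
data = rng.standard_normal((29, int(60 * fs)))
epochs = extract_epochs(data, onsets_s=[2.0, 12.0, 22.0], fs=fs, tmin=tmin, tmax=tmax)
print(epochs.shape)  # → (3, 29, 2000)
```

In practice, MNE-Python's `Epochs` handles this (plus baseline correction and rejection) once events are read from the BIDS annotations; the sketch only shows the underlying indexing arithmetic.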
**Signal Processing** - **Classifiers**: CSP+LDA, FBCSP+SVM, TSLDA+DGFMDRM, TWFB+DGFMDM - **Feature extraction**: CSP, FBCSP, ERD/ERS, Riemannian geometry (SCMs on SPD manifolds), Tangent Space, Time-Frequency (Morlet wavelet), TWFB (Time Window Filter Bank) - **Frequency bands**: alpha=[8.0, 15.0] Hz; beta=[15.0, 30.0] Hz; analyzed=[8.0, 30.0] Hz - **Spatial filters**: CSP, FBCSP, Discriminant Geodesic Filtering **Cross-Validation** - **Method**: 10-fold cross-validation - **Folds**: 10 - **Evaluation type**: within_subject **Performance (Original Study)** - **CSP+LDA Accuracy**: 55.57 - **FBCSP+SVM Accuracy**: 57.57 - **TSLDA+DGFMDRM Accuracy**: 61.2 - **TWFB+DGFMDM Accuracy**: 72.21 - **TWFB+DGFMDM Kappa**: 0.4442 - **TWFB+DGFMDM Precision**: 0.7543 - **TWFB+DGFMDM Sensitivity**: 0.7845 **BCI Application** - **Applications**: rehabilitation - **Environment**: hospital - **Online feedback**: False **Tags** - **Pathology**: Stroke - **Modality**: Motor - **Type**: Motor Imagery **Documentation** - **Description**: EEG motor imagery dataset from 50 acute stroke patients performing left- and right-handed hand-grip imagination tasks. First open dataset addressing left- and right-handed motor imagery in acute stroke patients. - **DOI**: 10.1038/s41597-023-02787-8 - **License**: CC-BY-4.0 - **Investigators**: Haijie Liu, Penghu Wei, Haochong Wang, Xiaodong Lv, Wei Duan, Meijie Li, Yan Zhao, Qingmei Wang, Xinyuan Chen, Gaige Shi, Bo Han, Junwei Hao - **Senior author**: Junwei Hao - **Contact**: [haojunwei@vip.163.com](mailto:haojunwei@vip.163.com) - **Institution**: Xuanwu Hospital Capital Medical University - **Department**: Department of Neurology - **Address**: Beijing, 100053, China - **Country**: CN - **Repository**: Figshare - **Data URL**: [https://doi.org/10.6084/m9.figshare.21679035.v5](https://doi.org/10.6084/m9.figshare.21679035.v5) - **Publication year**: 2024 - **Funding**: National Natural Science Foundation of China (grant nos. 
82090043 and 81825008) - **Ethics approval**: Ethics Committee of Xuanwu Hospital of Capital Medical University (No. 2021-236) - **Keywords**: motor imagery, BCI, brain-computer interface, stroke patients, EEG, rehabilitation, acute stroke, hand-grip imagery, databases, scientific data **Abstract** The brain-computer interface (BCI) is a technology that involves direct communication with parts of the brain and has evolved rapidly in recent years; it has begun to be used in clinical practice, such as for patient rehabilitation. Patient electroencephalography (EEG) datasets are critical for algorithm optimization and clinical applications of BCIs but are rare at present. We collected data from 50 acute stroke patients with wireless portable saline EEG devices during the performance of two tasks: 1) imagining right-handed movements and 2) imagining left-handed movements. The dataset consists of four types of data: 1) the motor imagery instructions, 2) raw recording data, 3) pre-processed data after removing artefacts and other manipulations, and 4) patient characteristics. This is the first open dataset to address left- and right-handed motor imagery in acute stroke patients. **Methodology** 50 acute stroke patients (1-30 days post-stroke) performed 40 trials of hand-grip motor imagery (20 left, 20 right). Each 8s trial included instruction, 4s video-guided imagery, and rest phases. EEG recorded with ZhenTec NT1 wireless system (29 EEG + 2 EOG channels) at 500 Hz. Data organized in EEG-BIDS format with raw (.mat) and preprocessed (.edf) versions. Clinical assessments: NIHSS (mean=4.16, SD=2.85), MBI (mean=70.94, SD=18.22), mRS (mean=2.66, SD=1.44). 23 patients right hemiplegia, 27 left hemiplegia. **References** Liu, Haijie; Lv, Xiaodong (2022). EEG datasets of stroke patients. figshare. Dataset. DOI: [https://doi.org/10.6084/m9.figshare.21679035.v5](https://doi.org/10.6084/m9.figshare.21679035.v5) Liu, Haijie, Wei, P., Wang, H. et al. 
An EEG motor imagery dataset for brain computer interface in acute stroke patients. Sci Data 11, 131 (2024). DOI: [https://doi.org/10.1038/s41597-023-02787-8](https://doi.org/10.1038/s41597-023-02787-8) **Notes** To add the break and instruction events, set the `break_events` and `instr_events` parameters to `True` when instantiating the class. *Added in version 1.1.1.* Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000158` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Dataset [1] from the study on motor imagery [2] | | Author (year) | `Liu2024` | | Canonical | — | | Importable as | `NM000158`, `Liu2024` | | Year | 2022 | | Authors | Haijie Liu, Penghu Wei, Haochong Wang, Xiaodong Lv, Wei Duan, Meijie Li, Yan Zhao, Qingmei Wang, Xinyuan Chen, Gaige Shi, Bo Han, Junwei Hao | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000158) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) | [Source URL](https://nemar.org/dataexplorer/detail/nm000158) | ## Technical Details - Subjects: 50 - Recordings: 50 - Tasks: 1 - Channels: 29 - Sampling rate (Hz): 500.0 - Duration (hours): 4.444416666666666 - Pathology: Other - Modality: Multisensory - Type: Motor - Size on disk: 673.7 MB - File count: 50 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000158](https://openneuro.org/datasets/nm000158) - NeMAR: [nm000158](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) ## API Reference Use the `NM000158` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset [1] from the study on motor imagery [2] * **Study:** `nm000158` (NeMAR) * **Author (year):** `Liu2024` * **Canonical:** — Also importable as: `NM000158`, `Liu2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000158](https://openneuro.org/datasets/nm000158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000158](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) ### Examples ```pycon >>> from eegdash.dataset import NM000158 >>> dataset = NM000158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000158) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000159: emg dataset, 16 subjects *MUniverse Avrillon et al 2024* Access recordings and metadata through EEGDash. **Citation:** Simon Avrillon, Francois Hug, Roger M. Enoka, Arnault H. Caillet, Dario Farina (2024). *MUniverse Avrillon et al 2024*. [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) Modality: emg Subjects: 16 Recordings: 124 License: CC0 BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000159 dataset = NM000159(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000159(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000159( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000159, title = {MUniverse Avrillon et al 2024}, author = {Simon Avrillon and Francois Hug and Roger M. Enoka and Arnault H. 
Caillet and Dario Farina}, doi = {10.7910/DVN/L9OQY7}, url = {https://doi.org/10.7910/DVN/L9OQY7}, } ``` ## About This Dataset **Avrillon et al 2024: HDsEMG recordings** BIDS-formatted version of the HDsEMG dataset published in *[Avrillon et al. 2024](https://doi.org/10.7554/eLife.97085.3)*. Two experimental sessions consisted of either a series of submaximal (10-80 percent MVC) isometric ankle dorsiflexions or isometric knee extensions. EMG signals were recorded from either the tibialis anterior (TA) or the vastus lateralis (VL) muscles using four arrays of 64 surface electrodes for a total of 256 electrodes. **Population** 16 young individuals volunteered to participate either in the experiment on the tibialis anterior (n=8; age: 27 +/- 3) or on the vastus lateralis (n=8; age: 27 +/- 10). **Electrode placement** Surface EMG signals were recorded from the TA or the VL using 4 two-dimensional arrays of 64 electrodes (GR04MM1305 for the TA; GR08MM1305 for the VL, 13×5 gold-coated electrodes with one electrode absent on a corner; interelectrode distance: 4 and 8 mm, respectively; OT Bioelettronica, Italy). The grids were positioned over the muscle bellies to cover the largest surface while staying away from the boundaries of the muscle identified by manual palpation. 
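As a sanity check on the grid geometry described above, a short sketch (pure Python, not part of EEGDash or MUniverse; the helper name, origin, and the choice of which corner is absent are illustrative) generates the 13×5-minus-one-corner electrode layout:

```python
# Illustrative sketch: coordinates for a 13x5 electrode array with one
# corner electrode absent (65 - 1 = 64 sites), as described above.
def grid_positions(rows=13, cols=5, pitch_mm=4.0):
    # Regular grid with the given interelectrode distance (4 mm for the TA
    # array, 8 mm for the VL array), origin at an arbitrary corner.
    coords = [(c * pitch_mm, r * pitch_mm) for r in range(rows) for c in range(cols)]
    return coords[:-1]  # drop one corner position

ta_grid = grid_positions(pitch_mm=4.0)  # tibialis anterior array
vl_grid = grid_positions(pitch_mm=8.0)  # vastus lateralis array
print(len(ta_grid) * 4)  # 4 arrays of 64 electrodes -> 256
```

Four such arrays give the 256-electrode total quoted in the dataset description.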
Before placing the electrodes, the skin was shaved and cleaned with an abrasive pad and water. A biadhesive foam layer was used to hold each array of electrodes onto the skin, and conductive paste filled the cavities of the adhesive layers to make skin-electrode contact. **Tibialis anterior: ankle dorsiflexions** For the session of ankle dorsiflexions, participants sat on a massage table with the hips flexed at 45 degrees (0 degrees being the hip neutral position) and the knees fully extended. The foot of the dominant leg (right in all participants) was fixed onto the pedal of an ankle dynamometer (OT Bioelettronica, Turin, Italy) positioned at 30 degrees in the plantarflexion direction (0 degrees being the foot perpendicular to the shank). The thigh and the foot were fixed with inextensible Velcro straps. Force signals were recorded with a load cell (CCT Transducer s.a.s, Turin, Italy) connected in series to the pedal using the same acquisition system as for the EMG recordings (EMG-Quattrocento; OT Bioelettronica, Italy). **Vastus lateralis: knee extensions** For the session of knee extensions, participants sat on an instrumented chair with the hips flexed at 85 degrees (0 degrees being the hip neutral position) and the knees flexed at 85 degrees (0 degrees being the knees fully extended). The torso and the thighs were fixed to the chair with Velcro straps, and the tibiae were positioned against a rigid resistance connected to force sensors (Metitur, Jyvaskyla, Finland). The force signals were recorded using the same acquisition system as for the EMG recordings. **Coordinate systems** All electrode coordinates (reported in mm) have been converted to a common reference frame corresponding to the first EMG array (*space-grid1*). The positions of the reference and ground electrodes are reported in a separate coordinate system (*space-lowerLeg*), expressed in percent of the lower-leg length (knee-to-ankle). 
**Missing data** Contraction intensities 50, 60 and 70 % MVC are missing for subject 15. **Conversion** The dataset has been converted semi-automatically using the [*MUniverse*](https://github.com/dfarinagroup/muniverse/tree/main) software. See *dataset_description.json* for further details. ## Dataset Information | Dataset ID | `NM000159` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | MUniverse Avrillon et al 2024 | | Author (year) | `Avrillon2024` | | Canonical | — | | Importable as | `NM000159`, `Avrillon2024` | | Year | 2024 | | Authors | Simon Avrillon, Francois Hug, Roger M. Enoka, Arnault H. Caillet, Dario Farina | | License | CC0 BY 4.0 | | Citation / DOI | [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000159) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) | [Source URL](https://nemar.org/dataexplorer/detail/nm000159) | ### Copy-paste BibTeX ```bibtex @dataset{nm000159, title = {MUniverse Avrillon et al 2024}, author = {Simon Avrillon and Francois Hug and Roger M. Enoka and Arnault H. 
Caillet and Dario Farina}, doi = {10.7910/DVN/L9OQY7}, url = {https://doi.org/10.7910/DVN/L9OQY7}, } ``` ## Technical Details - Subjects: 16 - Recordings: 124 - Tasks: 8 - Channels: 258 - Sampling rate (Hz): 2048 - Duration (hours): 1.5522222222222222 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 5.5 GB - File count: 124 - Format: BIDS - License: CC0 BY 4.0 - DOI: [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) - Source: nemar - OpenNeuro: [nm000159](https://openneuro.org/datasets/nm000159) - NeMAR: [nm000159](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) ## API Reference Use the `NM000159` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Avrillon et al 2024 * **Study:** `nm000159` (NeMAR) * **Author (year):** `Avrillon2024` * **Canonical:** — Also importable as: `NM000159`, `Avrillon2024`. Modality: `emg`. Subjects: 16; recordings: 124; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000159](https://openneuro.org/datasets/nm000159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000159](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) DOI: [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) ### Examples ```pycon >>> from eegdash.dataset import NM000159 >>> dataset = NM000159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000159) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) # NM000160: eeg dataset, 18 subjects *Multi-joint upper-limb MI dataset from Yi et al. 2025* Access recordings and metadata through EEGDash. **Citation:** Weibo Yi, Jiaming Chen, Dan Wang, Xinkang Hu, Meng Xu, Fangda Li, Shuhan Wu, Jin Qian (2025). *Multi-joint upper-limb MI dataset from Yi et al. 2025*. 
Modality: eeg Subjects: 18 Recordings: 141 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000160 dataset = NM000160(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000160(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000160( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000160, title = {Multi-joint upper-limb MI dataset from Yi et al. 2025}, author = {Weibo Yi and Jiaming Chen and Dan Wang and Xinkang Hu and Meng Xu and Fangda Li and Shuhan Wu and Jin Qian}, } ``` ## About This Dataset **Multi-joint upper-limb MI dataset from Yi et al. 2025** Multi-joint upper-limb MI dataset from Yi et al. 2025. 
**Dataset Overview** - **Code**: Yi2025 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-05286-0 - **Subjects**: 18 - **Sessions per subject**: 1 - **Events**: hand_open_close=1, wrist_flex_ext=2, wrist_abd_add=3, elbow_pron_sup=4, elbow_flex_ext=5, shoulder_pron_sup=6, shoulder_abd_add=7, shoulder_flex_ext=8 - **Trial interval**: [0, 4] s - **Runs per session**: 8 - **File format**: CNT **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 62 - **Channel types**: eeg=62 - **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2 - **Montage**: standard_1005 - **Hardware**: Neuroscan SynAmps2 - **Reference**: left mastoid (M1) - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 18 - **Health status**: healthy - **Age**: min=22, max=27 - **Gender distribution**: female=10, male=8 - **Handedness**: right - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 8 - **Class labels**: hand_open_close, wrist_flex_ext, wrist_abd_add, elbow_pron_sup, elbow_flex_ext, shoulder_pron_sup, shoulder_abd_add, shoulder_flex_ext - **Trial duration**: 4.0 s - **Study design**: 8-class multi-joint upper-limb MI. 8 blocks of 40 trials (5 per class), 320 total trials per subject. 
- **Feedback type**: none - **Stimulus type**: video + text - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: cue-based - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
hand_open_close
├─ Sensory-event
└─ Label/hand_open_close

wrist_flex_ext
├─ Sensory-event
└─ Label/wrist_flex_ext

wrist_abd_add
├─ Sensory-event
└─ Label/wrist_abd_add

elbow_pron_sup
├─ Sensory-event
└─ Label/elbow_pron_sup

elbow_flex_ext
├─ Sensory-event
└─ Label/elbow_flex_ext

shoulder_pron_sup
├─ Sensory-event
└─ Label/shoulder_pron_sup

shoulder_abd_add
├─ Sensory-event
└─ Label/shoulder_abd_add

shoulder_flex_ext
├─ Sensory-event
└─ Label/shoulder_flex_ext
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: hand_open_close, wrist_flex_ext, wrist_abd_add, elbow_pron_sup, elbow_flex_ext, shoulder_pron_sup, shoulder_abd_add, shoulder_flex_ext - **Cue duration**: 2.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 320 - **Trials per class**: hand_open_close=40, wrist_flex_ext=40, wrist_abd_add=40, elbow_pron_sup=40, elbow_flex_ext=40, shoulder_pron_sup=40, shoulder_abd_add=40, shoulder_flex_ext=40 - **Blocks per session**: 8 - **Trials context**: 8 blocks x 40 trials (5 per class x 8 classes) **Signal Processing** - **Classifiers**: ShallowConvNet - **Feature extraction**: ERSP - **Frequency bands**: alpha=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz; bandpass=[4.0, 40.0] Hz - **Spatial filters**: CAR **Cross-Validation** - **Method**: 5-fold - **Folds**: 5 - **Evaluation type**: within_subject **BCI Application** - **Applications**: rehabilitation - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery 
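The event codes and trial counts listed above can be captured as a small bookkeeping sketch (pure Python, not part of EEGDash; the variable names are illustrative). It confirms that 8 blocks of 40 trials match 5 trials per class for 8 classes:

```python
# Illustrative sketch: event-code mapping and trial bookkeeping from the
# dataset overview above (8-class multi-joint upper-limb motor imagery).
event_id = {
    "hand_open_close": 1, "wrist_flex_ext": 2, "wrist_abd_add": 3,
    "elbow_pron_sup": 4, "elbow_flex_ext": 5, "shoulder_pron_sup": 6,
    "shoulder_abd_add": 7, "shoulder_flex_ext": 8,
}
blocks, trials_per_block, per_class_per_block = 8, 40, 5
total_trials = blocks * trials_per_block
# 8 blocks x 40 trials == 8 classes x 5 per class per block x 8 blocks
assert total_trials == len(event_id) * per_class_per_block * blocks == 320
print(total_trials)  # 320
```

A mapping like `event_id` is the shape that epoching tools such as MNE's `Epochs` expect for these annotations.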
**Documentation** - **DOI**: 10.1038/s41597-025-05286-0 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Weibo Yi, Jiaming Chen, Dan Wang, Xinkang Hu, Meng Xu, Fangda Li, Shuhan Wu, Jin Qian - **Institution**: Beijing University of Technology - **Country**: CN - **Data URL**: [https://figshare.com/articles/dataset/Data/24123303](https://figshare.com/articles/dataset/Data/24123303) - **Publication year**: 2025 **References** Yi, W., Chen, J., Wang, D., et al. (2025). A multi-modal dataset of EEG and fNIRS for motor imagery of multi-types of joints from unilateral upper limb. Scientific Data, 12, 953. [https://doi.org/10.1038/s41597-025-05286-0](https://doi.org/10.1038/s41597-025-05286-0) Notes: Added in version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000160` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multi-joint upper-limb MI dataset from Yi et al. 
2025 | | Author (year) | `Yi2025` | | Canonical | — | | Importable as | `NM000160`, `Yi2025` | | Year | 2025 | | Authors | Weibo Yi, Jiaming Chen, Dan Wang, Xinkang Hu, Meng Xu, Fangda Li, Shuhan Wu, Jin Qian | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000160) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) | [Source URL](https://nemar.org/dataexplorer/detail/nm000160) | ## Technical Details - Subjects: 18 - Recordings: 141 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 1000.0 - Duration (hours): 32.48256083333333 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 20.3 GB - File count: 141 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000160](https://openneuro.org/datasets/nm000160) - NeMAR: [nm000160](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) ## API Reference Use the `NM000160` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000160(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-joint upper-limb MI dataset from Yi et al. 2025 * **Study:** `nm000160` (NeMAR) * **Author (year):** `Yi2025` * **Canonical:** — Also importable as: `NM000160`, `Yi2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 18; recordings: 141; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000160](https://openneuro.org/datasets/nm000160) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000160](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) ### Examples ```pycon >>> from eegdash.dataset import NM000160 >>> dataset = NM000160(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000160) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000161: eeg dataset, 20 subjects *BNCI 2024-001 Handwritten Character Classification dataset* Access recordings and metadata through EEGDash. **Citation:** Markus R. Crell, Gernot R. 
Müller-Putz (2024). *BNCI 2024-001 Handwritten Character Classification dataset*. Modality: eeg Subjects: 20 Recordings: 40 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000161 dataset = NM000161(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000161(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000161( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000161, title = {BNCI 2024-001 Handwritten Character Classification dataset}, author = {Markus R. Crell and Gernot R. Müller-Putz}, } ``` ## About This Dataset **BNCI 2024-001 Handwritten Character Classification dataset** BNCI 2024-001 Handwritten Character Classification dataset. 
**Dataset Overview** - **Code**: BNCI2024-001 - **Paradigm**: imagery - **DOI**: 10.1016/j.compbiomed.2024.109132 - **Subjects**: 20 - **Sessions per subject**: 1 - **Events**: letter_a=1, letter_d=2, letter_e=3, letter_f=4, letter_j=5, letter_n=6, letter_o=7, letter_s=8, letter_t=9, letter_v=10 - **Trial interval**: [0, 3] s - **Runs per session**: 2 - **File format**: MAT **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 60 - **Channel types**: eeg=60, eog=4 - **Channel names**: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOG1, EOG2, EOG3, EOG4, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FT10, FT7, FT8, FT9, Fp1, Fp2, Fpz, Fz, M1, M2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, POz, Pz, T7, T8, TP7, TP8 - **Montage**: eogl1 eogl2 eogl3 eogr1 af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2 - **Hardware**: BrainVision - **Software**: EEGLAB - **Reference**: right mastoid - **Sensor type**: active electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 50 Hz notch - **Cap manufacturer**: Brain Products GmbH - **Auxiliary channels**: EOG (4 ch, horizontal, vertical) **Participants** - **Number of subjects**: 20 - **Health status**: healthy - **Age**: mean=27.5, std=3.92 - **Gender distribution**: male=11, female=11 - **Handedness**: right=22 - **BCI experience**: not specified - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: handwriting - **Number of classes**: 10 - **Class labels**: letter_a, letter_d, letter_e, letter_f, letter_j, letter_n, letter_o, letter_s, letter_t, letter_v - **Trial duration**: 8.5 s - **Study design**: Handwritten character task with 10 letters (a,d,e,f,j,n,o,s,t,v) using right index finger. 
Letters fade in (2s), remain visible (0.5s), fade out (2s), then 4s writing phase. Each letter written 60 times across 15 runs. - **Feedback type**: Training included visual feedback showing finger position; main paradigm had no feedback during writing (only fixation cross) - **Stimulus type**: letter cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline - **Training/test split**: True - **Instructions**: Start movement when letter fades out completely; write letter during 4s writing phase; stop hand at last position until next letter appears; execute home movement during fade-in to return to comfortable starting position - **Stimulus presentation**: fade_in_duration=2.0s, visible_duration=0.5s, fade_out_duration=2.0s, writing_duration=4.0s, total_trial_duration=8.5s **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
letter_a
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/a

letter_d
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/d

letter_e
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/e

letter_f
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/f

letter_j
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/j

letter_n
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/n

letter_o
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/o

letter_s
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/s

letter_t
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/t

letter_v
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Write
      ├─ Hand
      └─ Label/v
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: handwriting of letters a, d, e, f, j, n, o, s, t, v - **Cue duration**: 2.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 60 - **Blocks per session**: 15 - **Block duration**: 340 s - **Trials context**: per_class **Preprocessing** - **Data state**: raw - **Preprocessing applied**: True - **Steps**: notch filtering, bandpass filtering, bad channel interpolation, EOG artifact correction (SGEYESUB), ICA for artifact removal, re-referencing to CAR, bad segment rejection, lowpass filtering, downsampling, epoching - **Highpass filter**: 0.3 Hz - **Lowpass filter**: 70.0 Hz - **Bandpass filter**: 0.3-70.0 Hz - **Notch filter**: [50] Hz - **Filter type**: Butterworth - **Filter order**: 4 - **Artifact methods**: ICA, SGEYESUB - **Re-reference**: car - **Downsampled to**: 128 Hz - **Epoch window**: [-4.5, 4.0] s - **Notes**: Two datasets created: dataset 1 (0.3-3 Hz, 10 Hz sampling) and dataset 2 (0.3-40 Hz, 128 Hz sampling). Bad segments rejected if exceeding ±120 μV or kurtosis/probability > 7 SD from mean. 
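The stimulus-presentation and preprocessing parameters above pin down the per-trial timeline and epoch length; a small arithmetic sketch (pure Python, not part of EEGDash; variable names are illustrative) checks the numbers:

```python
# Illustrative sketch: trial timeline from the stimulus-presentation
# parameters and the epoch window from the preprocessing notes above.
phases = {"fade_in": 2.0, "visible": 0.5, "fade_out": 2.0, "writing": 4.0}
trial_s = sum(phases.values())      # 2 + 0.5 + 2 + 4 = 8.5 s per trial
fs_down = 128                       # downsampled rate (dataset 2)
epoch = (-4.5, 4.0)                 # epoch window, in seconds
epoch_samples = int((epoch[1] - epoch[0]) * fs_down)
print(trial_s, epoch_samples)  # 8.5 1088
```

So each epoch spans 8.5 s, i.e. 1088 samples at the 128 Hz downsampled rate.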
**Signal Processing** - **Classifiers**: Shrinkage Linear Discriminant Analysis (sLDA), EEGNet CNN - **Feature extraction**: low-frequency EEG, broadband EEG, continuous kinematics decoding - **Frequency bands**: analyzed=[0.3, 70.0] Hz - **Spatial filters**: CAR **Cross-Validation** - **Method**: 2-times repeated 5-fold cross-validation - **Folds**: 5 - **Evaluation type**: cross_session **Performance (Original Study)** - **Accuracy**: 26.2% - **10 Letters Direct Lowfreq**: 23.1 - **10 Letters Twostep**: 26.2 - **5 Letters Direct Lowfreq**: 39.0 - **5 Letters Twostep**: 46.7 - **Kinematics Correlation Range**: 0.10-0.57 - **Chance Level Correlation**: 0.04 **BCI Application** - **Applications**: communication, character_selection - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor **Documentation** - **Description**: Classification of handwritten letters from EEG through continuous kinematic decoding - **DOI**: 10.1016/j.compbiomed.2024.109132 - **License**: CC-BY-4.0 - **Investigators**: Markus R. Crell, Gernot R. Müller-Putz - **Senior author**: Gernot R. Müller-Putz - **Contact**: [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Graz University of Technology - **Department**: Institute of Neural Engineering - **Address**: Graz, Austria - **Country**: Austria - **Repository**: BNCI Horizon 2020 - **Data URL**: [https://bnci-horizon-2020.eu/database/data-sets](https://bnci-horizon-2020.eu/database/data-sets) - **Publication year**: 2024 - **Ethics approval**: Ethics Committee at Graz University of Technology - **Keywords**: Brain-computer interface (BCI), Electroencephalography (EEG), Handwriting, Continuous movement decoding, Non-invasive **Abstract** This study explores the classification of ten letters (a,d,e,f,j,n,o,s,t,v) from non-invasive neural signals of 20 participants. 
Letters were classified with direct classification from low-frequency and broadband EEG, and a two-step approach comprising continuous decoding of hand kinematics followed by classification. The two-step approach yielded significantly higher performances of 26.2% for ten letters and 46.7% for five letters. Hand kinematics could be reconstructed with correlation of 0.10 to 0.57 (average chance level: 0.04). Results suggest movement speed as the most informative kinematic for decoding short hand movements. **Methodology** Participants wrote 10 letters using right index finger with motion capture tracking (30 Hz, 2D positions). Two-round session with 7 runs (round 1) and 8 runs (round 2), 40 trials per run, 8.5s per trial. Training phase included 4 steps: observation, guided following, unguided following, and execution without feedback. Classification using sliding-window approach with sLDA and EEGNet CNN. Trajectory decoding using EEGNet architecture adapted for regression of position-based (px, py, vx, vy), distance-based (d, ḋ, θ, θ̇), and speed-based (s) kinematics. **References** Crell, M. R., & Müller-Putz, G. R. (2024). Handwritten character classification from EEG through continuous kinematic decoding. Computers in Biology and Medicine, 182, 109132. [https://doi.org/10.1016/j.compbiomed.2024.109132](https://doi.org/10.1016/j.compbiomed.2024.109132) Notes: Added in version 1.3.0. This dataset is notable for exploring non-invasive EEG-based handwritten character classification, which could enable communication for individuals with limited movement capacity. The study demonstrated that handwritten characters can be classified from non-invasive EEG and that decoding movement kinematics prior to classification improves performance. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000161` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2024-001 Handwritten Character Classification dataset | | Author (year) | `Crell2024` | | Canonical | — | | Importable as | `NM000161`, `Crell2024` | | Year | 2024 | | Authors | Markus R. Crell, Gernot R. Müller-Putz | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000161) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) | [Source URL](https://nemar.org/dataexplorer/detail/nm000161) | ## Technical Details - Subjects: 20 - Recordings: 40 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 500.0 - Duration (hours): 33.61688888888889 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 10.2 GB - File count: 40 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000161](https://openneuro.org/datasets/nm000161) - NeMAR: [nm000161](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) ## API Reference Use the `NM000161` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000161(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2024-001 Handwritten Character Classification dataset * **Study:** `nm000161` (NeMAR) * **Author (year):** `Crell2024` * **Canonical:** — Also importable as: `NM000161`, `Crell2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
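The sliding-window sLDA classification described in the study above can be sketched with scikit-learn, using its LDA with `'auto'` (Ledoit-Wolf) shrinkage as a stand-in for the study's implementation. The feature matrix here is synthetic; the window count, feature dimension, and class labels are illustrative and not derived from the dataset:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical sliding-window features: 400 windows x 120 features, 10 letter classes
X = rng.standard_normal((400, 120))
y = rng.integers(0, 10, size=400)

# Shrinkage LDA (sLDA): 'lsqr' solver with automatic Ledoit-Wolf shrinkage
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

# 2-times repeated 5-fold cross-validation, mirroring the study's evaluation scheme
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(len(scores))  # → 10 (5 folds x 2 repeats)
```

With random features, mean accuracy hovers around the 10-class chance level; on real epoched EEG features the same estimator-plus-CV scaffold applies unchanged.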
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000161](https://openneuro.org/datasets/nm000161) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000161](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) ### Examples ```pycon >>> from eegdash.dataset import NM000161 >>> dataset = NM000161(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000161) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000162: eeg dataset, 20 subjects *BNCI 2025-001 Motor Kinematics Reaching dataset* Access recordings and metadata through EEGDash. **Citation:** Nitikorn Srisrisawang, Gernot R Müller-Putz (2024). *BNCI 2025-001 Motor Kinematics Reaching dataset*. 
Modality: eeg Subjects: 20 Recordings: 20 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000162 dataset = NM000162(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000162(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000162( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000162, title = {BNCI 2025-001 Motor Kinematics Reaching dataset}, author = {Nitikorn Srisrisawang and Gernot R Müller-Putz}, } ``` ## About This Dataset **BNCI 2025-001 Motor Kinematics Reaching dataset** BNCI 2025-001 Motor Kinematics Reaching dataset. **Dataset Overview** - **Code**: BNCI2025-001 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/ada0ea ### View full README **BNCI 2025-001 Motor Kinematics Reaching dataset** BNCI 2025-001 Motor Kinematics Reaching dataset. 
**Dataset Overview** - **Code**: BNCI2025-001 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/ada0ea - **Subjects**: 20 - **Sessions per subject**: 1 - **Events**: up_slow_near=1, up_slow_far=2, up_fast_near=3, up_fast_far=4, down_slow_near=5, down_slow_far=6, down_fast_near=7, down_fast_far=8, left_slow_near=9, left_slow_far=10, left_fast_near=11, left_fast_far=12, right_slow_near=13, right_slow_far=14, right_fast_near=15, right_fast_far=16 - **Trial interval**: [0, 4] s - **File format**: EEG (BrainAmp) - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 67 - **Channel types**: eeg=67, eog=4 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOGL1, EOGL2, EOGL3, EOGR1, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, PPO1h, PPO2h, Pz, T7, T8, TP7, TP8, targetPosX, targetPoxY, validity, vx, vy, x, y - **Montage**: af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2 - **Hardware**: BrainAmp - **Software**: EEGLAB - **Reference**: common average - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Online filters**: 50 Hz notch - **Cap manufacturer**: Zebris Medical GmbH - **Cap model**: ELPOS - **Auxiliary channels**: EOG (4 ch, horizontal, vertical) **Participants** - **Number of subjects**: 20 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=26.1, std=4.1 - **Gender distribution**: male=12, female=8 - **Handedness**: {‘right’: 17, ‘left’: 3} - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: discrete reaching - **Number of classes**: 16 - **Class labels**: up_slow_near, up_slow_far, up_fast_near, up_fast_far, 
down_slow_near, down_slow_far, down_fast_near, down_fast_far, left_slow_near, left_slow_far, left_fast_near, left_fast_far, right_slow_near, right_slow_far, right_fast_near, right_fast_far - **Tasks**: discrete reaching - **Study design**: Four-direction center-out reaching task with varying speeds (quick/slow) and distances (near/far) following visual cue, self-paced execution with eye fixation on cue - **Feedback type**: visual (cue color: green for correct, red for incorrect direction) - **Stimulus type**: visual cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: cue-paced - **Mode**: both - **Instructions**: Follow cue with eyes, wait at least 1s after cue stops, mimic movement while fixating eyes on cue, move smoothly with whole arm avoiding wrist rotation **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
up_slow_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Upward
      ├─ Label/slow
      └─ Label/near
```

```text
up_slow_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Upward
      ├─ Label/slow
      └─ Label/far
```

```text
up_fast_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Upward
      ├─ Label/fast
      └─ Label/near
```

```text
up_fast_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Upward
      ├─ Label/fast
      └─ Label/far
```

```text
down_slow_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Downward
      ├─ Label/slow
      └─ Label/near
```

```text
down_slow_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Downward
      ├─ Label/slow
      └─ Label/far
```

```text
down_fast_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Downward
      ├─ Label/fast
      └─ Label/near
```

```text
down_fast_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Downward
      ├─ Label/fast
      └─ Label/far
```

```text
left_slow_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Left
      ├─ Label/slow
      └─ Label/near
```

```text
left_slow_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Left
      ├─ Label/slow
      └─ Label/far
```

```text
left_fast_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Left
      ├─ Label/fast
      └─ Label/near
```

```text
left_fast_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Left
      ├─ Label/fast
      └─ Label/far
```

```text
right_slow_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Right
      ├─ Label/slow
      └─ Label/near
```

```text
right_slow_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Right
      ├─ Label/slow
      └─ Label/far
```

```text
right_fast_near
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Right
      ├─ Label/fast
      └─ Label/near
```

```text
right_fast_far
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Reach
      ├─ Right
      ├─ Label/fast
      └─ Label/far
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Number of targets**: 4 - **Imagery tasks**: right_hand_reaching **Data Structure** - **Trials**: 960 - **Trials per class**: up=240, down=240, left=240, right=240 - **Blocks per session**: 10 - **Block duration**: 1200.0 s - **Trials context**: per_participant (before rejection) **Preprocessing** - **Data state**: preprocessed with eye artifact correction - **Preprocessing applied**: True - **Steps**: low-pass filter at 100 Hz, notch filter at 50 Hz, downsampling to 200 Hz, bad channel rejection and interpolation, bandpass filter 0.3-80 Hz, eye artifact
correction via SGEYESUB, ICA with FastICA algorithm, IC artifact removal, low-pass filter at 3 Hz, downsampling to 10 Hz, bad trial rejection, common average reference - **Highpass filter**: 0.3 Hz - **Lowpass filter**: 100.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.3, ‘high_cutoff_hz’: 80.0} - **Notch filter**: [50] Hz - **Filter type**: Butterworth - **Filter order**: 2 - **Artifact methods**: ICA, SGEYESUB (Sparse Generalized Eye Artifact Subspace Subtraction), IClabel plugin - **Re-reference**: common average - **Downsampled to**: 200.0 Hz - **Epoch window**: [-3.0, 4.0] - **Notes**: Frontal channels (AF7, AF3, AFz, AF4, AF8) and EOG removed prior to CAR to reduce residual eye artifacts. Final analysis used 55 channels. Eye blocks recorded separately for SGEYESUB model training. Bad trials rejected based on amplitude >200 µV or standard deviation >5SD. Movement-related bad trials rejected for incorrect direction, no movement, duration <0.2s or >4s, or movement initiated <0.5s after cue stop. 
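The band-pass step listed above (order-2 Butterworth, 0.3-80 Hz) can be reproduced with SciPy on a synthetic segment. The channel count and duration below are illustrative, and zero-phase (forward-backward) application is an assumption, not stated in the metadata:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 500.0  # acquisition sampling rate (Hz)
# Order-2 Butterworth band-pass, 0.3-80 Hz, as listed in the preprocessing steps
sos = butter(2, [0.3, 80.0], btype="bandpass", fs=fs, output="sos")

rng = np.random.default_rng(0)
eeg = rng.standard_normal((67, int(10 * fs)))  # synthetic 10 s, 67-channel segment
filtered = sosfiltfilt(sos, eeg, axis=-1)      # zero-phase filtering along time
print(filtered.shape)  # → (67, 5000)
```

The second-order-sections form (`output="sos"`) keeps the narrow 0.3 Hz high-pass edge numerically stable at this sampling rate.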
**Signal Processing** - **Classifiers**: sLDA (shrinkage Linear Discriminant Analysis) - **Feature extraction**: Low-frequency EEG (0.3-3 Hz), Source localization (sLORETA), ICA, ROI-based features - **Frequency bands**: delta=[0.3, 3.0] Hz; analyzed=[0.3, 100.0] Hz - **Spatial filters**: Common Average Reference, Source-space projection **Cross-Validation** - **Method**: stratified k-fold - **Folds**: 10 - **Evaluation type**: within_session **Performance (Original Study)** - **Direction Accuracy Cstp**: 39.75 - **Direction Accuracy Mon**: 42.42 - **Speed Accuracy Cstp**: 66.03 - **Speed Accuracy Mon**: 70.49 - **Distance Accuracy Cstp**: 60.83 - **Distance Accuracy Mon**: 55.41 - **Quick Direction Accuracy Cstp**: 44.12 - **Quick Direction Accuracy Mon**: 49.67 - **Slow Direction Accuracy Cstp**: 37.42 - **Slow Direction Accuracy Mon**: 35.89 **BCI Application** - **Applications**: motor_control, rehabilitation - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Motor **Documentation** - **Description**: EEG dataset investigating simultaneous encoding of speed, distance, and direction in discrete hand reaching movements using a four-direction center-out task - **DOI**: 10.1088/1741-2552/ada0ea - **License**: CC-BY-4.0 - **Investigators**: Nitikorn Srisrisawang, Gernot R Müller-Putz - **Senior author**: Gernot R Müller-Putz - **Contact**: [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Institute of Neural Engineering, Graz University of Technology - **Department**: Institute of Neural Engineering - **Address**: Stremayrgasse 16/IV, 8010 Graz, Austria - **Country**: Austria - **Repository**: GitHub - **Data URL**: [https://github.com/rkobler/eyeartifactcorrection](https://github.com/rkobler/eyeartifactcorrection) - **Publication year**: 2024 - **Funding**: Royal Thai Government (scholar funding for N.S.); BioTechMed Graz - **Ethics approval**: Ethical committee 
at the Graz University of Technology (EK-28/2024); Declaration of Helsinki - **Acknowledgements**: Members of the Graz BCI team, especially Markus Crell for providing motion capture software - **Keywords**: electroencephalography, brain–computer interface, source localization, discrete reaching, center-out task **Abstract** Objective. The complicated processes of carrying out a hand reach are still far from fully understood. In order to further the understanding of the kinematics of hand movement, the simultaneous representation of speed, distance, and direction in the brain is explored. Approach. We utilized electroencephalography (EEG) signals and hand position recorded during a four-direction center-out reaching task with either quick or slow speed, near and far distance. Linear models were employed in two modes: decoding and encoding. First, to test the discriminability of speed, distance, and direction. Second, to find the contribution of the cortical sources via the source localization. Additionally, we compared the decoding accuracy when using features obtained from EEG signals and source-localized EEG signals based on the results from the encoding model. Main results. Speed, distance, and direction can be classified better than chance. The accuracy of the speed was also higher than the distance, indicating a stronger representation of the speed than the distance. The speed and distance showed similar significant sources in the central regions related to the movement initiation, while the direction indicated significant sources in the parieto-occipital regions related to the movement preparation. The combination of the features from EEG and source localized signals improved the classification. Significance. Directional and non-directional information are represented in two separate networks. The quick movement resulted in improvement in the direction classification. 
Our results enhance our understanding of hand movement in the brain and help us make informed decisions when designing an improved paradigm in the future. **Methodology** Participants performed discrete reaching movements in four directions (up, down, left, right) with two speeds (quick: 0.4-0.8s cue duration, slow: 1.2-2.4s cue duration) and two distances (near: ~5cm/8.7cm actual, far: ~10cm/15.6cm actual). Each trial consisted of outward and inward movements. Visual cue moved from center to target position. Participants waited ≥1s after cue stop before mimicking movement with eyes fixated on cue. Hand position tracked via camera with pink marker on right index finger. 32 conditions (2 speed × 2 distance × 4 direction × 2 inward/outward) with 30 trials per class = 960 trials total per participant. After rejection, ~852 trials remained. EEG processed with EEGLAB on MATLAB R2019b. Signals epoched in two alignments: cue stop aligned (CStp: -3 to 4s) and movement onset aligned (MOn: -3 to 3s). Analysis included MRCP analysis, point-wise classification with instantaneous and windowed (500ms) features, encoding model using GLM, source localization using BEM with ICBM152 template and sLORETA inverse solution via Brainstorm, and source-space classification using data-driven ROIs derived from encoding model. Classification performed with shrinkage LDA. Permutation testing (1000 repetitions) used for significance. FDR controlled using Benjamini-Hochberg procedures. **References** Srisrisawang, N., & Muller-Putz, G. R. (2024). Simultaneous encoding of speed, distance, and direction in discrete reaching: an EEG study. Journal of Neural Engineering, 21(6). [https://doi.org/10.1088/1741-2552/ada0ea](https://doi.org/10.1088/1741-2552/ada0ea) Notes .. versionadded:: 1.3.0 This dataset is notable for its multi-parameter kinematic design, enabling study of how multiple movement parameters are represented simultaneously in EEG activity. 
The paradigm uses movement execution rather than motor imagery, making it complementary to MI datasets. The data is compatible with the MOABB motor imagery paradigm for processing purposes, though the underlying task is movement execution. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000162` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2025-001 Motor Kinematics Reaching dataset | | Author (year) | `Srisrisawang2025` | | Canonical | `BNCI2025` | | Importable as | `NM000162`, `Srisrisawang2025`, `BNCI2025` | | Year | 2024 | | Authors | Nitikorn Srisrisawang, Gernot R Müller-Putz | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000162) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) | [Source URL](https://nemar.org/dataexplorer/detail/nm000162) | ## Technical Details - Subjects: 20 - Recordings: 20 - Tasks: 1 - Channels: 67 - Sampling rate (Hz): 
500.0 - Duration (hours): 44.44853333333333 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 15.0 GB - File count: 20 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000162](https://openneuro.org/datasets/nm000162) - NeMAR: [nm000162](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) ## API Reference Use the `NM000162` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-001 Motor Kinematics Reaching dataset * **Study:** `nm000162` (NeMAR) * **Author (year):** `Srisrisawang2025` * **Canonical:** `BNCI2025` Also importable as: `NM000162`, `Srisrisawang2025`, `BNCI2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
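The [0, 4] s trial interval and the 16 event codes listed in the dataset overview can be turned into an epochs array with plain NumPy. Everything below is synthetic: the recording, the event onsets, and the code assignment are made up for illustration:

```python
import numpy as np

fs = 500.0
tmin, tmax = 0.0, 4.0  # trial interval listed in the dataset overview

rng = np.random.default_rng(0)
data = rng.standard_normal((67, int(60 * fs)))  # synthetic 60 s, 67-channel recording

# Hypothetical event onsets (in samples), one trial per event code 1-16
onsets = np.arange(0, 28_000, 1_750)
codes = np.arange(len(onsets)) + 1

win = int((tmax - tmin) * fs)
epochs = np.stack([data[:, s : s + win] for s in onsets])
print(epochs.shape)  # → (16, 67, 2000)
```

In practice the same slicing is handled by MNE-Python's epoching once events are read from the BIDS annotations.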
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000162](https://openneuro.org/datasets/nm000162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000162](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) ### Examples ```pycon >>> from eegdash.dataset import NM000162 >>> dataset = NM000162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000162) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000163: eeg dataset, 12 subjects *c-VEP and Burst-VEP dataset from Castillos et al. (2023)* Access recordings and metadata through EEGDash. **Citation:** Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). *c-VEP and Burst-VEP dataset from Castillos et al. (2023)*. 
Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000163 dataset = NM000163(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000163(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000163( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000163, title = {c-VEP and Burst-VEP dataset from Castillos et al. (2023)}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, } ``` ## About This Dataset **c-VEP and Burst-VEP dataset from Castillos et al. (2023)** c-VEP and Burst-VEP dataset from Castillos et al. (2023) **Dataset Overview** - **Code**: CastillosBurstVEP100 - **Paradigm**: cvep - **DOI**: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### View full README **c-VEP and Burst-VEP dataset from Castillos et al. (2023)** c-VEP and Burst-VEP dataset from Castillos et al. 
(2023) **Dataset Overview** - **Code**: CastillosBurstVEP100 - **Paradigm**: cvep - **DOI**: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) - **Subjects**: 12 - **Sessions per subject**: 1 - **Events**: 0=100, 1=101 - **Trial interval**: (0, 0.25) s - **File format**: EEGLAB .set - **Number of contributing labs**: 1 **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Channel names**: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8 - **Montage**: standard_1020 - **Hardware**: BrainProducts LiveAmp 32 - **Reference**: FCz - **Ground**: FPz - **Sensor type**: eeg - **Line frequency**: 50.0 Hz - **Online filters**: {‘notch’: {‘freq’: 50.0, ‘bandwidth’: 0.2, ‘order’: 16, ‘type’: ‘IIR cut-band’}} - **Impedance threshold**: 25.0 kOhm - **Cap manufacturer**: BrainProducts - **Cap model**: Acticap - **Electrode type**: active **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Age**: mean=30.6, std=7.1 - **Gender distribution**: female=4, male=8 - **Species**: human **Experimental Protocol** - **Paradigm**: cvep - **Task type**: target selection - **Number of classes**: 2 - **Class labels**: 0, 1 - **Trial duration**: 2.2 s - **Tasks**: visual attention, target selection - **Study design**: factorial within-subject - **Study domain**: BCI performance and user experience - **Feedback type**: none - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Focus on cued targets sequentially in random order - **Stimulus presentation**: software=PsychoPy, monitor=Dell P2419HC, resolution=1920x1080, refresh_rate_hz=60 **HED Event Annotations** Schema: HED 8.4.0 | Browse: 
[https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_0
```

```text
1
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_1
```

**Paradigm-Specific Parameters** - **Detected paradigm**: cvep - **Code type**: burst - **Number of targets**: 4 - **Cue duration**: 0.5 s **Data Structure** - **Trials**: 60 - **Blocks per session**: 15 - **Trials context**: 15 blocks x 4 trials per block = 60 trials per subject for burst c-VEP at 100% amplitude **Preprocessing** - **Data state**: raw **Signal Processing** - **Classifiers**: Convolutional Neural Network (CNN), Pearson correlation - **Feature extraction**: CNN spatial filtering (8x1 kernel, 16 filters), CNN temporal filtering (1x32 kernel with dilation 2, 8 filters), CNN 2D convolution (5x5 kernel, 4 filters), sliding windows (250ms, 2ms stride) - **Frequency bands**: analyzed=[0.1, 40.0] Hz - **Spatial filters**: CNN 8x1 spatial convolution (16 filters) **Cross-Validation** - **Method**: sequential train/test split - **Evaluation type**: offline classification, iterative calibration (1-6 blocks) **Performance (Original Study)** - **Accuracy**: 95.6% - **Itr**: 67.49 bits/min - **Selection Time S**: 1.5 - **Cnn Training Time S**: 15.0 - **Burst 40 Accuracy**: 94.2 - **Mseq 100 Accuracy**: 85.0 **BCI Application** - **Applications**: reactive BCI - **Environment**: controlled laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: EEG - **Type**: reactive BCI, c-VEP, visual evoked potentials **Documentation** - **Description**: Burst c-VEP based BCI study comparing novel burst code sequences to traditional m-sequences at two amplitude depths (100% and 40%) to optimize classification performance, minimize calibration data, and improve user experience - **DOI**: 10.1016/j.neuroimage.2023.120446 - **Associated paper DOI**:
10.1016/j.neuroimage.2023.120446 - **License**: CC-BY-4.0 - **Investigators**: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais - **Senior author**: Frédéric Dehais - **Contact**: [kalou.cabrera-castillos@isae-supaero.fr](mailto:kalou.cabrera-castillos@isae-supaero.fr) - **Institution**: Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO) - **Department**: Human Factors and Neuroergonomics - **Address**: 10 Av. Edouard Belin, Toulouse, 31400, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) - **Publication year**: 2023 - **Funding**: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France - **Ethics approval**: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki - **Acknowledgements**: This work was funded by AID (Powerbrain project), France, the AXA Research Fund Chair for Neuroergonomics, France and Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France. - **Keywords**: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort **External Links** - **Source**: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) - **Github**: [https://github.com/neuroergoISAE/burst_codes](https://github.com/neuroergoISAE/burst_codes) **Abstract** The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces Burst c-VEP, an innovative variant involving short bursts of aperiodic visual flashes at 2-4 flashes per second. 
The proposed burst c-VEP sequences exhibited higher accuracy (90.5%-95.6%) compared to m-sequence counterparts (71.4%-85.0%), with a mean selection time of 1.5 s. Reducing stimulus intensity to 40% amplitude depth only slightly decreased accuracy, to 94.2%, while substantially improving user experience. The collected dataset and CNN architecture implementation are shared through open-access repositories. **Methodology** Twelve healthy participants completed an offline 4-class c-VEP protocol using a factorial design. EEG was recorded at 500 Hz using a BrainProducts LiveAmp 32-channel system. Participants focused on cued targets with factorial manipulation of pattern type (burst vs m-sequence) and amplitude depth (100% vs 40%). Visual stimuli were presented on a 60 Hz Dell monitor. Burst codes consisted of brief flashes (~50 ms) with a minimum 200 ms inter-burst interval, while m-sequences used a Fibonacci-type LFSR with segmented 132-frame subsequences. A CNN architecture with spatial (8x1, 16 filters), temporal (1x32, 8 filters), and 2D convolution (5x5, 4 filters) layers decoded EEG using 250 ms sliding windows with a 2 ms stride. Calibration data ranged from 1 to 6 blocks (8.8-52.8 s). Classification used sequential train/test splits with Pearson correlation for target selection. VEP analysis examined amplitude, latency, and inter-trial coherence. Statistical analyses used 2×2 repeated measures ANOVA. **References** Kalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: [https://doi.org/10.5281/zenodo.8255618](https://doi.org/10.5281/zenodo.8255618) Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Notes Added in version 1.1.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000163` | |---|---| | Title | c-VEP and Burst-VEP dataset from Castillos et al.
(2023) | | Author (year) | `Castillos2023_VEP` | | Canonical | — | | Importable as | `NM000163`, `Castillos2023_VEP` | | Year | 2023 | | Authors | Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000163) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) | [Source URL](https://nemar.org/dataexplorer/detail/nm000163) | ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.878318888888889 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 160.1 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000163](https://openneuro.org/datasets/nm000163) - NeMAR: [nm000163](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) ## API Reference Use the `NM000163` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000163(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP and Burst-VEP dataset from Castillos et al. (2023) * **Study:** `nm000163` (NeMAR) * **Author (year):** `Castillos2023_VEP` * **Canonical:** — Also importable as: `NM000163`, `Castillos2023_VEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000163](https://openneuro.org/datasets/nm000163) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000163](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) ### Examples ```pycon >>> from eegdash.dataset import NM000163 >>> dataset = NM000163(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000163) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000165: emg dataset, 1 subject *MUniverse Grison et al 2025* Access recordings and metadata through EEGDash.
**Citation:** Agnese Grison, Irene Mendez Guerra, Alexander Kenneth Clarke, Silvia Muceli, Jaime Ibanez Pereda, Dario Farina (2025). *MUniverse Grison et al 2025*. [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) Modality: emg Subjects: 1 Recordings: 10 License: CC0 BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000165 dataset = NM000165(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000165(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000165( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000165, title = {MUniverse Grison et al 2025}, author = {Agnese Grison and Irene Mendez Guerra and Alexander Kenneth Clarke and Silvia Muceli and Jaime Ibanez Pereda and Dario Farina}, doi = {10.7910/DVN/ID1WNQ}, url = {https://doi.org/10.7910/DVN/ID1WNQ}, } ``` ## About This Dataset **Grison et al 2025: HDsEMG recordings** BIDS-formatted version of an HDsEMG dataset corresponding to *[Grison et al. 2025](https://doi.org/10.1113/JP287913)*. **Overview** One healthy subject performed 10 submaximal (10 to 70 percent MVC) isometric ankle dorsiflexions. EMG signals were recorded from the right tibialis anterior using two arrays of 64 surface electrodes (4 mm interelectrode distance, 13x5 configuration) for a total of 128 electrodes. **Protocol description** The participant performed one, two, or three trapezoidal contractions (with repetitions specified by the run labels) at 10, 15, 20, 25, 30, 35, 40, 50, 60, and 70 percent MVC, with 120 s of rest in between. Each contraction consisted of linear ramps up and down at 5 percent MVC per second and a plateau maintained for 20 s up to 30 percent MVC, 15 s for 35 and 40 percent MVC, and 10 s from 50 to 70 percent MVC. The order of the contractions was randomized. **Set-up description** The participant sat on a chair with the hips flexed at 30 degrees, 0 degrees being the hip neutral position, and their knees fully extended. The foot of the dominant leg (right) was fixed onto the pedal of a commercial dynamometer (OT Bioelettronica) positioned at 30 degrees in the plantarflexion direction. Force signals were recorded with a load cell (CCT Transducer s.a.s.) connected in series to the pedal using the same acquisition system as for the HD-EMG recordings. **Coordinate systems** All electrode coordinates (in mm) are reported in their respective grid coordinate system (*space-grid1* and *space-grid2*).
Their relative positions as well as the positions of the reference and ground electrodes are reported in a separate coordinate system (*space-lowerLeg*) expressed in percent of the lower leg length. **Labeled motor unit spike trains** Labeled motor unit spike trains were derived from concurrently recorded invasive EMG and curated by an experienced investigator (only available for *run-01* of each trial). **Conversion** The dataset has been converted semi-automatically using the [MUniverse](https://github.com/dfarinagroup/muniverse/tree/main) software. See *dataset_description.json* for further details. ## Dataset Information | Dataset ID | `NM000165` | |---|---| | Title | MUniverse Grison et al 2025 | | Author (year) | `Grison2025` | | Canonical | — | | Importable as | `NM000165`, `Grison2025` | | Year | 2025 | | Authors | Agnese Grison, Irene Mendez Guerra, Alexander Kenneth Clarke, Silvia Muceli, Jaime Ibanez Pereda, Dario Farina | | License | CC0 BY 4.0 | | Citation / DOI | [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000165) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) | [Source URL](https://nemar.org/dataexplorer/detail/nm000165) | ### Copy-paste BibTeX ```bibtex @dataset{nm000165, title = {MUniverse Grison et al 2025}, author = {Agnese Grison and Irene Mendez Guerra and Alexander Kenneth Clarke and Silvia Muceli and Jaime Ibanez Pereda and Dario Farina}, doi = {10.7910/DVN/ID1WNQ}, url = {https://doi.org/10.7910/DVN/ID1WNQ}, } ``` ## Technical Details - Subjects: 1 - Recordings: 10 - Tasks: 10 - Channels: 131 - Sampling rate (Hz): 10240 - Duration (hours): 0.149 - Pathology: Not specified
- Modality: — - Type: — - Size on disk: 1.3 GB - File count: 10 - Format: BIDS - License: CC0 BY 4.0 - DOI: [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) - Source: nemar - OpenNeuro: [nm000165](https://openneuro.org/datasets/nm000165) - NeMAR: [nm000165](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) ## API Reference Use the `NM000165` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000165(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Grison et al 2025 * **Study:** `nm000165` (NeMAR) * **Author (year):** `Grison2025` * **Canonical:** — Also importable as: `NM000165`, `Grison2025`. Modality: `emg`. Subjects: 1; recordings: 10; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
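As a mental model for how MongoDB-style filters such as `{"subject": {"$in": [...]}}` select records, here is a tiny matcher covering only equality and the `$in` operator (an illustration of the query semantics; the real backend is MongoDB, not this function):

```python
def matches(record: dict, query: dict) -> bool:
    """Sketch: evaluate a tiny subset of MongoDB-style filters,
    namely plain equality and the $in operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            # $in: the field's value must be one of the listed values
            if value not in cond["$in"]:
                return False
        elif value != cond:
            # plain equality match
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "05"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
# kept == [{"subject": "01"}, {"subject": "02"}]
```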
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000165](https://openneuro.org/datasets/nm000165) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000165](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) DOI: [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) ### Examples ```pycon >>> from eegdash.dataset import NM000165 >>> dataset = NM000165(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000165) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) * [eegdash.dataset.NM000104](eegdash.dataset.NM000104.md) * [eegdash.dataset.NM000105](eegdash.dataset.NM000105.md) * [eegdash.dataset.NM000106](eegdash.dataset.NM000106.md) * [eegdash.dataset.NM000107](eegdash.dataset.NM000107.md) * [eegdash.dataset.NM000108](eegdash.dataset.NM000108.md) # NM000166: eeg dataset, 95 subjects *M3CV: Multi-subject, Multi-session, Multi-task EEG Database* Access recordings and metadata through EEGDash. **Citation:** Gan Huang, Zhenxing Hu, Weize Chen, Shaorong Zhang, Zhen Liang, Linling Li, Li Zhang, Zhiguo Zhang (2022). *M3CV: Multi-subject, Multi-session, Multi-task EEG Database*.
[10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) Modality: eeg Subjects: 95 Recordings: 2469 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000166 dataset = NM000166(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000166(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000166( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000166, title = {M3CV: Multi-subject, Multi-session, Multi-task EEG Database}, author = {Gan Huang and Zhenxing Hu and Weize Chen and Shaorong Zhang and Zhen Liang and Linling Li and Li Zhang and Zhiguo Zhang}, doi = {10.1016/j.neuroimage.2022.119666}, url = {https://doi.org/10.1016/j.neuroimage.2022.119666}, } ``` ## About This Dataset **M3CV: Multi-subject, Multi-session, and Multi-task EEG Database** **Overview** This dataset contains 64-channel EEG from 95 healthy young adults (Age: 21.3 +/- 2.2 years; 73 males, 22 females) from Shenzhen University, recorded across 2 sessions on different days using a BrainAmp amplifier with 64-channel Easycap (standard 10-20 positions). Each subject performed 6 paradigms with 14 types of EEG signals across 15 runs per session (~50 min recording, ~2 h total including setup and rest). The original paper describes 14 signal types; the distributed data contains 13 task codes because nontarget P300 epochs were not included. The original data was recorded at 1000 Hz and preprocessed using Matlab 2018b with Letswave7 (letswave.cn), then distributed as individual 4-second epoched .mat files at 250 Hz. This BIDS version reconstructs pseudo-continuous EEG by concatenating the distributed epochs per subject/session/task. Event markers indicate epoch boundaries and stimulus onsets derived from the original marker channel. Ethics: Medical Ethics Committee, Health Science Center, Shenzhen University (No. 2019053). All subjects gave informed consent.
**Recording Setup** - Amplifier: BrainAmp (Brain Products GmbH, Germany) - Cap: 64-channel Easycap, standard 10-20 positions - Online reference: FCz; Ground: AFz - Sampling rate: 1000 Hz (distributed at 250 Hz after preprocessing) - Impedance: < 20 kOhm - Subject distance: ~1 meter from screen - Screen: 24.5-inch Alienware AW2518H (1920x1080, 240 Hz refresh rate) - Visual stimuli: Psychtoolbox-3 in Matlab - Sensory stimuli: Arduino Uno platform via serial port to Matlab - LED: 3 W, 2 cm diameter circular shield, 45 cm from eyes, 1074 Lux (measured by TES-1332A light meter) - Headphones: Nokia WH-102, 75 dB SPL average - Vibration motor: 1027 disk, 3 W rated, 80% efficiency, 10 mm x 2.7 mm, placed on subject’s left hand - Power line frequency: 50 Hz **Preprocessing (applied before distribution, Table 3 of paper)** Software: Matlab 2018b & Letswave7 (letswave.cn) 1. Bad channels identified manually, interpolated with the mean of 3 surrounding channels (22 of 95 subjects had bad channels) 2. Channel FCz (online reference) added back 3. Channel IO (EOG) removed 4. Bandpass filter: 0.01-200 Hz, 4th-order Butterworth, 24 dB/octave, zero-phase 5. Notch filter: 49-51 Hz bandstop, 4th-order Butterworth, 24 dB/octave, zero-phase 6. Re-referenced to mean of TP9 and TP10 (linked mastoids) 7. ICA artifact removal: eye blink and eye movement components identified by visual inspection of scalp topographies, time courses, and spectra (Huang et al., 2020) 8. Downsampled to 250 Hz 9. No bad epoch rejection (intentional for ML robustness/repeatability) Note: One subject was removed due to strong 10 Hz artifacts. The remaining 95 subjects are included in the distributed data.
**Paradigms (14 signal types, 13 in distributed data, 15 runs/session)** Run 01: Eyes Closed resting (restEC) — 1 min; fixate on LED (off) Run 02: Eyes Open resting (restEO) — 1 min; fixate ahead, minimal blink Run 03: Motor execution (motorFoot/motorRHand/motorLHand) — 20 trials each Run 04: Transient sensory (vep/aep/sep) — 30 trials each, random order, ~4.5 min Run 05: SSVEP (ssvep) — 10 Hz LED, 1 min Run 06: Motor execution — 20 trials each Run 07: P300 oddball (p300) — 600 stimuli (5% target=30, nontarget=570), 80 ms, ISI 200 ms, 2 min; red/white 300x300 px squares; subjects count red Run 08: SSVEP-SA (ssvepSA) — 6 frequencies (7/8/9/11/13/15 Hz), 12 segments x 10 s Run 09: SSAEP (ssaep) — 45.38 Hz, 2 min Run 10: Motor execution — 20 trials each Run 11: Transient sensory — 30 trials each, random order Run 12: SSSEP (sssep) — 22.04 Hz vibration, 2 min Run 13: Motor execution — 20 trials each Run 14: Eyes Closed resting (restEC) — 1 min Run 15: Eyes Open resting (restEO) — 1 min Motor execution details: subjects gripped (LH/RH) or lifted ankle (FT) at ~2x/sec, ~80% maximum voluntary contraction, 3 s duration until cue offset. No feedback, metronome, or hint was provided. Experimenters monitored movement quality during recording. **Notes on distributed data** - 14 signal types in the paper, 13 task codes in distributed data: nontarget P300 (paper task 10, trigger S10) was not distributed - P300: Only 30 target trials stored per subject; 570 nontarget discarded - SSVEP-SA: 6 frequency classes not distinguishable in marker; all marker=13 - Trigger codes in original recording (S1-S25) differ from CSV Task column (1-13).
CSV Task 10=FT, 11=RH, 12=LH (paper tasks 11-13, triggers S6-S8) - Epoch ordering within task may not reflect original temporal sequence - 11 “intruder” subjects in Testing set have hidden SubjectIDs (excluded here) **Subjects and Sessions** - 106 total subjects; 95 completed both sessions - Age: 21.3 +/- 2.2 years (95 subjects); 73 males, 22 females - Normal hearing, normal/corrected vision, no neurological history (self-report) - Between-session interval: 6 to 139 days (mean ~20 days) - ses-01 = session 1 (Enrollment set) - ses-02 = session 2 (Calibration + Testing sets) - 11 “intruder” subjects (session 2 only, hidden IDs) are excluded from BIDS **Competition context** Originally distributed for the M3CV EEG-based Biometric Competition on Kaggle (identification and verification tasks). Competition closed Apr 30, 2023; late submissions remain allowed. **Reference** Huang, G., Hu, Z., Chen, W., Zhang, S., Liang, Z., Li, L., Zhang, L., & Zhang, Z. (2022). M3CV: A multi-subject, multi-session, and multi-task database for EEG-based biometrics challenge. NeuroImage, 264, 119666. [https://doi.org/10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography.
Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `NM000166` | |---|---| | Title | M3CV: Multi-subject, Multi-session, Multi-task EEG Database | | Author (year) | `Huang2018` | | Canonical | — | | Importable as | `NM000166`, `Huang2018` | | Year | 2022 | | Authors | Gan Huang, Zhenxing Hu, Weize Chen, Shaorong Zhang, Zhen Liang, Linling Li, Li Zhang, Zhiguo Zhang | | License | CC BY 4.0 | | Citation / DOI | [doi:10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000166) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) | [Source URL](https://nemar.org/dataexplorer/detail/nm000166) | ### Copy-paste BibTeX ```bibtex @dataset{nm000166, title = {M3CV: Multi-subject, Multi-session, Multi-task EEG Database}, author = {Gan Huang and Zhenxing Hu and Weize Chen and Shaorong Zhang and Zhen Liang and Linling Li and Li Zhang and Zhiguo Zhang}, doi = {10.1016/j.neuroimage.2022.119666}, url = {https://doi.org/10.1016/j.neuroimage.2022.119666}, } ``` ## Technical Details - Subjects: 95 - Recordings: 2469 - Tasks: 13 - Channels: 64 - Sampling rate (Hz): 250 - Duration (hours): 100.475 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 21.6 GB - File count: 2469 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.1016/j.neuroimage.2022.119666 - Source: nemar - OpenNeuro: [nm000166](https://openneuro.org/datasets/nm000166) - NeMAR: [nm000166](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) ## API Reference Use the `NM000166` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) M3CV: Multi-subject, Multi-session, Multi-task EEG Database * **Study:** `nm000166` (NeMAR) * **Author (year):** `Huang2018` * **Canonical:** — Also importable as: `NM000166`, `Huang2018`. Modality: `eeg`. Subjects: 95; recordings: 2469; tasks: 13. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000166](https://openneuro.org/datasets/nm000166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000166](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) DOI: [https://doi.org/10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) ### Examples ```pycon >>> from eegdash.dataset import NM000166 >>> dataset = NM000166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000166) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000167: eeg dataset, 25 subjects *Motor imagery dataset from Ma et al. 2020* Access recordings and metadata through EEGDash. **Citation:** Xuelin Ma, Shuang Qiu, Changde Du, Junfeng Xing, Huiguang He (2019). *Motor imagery dataset from Ma et al. 2020*. 
Modality: eeg Subjects: 25 Recordings: 375 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000167 dataset = NM000167(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000167(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000167( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000167, title = {Motor imagery dataset from Ma et al. 2020}, author = {Xuelin Ma and Shuang Qiu and Changde Du and Junfeng Xing and Huiguang He}, } ``` ## About This Dataset **Motor imagery dataset from Ma et al. 2020** Motor imagery dataset from Ma et al. 2020.
**Dataset Overview** - **Code**: Ma2020 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-020-0535-2 - **Subjects**: 25 - **Sessions per subject**: 15 - **Events**: right_hand=1, right_elbow=2 - **Trial interval**: [0, 4] s - **File format**: CNT **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 62 - **Channel types**: eeg=62 - **Channel names**: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, CB1, O1, Oz, O2, CB2 - **Montage**: standard_1005 - **Hardware**: Neuroscan SynAmps2 - **Ground**: AFz - **Line frequency**: 50.0 Hz - **Impedance threshold**: 5 kOhm - **Auxiliary channels**: EOG (2 ch, horizontal, vertical), M2 **Participants** - **Number of subjects**: 25 - **Health status**: healthy - **Age**: mean=25.56, min=23, max=29 - **Gender distribution**: male=18, female=7 - **Handedness**: {‘right’: 25} - **BCI experience**: naive **Experimental Protocol** - **Paradigm**: imagery - **Task type**: motor_imagery_same_limb - **Number of classes**: 2 - **Class labels**: right_hand, right_elbow - **Trial duration**: 4.0 s - **Feedback type**: none - **Stimulus type**: visual cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects were asked to concentrate on performing the indicated motor imagery task (right hand or right elbow) using kinesthetic, not visual, motor imagery while avoiding any motion during imagination. 
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
right_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Right, Hand

right_elbow
├─ Sensory-event
└─ Label/right_elbow
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, right_elbow - **Cue duration**: 1.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 600 - **Trials per class**: right_hand=300, right_elbow=300 - **Blocks per session**: 15 - **Trials context**: 3 days x 5 MI sessions/day = 15 sessions, 40 trials/session (20 hand + 20 elbow) **Signal Processing** - **Classifiers**: FBCSP+SVM - **Feature extraction**: FBCSP - **Frequency bands**: alpha=[8.0, 13.0] Hz; beta=[20.0, 25.0] Hz - **Spatial filters**: CAR, FBCSP **Cross-Validation** - **Method**: 5-fold - **Folds**: 5 - **Evaluation type**: within_subject **BCI Application** - **Applications**: motor_rehabilitation, prosthetic_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: healthy - **Modality**: motor - **Type**: imagery **Documentation** - **DOI**: 10.1038/s41597-020-0535-2 - **License**: CC-BY-4.0 - **Investigators**: Xuelin Ma, Shuang Qiu, Changde Du, Junfeng Xing, Huiguang He - **Senior author**: Huiguang He - **Institution**: Chinese Academy of Sciences - **Department**: Institute of Automation - **Country**: CN - **Repository**: Harvard Dataverse - **Data URL**: [https://doi.org/10.7910/DVN/RBN3XG](https://doi.org/10.7910/DVN/RBN3XG) - **Publication year**: 2020 - **Funding**: National Key Research and Development Plan of China (No. 2017YFB1002502); National Natural Science Foundation of China (No. 61976209); National Natural Science Foundation of China (No.
61906188) - **Ethics approval**: Ethics Committee of the Institute of Automation, Chinese Academy of Sciences - **Keywords**: motor imagery, EEG, BCI, same limb, hand, elbow **References** X. Ma, S. Qiu, C. Du, J. Xing, and H. He, “Multi-channel EEG recording during motor imagery of different joints from the same limb,” Scientific Data, vol. 7, no. 1, p. 191, 2020. DOI: 10.1038/s41597-020-0535-2 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000167` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Motor imagery dataset from Ma et al. 
2020 | | Author (year) | `Ma2020` | | Canonical | — | | Importable as | `NM000167`, `Ma2020` | | Year | 2019 | | Authors | Xuelin Ma, Shuang Qiu, Changde Du, Junfeng Xing, Huiguang He | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000167) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) | [Source URL](https://nemar.org/dataexplorer/detail/nm000167) | ## Technical Details - Subjects: 25 - Recordings: 375 - Tasks: 1 - Channels: 64 (225), 62 (150) - Sampling rate (Hz): 1000.0 - Duration (hours): 35.204795833333336 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 22.4 GB - File count: 375 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000167](https://openneuro.org/datasets/nm000167) - NeMAR: [nm000167](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) ## API Reference Use the `NM000167` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000167(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset from Ma et al. 2020 * **Study:** `nm000167` (NeMAR) * **Author (year):** `Ma2020` * **Canonical:** — Also importable as: `NM000167`, `Ma2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 375; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000167](https://openneuro.org/datasets/nm000167) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000167](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) ### Examples ```pycon >>> from eegdash.dataset import NM000167 >>> dataset = NM000167(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000167) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000168: eeg dataset, 6 subjects *BNCI 2015-013 Error-Related Potentials dataset* Access recordings and metadata through EEGDash. **Citation:** Ricardo Chavarriaga, José del R. Millán (2010). *BNCI 2015-013 Error-Related Potentials dataset*. 
Modality: eeg Subjects: 6 Recordings: 120 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000168 dataset = NM000168(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000168(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000168( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000168, title = {BNCI 2015-013 Error-Related Potentials dataset}, author = {Ricardo Chavarriaga and José del R. Millán}, } ``` ## About This Dataset **BNCI 2015-013 Error-Related Potentials dataset** BNCI 2015-013 Error-Related Potentials dataset. **Dataset Overview** - **Code**: BNCI2015-013 - **Paradigm**: p300 - **DOI**: 10.1109/TNSRE.2010.2053387 ### View full README **BNCI 2015-013 Error-Related Potentials dataset** BNCI 2015-013 Error-Related Potentials dataset. 
**Dataset Overview** - **Code**: BNCI2015-013 - **Paradigm**: p300 - **DOI**: 10.1109/TNSRE.2010.2053387 - **Subjects**: 6 - **Sessions per subject**: 20 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 0.6] s - **File format**: matlab **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: Fp1, AF7, AF3, F1, F3, F5, F7, FT7, FC5, FC3, FC1, C1, C3, C5, T7, TP7, CP5, CP3, CP1, P1, P3, P5, P7, P9, PO7, PO3, O1, Iz, Oz, POz, Pz, CPz, Fpz, Fp2, AF8, AF4, AFz, Fz, F2, F4, F6, F8, FT8, FC6, FC4, FC2, FCz, Cz, C2, C4, C6, T8, TP8, CP6, CP4, CP2, P2, P4, P6, P8, P10, PO8, PO4, O2 - **Montage**: standard_1020 - **Hardware**: Biosemi ActiveTwo - **Sensor type**: active - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 6 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=27.83, std=2.23 - **Gender distribution**: male=5, female=1 - **Handedness**: not reported - **BCI experience**: not reported - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: monitoring - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 2.0 s - **Study design**: Error-related potential (ErrP) monitoring task where subjects observe a cursor moving towards a target. The cursor moves autonomously with 20% or 40% error probability. Subjects monitor performance without control. - **Feedback type**: visual - **Stimulus type**: cursor_movement - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: True - **Instructions**: Subjects seat in front of a computer screen and monitor a moving cursor (green square) and target location (blue for left, red for right). No control over cursor movement, only assess whether it performs properly. Fixate center of screen. 
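The two-class protocol above (Target=1, NonTarget=2, trial interval [0, 0.6] s at 512 Hz) lends itself to per-class ERP averaging. A hedged sketch on synthetic epochs — shapes follow the documented acquisition parameters, everything else is a placeholder:

```python
import numpy as np

# Sketch only: average Target vs NonTarget epochs into class ERPs, using
# this dataset's documented event codes (Target=1, NonTarget=2) and trial
# interval ([0, 0.6] s at 512 Hz). The epochs and labels are synthetic.
sfreq = 512.0
n_samples = int(0.6 * sfreq)                        # 307 samples per epoch
rng = np.random.default_rng(42)
epochs = rng.standard_normal((40, 64, n_samples))   # 40 epochs x 64 channels
labels = rng.choice([1, 2], size=40)                # 1=Target, 2=NonTarget

erp_target = epochs[labels == 1].mean(axis=0)       # average over trials
erp_nontarget = epochs[labels == 2].mean(axis=0)
print(erp_target.shape)  # (64, 307)
```

The error-related potential itself would then show up as the difference wave `erp_target - erp_nontarget` on real data.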
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Data Structure** - **Trials**: ~50 trials per block, ~64 trials per block for error_prob=0.20 - **Blocks per session**: 10 - **Block duration**: 180.0 s - **Trials context**: per_block **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: Gaussian classifier - **Feature extraction**: event-related potentials - **Frequency bands**: analyzed=[1.0, 10.0] Hz **Cross-Validation** - **Method**: train-test split - **Evaluation type**: cross_session **Performance (Original Study)** - **Accuracy**: 75.8% - **Correct Recognition Rate**: 63.2 - **Error Recognition Rate**: 75.8 **BCI Application** - **Applications**: error_detection - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Cognitive - **Type**: ErrP **Documentation** - **Description**: Dataset on EEG error-related potentials (ErrPs) elicited when users monitor the behavior of an external autonomous agent. One of the first studies showing that error correlates can be observed and decoded during monitoring of external agents without user control. - **DOI**: 10.1109/TNSRE.2010.2053387 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Ricardo Chavarriaga, José del R. Millán - **Senior author**: José del R. 
Millán - **Contact**: [ricardo.chavarriaga@epfl.ch](mailto:ricardo.chavarriaga@epfl.ch); [jose.millan@epfl.ch](mailto:jose.millan@epfl.ch) - **Institution**: Ecole Polytechnique Fédérale de Lausanne - **Department**: Defitech Chair in Brain-Machine Interface, CNBI, Center for Neuroprosthetics - **Country**: CH - **Repository**: BNCI Horizon - **Publication year**: 2010 - **Funding**: EC under Contract BACS FP6-IST-027140 - **Keywords**: error-related potentials, ErrP, brain-computer interface, reinforcement learning, monitoring, error detection **References** Chavarriaga, R., & Millán, J. D. R. (2010). Learning from EEG error-related potentials in noninvasive brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng., 18(4), 381-388. [https://doi.org/10.1109/TNSRE.2010.2053387](https://doi.org/10.1109/TNSRE.2010.2053387) **Notes**: Added in MOABB version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000168` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-013 Error-Related Potentials dataset | | Author (year) | `Chavarriaga2015` | | Canonical | `Chavarriaga2010` | | Importable as | `NM000168`, `Chavarriaga2015`, `Chavarriaga2010` | | Year | 2010 | | Authors | Ricardo Chavarriaga, José del R. Millán | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000168) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) | [Source URL](https://nemar.org/dataexplorer/detail/nm000168) | ## Technical Details - Subjects: 6 - Recordings: 120 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512.0 - Duration (hours): 6.0910460069444445 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 2.0 GB - File count: 120 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000168](https://openneuro.org/datasets/nm000168) - NeMAR: [nm000168](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) ## API Reference Use the `NM000168` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000168(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-013 Error-Related Potentials dataset * **Study:** `nm000168` (NeMAR) * **Author (year):** `Chavarriaga2015` * **Canonical:** `Chavarriaga2010` Also importable as: `NM000168`, `Chavarriaga2015`, `Chavarriaga2010`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 6; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000168](https://openneuro.org/datasets/nm000168) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000168](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) ### Examples ```pycon >>> from eegdash.dataset import NM000168 >>> dataset = NM000168(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000168) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000169: eeg dataset, 8 subjects *BNCI 2014-008 P300 dataset (ALS patients)* Access recordings and metadata through EEGDash. **Citation:** Angela Riccio, Luca Simione, Francesca Schettini, Alessia Pizzimenti, Maurizio Inghilleri, Marta Olivetti Belardinelli, Donatella Mattia, Febo Cincotti (2013). *BNCI 2014-008 P300 dataset (ALS patients)*. Modality: eeg Subjects: 8 Recordings: 8 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000169 dataset = NM000169(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000169(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000169( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000169, title = {BNCI 2014-008 P300 dataset (ALS patients)}, author = {Angela Riccio and Luca Simione and Francesca Schettini and Alessia Pizzimenti and Maurizio Inghilleri and Marta Olivetti Belardinelli and Donatella Mattia and Febo Cincotti}, } ``` ## About This Dataset **BNCI 2014-008 P300 dataset (ALS patients)** BNCI 2014-008 P300 dataset (ALS patients). **Dataset Overview** - **Code**: BNCI2014-008 - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2013.00732 ### View full README **BNCI 2014-008 P300 dataset (ALS patients)** BNCI 2014-008 P300 dataset (ALS patients). **Dataset Overview** - **Code**: BNCI2014-008 - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2013.00732 - **Subjects**: 8 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s - **File format**: Unknown - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: Fz, Cz, Pz, Oz, P3, P4, PO7, PO8 - **Montage**: 10-10 - **Hardware**: g.MOBILAB - **Software**: BCI2000 - **Reference**: right earlobe - **Ground**: left mastoid - **Sensor type**: active electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 0.1-10 Hz bandpass, 50 Hz notch - **Electrode type**: g.Ladybird - **Electrode material**: Ag/AgCl **Participants** - **Number of subjects**: 8 - **Health status**: ALS patients - **Clinical population**: amyotrophic lateral sclerosis - **Age**: mean=58.0, std=12.0, min=40, max=72 - **Gender distribution**: M=5, F=3 - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: P300 speller with 6x6 matrix for copy-spelling task in ALS patients - **Feedback type**: visual - **Stimulus type**: row-column intensification - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous 
- **Mode**: online - **Training/test split**: True - **Instructions**: Copy spell seven predefined words of five characters each by focusing attention on desired letters **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 36 - **Number of repetitions**: 10 - **Inter-stimulus interval**: 125.0 ms - **Stimulus onset asynchrony**: 250.0 ms **Data Structure** - **Trials**: 35 - **Blocks per session**: 7 - **Trials context**: per subject (7 words, 5 characters each) **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: bandpass filtering, notch filtering, artifact rejection, baseline correction - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 10.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.1, ‘high_cutoff_hz’: 10.0} - **Notch filter**: [50] Hz - **Filter type**: Butterworth - **Filter order**: 4 - **Artifact methods**: amplitude threshold rejection - **Re-reference**: right earlobe - **Epoch window**: [0.0, 1.0] - **Notes**: Epochs with peak amplitude >70 μV or <-70 μV were rejected. Baseline correction based on 200 ms preceding each epoch. 
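The preprocessing chain described above (0.1–10 Hz 4th-order Butterworth bandpass, 50 Hz notch, baseline correction on the 200 ms before each epoch, rejection of epochs exceeding ±70 µV) can be sketched with SciPy. This is an assumption-laden illustration on synthetic data — the published dataset already ships preprocessed, and the onsets and amplitudes below are placeholders:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

# Sketch of the documented preprocessing: 0.1-10 Hz Butterworth (order 4)
# bandpass, 50 Hz notch, 200 ms pre-epoch baseline correction, and
# rejection of epochs whose peak amplitude exceeds +/-70 uV.
sfreq = 256.0
b, a = butter(4, [0.1, 10.0], btype="bandpass", fs=sfreq)
bn, an = iirnotch(50.0, Q=30.0, fs=sfreq)

rng = np.random.default_rng(1)
raw = rng.standard_normal((8, int(30 * sfreq))) * 20    # 8 channels, uV scale
filtered = filtfilt(bn, an, filtfilt(b, a, raw, axis=1), axis=1)

onsets = [1024, 3072, 5120]          # hypothetical stimulus samples
base = int(0.2 * sfreq)              # 200 ms baseline window
epochs = []
for s in onsets:
    ep = filtered[:, s:s + int(sfreq)].copy()           # [0, 1] s epoch
    ep -= filtered[:, s - base:s].mean(axis=1, keepdims=True)
    if np.abs(ep).max() <= 70.0:     # amplitude threshold rejection
        epochs.append(ep)
print(f"{len(epochs)} of {len(onsets)} epochs kept")
```

On the real recordings the same steps would be applied per channel before feeding epochs to the SWLDA classifier mentioned below.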
**Signal Processing** - **Classifiers**: SWLDA - **Feature extraction**: temporal features, decimation **Cross-Validation** - **Method**: 7-fold - **Folds**: 7 - **Evaluation type**: within_subject **Performance (Original Study)** - **Accuracy**: 97.5% - **Binary Accuracy Offline**: 87.4 - **P300 Amplitude Mean Uv**: 3.3 **BCI Application** - **Applications**: communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: ALS - **Modality**: P300 - **Type**: ERP **Documentation** - **DOI**: 10.3389/fnhum.2013.00732 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Angela Riccio, Luca Simione, Francesca Schettini, Alessia Pizzimenti, Maurizio Inghilleri, Marta Olivetti Belardinelli, Donatella Mattia, Febo Cincotti - **Senior author**: Febo Cincotti - **Contact**: [a.riccio@hsantalucia.it](mailto:a.riccio@hsantalucia.it) - **Institution**: Fondazione Santa Lucia - **Department**: Neuroelectrical Imaging and BCI Laboratory - **Address**: Via Ardeatina, 306, 00179 Rome, Italy - **Country**: Italy - **Repository**: BNCI Horizon - **Publication year**: 2013 - **Funding**: Italian Agency for Research on ALS-ARiSLA project ‘Brindisys’; FARI project C26I12AJZZ at the Sapienza University of Rome - **Ethics approval**: Fondazione Santa Lucia ethic committee - **Keywords**: brain computer interface, amyotrophic lateral sclerosis, P300, attention, working memory **References** Riccio, A., Simione, L., Schettini, F., Pizzimenti, A., Inghilleri, M., Belardinelli, M. O., & Mattia, D. (2013). Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Frontiers in human neuroscience, 7, 732. [https://doi.org/10.3389/fnhum.2013.00732](https://doi.org/10.3389/fnhum.2013.00732) **Notes**: `BNCI2014_008` was previously named `BNCI2014008`; `BNCI2014008` will be removed in version 1.1. Added in MOABB version 0.4.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000169` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2014-008 P300 dataset (ALS patients) | | Author (year) | `Riccio2014` | | Canonical | `BNCI2014008` | | Importable as | `NM000169`, `Riccio2014`, `BNCI2014008` | | Year | 2013 | | Authors | Angela Riccio, Luca Simione, Francesca Schettini, Alessia Pizzimenti, Maurizio Inghilleri, Marta Olivetti Belardinelli, Donatella Mattia, Febo Cincotti | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000169) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) | [Source URL](https://nemar.org/dataexplorer/detail/nm000169) | ## Technical Details - Subjects: 8 - Recordings: 8 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 256.0 - Duration (hours): 3.018255208333333 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 75.9 
MB - File count: 8 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000169](https://openneuro.org/datasets/nm000169) - NeMAR: [nm000169](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) ## API Reference Use the `NM000169` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-008 P300 dataset (ALS patients) * **Study:** `nm000169` (NeMAR) * **Author (year):** `Riccio2014` * **Canonical:** `BNCI2014008` Also importable as: `NM000169`, `Riccio2014`, `BNCI2014008`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000169](https://openneuro.org/datasets/nm000169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000169](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) ### Examples ```pycon >>> from eegdash.dataset import NM000169 >>> dataset = NM000169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000169) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000170: eeg dataset, 10 subjects *BNCI 2025-002 Continuous 2D Trajectory Decoding dataset* Access recordings and metadata through EEGDash. **Citation:** Hannah S Pulferer, Brynja Ásgeirsdóttir, Valeria Mondini, Andreea I Sburlea, Gernot R Müller-Putz (2022). *BNCI 2025-002 Continuous 2D Trajectory Decoding dataset*. 
Modality: eeg Subjects: 10 Recordings: 90 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000170 dataset = NM000170(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000170(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000170( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000170, title = {BNCI 2025-002 Continuous 2D Trajectory Decoding dataset}, author = {Hannah S Pulferer and Brynja Ásgeirsdóttir and Valeria Mondini and Andreea I Sburlea and Gernot R Müller-Putz}, } ``` ## About This Dataset **BNCI 2025-002 Continuous 2D Trajectory Decoding dataset** BNCI 2025-002 Continuous 2D Trajectory Decoding dataset. **Dataset Overview** - **Code**: BNCI2025-002 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/ac689f ### View full README **BNCI 2025-002 Continuous 2D Trajectory Decoding dataset** BNCI 2025-002 Continuous 2D Trajectory Decoding dataset. 
**Dataset Overview** - **Code**: BNCI2025-002 - **Paradigm**: imagery - **DOI**: 10.1088/1741-2552/ac689f - **Subjects**: 10 - **Sessions per subject**: 3 - **Events**: snakerun=1, freerun=2, eyerun=3 - **Trial interval**: [0, 8] s - **Runs per session**: 3 - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 60 - **Channel types**: eeg=60, eog=4 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fz, HEOG1, HEOG2, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, PPO1h, PPO2h, Pz, T7, T8, TP7, TP8, VEOG1, VEOG2 - **Montage**: af7 af3 afz af4 af8 f7 f5 f3 f1 fz f2 f4 f6 f8 ft7 fc5 fc3 fc1 fcz fc2 fc4 fc6 ft8 t7 c5 c3 c1 cz c2 c4 c6 t8 tp7 cp5 cp3 cp1 cpz cp2 cp4 cp6 tp8 p7 p5 p3 p1 pz p2 p4 p6 p8 ppo1h ppo2h po7 po3 poz po4 po8 o1 oz o2 - **Hardware**: actiCAP, Brain Products GmbH - **Software**: MATLAB 2015b, Psychtoolbox, EEGLAB - **Reference**: right mastoid - **Ground**: Fpz - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Online filters**: anti-aliasing 25 Hz, notch 50 Hz - **Auxiliary channels**: EOG (4 ch, horizontal, vertical) **Participants** - **Number of subjects**: 10 - **Health status**: patients - **Clinical population**: Healthy (able-bodied participants) + 1 SCI participant - **Age**: mean=24.0, std=5.0 - **Gender distribution**: male=5, female=5 - **Handedness**: {‘right’: 10} - **BCI experience**: naive BCI users in terms of motor decoding; 4 had previous EEG experience - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Task type**: continuous 2D trajectory decoding - **Number of classes**: 3 - **Class labels**: snakerun, freerun, eyerun - **Trial duration**: 23.0 s - **Study design**: Attempted movement paradigm: participants instructed to attempt lower arm movement as if wielding a computer 
mouse while arm was strapped to armrest. Two task types: snakeruns (target tracking) and freeruns (self-paced shape tracing). Offline calibration followed by online feedback in 50% and 100% EEG feedback conditions. - **Feedback type**: visual (green dot showing EEG-decoded trajectory position) - **Stimulus type**: visual targets (white snake/shapes on black screen) - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: continuous - **Mode**: attempted movement - **Training/test split**: True - **Instructions**: Track snake with gaze and simultaneously attempt movement of strapped lower arm/hand as if wielding computer mouse; for freeruns: trace static shapes at own pace **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text snakerun ``` ```text ├─ Experiment-structure └─ Label/snakerun freerun ``` ```text ├─ Experiment-structure └─ Label/freerun eyerun ``` ```text ├─ Experiment-structure └─ Label/eyerun ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: attempted arm/hand movement (2D continuous trajectory) **Data Structure** - **Trials**: {‘calibration_eyeruns’: 38, ‘calibration_snakeruns’: 48, ‘50%_EEG_feedback_snakeruns’: 36, ‘100%_EEG_feedback_snakeruns’: 36, ‘freeruns’: 9} - **Trials context**: per_paradigm_type **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: anti-aliasing filter (25 Hz), notch filter (50 Hz), downsampling to 100 Hz, bad channel interpolation, eye artifact subtraction (SGEYESUB algorithm), removal of frontal (AF) row channels, high-pass filter (0.18 Hz), common average re-reference, pops and drifts attenuation (HEAR algorithm), low-pass filter (3 Hz), downsampling to 20 Hz - **Highpass filter**: 0.18 Hz - **Lowpass filter**: 3.0 Hz - **Notch filter**: [50] Hz - **Filter type**: Not specified - **Artifact methods**: SGEYESUB 
(eye artifact subtraction), HEAR (pops and drifts removal) - **Re-reference**: common average reference - **Downsampled to**: 20.0 Hz **Signal Processing** - **Classifiers**: PLS regression with UKF smoothing - **Feature extraction**: Temporal features (7 time points × 55 channels = 385 features), sLORETA (source localization) - **Spatial filters**: Minimum norm imaging **Cross-Validation** - **Method**: across-session - **Evaluation type**: within-subject, learning effects over sessions **Performance (Original Study)** - **Normalized Correlation Mean**: 0.31 - **Normalized Correlation Std**: 0.02 - **Correlation Range Rc**: 0.4-0.5 - **Nrmse Calibration**: 0.1 - **Nrmse 100% Feedback**: 0.12 **BCI Application** - **Applications**: neuroprosthesis, robotic arm control, upper limb restoration - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy, Spinal cord injury - **Modality**: Visual - **Type**: Motor attempt, Continuous decoding **Documentation** - **Description**: Continuous 2D trajectory decoding from attempted movement: across-session performance in able-bodied and feasibility in a spinal cord injured participant - **DOI**: 10.1088/1741-2552/ac689f - **License**: CC-BY-4.0 - **Investigators**: Hannah S Pulferer, Brynja Ásgeirsdóttir, Valeria Mondini, Andreea I Sburlea, Gernot R Müller-Putz - **Senior author**: Gernot R Müller-Putz - **Contact**: [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Institute of Neural Engineering, Graz University of Technology - **Address**: Stremayrgasse 16/IV, 8010 Graz, Austria - **Country**: Austria - **Repository**: GitHub - **Data URL**: [https://github.com/sccn/labstreaminglayer](https://github.com/sccn/labstreaminglayer) - **Publication year**: 2022 - **Funding**: European Research Council ERC-CoG 2015 681231 ‘Feel Your Reach’; NTU-TUG joint PhD program - **Ethics approval**: Medical University of Graz, votum number 32–583 ex 19/20 - **Keywords**: 
electroencephalography, trajectory decoding, learning effects, source localization, motor control, neuroplasticity, brain-computer interface **References** Kobler, R. J., Almeida, I., Sburlea, A. I., & Muller-Putz, G. R. (2022). Continuous 2D trajectory decoding from attempted movement: across-session performance in able-bodied and feasibility in a spinal cord injured participant. Journal of Neural Engineering, 19(3), 036005. [https://doi.org/10.1088/1741-2552/ac689f](https://doi.org/10.1088/1741-2552/ac689f) Notes .. versionadded:: 1.3.0 This dataset is designed for continuous decoding research, specifically for predicting 2D hand movement trajectories from EEG. Unlike classification-based motor imagery datasets, this dataset contains continuous trajectory labels suitable for regression-based decoders. The paradigm “imagery” is used for compatibility with MOABB’s motor imagery processing pipelines, though the actual task involves attempted (rather than imagined) movements. See Also BNCI2014_001 : 4-class motor imagery dataset BNCI2014_004 : 2-class motor imagery dataset Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000170` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2025-002 Continuous 2D Trajectory Decoding dataset | | Author (year) | `Pulferer2025` | | Canonical | `BNCI2025` | | Importable as | `NM000170`, `Pulferer2025`, `BNCI2025` | | Year | 2022 | | Authors | Hannah S Pulferer, Brynja Ásgeirsdóttir, Valeria Mondini, Andreea I Sburlea, Gernot R Müller-Putz | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000170) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) | [Source URL](https://nemar.org/dataexplorer/detail/nm000170) | ## Technical Details - Subjects: 10 - Recordings: 90 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 200.0 - Duration (hours): 28.178534722222224 - Pathology: Other - Modality: Visual - Type: Motor - Size on disk: 3.4 GB - File count: 90 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000170](https://openneuro.org/datasets/nm000170) - NeMAR: [nm000170](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) ## API Reference Use the `NM000170` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-002 Continuous 2D Trajectory Decoding dataset * **Study:** `nm000170` (NeMAR) * **Author (year):** `Pulferer2025` * **Canonical:** — Also importable as: `NM000170`, `Pulferer2025`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000170](https://openneuro.org/datasets/nm000170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000170](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) ### Examples ```pycon >>> from eegdash.dataset import NM000170 >>> dataset = NM000170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000170) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000171: eeg dataset, 14 subjects *BNCI 2014-002 Motor Imagery dataset* Access recordings and metadata through EEGDash. **Citation:** David Steyrl, Reinhold Scherer, Oswin Förstner, Gernot R. Müller-Putz (2015). *BNCI 2014-002 Motor Imagery dataset*. Modality: eeg Subjects: 14 Recordings: 112 License: CC-BY-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000171 dataset = NM000171(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000171(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000171( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000171, title = {BNCI 2014-002 Motor Imagery dataset}, author = {David Steyrl and Reinhold Scherer and Oswin Förstner and Gernot R. Müller-Putz}, } ``` ## About This Dataset **BNCI 2014-002 Motor Imagery dataset** BNCI 2014-002 Motor Imagery dataset. 
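The overview below lists a 512 Hz sampling rate, a [3, 8] s trial interval, 15 EEG channels, and 160 trials per subject, which fixes the epoch geometry before any data are downloaded. A quick sanity check in plain Python (no EEGDash install required; the variable names are illustrative only):

```python
# Epoch geometry for BNCI 2014-002, taken from the metadata below:
# 512 Hz sampling rate, trials cut from t=3 s to t=8 s after cue onset.
sfreq = 512.0
tmin, tmax = 3.0, 8.0

epoch_seconds = tmax - tmin
samples_per_epoch = int(epoch_seconds * sfreq)
print(samples_per_epoch)  # 2560 samples per 5 s trial

# 160 trials x 15 channels per subject gives the array shape a
# windowed decoding pipeline should expect for one subject:
n_trials, n_channels = 160, 15
print((n_trials, n_channels, samples_per_epoch))  # (160, 15, 2560)
```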
**Dataset Overview** - **Code**: BNCI2014-002 - **Paradigm**: imagery - **DOI**: 10.1007/s00500-012-0895-4 ### View full README **BNCI 2014-002 Motor Imagery dataset** BNCI 2014-002 Motor Imagery dataset. **Dataset Overview** - **Code**: BNCI2014-002 - **Paradigm**: imagery - **DOI**: 10.1007/s00500-012-0895-4 - **Subjects**: 14 - **Sessions per subject**: 1 - **Events**: right_hand=1, feet=2 - **Trial interval**: [3, 8] s - **Runs per session**: 8 - **File format**: MAT - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 15 - **Channel types**: eeg=15 - **Channel names**: EEG1, EEG2, EEG3, EEG4, EEG5, EEG6, EEG7, EEG8, EEG9, EEG10, EEG11, EEG12, EEG13, EEG14, EEG15 - **Montage**: Laplacian - **Hardware**: g.USBamp - **Software**: BCI2000 - **Reference**: left mastoid - **Ground**: right mastoid - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: 8th order Butterworth band-pass filters - **Cap manufacturer**: Guger Technologies OG - **Cap model**: g.LADYbird - **Electrode type**: active - **Electrode material**: Ag/AgCl **Participants** - **Number of subjects**: 14 - **Health status**: healthy - **Age**: min=20.0, max=30.0 - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: right_hand, feet - **Trial duration**: 5.0 s - **Study design**: Two-class motor imagery: right hand and feet. Cue-guided Graz-BCI training paradigm with recording, training, and feedback within a single session. 
- **Feedback type**: continuous - **Stimulus type**: bar_graph - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, feet - **Imagery duration**: 5.0 s **Data Structure** - **Trials**: 160 - **Trials per class**: right_hand=80, feet=80 - **Blocks per session**: 8 - **Trials context**: total per subject **Preprocessing** - **Data state**: minimally preprocessed (online filtered) - **Preprocessing applied**: True - **Steps**: bandpass filtering - **Filter type**: Butterworth - **Filter order**: 8 **Signal Processing** - **Classifiers**: Random Forest, Shrinkage LDA - **Feature extraction**: CSP, DFT, Bandpower - **Frequency bands**: alpha=[6, 14] Hz; beta=[14, 40] Hz - **Spatial filters**: CSP, Laplacian **Cross-Validation** - **Method**: train-test split - **Evaluation type**: within_subject **Performance (Original Study)** - **Accuracy**: 79.3% - **Peak Accuracy**: 89.67 - **Median Accuracy**: 80.42 **BCI Application** - **Applications**: communication, control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery **Documentation** - **DOI**: 10.1515/bmt-2014-0117 - **Associated paper DOI**: 10.3217/978-3-85125-378-8-61 - **License**: CC-BY-ND-4.0 - **Investigators**: David Steyrl, Reinhold Scherer, Oswin Förstner, Gernot R. 
Müller-Putz - **Contact**: [david.steyrl@tugraz.at](mailto:david.steyrl@tugraz.at); [reinhold.scherer@tugraz.at](mailto:reinhold.scherer@tugraz.at); [oswin.foerstner@student.tugraz.at](mailto:oswin.foerstner@student.tugraz.at); [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Graz University of Technology - **Department**: Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces - **Country**: Austria - **Repository**: BNCI Horizon - **Publication year**: 2014 - **Funding**: FP7 BackHome (No. 288566); FP7 ABC (No. 287774) - **Keywords**: brain-computer interfaces, machine learning, random forests, regularized linear discriminant analysis, sensorimotor rhythms **References** Scherer, R., Faller, J., Balderas, D., Friedrich, E. V., & Müller-Putz, G. (2015). Brain-computer interfacing: more than the sum of its parts. Soft Computing, 19(11), 3173-3186. [https://doi.org/10.1007/s00500-012-0895-4](https://doi.org/10.1007/s00500-012-0895-4) Notes .. note:: `BNCI2014_002` was previously named `BNCI2014002`. `BNCI2014002` will be removed in version 1.1. .. versionadded:: 0.4.0 See Also BNCI2014_001 : 4-class motor imagery (BCI Competition IV Dataset 2a) BNCI2014_004 : 2-class motor imagery (Dataset B) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000171` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2014-002 Motor Imagery dataset | | Author (year) | `Steyrl2014` | | Canonical | `BNCI2014002` | | Importable as | `NM000171`, `Steyrl2014`, `BNCI2014002` | | Year | 2015 | | Authors | David Steyrl, Reinhold Scherer, Oswin Förstner, Gernot R. Müller-Putz | | License | CC-BY-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000171) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) | [Source URL](https://nemar.org/dataexplorer/detail/nm000171) | ## Technical Details - Subjects: 14 - Recordings: 112 - Tasks: 1 - Channels: 15 - Sampling rate (Hz): 512.0 - Duration (hours): 6.865494791666666 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 554.3 MB - File count: 112 - Format: BIDS - License: CC-BY-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000171](https://openneuro.org/datasets/nm000171) - NeMAR: [nm000171](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) ## API Reference Use the `NM000171` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-002 Motor Imagery dataset * **Study:** `nm000171` (NeMAR) * **Author (year):** `Steyrl2014` * **Canonical:** `BNCI2014002` Also importable as: `NM000171`, `Steyrl2014`, `BNCI2014002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. 
Subjects: 14; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000171](https://openneuro.org/datasets/nm000171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000171](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) ### Examples ```pycon >>> from eegdash.dataset import NM000171 >>> dataset = NM000171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000171) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000172: eeg dataset, 14 subjects *High-gamma dataset described in Schirrmeister et al. 2017* Access recordings and metadata through EEGDash. **Citation:** Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball (2017). *High-gamma dataset described in Schirrmeister et al. 2017*. Modality: eeg Subjects: 14 Recordings: 28 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000172 dataset = NM000172(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000172(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000172( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000172, title = {High-gamma dataset described in Schirrmeister et al. 
2017}, author = {Robin Tibor Schirrmeister and Jost Tobias Springenberg and Lukas Dominique Josef Fiederer and Martin Glasstetter and Katharina Eggensperger and Michael Tangermann and Frank Hutter and Wolfram Burgard and Tonio Ball}, } ``` ## About This Dataset **High-gamma dataset described in Schirrmeister et al. 2017** High-gamma dataset described in Schirrmeister et al. 2017. **Dataset Overview** - **Code**: Schirrmeister2017 - **Paradigm**: imagery - **DOI**: 10.1002/hbm.23730 ### View full README **High-gamma dataset described in Schirrmeister et al. 2017** High-gamma dataset described in Schirrmeister et al. 2017. **Dataset Overview** - **Code**: Schirrmeister2017 - **Paradigm**: imagery - **DOI**: 10.1002/hbm.23730 - **Subjects**: 14 - **Sessions per subject**: 1 - **Events**: right_hand=1, left_hand=2, rest=3, feet=4 - **Trial interval**: [0, 4] s - **Runs per session**: 2 - **File format**: EDF **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 128 - **Channel types**: eeg=128 - **Channel names**: Fp1, Fp2, Fpz, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, M1, T7, C3, Cz, C4, T8, M2, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, POz, O1, Oz, O2, AF7, AF3, AF4, AF8, F5, F1, F2, F6, FC3, FCz, FC4, C5, C1, C2, C6, CP3, CPz, CP4, P5, P1, P2, P6, PO5, PO3, PO4, PO6, FT7, FT8, TP7, TP8, PO7, PO8, FT9, FT10, TPP9h, TPP10h, PO9, PO10, P9, P10, AFF1, AFz, AFF2, FFC5h, FFC3h, FFC4h, FFC6h, FCC5h, FCC3h, FCC4h, FCC6h, CCP5h, CCP3h, CCP4h, CCP6h, CPP5h, CPP3h, CPP4h, CPP6h, PPO1, PPO2, I1, Iz, I2, AFp3h, AFp4h, AFF5h, AFF6h, FFT7h, FFC1h, FFC2h, FFT8h, FTT9h, FTT7h, FCC1h, FCC2h, FTT8h, FTT10h, TTP7h, CCP1h, CCP2h, TTP8h, TPP7h, CPP1h, CPP2h, TPP8h, PPO9h, PPO5h, PPO6h, PPO10h, POO9h, POO3h, POO4h, POO10h, OI1h, OI2h - **Montage**: standard_1005 - **Software**: BCI2000 - **Sensor type**: EEG - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 14 - **Health status**: healthy - **Age**: mean=27.2, std=3.6 - **Gender distribution**: 
female=6, male=8 - **Handedness**: {‘right’: 12, ‘left’: 2} **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 4 - **Class labels**: right_hand, left_hand, rest, feet - **Trial duration**: 4.0 s - **Study design**: Executed movements including left hand (sequential finger-tapping), right hand (sequential finger-tapping), feet (repetitive toe clenching), and rest conditions - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: cue-based - **Mode**: offline - **Training/test split**: True - **Instructions**: Subjects performed repetitive movements at their own pace when arrow was showing - **Stimulus presentation**: type=gray arrow on black background, direction_mapping=downward=feet, leftward=left_hand, rightward=right_hand, upward=rest **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand_finger_tapping, right_hand_finger_tapping, feet_toe_clenching, rest **Data Structure** - **Trials**: {‘total_per_subject’: 963, ‘training_set’: 880, ‘test_set’: 160} - **Trials per class**: per_class_per_subject=260 - **Blocks per session**: 13 - **Trials context**: 13 runs per subject, 80 trials per run (4 seconds each), 3-4 seconds inter-trial interval, pseudo-randomized presentation with all 4 classes shown every 4 trials **Signal 
Processing** - **Classifiers**: Deep ConvNet, Shallow ConvNet, ResNet, FBCSP with LDA - **Feature extraction**: FBCSP, CSP, Bandpower, Spectral power modulations - **Frequency bands**: alpha=[7.0, 13.0] Hz; beta=[13.0, 30.0] Hz; gamma=[30.0, 100.0] Hz - **Spatial filters**: CSP **Cross-Validation** - **Method**: holdout - **Evaluation type**: within_subject **Performance (Original Study)** - **Fbcsp Accuracy**: 91.2 - **Deep Convnet Accuracy**: 89.3 - **Shallow Convnet Accuracy**: 92.5 **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery, Motor Execution **Documentation** - **DOI**: 10.1002/hbm.23730 - **License**: CC-BY-4.0 - **Investigators**: Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball - **Senior author**: Tonio Ball - **Contact**: [robin.schirrmeister@uniklinik-freiburg.de](mailto:robin.schirrmeister@uniklinik-freiburg.de) - **Institution**: University of Freiburg - **Department**: Translational Neurotechnology Lab, Epilepsy Center, Medical Center - **Address**: Engelberger Str. 21, Freiburg 79106, Germany - **Country**: DE - **Repository**: GitHub - **Data URL**: [https://web.gin.g-node.org/robintibor/high-gamma-dataset/](https://web.gin.g-node.org/robintibor/high-gamma-dataset/) - **Publication year**: 2017 - **Funding**: BrainLinks-BrainTools Cluster of Excellence (DFG) EXC1086; Federal Ministry of Education and Research (BMBF) Motor-BIC 13GW0053D - **Ethics approval**: Approved by the ethical committee of the University of Freiburg - **Acknowledgements**: Funded by BrainLinks-BrainTools Cluster of Excellence (DFG, EXC1086) and the Federal Ministry of Education and Research (BMBF, Motor-BIC 13GW0053D). - **How to acknowledge**: Please cite: Schirrmeister et al. (2017). 
Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391-5420. [https://doi.org/10.1002/hbm.23730](https://doi.org/10.1002/hbm.23730) - **Keywords**: electroencephalography, EEG analysis, machine learning, end-to-end learning, brain-machine interface, brain-computer interface, model interpretability, brain mapping **Abstract** Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning. This study investigates deep ConvNets for end-to-end EEG decoding of imagined or executed movements from raw EEG. Results show that recent advances including batch normalization and exponential linear units, together with a cropped training strategy, boosted decoding performance to match or exceed FBCSP (82.1% FBCSP vs 84.0% deep ConvNets). Novel visualization methods demonstrated that ConvNets learned to use spectral power modulations in alpha, beta, and high gamma frequencies with meaningful spatial distributions. **Methodology** End-to-end deep learning approach comparing shallow ConvNets, deep ConvNets, and ResNets against FBCSP baseline. Evaluated design choices including batch normalization, exponential linear units, dropout, and cropped training strategies. Novel visualization techniques developed to understand learned features and verify that ConvNets use spectral power modulations in task-relevant frequency bands. **References** Schirrmeister, Robin Tibor, et al. “Deep learning with convolutional neural networks for EEG decoding and visualization.” Human brain mapping 38.11 (2017): 5391-5420. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. 
Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000172` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | High-gamma dataset described in Schirrmeister et al. 2017 | | Author (year) | `Schirrmeister2017` | | Canonical | — | | Importable as | `NM000172`, `Schirrmeister2017` | | Year | 2017 | | Authors | Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000172) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) | [Source URL](https://nemar.org/dataexplorer/detail/nm000172) | ## Technical Details - Subjects: 14 - Recordings: 28 - Tasks: 1 - Channels: 128 - Sampling rate (Hz): 500.0 - Duration (hours): 28.695817777777776 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 18.5 GB - File count: 28 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000172](https://openneuro.org/datasets/nm000172) - NeMAR: [nm000172](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) ## API Reference Use the `NM000172` class to access this dataset 
programmatically. ### *class* eegdash.dataset.NM000172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-gamma dataset described in Schirrmeister et al. 2017 * **Study:** `nm000172` (NeMAR) * **Author (year):** `Schirrmeister2017` * **Canonical:** — Also importable as: `NM000172`, `Schirrmeister2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000172](https://openneuro.org/datasets/nm000172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000172](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) ### Examples ```pycon >>> from eegdash.dataset import NM000172 >>> dataset = NM000172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000172) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000173: eeg dataset, 15 subjects *Motor Imagery dataset from Ofner et al 2017* Access recordings and metadata through EEGDash. **Citation:** Patrick Ofner, Andreas Schwarz, Joana Pereira, Gernot R. Müller-Putz (2019). *Motor Imagery dataset from Ofner et al 2017*. 
Modality: eeg Subjects: 15 Recordings: 300 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000173 dataset = NM000173(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000173(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000173( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000173, title = {Motor Imagery dataset from Ofner et al 2017}, author = {Patrick Ofner and Andreas Schwarz and Joana Pereira and Gernot R. Müller-Putz}, } ``` ## About This Dataset **Motor Imagery dataset from Ofner et al 2017** Motor Imagery dataset from Ofner et al 2017.
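The acquisition details on this page list a 512 Hz sampling rate, a [0, 3] s trial interval, and event codes 1536–1542 for the seven classes. A small sketch (illustrative only, not part of eegdash) of turning those numbers into epoching parameters, e.g. the `event_id` dict and `tmin`/`tmax` you would hand to `mne.Epochs`:

```python
# Derive epoching parameters for Ofner2017 from the metadata on this
# page (512 Hz sampling rate, [0, 3] s trial interval, event codes
# 1536-1542). Illustrative sketch only.
sfreq = 512.0
tmin, tmax = 0.0, 3.0

event_id = {
    "right_elbow_flexion": 1536,
    "right_elbow_extension": 1537,
    "right_supination": 1538,
    "right_pronation": 1539,
    "right_hand_close": 1540,
    "right_hand_open": 1541,
    "rest": 1542,
}

# Samples in one trial window; MNE-style epochs include both endpoints.
n_samples = int(round((tmax - tmin) * sfreq)) + 1
print(n_samples)  # 1537
```

The same arithmetic applies to any dataset page here: trial duration times sampling rate gives the per-epoch sample count your model input layer must match.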
**Dataset Overview** - **Code**: Ofner2017 - **Paradigm**: imagery - **DOI**: 10.1371/journal.pone.0182578 - **Subjects**: 15 - **Sessions per subject**: 2 - **Events**: right_elbow_flexion=1536, right_elbow_extension=1537, right_supination=1538, right_pronation=1539, right_hand_close=1540, right_hand_open=1541, rest=1542 - **Trial interval**: [0, 3] s - **Runs per session**: 10 - **Session IDs**: movement_execution, motor_imagery - **File format**: gdf **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 61 - **Channel types**: eeg=61, eog=3, misc=32 - **Channel names**: C1, C2, C3, C4, C5, C6, CCP1h, CCP2h, CCP3h, CCP4h, CCP5h, CCP6h, CP1, CP2, CP3, CP4, CP5, CP6, CPP1h, CPP2h, CPP3h, CPP4h, CPP5h, CPP6h, CPz, Cz, F1, F2, F3, F4, FC1, FC2, FC3, FC4, FC5, FC6, FCC1h, FCC2h, FCC3h, FCC4h, FCC5h, FCC6h, FCz, FFC1h, FFC2h, FFC3h, FFC4h, FFC5h, FFC6h, FTT7h, FTT8h, Fz, P1, P2, P3, P4, PPO1h, PPO2h, Pz, TTP7h, TTP8h, armeodummy-0, armeodummy-1, armeodummy-10, armeodummy-11, armeodummy-12, armeodummy-2, armeodummy-3, armeodummy-4, armeodummy-5, armeodummy-6, armeodummy-7, armeodummy-8, armeodummy-9, eog-l, eog-m, eog-r, gesture, index_far, index_middle, index_near, litte_far, litte_near, middle_far, middle_near, middle_ring, pitch, ring_far, ring_little, ring_near, roll, thumb_far, thumb_index, thumb_near, thumb_palm, wrist_bend - **Montage**: standard_1005 - **Hardware**: g.tec medical engineering GmbH - **Reference**: right mastoid - **Ground**: AFz - **Sensor type**: active - **Line frequency**: 50.0 Hz - **Online filters**: 0.01-200 Hz bandpass (8th order Chebyshev), 50 Hz notch **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=27.0, std=5.0, min=22.0, max=40.0 - **Gender distribution**: female=9, male=6 - **Handedness**: {‘right’: 14, ‘left’: 1} - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 7 - **Class labels**: right_elbow_flexion, right_elbow_extension, 
right_supination, right_pronation, right_hand_close, right_hand_open, rest - **Study design**: Trial-based paradigm with sustained movements/motor imagery. Each trial: fixation cross at 0s, cue presentation at 2s, sustained movement/MI execution. Subjects performed both movement execution (ME) and motor imagery (MI) in separate sessions. - **Feedback type**: none - **Stimulus type**: visual cue - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects were instructed to execute sustained movements in ME session and perform kinesthetic motor imagery in MI session. For rest class, subjects were instructed to avoid any movement and to stay in the starting position. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text right_elbow_flexion ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Flex └─ Right, Elbow right_elbow_extension ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Stretch └─ Right, Elbow right_supination ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Turn ├─ Right, Forearm └─ Label/supination right_pronation ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Turn ├─ Right, Forearm └─ Label/pronation right_hand_close ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Close └─ Right, Hand right_hand_open ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Open └─ Right, Hand rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: elbow_flexion, elbow_extension, 
forearm_supination, forearm_pronation, hand_open, hand_close **Data Structure** - **Trials**: 420 - **Trials per class**: elbow_flexion=60, elbow_extension=60, forearm_supination=60, forearm_pronation=60, hand_open=60, hand_close=60, rest=60 - **Trials context**: per_session **Preprocessing** - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: sLDA - **Feature extraction**: time-domain signals, discriminative spatial patterns (DSP) - **Frequency bands**: analyzed=[0.3, 3.0] Hz - **Spatial filters**: sLORETA source localization **Cross-Validation** - **Method**: 10x10-fold cross-validation - **Folds**: 10 - **Evaluation type**: within-session **Performance (Original Study)** - **Mov Vs Mov Me**: 55.0 - **Mov Vs Rest Me**: 87.0 - **Mov Vs Mov Mi**: 27.0 - **Mov Vs Rest Mi**: 73.0 **BCI Application** - **Applications**: neuroprosthesis, robotic_arm - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery, Motor Execution **Documentation** - **DOI**: 10.1371/journal.pone.0182578 - **Associated paper DOI**: 10.1371/journal.pone.0182578 - **License**: CC-BY-4.0 - **Investigators**: Patrick Ofner, Andreas Schwarz, Joana Pereira, Gernot R. Müller-Putz - **Senior author**: Gernot R. 
Müller-Putz - **Contact**: [gernot.mueller@tugraz.at](mailto:gernot.mueller@tugraz.at) - **Institution**: Graz University of Technology - **Department**: Institute of Neural Engineering, BCI-Lab - **Country**: AT - **Repository**: BNCI Horizon 2020 - **Data URL**: [https://bnci-horizon-2020.eu/database/data-sets](https://bnci-horizon-2020.eu/database/data-sets) - **Publication year**: 2017 - **Funding**: H2020-643955 MoreGrasp; ERC Consolidator Grant ERC-681231 Feel Your Reach - **Ethics approval**: Medical University of Graz, approval number 28-108 ex 15/16 - **Acknowledgements**: Data are available from the BNCI Horizon 2020 database at [http://bnci-horizon-2020.eu/database/data-sets](http://bnci-horizon-2020.eu/database/data-sets) (accession number 001-2017) and from Zenodo at DOI 10.5281/zenodo.834976 - **Keywords**: upper limb movements, EEG, motor imagery, movement execution, low-frequency, time-domain, BCI, neuroprosthesis **Abstract** How neural correlates of movements are represented in the human brain is of ongoing interest and has been researched with invasive and non-invasive methods. In this study, we analyzed the encoding of single upper limb movements in the time-domain of low-frequency electroencephalography (EEG) signals. Fifteen healthy subjects executed and imagined six different sustained upper limb movements. We classified these six movements and a rest class and obtained significant average classification accuracies of 55% (movement vs movement) and 87% (movement vs rest) for executed movements, and 27% and 73%, respectively, for imagined movements. Furthermore, we analyzed the classifier patterns in the source space and located the brain areas conveying discriminative movement information. The classifier patterns indicate that mainly premotor areas, primary motor cortex, somatosensory cortex and posterior parietal cortex convey discriminative movement information. 
The decoding of single upper limb movements is specially interesting in the context of a more natural non-invasive control of e.g., a motor neuroprosthesis or a robotic arm in highly motor disabled persons. **Methodology** Subjects performed 6 sustained upper limb movements (elbow flexion/extension, forearm supination/pronation, hand open/close) plus rest in two separate sessions (movement execution and motor imagery). EEG was recorded from 61 channels, filtered to 0.3-3 Hz, and classified using shrinkage LDA with discriminative spatial patterns. Source localization was performed using sLORETA. Classification employed both single time-point and time-window approaches with 10x10-fold cross-validation. **References** Ofner, P., Schwarz, A., Pereira, J. and Müller-Putz, G.R., 2017. Upper limb movements can be decoded from the time-domain of low-frequency EEG. PloS one, 12(8), p.e0182578. [https://doi.org/10.1371/journal.pone.0182578](https://doi.org/10.1371/journal.pone.0182578) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000173` | |----------------|------------| | Title | Motor Imagery dataset from Ofner et al 2017 | | Author (year) | `Ofner2017` | | Canonical | — | | Importable as | `NM000173`, `Ofner2017` | | Year | 2019 | | Authors | Patrick Ofner, Andreas Schwarz, Joana Pereira, Gernot R. Müller-Putz | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000173) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) | [Source URL](https://nemar.org/dataexplorer/detail/nm000173) | ## Technical Details - Subjects: 15 - Recordings: 300 - Tasks: 1 - Channels: 61 - Sampling rate (Hz): 512.0 - Duration (hours): 27.1 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 8.5 GB - File count: 300 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000173](https://openneuro.org/datasets/nm000173) - NeMAR: [nm000173](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) ## API Reference Use the `NM000173` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000173(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Ofner et al 2017 * **Study:** `nm000173` (NeMAR) * **Author (year):** `Ofner2017` * **Canonical:** — Also importable as: `NM000173`, `Ofner2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 300; tasks: 1.
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000173](https://openneuro.org/datasets/nm000173) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000173](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) ### Examples ```pycon >>> from eegdash.dataset import NM000173 >>> dataset = NM000173(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000173) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000175: fnirs dataset, 5 subjects *fNIRS Finger Tapping* Access recordings and metadata through EEGDash. **Citation:** Robert Luke, Eric Larson, Alexandre Gramfort, Macquarie University (—). *fNIRS Finger Tapping*. Modality: fnirs Subjects: 5 Recordings: 5 License: CC0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000175 dataset = NM000175(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000175(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000175( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000175, title = {fNIRS Finger Tapping}, author = {Robert Luke and Eric Larson and Alexandre Gramfort and Macquarie University}, } ``` ## About This Dataset **BIDS fNIRS Example Dataset** [DOI](https://zenodo.org/badge/latestdoi/294376526) **The fNIRS BIDS specification is a work in progress. 
Expect changes while the BEP is in development.** Example fNIRS dataset that is formatted according to the [BIDS specification](https://bids-specification--802.org.readthedocs.build/en/802/03-modality-agnostic-files.html). This repository provides an example dataset demonstrating how a BIDS dataset should be stored, and also demonstrates how to convert measurements obtained using a NIRx device to BIDS using [MNE-BIDS](https://mne.tools/mne-bids/stable/index.html) (see branches below for script details). **Experiment Description** This experiment examines how the motor cortex is activated during a finger-tapping task. Participants are asked to either tap their left thumb to their fingers, tap their right thumb to their fingers, or do nothing (control). Tapping lasts for 5 seconds and is prompted by an auditory cue. Sensors are placed over the motor cortex as described in the montage section in the link below; short channels are attached to the scalp as well. Further details about the experiment (including presentation code) can be found at [rob-luke/experiment-fNIRS-tapping](https://github.com/rob-luke/experiment-fNIRS-tapping). **Data Description** The dataset contains measurements from 5 participants. All details have been anonymised by hand in the raw data.
Alternatively, the `anonymize` argument could be used when [writing](https://mne.tools/mne-bids/stable/generated/mne_bids.write_raw_bids.html#mne_bids.write_raw_bids) the BIDS dataset. **How to use this repository** I have used branches in this repository to describe the steps taken to convert this data to the BIDS format. Using the GitHub interface you can select the branch you wish to view. The branches are… \* [00-Raw-data](https://github.com/rob-luke/BIDS-NIRS-Tapping/tree/00-Raw-data): Contains just the raw recordings \* [01-Raw-to-SNIRF](https://github.com/rob-luke/BIDS-NIRS-Tapping/tree/01-Raw-to-SNIRF): Converts the original data to SNIRF, but not BIDS. \* [02-Raw-to-BIDS](https://github.com/rob-luke/BIDS-NIRS-Tapping/tree/02-Raw-to-BIDS): Converts the original data to BIDS (or as close as can be automated, before manual editing and movement to master). \* [master](https://github.com/rob-luke/BIDS-NIRS-Tapping): Dataset in BIDS format. Branches `00` and `01` are only included for interested researchers. To generate the data in master, use branch `02`, then remove the sourcedata directory and manually enter the author into `dataset_description.json`.
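The branch workflow above ends with a dataset laid out per the BIDS-NIRS extension, where each recording is a SNIRF file under a per-subject `nirs/` folder. A minimal sketch of that naming scheme (the task label `tapping` and helper name are illustrative assumptions, not taken from the repository):

```python
# Sketch of the BIDS-NIRS file layout the conversion branches produce.
# Entity values here ("01", "tapping") are illustrative assumptions.
from pathlib import Path


def bids_nirs_path(root: str, subject: str, task: str) -> Path:
    """Build the path of one fNIRS recording in a BIDS dataset."""
    sub = f"sub-{subject}"
    return Path(root) / sub / "nirs" / f"{sub}_task-{task}_nirs.snirf"


p = bids_nirs_path("BIDS-NIRS-Tapping", "01", "tapping")
print(p.as_posix())
# BIDS-NIRS-Tapping/sub-01/nirs/sub-01_task-tapping_nirs.snirf
```

In practice MNE-BIDS assembles these names for you from a `BIDSPath`; the sketch only shows what the resulting layout looks like.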
## Dataset Information | Dataset ID | `NM000175` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | fNIRS Finger Tapping | | Author (year) | `Luke2024` | | Canonical | — | | Importable as | `NM000175`, `Luke2024` | | Year | — | | Authors | Robert Luke, Eric Larson, Alexandre Gramfort, Macquarie University | | License | CC0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000175) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) | [Source URL](https://nemar.org/dataexplorer/detail/nm000175) | ## Technical Details - Subjects: 5 - Recordings: 5 - Tasks: 1 - Channels: 56 - Sampling rate (Hz): 7.8125 - Duration (hours): 3.808533333333333 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 47.5 MB - File count: 5 - Format: BIDS - License: CC0 - DOI: — - Source: nemar - OpenNeuro: [nm000175](https://openneuro.org/datasets/nm000175) - NeMAR: [nm000175](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) ## API Reference Use the `NM000175` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) fNIRS Finger Tapping * **Study:** `nm000175` (NeMAR) * **Author (year):** `Luke2024` * **Canonical:** — Also importable as: `NM000175`, `Luke2024`. Modality: `fnirs`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000175](https://openneuro.org/datasets/nm000175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000175](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) ### Examples ```pycon >>> from eegdash.dataset import NM000175 >>> dataset = NM000175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000175) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) * [eegdash.dataset.DS004830](eegdash.dataset.DS004830.md) * [eegdash.dataset.DS004929](eegdash.dataset.DS004929.md) * [eegdash.dataset.DS004973](eegdash.dataset.DS004973.md) * [eegdash.dataset.DS005776](eegdash.dataset.DS005776.md) * [eegdash.dataset.DS005777](eegdash.dataset.DS005777.md) # NM000176: eeg dataset, 5 subjects *BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)*. Modality: eeg Subjects: 5 Recordings: 128 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000176 dataset = NM000176(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000176(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000176( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000176, title = {BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects)** BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-K - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 5 - **Sessions per subject**: 2 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 5 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**:
visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. - **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000176` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects) | | Author (year) | `Mainsah2025_BigP3BCI` | | Canonical | `BigP3BCI_StudyK`, `BigP3BCI_K` | | Importable as | `NM000176`, `Mainsah2025_BigP3BCI`, `BigP3BCI_StudyK`, `BigP3BCI_K` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000176) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) | [Source URL](https://nemar.org/dataexplorer/detail/nm000176) | ## Technical Details - Subjects: 5 - Recordings: 128 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 3.5955902777777777 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 168.3 MB - File count: 128 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000176](https://openneuro.org/datasets/nm000176) - NeMAR: [nm000176](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) ## API Reference Use the `NM000176` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects) * **Study:** `nm000176` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI` * **Canonical:** `BigP3BCI_StudyK`, `BigP3BCI_K` Also importable as: `NM000176`, `Mainsah2025_BigP3BCI`, `BigP3BCI_StudyK`, `BigP3BCI_K`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000176](https://openneuro.org/datasets/nm000176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000176](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) ### Examples ```pycon >>> from eegdash.dataset import NM000176 >>> dataset = NM000176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000176) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000179: eeg dataset, 215 subjects *LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State)* Access recordings and metadata through EEGDash. **Citation:** Anahit Babayan, Miray Erbey, Deniz Kumral, Janis D. Reinelt, Andrea M.F. Reiter, Josefin Röbbig, H. Lina Schaare, Marie Uhlig, Alfred Anwander, Pierre-Louis Bazin, Annette Horstmann, Leonie Lampe, Vadim V. Nikulin, Hadas Okon-Singer, Sven Preusser, Andre Pampel, Christiane S. 
Rohr, Julia Sacher, Angelika Thone-Otto, Sabrina Trapp, Till Nierhaus, Denise Altmann, Katrin Arelin, Maria Blochl, Edith Bongartz, Patric Breig, Elena Cesnaite, Sufang Chen, Roberto Cozatl, Saskia Czerwonatis, Gabriele Dambrauskaite, Maria Dreyer, Jessica Enders, Melina Engelhardt, Marie Michele Fischer, Norman Forschack, Johannes Golchert, Laura Golz, C. Alexandrina Guran, Susanna Hedrich, Nicole Hentschel, Daria I. Hoffmann, Julia M. Huntenburg, Rebecca Jost, Anna Kosatschek, Stella Kunzendorf, Hannah Lammers, Mark E. Lauckner, Keyvan Mahjoory, Natacha Mendes, Ramona Menger, Enzo Morino, Karina Nathe, Jennifer Neubauer, Handan Noyan, Sabine Oligschlager, Patricia Panczyszyn-Trzewik, Dorothee Poehlchen, Nadine Putzke, Sabrina Roski, Marie-Catherine Schaller, Anja Schieferbein, Benito Schlaak, Hanna Maria Schmidt, Robert Schmidt, Anne Schrimpf, Sylvia Stasch, Maria Voss, Anett Wiedemann, Daniel S. Margulies, Michael Gaebler, Arno Villringer (2019). *LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State)*. [10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) Modality: eeg Subjects: 215 Recordings: 215 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000179 dataset = NM000179(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000179(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000179( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000179, title = {LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State)}, author = {Anahit Babayan and Miray Erbey and Deniz Kumral and Janis D. Reinelt and Andrea M.F. Reiter and Josefin Röbbig and H. Lina Schaare and Marie Uhlig and Alfred Anwander and Pierre-Louis Bazin and Annette Horstmann and Leonie Lampe and Vadim V. Nikulin and Hadas Okon-Singer and Sven Preusser and Andre Pampel and Christiane S. Rohr and Julia Sacher and Angelika Thone-Otto and Sabrina Trapp and Till Nierhaus and Denise Altmann and Katrin Arelin and Maria Blochl and Edith Bongartz and Patric Breig and Elena Cesnaite and Sufang Chen and Roberto Cozatl and Saskia Czerwonatis and Gabriele Dambrauskaite and Maria Dreyer and Jessica Enders and Melina Engelhardt and Marie Michele Fischer and Norman Forschack and Johannes Golchert and Laura Golz and C. Alexandrina Guran and Susanna Hedrich and Nicole Hentschel and Daria I. Hoffmann and Julia M. Huntenburg and Rebecca Jost and Anna Kosatschek and Stella Kunzendorf and Hannah Lammers and Mark E. Lauckner and Keyvan Mahjoory and Natacha Mendes and Ramona Menger and Enzo Morino and Karina Nathe and Jennifer Neubauer and Handan Noyan and Sabine Oligschlager and Patricia Panczyszyn-Trzewik and Dorothee Poehlchen and Nadine Putzke and Sabrina Roski and Marie-Catherine Schaller and Anja Schieferbein and Benito Schlaak and Hanna Maria Schmidt and Robert Schmidt and Anne Schrimpf and Sylvia Stasch and Maria Voss and Anett Wiedemann and Daniel S. Margulies and Michael Gaebler and Arno Villringer}, doi = {10.1038/sdata.2018.308}, url = {https://doi.org/10.1038/sdata.2018.308}, } ``` ## About This Dataset **LEMON: MPI Leipzig Mind-Brain-Body EEG Dataset (Resting State)** **Overview** Resting-state EEG from 215 healthy participants (young and old adults) from the Leipzig Study for Mind-Body-Emotion Interactions (LEMON). 
Subjects alternated between eyes-closed (EC) and eyes-open (EO) blocks of ~60 seconds each for approximately 16 minutes total. ### View full README **LEMON: MPI Leipzig Mind-Brain-Body EEG Dataset (Resting State)** **Overview** Resting-state EEG from 215 healthy participants (young and old adults) from the Leipzig Study for Mind-Body-Emotion Interactions (LEMON). Subjects alternated between eyes-closed (EC) and eyes-open (EO) blocks of ~60 seconds each for approximately 16 minutes total. Demographics: Young adults (20-35 years, N=153) and older adults (59-77 years, N=74). All right-handed, normal or corrected-to-normal vision, no history of neurological or psychiatric disorders. **Recording Setup** - Amplifier: BrainVision actiCHamp (Brain Products GmbH) - Channels: 62 EEG (standard 10-20 extended, ActiCAP) - Online reference: FCz - Ground: AFz (inferred from BrainVision convention) - Sampling rate: 2500 Hz - Impedance: < 5 kOhm (active electrodes) - Recording duration: ~16 min per subject **Task** Resting state with alternating eyes-open (EO) and eyes-closed (EC) blocks. - Eyes-open: fixate on LED (off state), eyes open - Eyes-closed: close eyes, fixate on LED (off state) - Block duration: ~60 seconds each - Event markers: S200 = eyes open onset, S210 = eyes closed onset **Known Issues** - Subjects sub-010020, sub-010044, sub-010193, sub-010219 have incorrect .vhdr file paths that were fixed during conversion - Subject sub-010203 has no marker file (.vmrk) - 5 subjects (sub-010235, sub-010237, sub-010259, sub-010281, sub-010293) are absent from the dataset (not recorded) **Reference** Babayan, A. et al. (2019). A mind-brain-body dataset of MRI, EEG, cognition, emotion, and peripheral physiology in young and old adults. Scientific Data, 6, 180308. 
[https://doi.org/10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `NM000179` | |----------------|------------| | Title | LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State) | | Author (year) | `Babayan2018` | | Canonical | `LEMON` | | Importable as | `NM000179`, `Babayan2018`, `LEMON` | | Year | 2019 | | Authors | Anahit Babayan, Miray Erbey, Deniz Kumral, Janis D. Reinelt, Andrea M.F. Reiter, Josefin Röbbig, H. Lina Schaare, Marie Uhlig, Alfred Anwander, Pierre-Louis Bazin, Annette Horstmann, Leonie Lampe, Vadim V. Nikulin, Hadas Okon-Singer, Sven Preusser, Andre Pampel, Christiane S. Rohr, Julia Sacher, Angelika Thone-Otto, Sabrina Trapp, Till Nierhaus, Denise Altmann, Katrin Arelin, Maria Blochl, Edith Bongartz, Patric Breig, Elena Cesnaite, Sufang Chen, Roberto Cozatl, Saskia Czerwonatis, Gabriele Dambrauskaite, Maria Dreyer, Jessica Enders, Melina Engelhardt, Marie Michele Fischer, Norman Forschack, Johannes Golchert, Laura Golz, C. Alexandrina Guran, Susanna Hedrich, Nicole Hentschel, Daria I. Hoffmann, Julia M. Huntenburg, Rebecca Jost, Anna Kosatschek, Stella Kunzendorf, Hannah Lammers, Mark E. Lauckner, Keyvan Mahjoory, Natacha Mendes, Ramona Menger, Enzo Morino, Karina Nathe, Jennifer Neubauer, Handan Noyan, Sabine Oligschlager, Patricia Panczyszyn-Trzewik, Dorothee Poehlchen, Nadine Putzke, Sabrina Roski, Marie-Catherine Schaller, Anja Schieferbein, Benito Schlaak, Hanna Maria Schmidt, Robert Schmidt, Anne Schrimpf, Sylvia Stasch, Maria Voss, Anett Wiedemann, Daniel S. 
Margulies, Michael Gaebler, Arno Villringer | | License | CC BY 4.0 | | Citation / DOI | [doi:10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000179) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) | [Source URL](https://nemar.org/dataexplorer/detail/nm000179) | ### Copy-paste BibTeX ```bibtex @dataset{nm000179, title = {LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State)}, author = {Anahit Babayan and Miray Erbey and Deniz Kumral and Janis D. Reinelt and Andrea M.F. Reiter and Josefin Röbbig and H. Lina Schaare and Marie Uhlig and Alfred Anwander and Pierre-Louis Bazin and Annette Horstmann and Leonie Lampe and Vadim V. Nikulin and Hadas Okon-Singer and Sven Preusser and Andre Pampel and Christiane S. Rohr and Julia Sacher and Angelika Thone-Otto and Sabrina Trapp and Till Nierhaus and Denise Altmann and Katrin Arelin and Maria Blochl and Edith Bongartz and Patric Breig and Elena Cesnaite and Sufang Chen and Roberto Cozatl and Saskia Czerwonatis and Gabriele Dambrauskaite and Maria Dreyer and Jessica Enders and Melina Engelhardt and Marie Michele Fischer and Norman Forschack and Johannes Golchert and Laura Golz and C. Alexandrina Guran and Susanna Hedrich and Nicole Hentschel and Daria I. Hoffmann and Julia M. Huntenburg and Rebecca Jost and Anna Kosatschek and Stella Kunzendorf and Hannah Lammers and Mark E. Lauckner and Keyvan Mahjoory and Natacha Mendes and Ramona Menger and Enzo Morino and Karina Nathe and Jennifer Neubauer and Handan Noyan and Sabine Oligschlager and Patricia Panczyszyn-Trzewik and Dorothee Poehlchen and Nadine Putzke and Sabrina Roski and Marie-Catherine Schaller and Anja Schieferbein and Benito Schlaak and Hanna Maria Schmidt and Robert Schmidt and Anne Schrimpf and Sylvia Stasch and Maria Voss and Anett Wiedemann and Daniel S. 
Margulies and Michael Gaebler and Arno Villringer}, doi = {10.1038/sdata.2018.308}, url = {https://doi.org/10.1038/sdata.2018.308}, } ``` ## Technical Details - Subjects: 215 - Recordings: 215 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 2500 (208), 1000 (6), 500 - Duration (hours): 62.371518555555554 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 126.9 GB - File count: 215 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.1038/sdata.2018.308 - Source: nemar - OpenNeuro: [nm000179](https://openneuro.org/datasets/nm000179) - NeMAR: [nm000179](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) ## API Reference Use the `NM000179` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000179(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State) * **Study:** `nm000179` (NeMAR) * **Author (year):** `Babayan2018` * **Canonical:** `LEMON` Also importable as: `NM000179`, `Babayan2018`, `LEMON`. Modality: `eeg`. Subjects: 215; recordings: 215; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000179](https://openneuro.org/datasets/nm000179) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000179](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) DOI: [https://doi.org/10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) ### Examples ```pycon >>> from eegdash.dataset import NM000179 >>> dataset = NM000179(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000179) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000180: eeg dataset, 45 subjects *Brennan2019: EEG during Alice in Wonderland Listening* Access recordings and metadata through EEGDash. **Citation:** Jonathan R. Brennan, John T. Hale (2019). *Brennan2019: EEG during Alice in Wonderland Listening*. 
[10.1371/journal.pone.0207741](https://doi.org/10.1371/journal.pone.0207741) Modality: eeg Subjects: 45 Recordings: 45 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000180 dataset = NM000180(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000180(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000180( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000180, title = {Brennan2019: EEG during Alice in Wonderland Listening}, author = {Jonathan R. Brennan and John T. Hale}, doi = {10.1371/journal.pone.0207741}, url = {https://doi.org/10.1371/journal.pone.0207741}, } ``` ## About This Dataset **Brennan2019: EEG during Alice in Wonderland Listening** **Overview** EEG recorded from 33 subjects while listening to the first chapter of “Alice’s Adventures in Wonderland” by Lewis Carroll. Naturalistic auditory comprehension paradigm for studying hierarchical linguistic structure processing. ### View full README **Brennan2019: EEG during Alice in Wonderland Listening** **Overview** EEG recorded from 33 subjects while listening to the first chapter of “Alice’s Adventures in Wonderland” by Lewis Carroll. Naturalistic auditory comprehension paradigm for studying hierarchical linguistic structure processing. 
**Recording Setup** - Channels: 61 EEG + 1 VEOG + 1 audio channel - Sampling rate: 500 Hz - Montage: easycap-M10 - Reference: Average reference (offline) - Bandpass: 0.1-200 Hz (online) **Task** Passive listening to continuous naturalistic speech (audiobook). Subjects listened to the full first chapter (~25 minutes). **Reference** Brennan, J.R. & Hale, J.T. (2019). PLoS ONE, 14(1), e0207741. **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `NM000180` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Brennan2019: EEG during Alice in Wonderland Listening | | Author (year) | `Brennan2019` | | Canonical | — | | Importable as | `NM000180`, `Brennan2019` | | Year | 2019 | | Authors | Jonathan R. Brennan, John T. 
Hale | | License | CC BY 4.0 | | Citation / DOI | [doi:10.1371/journal.pone.0207741](https://doi.org/10.1371/journal.pone.0207741) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000180) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) | [Source URL](https://nemar.org/dataexplorer/detail/nm000180) | ### Copy-paste BibTeX ```bibtex @dataset{nm000180, title = {Brennan2019: EEG during Alice in Wonderland Listening}, author = {Jonathan R. Brennan and John T. Hale}, doi = {10.1371/journal.pone.0207741}, url = {https://doi.org/10.1371/journal.pone.0207741}, } ``` ## Technical Details - Subjects: 45 - Recordings: 45 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 500 - Duration (hours): 9.154141666666668 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 3.8 GB - File count: 45 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.1371/journal.pone.0207741 - Source: nemar - OpenNeuro: [nm000180](https://openneuro.org/datasets/nm000180) - NeMAR: [nm000180](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) ## API Reference Use the `NM000180` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brennan2019: EEG during Alice in Wonderland Listening * **Study:** `nm000180` (NeMAR) * **Author (year):** `Brennan2019` * **Canonical:** — Also importable as: `NM000180`, `Brennan2019`. Modality: `eeg`. Subjects: 45; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000180](https://openneuro.org/datasets/nm000180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000180](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) DOI: [https://doi.org/10.1371/journal.pone.0207741](https://doi.org/10.1371/journal.pone.0207741) ### Examples ```pycon >>> from eegdash.dataset import NM000180 >>> dataset = NM000180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000180) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000181: eeg dataset, 2417 subjects *NMT: Neurodiagnostic Montage Template Scalp EEG* Access recordings and metadata through EEGDash. **Citation:** Hussain A. Khan (2019). *NMT: Neurodiagnostic Montage Template Scalp EEG*. [10.5281/zenodo.10909103](https://doi.org/10.5281/zenodo.10909103) Modality: eeg Subjects: 2417 Recordings: 2417 License: CC BY-SA 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000181 dataset = NM000181(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000181(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000181( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000181, title = {NMT: Neurodiagnostic Montage Template Scalp EEG}, author = {Hussain A. 
Khan}, doi = {10.5281/zenodo.10909103}, url = {https://doi.org/10.5281/zenodo.10909103}, } ``` ## About This Dataset **NMT: Neurodiagnostic Montage Template Scalp EEG Dataset** **Overview** 2,417 clinical EEG recordings (normal and abnormal) in standard 10-20 montage with 19 EEG channels + 2 reference electrodes. EDF format, variable sampling rates and durations. This dataset was collected for EEG-based pathology detection and normal/abnormal classification tasks. Source: Zenodo (doi:10.5281/zenodo.10909103) License: CC BY-SA 4.0 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.https://doi.org/10.1038/s41597-019-0104-8 ## Dataset Information | Dataset ID | `NM000181` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | NMT: Neurodiagnostic Montage Template Scalp EEG | | Author (year) | `Khan2019` | | Canonical | — | | Importable as | `NM000181`, `Khan2019` | | Year | 2019 | | Authors | Hussain A. 
Khan | | License | CC BY-SA 4.0 | | Citation / DOI | [doi:10.5281/zenodo.10909103](https://doi.org/10.5281/zenodo.10909103) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000181) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) | [Source URL](https://nemar.org/dataexplorer/detail/nm000181) | ### Copy-paste BibTeX ```bibtex @dataset{nm000181, title = {NMT: Neurodiagnostic Montage Template Scalp EEG}, author = {Hussain A. Khan}, doi = {10.5281/zenodo.10909103}, url = {https://doi.org/10.5281/zenodo.10909103}, } ``` ## Technical Details - Subjects: 2417 - Recordings: 2417 - Tasks: 1 - Channels: 21 - Sampling rate (Hz): 200 - Duration (hours): 488.9631958333334 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 13.8 GB - File count: 2417 - Format: BIDS - License: CC BY-SA 4.0 - DOI: doi:10.5281/zenodo.10909103 - Source: nemar - OpenNeuro: [nm000181](https://openneuro.org/datasets/nm000181) - NeMAR: [nm000181](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) ## API Reference Use the `NM000181` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NMT: Neurodiagnostic Montage Template Scalp EEG * **Study:** `nm000181` (NeMAR) * **Author (year):** `Khan2019` * **Canonical:** — Also importable as: `NM000181`, `Khan2019`. Modality: `eeg`. Subjects: 2417; recordings: 2417; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000181](https://openneuro.org/datasets/nm000181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000181](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) DOI: [https://doi.org/10.5281/zenodo.10909103](https://doi.org/10.5281/zenodo.10909103) ### Examples ```pycon >>> from eegdash.dataset import NM000181 >>> dataset = NM000181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000181) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000185: eeg dataset, 100 subjects *Sleep-EDF Expanded: Whole-Night PSG Recordings* Access recordings and metadata through EEGDash. **Citation:** Bob Kemp, Aeilko H. Zwinderman, Bert Tuk, Hilbert A.C. Kamphuisen, Josefien J.L. Oberye (2000). *Sleep-EDF Expanded: Whole-Night PSG Recordings*. [10.13026/C2X676](https://doi.org/10.13026/C2X676) Modality: eeg Subjects: 100 Recordings: 197 License: ODbL v1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000185 dataset = NM000185(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000185(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000185( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000185, title = {Sleep-EDF Expanded: Whole-Night PSG Recordings}, author = {Bob Kemp and Aeilko H. Zwinderman and Bert Tuk and Hilbert A.C. Kamphuisen and Josefien J.L. 
Oberye}, doi = {10.13026/C2X676}, url = {https://doi.org/10.13026/C2X676}, } ``` ## About This Dataset **Sleep-EDF Expanded: Whole-Night PSG Recordings** 197 whole-night PSG recordings from PhysioNet Sleep-EDF Expanded. - Cassette study: 78 healthy subjects, ambulatory 48h recordings - Telemetry study: 22 subjects, Temazepam drug study Channels: EEG Fpz-Cz, EEG Pz-Oz (100 Hz), EOG horizontal, EMG submental (+ respiration, temperature in some recordings) Sleep staging: Expert-annotated 30-second epochs in \_events.tsv files. Stages: Wake, N1, N2, N3 (combines original S3+S4 per AASM), REM, Unknown. Original Rechtschaffen & Kales stages preserved in ‘original_stage’ column. Reference: Kemp et al. (2000) IEEE TBME 47(9), 1185-1194. PhysioNet: [https://physionet.org/content/sleep-edfx/1.0.0/](https://physionet.org/content/sleep-edfx/1.0.0/) ## Dataset Information | Dataset ID | `NM000185` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Sleep-EDF Expanded: Whole-Night PSG Recordings | | Author (year) | `Kemp2000` | | Canonical | `SleepEDF`, `SleepEDFExpanded` | | Importable as | `NM000185`, `Kemp2000`, `SleepEDF`, `SleepEDFExpanded` | | Year | 2000 | | Authors | Bob Kemp, Aeilko H. Zwinderman, Bert Tuk, Hilbert A.C. Kamphuisen, Josefien J.L. Oberye | | License | ODbL v1.0 | | Citation / DOI | [doi:10.13026/C2X676](https://doi.org/10.13026/C2X676) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000185) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) | [Source URL](https://nemar.org/dataexplorer/detail/nm000185) | ### Copy-paste BibTeX ```bibtex @dataset{nm000185, title = {Sleep-EDF Expanded: Whole-Night PSG Recordings}, author = {Bob Kemp and Aeilko H. Zwinderman and Bert Tuk and Hilbert A.C. Kamphuisen and Josefien J.L. 
Oberye}, doi = {10.13026/C2X676}, url = {https://doi.org/10.13026/C2X676}, } ``` ## Technical Details - Subjects: 100 - Recordings: 197 - Tasks: 1 - Channels: 7 (in 153 recordings), 5 (in 44 recordings) - Sampling rate (Hz): 100 - Duration (hours): 3849.0 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 8.1 GB - File count: 197 - Format: BIDS - License: ODbL v1.0 - DOI: doi:10.13026/C2X676 - Source: nemar - OpenNeuro: [nm000185](https://openneuro.org/datasets/nm000185) - NeMAR: [nm000185](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) ## API Reference Use the `NM000185` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sleep-EDF Expanded: Whole-Night PSG Recordings * **Study:** `nm000185` (NeMAR) * **Author (year):** `Kemp2000` * **Canonical:** `SleepEDF`, `SleepEDFExpanded` Also importable as: `NM000185`, `Kemp2000`, `SleepEDF`, `SleepEDFExpanded`. Modality: `eeg`. Subjects: 100; recordings: 197; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
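For intuition, the `$in` operator used in the quickstart's advanced query keeps a record when the field's value appears in the given list. A local, pure-Python sketch of those semantics (an illustration only, not EEGDash's implementation):

```python
def matches(record, query):
    """Evaluate a tiny subset of MongoDB-style filters: equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["01", "02"]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain form, e.g. {"subject": "01"}
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
kept = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in kept])  # → ['01', '02']
```

In the library itself, such a filter is ANDed with the dataset selection, so only fields in `ALLOWED_QUERY_FIELDS` are meaningful.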
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000185](https://openneuro.org/datasets/nm000185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000185](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) DOI: [https://doi.org/10.13026/C2X676](https://doi.org/10.13026/C2X676) ### Examples ```pycon >>> from eegdash.dataset import NM000185 >>> dataset = NM000185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000185) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000186: eeg dataset, 8 subjects *BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)*. 
Modality: eeg Subjects: 8 Recordings: 88 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000186 dataset = NM000186(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000186(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000186( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000186, title = {BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)** BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-E - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects)** BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects). 
**Dataset Overview** - **Code**: Mainsah2025-E - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 8 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 8 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000186` | |----------------|----------------| | Title | BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects) | | Author (year) | `Mainsah2025_BigP3BCI_E` | | Canonical | `BigP3BCI_StudyE`, `BigP3BCI_E` | | Importable as | `NM000186`, `Mainsah2025_BigP3BCI_E`, `BigP3BCI_StudyE`, `BigP3BCI_E` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000186) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) | [Source URL](https://nemar.org/dataexplorer/detail/nm000186) | ## Technical Details - Subjects: 8 - Recordings: 88 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 2.39 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 104.7 MB - File count: 88 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000186](https://openneuro.org/datasets/nm000186) - NeMAR: [nm000186](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) ## API Reference Use the `NM000186` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000186(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects) * **Study:** `nm000186` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_E` * **Canonical:** `BigP3BCI_StudyE`, `BigP3BCI_E` Also importable as: `NM000186`, `Mainsah2025_BigP3BCI_E`, `BigP3BCI_StudyE`, `BigP3BCI_E`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
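Given the 256 Hz sampling rate and the [0, 1.0] s trial interval listed above, mapping a Target/NonTarget event onset (in seconds) to a sample window is plain index arithmetic. A minimal sketch of that calculation (illustrative only, not the EEGDash or MNE epoching API):

```python
SFREQ = 256.0           # sampling rate (Hz), from the dataset metadata above
TMIN, TMAX = 0.0, 1.0   # trial interval (s), from the dataset metadata above

def trial_window(onset_s):
    """Return (start, stop) sample indices for one trial around an event onset."""
    start = int(round((onset_s + TMIN) * SFREQ))
    stop = int(round((onset_s + TMAX) * SFREQ))
    return start, stop

# An event 2 s into the recording spans samples 512..768 (exclusive stop)
print(trial_window(2.0))  # → (512, 768)
```

In practice one would epoch the loaded `raw` with MNE-Python; this sketch only shows why each trial here is exactly 256 samples long.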
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000186](https://openneuro.org/datasets/nm000186) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000186](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) ### Examples ```pycon >>> from eegdash.dataset import NM000186 >>> dataset = NM000186(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000186) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000187: eeg dataset, 8 subjects *BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects)*. 
Modality: eeg Subjects: 8 Recordings: 160 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000187 dataset = NM000187(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000187(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000187( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000187, title = {BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects)** BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects). **Dataset Overview** - **Code**: Mainsah2025-N - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects)** BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects). 
**Dataset Overview** - **Code**: Mainsah2025-N - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 8 - **Sessions per subject**: 2 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 8 - **Health status**: patients - **Clinical population**: ALS **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000187` | |----------------|----------------| | Title | BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects) | | Author (year) | `Mainsah2025_BigP3BCI_N` | | Canonical | `BigP3BCI_StudyN` | | Importable as | `NM000187`, `Mainsah2025_BigP3BCI_N`, `BigP3BCI_StudyN` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000187) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) | [Source URL](https://nemar.org/dataexplorer/detail/nm000187) | ## Technical Details - Subjects: 8 - Recordings: 160 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 8.20 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 353.2 MB - File count: 160 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000187](https://openneuro.org/datasets/nm000187) - NeMAR: [nm000187](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) ## API Reference Use the `NM000187` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000187(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects) * **Study:** `nm000187` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_N` * **Canonical:** `BigP3BCI_StudyN` Also importable as: `NM000187`, `Mainsah2025_BigP3BCI_N`, `BigP3BCI_StudyN`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 160; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
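The HED annotations shown above attach a small tag hierarchy to each of the two event classes. A dict sketch of that mapping, with labels and tags transcribed from this page (not queried from the data files):

```python
# HED 8.4.0 tags for the two event classes, as listed on this page
HED_TAGS = {
    "Target": ["Sensory-event", "Experimental-stimulus",
               "Visual-presentation", "Target"],
    "NonTarget": ["Sensory-event", "Experimental-stimulus",
                  "Visual-presentation", "Non-target"],
}

def hed_string(label):
    """Join an event label's tags into a comma-separated HED tag string."""
    return ", ".join(HED_TAGS[label])

print(hed_string("Target"))
# → Sensory-event, Experimental-stimulus, Visual-presentation, Target
```

Tag strings in this comma-separated form are what HED-aware tools typically read from BIDS `events.tsv` sidecars.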
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000187](https://openneuro.org/datasets/nm000187) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000187](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) ### Examples ```pycon >>> from eegdash.dataset import NM000187 >>> dataset = NM000187(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000187) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000188: eeg dataset, 10 subjects *BNCI 2014-009 P300 dataset* Access recordings and metadata through EEGDash. **Citation:** P Aricò, F Aloise, F Schettini, S Salinari, D Mattia, F Cincotti (2013). *BNCI 2014-009 P300 dataset*. 
Modality: eeg Subjects: 10 Recordings: 30 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000188 dataset = NM000188(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000188(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000188( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000188, title = {BNCI 2014-009 P300 dataset}, author = {P Aricò and F Aloise and F Schettini and S Salinari and D Mattia and F Cincotti}, } ``` ## About This Dataset **BNCI 2014-009 P300 dataset** BNCI 2014-009 P300 dataset. **Dataset Overview** - **Code**: BNCI2014-009 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/11/3/035008 ### View full README **BNCI 2014-009 P300 dataset** BNCI 2014-009 P300 dataset. 
**Dataset Overview** - **Code**: BNCI2014-009 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/11/3/035008 - **Subjects**: 10 - **Sessions per subject**: 3 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 0.8] s - **File format**: MAT - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Channel names**: Fz, Cz, Pz, Oz, P3, P4, PO7, PO8, F3, F4, FCz, C3, C4, CP3, CPz, CP4 - **Montage**: 10-10 - **Hardware**: g.USBamp - **Software**: BCI2000 - **Reference**: linked earlobes - **Ground**: right mastoid - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: bandpass 0.1-20 Hz - **Impedance threshold**: 10.0 kOhm - **Cap manufacturer**: Electro-Cap International, Inc. **Participants** - **Number of subjects**: 10 - **Health status**: healthy - **Age**: mean=26.8, std=5.6 - **Gender distribution**: female=10, male=0 - **BCI experience**: experienced - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: spelling - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 16.0 s - **Study design**: P300-based BCI with two interfaces: P300 Speller (overt attention) and GeoSpell (covert attention). 36 alphanumeric characters presented. Eight stimulation sequences per trial with 16 target intensifications. - **Feedback type**: none - **Stimulus type**: visual_intensification - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subject focused on one out of 36 different characters. At the beginning of each trial, the system prompted the subject with the character to attend. Target prompt appeared during a 2 s pre-trial interval. 
- **Stimulus presentation**: stimulus_duration_ms=125, isi_ms=125, soa_ms=250, n_sequences=8, n_intensifications_per_target=16, pre_trial_interval_s=2.0, tti_min_ms=500 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 36 - **Number of repetitions**: 8 - **Inter-stimulus interval**: 125.0 ms - **Stimulus onset asynchrony**: 250.0 ms **Data Structure** - **Trials**: 18 - **Blocks per session**: 3 - **Trials context**: 6 trials × 3 runs per session **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: bandpass filtering - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 20.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.1, ‘high_cutoff_hz’: 20.0} - **Filter type**: Butterworth - **Filter order**: 8 - **Re-reference**: linked earlobes - **Epoch window**: [0.0, 0.8] - **Notes**: EEG acquired using g.USBamp amplifier (g.Tec, Austria), digitized at 256 Hz **Signal Processing** - **Classifiers**: LDA, SWLDA - **Feature extraction**: Wavelet, Time-Frequency, CWT - **Frequency bands**: analyzed=[1.0, 20.0] Hz **Cross-Validation** - **Method**: cross-validation - **Folds**: 3 - **Evaluation type**: within_session **Performance (Original Study)** - **P300 Latency Jitter Correlation**: negative correlation with accuracy **BCI Application** - **Applications**: communication, spelling - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: P300, ERP **Documentation** - **Description**: Complete record of P300 evoked potentials recorded with BCI2000 using two different paradigms: P300 
Speller (overt attention) and GeoSpell (covert attention). 10 healthy subjects focused on one out of 36 different characters. - **DOI**: 10.1088/1741-2560/11/3/035008 - **Associated paper DOI**: 10.3389/fnhum.2013.00732 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: P Aricò, F Aloise, F Schettini, S Salinari, D Mattia, F Cincotti - **Senior author**: F Cincotti - **Contact**: [p.arico@hsantalucia.it](mailto:p.arico@hsantalucia.it) - **Institution**: Fondazione Santa Lucia IRCCS - **Department**: Neuroelectrical Imaging and BCI Lab - **Address**: Rome, Italy - **Country**: Italy - **Repository**: BNCI Horizon - **Publication year**: 2014 - **Ethics approval**: Approved by local Ethics Committee - **Keywords**: P300 latency jitter, brain-computer interface, covert attention, wavelet analysis, single epoch **Abstract** This dataset represents a complete record of P300 evoked potentials recorded with BCI2000 using two different paradigms: a paradigm based on the P300 Speller originally described by Farwell and Donchin in overt attention condition and a paradigm based on the GeoSpell interface used in covert attention condition. In these sessions, 10 healthy subjects focused on one out of 36 different characters. The objective was to predict the correct character in each of the provided character selection epochs. **Methodology** Ten healthy subjects (10 female, mean age = 26.8 ± 5.6) with previous experience with P300-based BCIs attended 4 recording sessions. Scalp EEG potentials were measured using 16 Ag/AgCl electrodes arranged on an elastic cap per the 10-10 standard. Each electrode was referenced to the linked earlobes and grounded to the right mastoid. The EEG was acquired using a g.USBamp amplifier (g.Tec, Austria), digitized at 256 Hz, high pass- and low pass-filtered with cutoff frequencies of 0.1 Hz and 20 Hz, respectively. The electrode impedance did not exceed 10 kΩ. Visual stimulation, acquisition and online classification were performed with BCI2000. 
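The stimulus-presentation parameters listed above (125 ms flash, 125 ms ISI, SOA 250 ms, 8 sequences, 16 target intensifications per trial) are internally consistent; a quick arithmetic check (the row-plus-column reasoning is the standard P300-speller reading of "eight sequences, thus 16 intensifications", not an extra claim from the data):

```python
STIM_MS = 125    # stimulus intensification duration (ms)
ISI_MS = 125     # inter-stimulus interval (ms)
N_SEQUENCES = 8  # stimulation sequences per trial

# SOA: lag between the onsets of two consecutive stimuli
soa_ms = STIM_MS + ISI_MS

# In a speller grid, each sequence intensifies the target character twice
# (once in its row, once in its column)
target_intensifications = 2 * N_SEQUENCES

print(soa_ms, target_intensifications)  # → 250 16
```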
Each subject attended 4 recording sessions. During each session, the subject performed three runs with each of the stimulation interfaces. Each trial consisted of eight stimulation sequences, and thus 16 intensifications of the target character. Each stimulus was intensified for 125 ms, with an inter-stimulus interval (ISI) of 125 ms, yielding a 250 ms lag between the appearance of two stimuli (SOA). Pseudorandom stimulation sequences were assembled so that each target intensification would not occur within 500 ms after the previous one, to avoid the attentional blink phenomenon. **References** Riccio, A., Simione, L., Schettini, F., Pizzimenti, A., Inghilleri, M., Belardinelli, M. O., & Mattia, D. (2013). Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Frontiers in Human Neuroscience, 7, 732. [https://doi.org/10.3389/fnhum.2013.00732](https://doi.org/10.3389/fnhum.2013.00732) **Notes** `BNCI2014_009` was previously named `BNCI2014009`; the old name will be removed in version 1.1. (Added in version 0.4.0.) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000188` | |----------------|----------------| | Title | BNCI 2014-009 P300 dataset | | Author (year) | `Arico2014` | | Canonical | `BNCI2014_009_P300` | | Importable as | `NM000188`, `Arico2014`, `BNCI2014_009_P300` | | Year | 2013 | | Authors | P Aricò, F Aloise, F Schettini, S Salinari, D Mattia, F Cincotti | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000188) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) | [Source URL](https://nemar.org/dataexplorer/detail/nm000188) | ## Technical Details - Subjects: 10 - Recordings: 30 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 1.63 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 70.9 MB - File count: 30 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000188](https://openneuro.org/datasets/nm000188) - NeMAR: [nm000188](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) ## API Reference Use the `NM000188` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000188(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-009 P300 dataset * **Study:** `nm000188` (NeMAR) * **Author (year):** `Arico2014` * **Canonical:** `BNCI2014_009_P300` Also importable as: `NM000188`, `Arico2014`, `BNCI2014_009_P300`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000188](https://openneuro.org/datasets/nm000188) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000188](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) ### Examples ```pycon >>> from eegdash.dataset import NM000188 >>> dataset = NM000188(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000188) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000189: eeg dataset, 10 subjects *BNCI 2015-003 P300 dataset* Access recordings and metadata through EEGDash. **Citation:** Martijn Schreuder, Thomas Rost, Michael Tangermann (2011). *BNCI 2015-003 P300 dataset*. Modality: eeg Subjects: 10 Recordings: 20 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000189 dataset = NM000189(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000189(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000189( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000189, title = {BNCI 2015-003 P300 dataset}, author = {Martijn Schreuder and Thomas Rost and Michael Tangermann}, } ``` ## About This Dataset **BNCI 2015-003 P300 dataset** BNCI 2015-003 P300 dataset. **Dataset Overview** - **Code**: BNCI2015-003 - **Paradigm**: p300 - **DOI**: 10.1016/j.neulet.2009.06.045 ### View full README **BNCI 2015-003 P300 dataset** BNCI 2015-003 P300 dataset. 
**Dataset Overview** - **Code**: BNCI2015-003 - **Paradigm**: p300 - **DOI**: 10.1016/j.neulet.2009.06.045 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 0.8] s - **Runs per session**: 2 - **Session IDs**: Session 1, Session 2 - **File format**: gdf - **Data preprocessed**: True - **Number of contributing labs**: 1 **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: Fz, Cz, P3, Pz, P4, PO7, Oz, PO8 - **Montage**: standard_1005 - **Hardware**: BrainAmp - **Software**: Matlab - **Reference**: nose - **Sensor type**: Ag/AgCl electrodes - **Line frequency**: 50.0 Hz - **Online filters**: hardware analog band-pass filter between 0.1 and 250 Hz - **Impedance threshold**: 15.0 kOhm - **Cap manufacturer**: Brain Products - **Electrode type**: Ag/AgCl - **Electrode material**: silver/silver chloride - **Auxiliary channels**: EOG (2 ch, bipolar) **Participants** - **Number of subjects**: 10 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=34.1, std=11.4, min=20, max=57 - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: auditory_oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Tasks**: spelling, auditory_attention - **Study design**: Auditory Multi-class Spatial ERP (AMUSE) paradigm using spatial auditory cues from six speaker locations in azimuth plane. Two-step hex-o-spell like interface for character selection. Subjects mentally count target stimuli from one of six spatial directions. 
- **Study domain**: communication - **Feedback type**: auditory - **Stimulus type**: spatial_auditory - **Stimulus modalities**: auditory - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Focus attention on one target direction and mentally count the number of appearances - **Stimulus presentation**: soa_ms=175, stimulus_duration_ms=40, stimulus_intensity_db=58, speaker_arrangement=6 speakers at ear height, evenly distributed in circle with 60° spacing, radius 65 cm **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
```

```text
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 6 - **Stimulus onset asynchrony**: 175.0 ms **Data Structure** - **Trials**: 48 - **Trials per class**: calibration_per_direction=8 - **Trials context**: calibration_phase **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: low-pass filter, downsampling, baselining - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 40.0 Hz - **Bandpass filter**: `{'low_cutoff_hz': 0.1, 'high_cutoff_hz': 40.0}` - **Filter type**: analog hardware filter for acquisition; low-pass for online - **Artifact methods**: variance criterion, peak-to-peak difference criterion - **Re-reference**: nose - **Downsampled to**: 100.0 Hz - **Epoch window**: [-0.15, None] - **Notes**: For online use, the signal was low-pass filtered below 40 Hz and downsampled to 100 Hz. Data were baselined using 150 ms of pre-stimulus data as reference.
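The baselining step described above (the mean of the 150 ms pre-stimulus interval is used as the reference, after downsampling to 100 Hz) can be sketched with NumPy; the epoch array below is synthetic and purely illustrative, not taken from the dataset:

```python
import numpy as np

# Synthetic epochs of shape (n_epochs, n_channels, n_times) at 100 Hz,
# spanning -0.15 s to 0.8 s around stimulus onset (shapes/values made up).
sfreq = 100.0
times = np.arange(-0.15, 0.8, 1.0 / sfreq)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((4, 8, times.size))

# Subtract the mean of the 150 ms pre-stimulus interval per epoch/channel.
pre = times < 0
baseline = epochs[:, :, pre].mean(axis=-1, keepdims=True)
epochs_bl = epochs - baseline

# After correction the pre-stimulus mean is ~0 for every epoch and channel.
print(np.abs(epochs_bl[:, :, pre].mean(axis=-1)).max())
```

In MNE-Python, the same subtraction is what `baseline=(None, 0)` performs when constructing `Epochs`.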
**Signal Processing** - **Classifiers**: LDA, linear binary classifier - **Feature extraction**: spatio-temporal features, r2 coefficient, interval averaging - **Spatial filters**: shrinkage regularization (Ledoit-Wolf) **Cross-Validation** - **Method**: online - **Evaluation type**: online **Performance (Original Study)** - **Accuracy**: 77.4% - **ITR**: 2.84 bits/min - **Chars/min (session 1)**: 0.59 - **Chars/min (session 2, max)**: 1.41 - **Chars/min (session 2, avg)**: 0.94 - **ITR (session 2, avg)**: 5.26 bits/min - **ITR (session 2, max)**: 7.55 bits/min - **Success rate (session 1)**: 76.0 % **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Auditory - **Type**: ERP, P300 **Documentation** - **Description**: Auditory BCI speller using spatial cues (AMUSE paradigm) allowing a purely auditory communication interface - **DOI**: 10.1016/j.neulet.2009.06.045 - **Associated paper DOI**: 10.3389/fnins.2011.00112 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Martijn Schreuder, Thomas Rost, Michael Tangermann - **Senior author**: Michael Tangermann - **Contact**: [schreuder@tu-berlin.de](mailto:schreuder@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Laboratory - **Address**: Machine Learning Laboratory, Berlin Institute of Technology, FR6-9, Franklinstraße 28/29, 10587 Berlin, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2011 - **Funding**: European ICT Programme Project FP7-224631; European ICT Programme Project FP7-216886; Deutsche Forschungsgemeinschaft (DFG MU 987/3-2); Bundesministerium für Bildung und Forschung (BMBF FKZ 01IB001A, 01GQ0850); FP7-ICT PASCAL2 Network of Excellence ICT-216886 - **Ethics approval**: Ethics Committee of the Charité University Hospital - **Acknowledgements**: Thomas Denck, David List and Larissa Queda for help with experiments.
Klaus-Robert Müller and Benjamin Blankertz for fruitful discussions. - **Keywords**: brain-computer interface, directional hearing, auditory event-related potentials, P300, N200, dynamic subtrials **External Links** - **Source**: [http://www.frontiersin.org/neuroprosthetics/10.3389/fnins.2011.00112/abstract](http://www.frontiersin.org/neuroprosthetics/10.3389/fnins.2011.00112/abstract) **Abstract** This online study introduces an auditory spelling interface that eliminates the necessity for visual representation. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multi-class Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 char/min (7.55 bits/min) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). **Methodology** Participants surrounded by six speakers at ear height in circle (60° spacing, 65 cm radius). Each direction associated with unique combination of tone (base frequency + harmonics) and band-pass filtered noise. Two-step hex-o-spell interface for character selection. Session 1: calibration (48 trials, 8 per direction, 15 iterations each) followed by online spelling with 15 fixed iterations. Session 2: calibration followed by online spelling with dynamic stopping method (4-15 iterations). Spatio-temporal feature extraction using r2 coefficient and interval selection (2-4 intervals for early and late components, 112-224 features total). Linear binary classifier with shrinkage regularization (Ledoit-Wolf). Decision making based on median classifier scores across iterations. **References** Schreuder, M., Rost, T., & Tangermann, M. (2011). 
Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience, 5, 112. [https://doi.org/10.3389/fnins.2011.00112](https://doi.org/10.3389/fnins.2011.00112)

> **Note:** `BNCI2015_003` was previously named `BNCI2015003`; the old name will be removed in version 1.1. *(Added in version 0.4.0.)*

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000189` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-003 P300 dataset | | Author (year) | `Schreuder2015_P300` | | Canonical | `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE` | | Importable as | `NM000189`, `Schreuder2015_P300`, `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE` | | Year | 2011 | | Authors | Martijn Schreuder, Thomas Rost, Michael Tangermann | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000189) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) | [Source URL](https://nemar.org/dataexplorer/detail/nm000189) | ## Technical Details - Subjects: 10 - Recordings: 20 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 256.0 - Duration (hours): 0.9342003038194444 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 21.8 MB - File count: 20 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000189](https://openneuro.org/datasets/nm000189) - NeMAR: [nm000189](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) ## API Reference Use the `NM000189` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-003 P300 dataset * **Study:** `nm000189` (NeMAR) * **Author (year):** `Schreuder2015_P300` * **Canonical:** `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE` Also importable as: `NM000189`, `Schreuder2015_P300`, `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
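For intuition, the MongoDB-style filter semantics described in the notes above can be illustrated with a tiny matcher; this is a sketch over made-up record dicts, not EEGDash's actual query engine:

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal illustration of MongoDB-style equality and $in filters."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Only the $in operator is sketched here.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical metadata records, for illustration only.
records = [
    {"dataset": "nm000189", "subject": "01"},
    {"dataset": "nm000189", "subject": "02"},
    {"dataset": "nm000189", "subject": "03"},
]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01', '02']
```

The real `query` parameter accepts only fields listed in `ALLOWED_QUERY_FIELDS`, as noted above.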
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000189](https://openneuro.org/datasets/nm000189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000189](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) ### Examples ```pycon >>> from eegdash.dataset import NM000189 >>> dataset = NM000189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000189) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000190: eeg dataset, 10 subjects *BNCI 2015-012 PASS2D P300 dataset* Access recordings and metadata through EEGDash. **Citation:** Johannes Höhne, Martijn Schreuder, Benjamin Blankertz, Michael Tangermann (2011). *BNCI 2015-012 PASS2D P300 dataset*. 
Modality: eeg Subjects: 10 Recordings: 20 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000190 dataset = NM000190(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000190(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000190( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000190, title = {BNCI 2015-012 PASS2D P300 dataset}, author = {Johannes Höhne and Martijn Schreuder and Benjamin Blankertz and Michael Tangermann}, } ``` ## About This Dataset **BNCI 2015-012 PASS2D P300 dataset** BNCI 2015-012 PASS2D P300 dataset. **Dataset Overview** - **Code**: BNCI2015-012 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2011.00099 ### View full README **BNCI 2015-012 PASS2D P300 dataset** BNCI 2015-012 PASS2D P300 dataset. 
**Dataset Overview** - **Code**: BNCI2015-012 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2011.00099 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 0.8] s - **Runs per session**: 2 - **Session IDs**: session_1 - **File format**: gdf - **Data preprocessed**: True - **Contributing labs**: Berlin Institute of Technology, Fraunhofer FIRST **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 63 - **Channel types**: eeg=63 - **Channel names**: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F10, F2, F3, F4, F5, F6, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: 10-20 - **Hardware**: Brain Products - **Software**: Matlab - **Reference**: nose - **Sensor type**: wet Ag/AgCl electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 0.1-250 Hz analog bandpass, then 40 Hz lowpass - **Cap manufacturer**: EasyCap GmbH - **Cap model**: Fast’n Easy Cap - **Electrode type**: wet Ag/AgCl electrodes - **Electrode material**: Ag/AgCl - **Auxiliary channels**: EOG (1 ch) **Participants** - **Number of subjects**: 10 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=25.1, min=21, max=34 - **Gender distribution**: male=9, female=3 - **BCI experience**: mostly naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: auditory ERP speller - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Tasks**: text spelling, counting task - **Study design**: Nine-class auditory ERP paradigm with predictive text entry system (PASS2D). Users focus attention on two-dimensional auditory stimuli varying in pitch (high/medium/low) and direction (left/middle/right) presented via headphones. 
- **Study domain**: communication - **Feedback type**: visual - **Stimulus type**: auditory tones - **Stimulus modalities**: auditory, visual - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Focus on target stimuli while ignoring all non-target stimuli. Minimize eye movements and muscle artifacts. Count targets during calibration. Spell sentences during online phase. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
```

```text
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus frequencies**: [708.0, 524.0, 380.0] Hz - **Number of targets**: 9 - **Number of repetitions**: 15 - **Inter-stimulus interval**: 125.0 ms - **Stimulus onset asynchrony**: 225.0 ms **Data Structure** - **Trials**: 27 - **Trials context**: total across all calibration runs (3 runs × 9 trials per run) **Preprocessing** - **Data state**: filtered and downsampled - **Preprocessing applied**: True - **Steps**: analog bandpass filter, lowpass filter, downsampling, artifact rejection - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 40.0 Hz - **Bandpass filter**: `{'low_cutoff_hz': 0.1, 'high_cutoff_hz': 250.0}` - **Filter type**: analog bandpass then digital lowpass - **Artifact methods**: threshold rejection - **Re-reference**: nose - **Downsampled to**: 100.0 Hz - **Epoch window**: [-0.15, 0.8] - **Notes**: Epochs with a peak-to-peak voltage difference exceeding 100 μV in any channel were rejected during calibration. No artifact correction was applied in online runs.
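The calibration-phase rejection criterion described above (drop any epoch whose peak-to-peak amplitude exceeds 100 μV in any channel) can be sketched in NumPy; the data below are synthetic and only illustrate the thresholding logic:

```python
import numpy as np

# Synthetic epochs in volts, shape (n_epochs, n_channels, n_times),
# with ~5 uV background noise (values made up for illustration).
rng = np.random.default_rng(1)
epochs = rng.normal(scale=5e-6, size=(5, 3, 80))
epochs[2, 0, 10] = 200e-6  # inject a large artifact into epoch 2

# Peak-to-peak amplitude per epoch and channel.
ptp = epochs.max(axis=-1) - epochs.min(axis=-1)

# Keep only epochs below 100 uV peak-to-peak in every channel.
keep = (ptp <= 100e-6).all(axis=1)
clean = epochs[keep]
print(keep.sum(), "of", len(epochs), "epochs kept")
```

The same criterion is available in MNE-Python via the `reject` parameter of `Epochs` (e.g. `reject=dict(eeg=100e-6)`).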
**Signal Processing** - **Classifiers**: FDA, Fisher discriminant analysis - **Feature extraction**: mean amplitude in discriminative intervals - **Spatial filters**: shrinkage regularization **Cross-Validation** - **Method**: cross-validation - **Evaluation type**: within_session **Performance (Original Study)** - **Accuracy**: 72.5% - **ITR**: 3.4 bits/min - **Chars/min (spelling speed)**: 0.8 **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Auditory - **Type**: ERP, P300 **Documentation** - **Description**: A novel 9-class auditory ERP paradigm driving a predictive text entry system - **DOI**: 10.3389/fnins.2011.00099 - **Associated paper DOI**: 10.3389/fnins.2011.00112 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Johannes Höhne, Martijn Schreuder, Benjamin Blankertz, Michael Tangermann - **Senior author**: Michael Tangermann - **Contact**: [j.hoehne@tu-berlin.de](mailto:j.hoehne@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Laboratory - **Address**: Franklinstr. 28/19, 10587 Berlin, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2011 - **Keywords**: brain–computer interface, BCI, auditory ERP, P300, N200, spatial auditory stimuli, T9, user-centered design **Abstract** Brain–computer interfaces (BCIs) based on event-related potentials (ERPs) strive for offering communication pathways which are independent of muscle activity. While most visual ERP-based BCI paradigms require good control of the user’s gaze direction, auditory BCI paradigms overcome this restriction. The present work proposes a novel approach using auditory evoked potentials for the example of a multiclass text spelling application.
To control the ERP speller, BCI users focus their attention on two-dimensional auditory stimuli that vary in both pitch (high/medium/low) and direction (left/middle/right) and that are presented via headphones. The resulting nine different control signals are exploited to drive a predictive text entry system. It enables the user to spell a letter by a single nine-class decision plus two additional decisions to confirm a spelled word. This paradigm – called PASS2D – was investigated in an online study with 12 healthy participants. Users spelled with more than 0.8 characters per minute on average (3.4 bits/min), which makes PASS2D a competitive method. It could enrich the toolbox of existing ERP paradigms for BCI end users like people with amyotrophic lateral sclerosis disease in a late stage. **Methodology** Participants performed a single session lasting 3-4 hours, consisting of a calibration phase and an online spelling task. Calibration: 3 runs (plus 1 practice run), each with 9 trials covering all 9 stimuli as targets. Each trial had 13-14 pseudo-random sequences of all 9 auditory stimuli (108 subtrials total, 12 target + 96 non-target). Online spelling: 2 runs spelling German sentences using a T9-style predictive text system with 9-class decisions. Each trial consisted of 135 subtrials (15 iterations of 9 stimuli). Binary classification using linear FDA with shrinkage regularization on 2-4 amplitude values per channel from discriminative intervals (N200 at 230-300 ms and P300 at 350+ ms). Multiclass decision based on a one-sided t-test with unequal variances across 15 classifier outputs per key. **References** Schreuder, M., Rost, T., & Tangermann, M. (2011). Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience, 5, 112. [https://doi.org/10.3389/fnins.2011.00112](https://doi.org/10.3389/fnins.2011.00112)

> **Note:** *(Added in version 1.2.0.)*

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000190` | |----------------|----------------| | Title | BNCI 2015-012 PASS2D P300 dataset | | Author (year) | `Hohne2015` | | Canonical | `BNCI2015` | | Importable as | `NM000190`, `Hohne2015`, `BNCI2015` | | Year | 2011 | | Authors | Johannes Höhne, Martijn Schreuder, Benjamin Blankertz, Michael Tangermann | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000190) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) | [Source URL](https://nemar.org/dataexplorer/detail/nm000190) | ## Technical Details - Subjects: 10 - Recordings: 20 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 250.0 - Duration (hours): 13.575294444444443 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 2.2 GB - File count: 20 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar
- OpenNeuro: [nm000190](https://openneuro.org/datasets/nm000190) - NeMAR: [nm000190](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) ## API Reference Use the `NM000190` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-012 PASS2D P300 dataset * **Study:** `nm000190` (NeMAR) * **Author (year):** `Hohne2015` * **Canonical:** — Also importable as: `NM000190`, `Hohne2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
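The parameter description above states that `query` is ANDed with the fixed dataset filter and must not contain the key `dataset`. That merge rule can be pictured with a minimal sketch; this is an illustration of the documented constraint, not the library's actual implementation:

```python
def merge_query(dataset_id, user_query=None):
    """Combine the fixed dataset filter with a user query (illustrative)."""
    user_query = user_query or {}
    if "dataset" in user_query:
        # Mirrors the documented restriction on the `query` parameter.
        raise ValueError("query must not contain the key 'dataset'")
    # The dataset filter is ANDed with (here: merged over) the user's filters.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("nm000190", {"subject": {"$in": ["01", "02"]}})
print(merged)  # {'dataset': 'nm000190', 'subject': {'$in': ['01', '02']}}
```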
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000190](https://openneuro.org/datasets/nm000190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000190](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) ### Examples ```pycon >>> from eegdash.dataset import NM000190 >>> dataset = NM000190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000190) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000191: eeg dataset, 10 subjects *BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects)*. 
Modality: eeg Subjects: 10 Recordings: 270 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000191 dataset = NM000191(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000191(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000191( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000191, title = {BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects)** BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-F - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects)** BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects). 
**Dataset Overview** - **Code**: Mainsah2025-F - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 10 - **Sessions per subject**: 3 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 10 - **Health status**: patients - **Clinical population**: ALS **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target

NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000191` |
|----------------|------------------|
| Title | BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects) |
| Author (year) | `Mainsah2025_BigP3BCI_F` |
| Canonical | `BigP3BCI_StudyF`, `BigP3BCI_F` |
| Importable as | `NM000191`, `Mainsah2025_BigP3BCI_F`, `BigP3BCI_StudyF`, `BigP3BCI_F` |
| Year | 2019 |
| Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000191) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000191) |

## Technical Details - Subjects: 10 - Recordings: 270 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 12.81 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 551.9 MB - File count: 270 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000191](https://openneuro.org/datasets/nm000191) - NeMAR: [nm000191](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) ## API Reference Use the `NM000191` class to access this dataset programmatically.
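The acquisition metadata above lists a [0, 1.0] s trial interval sampled at 256 Hz, which pins down the epoch length to expect after windowing. A minimal sanity-check sketch; the inclusive-endpoint convention mirrors MNE's `Epochs` and is an assumption about your windowing tool:

```python
# Expected epoch length for NM000191: trial interval [0, 1.0] s at 256 Hz.
# Assumes MNE-style inclusive endpoints (samples at both tmin and tmax kept).
def n_epoch_samples(tmin, tmax, sfreq):
    """Number of samples in an epoch spanning [tmin, tmax] seconds at sfreq Hz."""
    return int(round((tmax - tmin) * sfreq)) + 1

print(n_epoch_samples(0.0, 1.0, 256.0))  # 257
```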
### *class* eegdash.dataset.NM000191(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects) * **Study:** `nm000191` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_F` * **Canonical:** `BigP3BCI_StudyF`, `BigP3BCI_F` Also importable as: `NM000191`, `Mainsah2025_BigP3BCI_F`, `BigP3BCI_StudyF`, `BigP3BCI_F`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 10; recordings: 270; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
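The notes above state that a user-supplied `query` is AND-combined with the fixed dataset filter and must not contain the key `dataset`. A plain-Python sketch of that merge rule; `merge_query` is a hypothetical helper for illustration, not part of the eegdash API:

```python
# Hypothetical sketch of the query-merge rule described in the Notes.
# Not the actual eegdash implementation.
def merge_query(dataset_id, user_query=None):
    if user_query and "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}  # dataset filter is always applied
    merged.update(user_query or {})   # user filters are AND-ed on top
    return merged

q = merge_query("nm000191", {"subject": {"$in": ["01", "02"]}})
print(q)
```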
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000191](https://openneuro.org/datasets/nm000191) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000191](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) ### Examples ```pycon >>> from eegdash.dataset import NM000191 >>> dataset = NM000191(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000191) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000192: eeg dataset, 11 subjects *BNCI 2015-006 Music BCI dataset* Access recordings and metadata through EEGDash. **Citation:** M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz (2014). *BNCI 2015-006 Music BCI dataset*. 
Modality: eeg Subjects: 11 Recordings: 11 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000192 dataset = NM000192(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000192(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000192( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000192, title = {BNCI 2015-006 Music BCI dataset}, author = {M S Treder and H Purwins and D Miklody and I Sturm and B Blankertz}, } ``` ## About This Dataset **BNCI 2015-006 Music BCI dataset** BNCI 2015-006 Music BCI dataset. **Dataset Overview** - **Code**: BNCI2015-006 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/11/2/026009 ### View full README **BNCI 2015-006 Music BCI dataset** BNCI 2015-006 Music BCI dataset. 
**Dataset Overview** - **Code**: BNCI2015-006 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/11/2/026009 - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 1.0] s - **File format**: gdf - **Data preprocessed**: True - **Contributing labs**: Neurotechnology Group TU Berlin, Bernstein Focus Neurotechnology, Aalborg University Copenhagen, Berlin School of Mind and Brain **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOGvu, F1, F10, F2, F3, F4, F5, F6, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: 10-10 - **Hardware**: Brain Products - **Reference**: left mastoid - **Ground**: forehead - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: {‘bandpass’: [0.016, 250]} - **Impedance threshold**: 20.0 kOhm - **Cap manufacturer**: Brain Products - **Cap model**: actiCAP - **Electrode type**: active **Participants** - **Number of subjects**: 11 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=28.0, min=21, max=50 - **Gender distribution**: male=7, female=4 - **Handedness**: all but one right-handed - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: auditory oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 40.0 s - **Tasks**: selective auditory attention, deviant counting - **Study design**: Multi-streamed musical oddball paradigm with three concurrent instruments. Participants attended to one instrument and counted deviants while ignoring the other two instruments. 
Two music conditions tested: Synth-Pop (bass, drums, keyboard) and Jazz (double-bass, piano, flute). - **Study domain**: auditory BCI - **Feedback type**: none - **Stimulus type**: musical oddball - **Stimulus modalities**: visual, auditory - **Primary modality**: auditory - **Synchronicity**: asynchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Attend to cued instrument, count the number of deviants in that instrument, ignore other two instruments, maintain fixation on cross, minimize eye movements - **Stimulus presentation**: visual_cue=instrument indication, fixation_cross=continuous during music playback, music_clips=40-second polyphonic music **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target

NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 3 **Data Structure** - **Trials**: 3-7 deviants per instrument per clip - **Blocks per session**: 10 - **Trials context**: per_instrument_per_clip **Preprocessing** - **Data state**: epoched - **Preprocessing applied**: True - **Steps**: downsampling, lowpass filtering, epoching, baseline correction, artifact rejection - **Lowpass filter**: 42.0 Hz - **Filter type**: Chebyshev - **Artifact methods**: min-max criterion (100 μV threshold on Fp1 or Fp2) - **Downsampled to**: 250.0 Hz - **Epoch window**: [-0.2, 1.2] s - **Notes**: Artifact rejection applied only to training set, preserved in test set. Passbands: 42 Hz, stopbands: 49 Hz for Chebyshev filter.
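The preprocessing summary above specifies a Chebyshev lowpass with a 42 Hz passband and 49 Hz stopband on data downsampled to 250 Hz. A SciPy sketch of such a design; the 1 dB passband ripple and 40 dB stopband attenuation are assumptions, since the README does not state them:

```python
# Chebyshev type-I lowpass roughly matching the stated design:
# 42 Hz passband edge, 49 Hz stopband edge, 250 Hz sampling rate.
# Ripple (1 dB) and stopband attenuation (40 dB) are assumed values.
from scipy import signal

fs = 250.0
order, wn = signal.cheb1ord(wp=42, ws=49, gpass=1, gstop=40, fs=fs)
sos = signal.cheby1(order, rp=1, Wn=wn, btype="lowpass", output="sos", fs=fs)
# Zero-phase application to an (n_channels, n_times) array would be:
# filtered = signal.sosfiltfilt(sos, data, axis=-1)
```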
**Signal Processing** - **Classifiers**: LDA with shrinkage covariance - **Feature extraction**: spatio-temporal features, voltage averaging in time windows - **Frequency bands**: alpha=[8, 13] Hz **Cross-Validation** - **Method**: leave-one-clip-out - **Evaluation type**: cross_trial **Performance (Original Study)** - **Accuracy**: 91.0% - **Binary Classifier Accuracy Synth Pop**: 69.25 - **Binary Classifier Accuracy Jazz**: 71.47 - **Posterior Probability Accuracy Synth Pop**: 91.0 - **Posterior Probability Accuracy Jazz**: 91.5 **BCI Application** - **Applications**: communication, speller, message selection - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Auditory - **Type**: Perception, Attention **Documentation** - **Description**: Multi-streamed musical oddball paradigm for auditory BCI. Each of three concurrent instruments has its own standard and deviant patterns. Participants selectively attend to one instrument to detect deviants. - **DOI**: 10.1088/1741-2560/11/2/026009 - **Associated paper DOI**: 10.1088/1741-2560/11/2/026009 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz - **Senior author**: B Blankertz - **Contact**: [matthias.treder@tu-berlin.de](mailto:matthias.treder@tu-berlin.de) - **Institution**: Technische Universität Berlin - **Department**: Neurotechnology Group; Bernstein Focus: Neurotechnology - **Address**: Berlin, Germany - **Country**: Germany - **Repository**: GitHub - **Data URL**: [https://github.com/bbci/bbci_public/blob/master/doc/index.markdown](https://github.com/bbci/bbci_public/blob/master/doc/index.markdown) - **Publication year**: 2014 - **Funding**: German Bundesministerium für Bildung und Forschung (Grant Nos. 16SV5839 and 01GQ0850) - **Ethics approval**: Declaration of Helsinki - **Acknowledgements**: We acknowledge financial support by the German Bundesministerium für Bildung und Forschung (Grant Nos. 
16SV5839 and 01GQ0850). - **Keywords**: brain–computer interface, EEG, auditory, music, attention, oddball paradigm, P300 **Abstract** Polyphonic music (music consisting of several instruments playing in parallel) is an intuitive way of embedding multiple information streams. The different instruments in a musical piece form concurrent information streams that seamlessly integrate into a coherent and hedonistically appealing entity. Here, we explore polyphonic music as a novel stimulation approach for use in a brain–computer interface. In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Each instrument formed an oddball stream with its own specific standard stimuli (a repetitive musical pattern) and oddballs (deviating musical pattern). Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified offline with a mean accuracy of 91% across 11 participants. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain–computer interface and music research. **Methodology** Participants listened to 40-second polyphonic music clips with three concurrent instruments (Synth-Pop: bass, drums, keyboard; Jazz: double-bass, piano, flute). Each instrument had standard patterns and infrequent deviants (3-7 per clip). Participants were cued to attend to one instrument and count deviants. EEG recorded at 1000 Hz with 64 electrodes, downsampled to 250 Hz, lowpass filtered (Chebyshev, 42 Hz passband), epoched (-200 to 1200 ms), baseline corrected, and artifact rejected. Two classification approaches: (1) general binary classifier and (2) instrument-specific classifiers with posterior probabilities. 
Features: spatio-temporal (3 time intervals × 63 electrodes = 189 dimensions). LDA with shrinkage covariance. Leave-one-clip-out cross-validation. Main experiment: 10 blocks of 21 clips (7 clips per instrument as target). Total: 3 Synth-Pop mixed blocks, 3 Jazz mixed blocks, 2 Synth-Pop solo blocks, 2 Jazz solo blocks. **Notes** Added in version 1.2.0. **References** Treder, M. S., Purwins, H., Miklody, D., Sturm, I., & Blankertz, B. (2014). Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification. Journal of Neural Engineering, 11(2), 026009. [https://doi.org/10.1088/1741-2560/11/2/026009](https://doi.org/10.1088/1741-2560/11/2/026009) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information

| Dataset ID | `NM000192` |
|----------------|------------------|
| Title | BNCI 2015-006 Music BCI dataset |
| Author (year) | `Treder2015_BNCI_006_Music` |
| Canonical | `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI` |
| Importable as | `NM000192`, `Treder2015_BNCI_006_Music`, `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI` |
| Year | 2014 |
| Authors | M S Treder, H Purwins, D Miklody, I Sturm, B Blankertz |
| License | CC-BY-NC-ND-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000192) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000192) |

## Technical Details - Subjects: 11 - Recordings: 11 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 200.0 - Duration (hours): 33.95 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 4.4 GB - File count: 11 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000192](https://openneuro.org/datasets/nm000192) - NeMAR: [nm000192](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) ## API Reference Use the `NM000192` class to access this dataset programmatically.
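The feature description above (3 time intervals × 63 electrodes = 189 dimensions) amounts to averaging each electrode's voltage in a few time windows and flattening. A NumPy sketch with illustrative, equally sized windows; the original study's window boundaries are not given here:

```python
import numpy as np

def spatiotemporal_features(epochs, n_windows=3):
    """Map (n_epochs, n_channels, n_times) to (n_epochs, n_channels * n_windows)
    by averaging each channel's voltage within equally sized time windows."""
    n_epochs, n_channels, n_times = epochs.shape
    windows = np.array_split(np.arange(n_times), n_windows)
    means = np.stack([epochs[..., idx].mean(axis=-1) for idx in windows], axis=-1)
    return means.reshape(n_epochs, n_channels * n_windows)

X = spatiotemporal_features(np.random.randn(8, 63, 250))
print(X.shape)  # (8, 189)
```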
### *class* eegdash.dataset.NM000192(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-006 Music BCI dataset * **Study:** `nm000192` (NeMAR) * **Author (year):** `Treder2015_BNCI_006_Music` * **Canonical:** `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI` Also importable as: `NM000192`, `Treder2015_BNCI_006_Music`, `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000192](https://openneuro.org/datasets/nm000192) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000192](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) ### Examples ```pycon >>> from eegdash.dataset import NM000192 >>> dataset = NM000192(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000192) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000193: eeg dataset, 11 subjects *Class for Kojima2024A dataset management. P300 dataset* Access recordings and metadata through EEGDash. **Citation:** Simon Kojima, Shin’ichiro Kanoh (2024). *Class for Kojima2024A dataset management. P300 dataset*. 
Modality: eeg Subjects: 11 Recordings: 66 License: CC0-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000193 dataset = NM000193(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000193(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000193( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000193, title = {Class for Kojima2024A dataset management. P300 dataset}, author = {Simon Kojima and Shin'ichiro Kanoh}, } ``` ## About This Dataset **Class for Kojima2024A dataset management. P300 dataset** Class for Kojima2024A dataset management. P300 dataset. **Dataset Overview** - **Code**: Kojima2024A - **Paradigm**: p300 - **DOI**: 10.7910/DVN/MQOVEY ### View full README **Class for Kojima2024A dataset management. P300 dataset** Class for Kojima2024A dataset management. P300 dataset. 
**Dataset Overview** - **Code**: Kojima2024A - **Paradigm**: p300 - **DOI**: 10.7910/DVN/MQOVEY - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=0 - **Trial interval**: [-0.5, 1.2] s - **Runs per session**: 6 - **File format**: BrainVision - **Number of contributing labs**: 1 **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64, eog=2 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, hEOG, vEOG - **Montage**: standard_1020 - **Hardware**: Brain Amp DC (Brain Products GmbH, Germany) and MR plus (Brain Products GmbH, Germany) - **Reference**: right earlobe - **Ground**: left earlobe - **Sensor type**: eeg - **Line frequency**: 50.0 Hz - **Online filters**: {‘bandpass’: ‘0.1 Hz to 100 Hz’} - **Cap manufacturer**: EASYCAP GmbH - **Electrode material**: Ag-AgCl - **Auxiliary channels**: EOG (2 ch, vertical, horizontal) **Participants** - **Number of subjects**: 11 - **Health status**: healthy - **Age**: mean=22.5, min=22.0, max=23.0 - **Gender distribution**: male=10, female=1 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: auditory selective attention - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Tasks**: attend to Stream 1, attend to Stream 2, attend to Stream 3 - **Study design**: within-subject - **Study domain**: auditory BCI - **Feedback type**: none - **Stimulus type**: auditory musical tones - **Stimulus modalities**: auditory - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects were requested to attend to one of three streams and to count the number of target stimuli 
in the attended stream - **Stimulus presentation**: method=Digital signal processor (System3, Tucker-Davis Technologies, USA) and headphones (HDA200, Sennheiser), ear=right ear only, tone_generator=Software synthesizer (Piano tones Grand Piano 1 SE from SampleTank3, IK multimedia Production, Italy) **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target

NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 3 - **Stimulus onset asynchrony**: 180.0 ms **Data Structure** - **Blocks per session**: 6 - **Block duration**: 300.0 s - **Trials context**: Each task block had 3 runs (5 minutes each). Subjects counted target stimuli in Streams 1, 2, and 3 on the 1st, 2nd, and 3rd measurements respectively. Task block was repeated twice.
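The protocol above interleaves three tone streams at a 180 ms stimulus onset asynchrony within 300 s runs. A small sketch of the resulting stimulus timing; strict round-robin interleaving across the three streams is an assumption, since the README does not spell out the ordering:

```python
# Illustrative onset/stream assignment for three interleaved tone streams.
# Round-robin ordering across streams is an assumption for this sketch.
SOA = 0.180           # seconds between consecutive tones, across all streams
N_STREAMS = 3
RUN_DURATION = 300.0  # one 5-minute run

n_tones = int(RUN_DURATION / SOA)                  # tones per run
onsets = [round(i * SOA, 3) for i in range(n_tones)]
streams = [i % N_STREAMS for i in range(n_tones)]  # stream index per tone

print(n_tones)  # 1666
```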
**Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: Logistic Regression, Minimum Distance to Mean (MDM) - **Feature extraction**: xDAWN spatial filtering, Riemannian geometry covariance matrices - **Frequency bands**: analyzed=[1.0, 40.0] Hz - **Spatial filters**: xDAWN **Cross-Validation** - **Method**: 10-fold cross validation - **Folds**: 10 - **Evaluation type**: within-subject **Performance (Original Study)** - **Description**: Classification accuracy over 80% for 5 subjects, over 75% for 9 subjects - **Metric**: MCC (Matthews correlation coefficient) **BCI Application** - **Applications**: communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: auditory - **Type**: EEG, P300, BCI **Documentation** - **Description**: A 3-class auditory BCI using three tone sequences based on auditory stream segregation. Musical tones were presented to subjects’ right ear, and subjects attended to one of three streams while counting target stimuli. P300 activity was elicited by target stimuli in the attended stream. 
- **DOI**: 10.1371/journal.pone.0303565 - **Associated paper DOI**: 10.1371/journal.pone.0303565 - **License**: CC0-1.0 - **Investigators**: Simon Kojima, Shin’ichiro Kanoh - **Senior author**: Shin’ichiro Kanoh - **Contact**: [nb21106@shibaura-it.ac.jp](mailto:nb21106@shibaura-it.ac.jp) - **Institution**: Shibaura Institute of Technology - **Department**: Graduate School of Engineering and Science; College of Engineering - **Address**: Koto-ku, Tokyo, Japan - **Country**: JP - **Repository**: Harvard dataverse - **Data URL**: [https://doi.org/10.7910/DVN/MQOVEY](https://doi.org/10.7910/DVN/MQOVEY) - **Publication year**: 2024 - **Funding**: JSPS KAKENHI Grant Number JP23K11811 - **Ethics approval**: Review Board on Bioengineering Research Ethics of Shibaura Institute of Technology; Declaration of Helsinki - **Keywords**: auditory BCI, P300, auditory stream segregation, selective attention, oddball paradigm, Riemannian geometry **External Links** - **Source**: [https://doi.org/10.7910/DVN/MQOVEY](https://doi.org/10.7910/DVN/MQOVEY) - **Paper**: [https://doi.org/10.1371/journal.pone.0303565](https://doi.org/10.1371/journal.pone.0303565) **Abstract** In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation in which alternately presented tones are perceived as sequences of various different tones (streams). A 3-class BCI using three tone sequences, which were perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user’s right ear. Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants with a sampling frequency of 1000 Hz. 
The measured EEG data were classified based on Riemannian geometry to detect the object of the subject’s selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross validation test, a classification accuracy over 80% for five subjects and over 75% for nine subjects was achieved. For subjects whose accuracy was lower than 75%, either the P300 was also elicited for nonattended streams or the amplitude of P300 was small. It was concluded that the number of selected BCI systems based on auditory stream segregation can be increased to three classes, and these classes can be detected by a single ear without the aid of any visual modality. **Methodology** Musical tones generated by a digital auditory workstation were used as auditory stimuli. Piano tones from a MIDI sound source were presented using a digital signal processor and headphones to participants’ right ear only. Three tone streams were created using auditory stream segregation, each consisting of standard (90% probability) and deviant (10% probability) tones. The duration of each tone was 150 ms with stimulus onset asynchrony of 180 ms. The 64-channel EEG and 2-channel EOG signals were recorded at 1000 Hz. Each experiment consisted of two task blocks with three runs each (5 minutes per run). Subjects counted target stimuli in different streams across runs. Data analysis involved bandpass filtering (0.1-40 Hz for ERP analysis, 1-40 Hz for classification), baseline correction, artifact rejection (±100μV for EEG, ±500μV for EOG), xDAWN spatial filtering, and classification using Riemannian geometry with covariance matrices and logistic regression. Performance was evaluated using 10-fold cross validation with accuracy and Matthews correlation coefficient (MCC) metrics. **References** Kojima, S. (2024). 
Replication Data for: An auditory brain-computer interface based on selective attention to multiple tone streams. Harvard Dataverse, V1. DOI: [https://doi.org/10.7910/DVN/MQOVEY](https://doi.org/10.7910/DVN/MQOVEY) Kojima, S. & Kanoh, S. (2024). An auditory brain-computer interface based on selective attention to multiple tone streams. PLoS ONE 19(5): e0303565. DOI: [https://doi.org/10.1371/journal.pone.0303565](https://doi.org/10.1371/journal.pone.0303565) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000193` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Class for Kojima2024A dataset management. 
P300 dataset | | Author (year) | `Kojima2024A_P300` | | Canonical | — | | Importable as | `NM000193`, `Kojima2024A_P300` | | Year | 2024 | | Authors | Simon Kojima, Shin’ichiro Kanoh | | License | CC0-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000193) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) | [Source URL](https://nemar.org/dataexplorer/detail/nm000193) | ## Technical Details - Subjects: 11 - Recordings: 66 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 5.797537222222223 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 3.7 GB - File count: 66 - Format: BIDS - License: CC0-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000193](https://openneuro.org/datasets/nm000193) - NeMAR: [nm000193](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) ## API Reference Use the `NM000193` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000193(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Class for Kojima2024A dataset management. P300 dataset * **Study:** `nm000193` (NeMAR) * **Author (year):** `Kojima2024A_P300` * **Canonical:** — Also importable as: `NM000193`, `Kojima2024A_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000193](https://openneuro.org/datasets/nm000193) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000193](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) ### Examples ```pycon >>> from eegdash.dataset import NM000193 >>> dataset = NM000193(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000193) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000194: eeg dataset, 12 subjects *BNCI 2015-010 RSVP P300 dataset* Access recordings and metadata through EEGDash. **Citation:** Laura Acqualagna, Benjamin Blankertz (2013). 
*BNCI 2015-010 RSVP P300 dataset*. Modality: eeg Subjects: 12 Recordings: 24 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000194 dataset = NM000194(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000194(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000194( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000194, title = {BNCI 2015-010 RSVP P300 dataset}, author = {Laura Acqualagna and Benjamin Blankertz}, } ``` ## About This Dataset **BNCI 2015-010 RSVP P300 dataset** BNCI 2015-010 RSVP P300 dataset. **Dataset Overview** - **Code**: BNCI2015-010 - **Paradigm**: p300 - **DOI**: 10.1016/j.clinph.2012.12.050 ### View full README **BNCI 2015-010 RSVP P300 dataset** BNCI 2015-010 RSVP P300 dataset. 
**Dataset Overview** - **Code**: BNCI2015-010 - **Paradigm**: p300 - **DOI**: 10.1016/j.clinph.2012.12.050 - **Subjects**: 12 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 0.8] s - **Runs per session**: 2 - **Session IDs**: calibration, copy-spelling, free-spelling - **File format**: EEG - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 63 - **Channel types**: eeg=63 - **Channel names**: Fp1, Fp2, AF3, AF4, Fz, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CPz, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, POz, PO3, PO4, PO7, PO8, PO9, PO10, Oz, O1, O2 - **Montage**: 10-20 - **Hardware**: BrainAmp amplifiers - **Software**: Python with Pyff framework - **Reference**: left mastoid - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: lowpass Chebyshev filter up to 40 Hz - **Impedance threshold**: 10.0 kOhm - **Cap manufacturer**: Brain Products - **Cap model**: actiCap - **Electrode type**: active electrode **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Clinical population**: Healthy - **Age**: mean=29.17, std=8.4, min=24, max=55 - **Gender distribution**: male=6, female=6 - **Handedness**: all right-handed - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: spelling - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 46.5 s - **Study design**: RSVP (Rapid Serial Visual Presentation) BCI speller where 30 symbols are presented one-by-one in random order at the center of the screen. Three conditions tested: NoColor 116ms SOA, Color 116ms SOA, and Color 83ms SOA. Colors used to facilitate discrimination.
- **Feedback type**: visual - **Stimulus type**: RSVP letters - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Participants fixate center of screen, concentrate on target letter, silently count its occurrences. Avoid blinking during visual presentation. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 30 - **Number of repetitions**: 10 - **Stimulus onset asynchrony**: 116.0 ms **Data Structure** - **Trials**: 10 sequences of 30 symbols - **Blocks per session**: 3 - **Trials context**: per sequence **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: lowpass filter, downsampling, baseline correction, artifact rejection - **Lowpass filter**: 40.0 Hz - **Filter type**: Chebyshev - **Filter order**: passband up to 40 Hz, stopband starting at 49 Hz - **Artifact methods**: min-max criterion for eye movement rejection (75 µV on F9, Fz, F10, AF3, AF4), broadband power rejection (5-40 Hz) - **Re-reference**: linked mastoids (offline) - **Downsampled to**: 200.0 Hz - **Epoch window**: [-0.1, 1.2] - **Notes**: Baseline correction on pre-stimulus interval (116ms for 116ms SOA, 83/2ms for 83ms SOA). Non-target epochs excluded if 3 preceding or following symbols were targets. 
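The filtering, downsampling, and baseline-correction chain described above can be sketched with SciPy. This is an illustrative approximation using the stated cutoffs (40 Hz passband, 49 Hz stopband, 1000 Hz → 200 Hz); the filter order and ripple values here are assumptions, not the authors' exact parameters:

```python
import numpy as np
from scipy import signal

def preprocess(eeg, fs=1000, fs_new=200, stopband_hz=49.0):
    """Lowpass (Chebyshev), downsample, and baseline-correct one epoch.

    Mirrors the steps listed in the dataset description; order N and
    stopband ripple rs are illustrative assumptions.
    """
    # Chebyshev Type II lowpass: passband up to ~40 Hz, stopband from 49 Hz
    sos = signal.cheby2(N=8, rs=40, Wn=stopband_hz, btype="lowpass",
                        fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, eeg, axis=-1)
    # Downsample 1000 Hz -> 200 Hz; safe to slice because the signal is
    # already lowpassed well below the new Nyquist frequency (100 Hz)
    down = filtered[..., :: fs // fs_new]
    # Baseline correction: subtract the mean of the pre-stimulus interval
    # (assume the first 0.1 s of the epoch window is pre-stimulus)
    n_base = int(0.1 * fs_new)
    return down - down[..., :n_base].mean(axis=-1, keepdims=True)

# Example on synthetic data: 63 channels, 1.3 s epoch ([-0.1, 1.2] s) at 1000 Hz
rng = np.random.default_rng(0)
epoch = rng.standard_normal((63, 1300))
out = preprocess(epoch)
print(out.shape)  # (63, 260)
```

The epoch length (1300 samples) matches the [-0.1, 1.2] s window at 1000 Hz, and decimation by 5 yields the 200 Hz rate used for ERP analysis.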
**Signal Processing** - **Classifiers**: LDA with shrinkage - **Feature extraction**: spatio-temporal features, averaged voltages within time windows - **Frequency bands**: alpha=[7, 13] Hz - **Spatial filters**: 55 channels used for classification (all except Fp1,2, AF3,4, F9,10, FT7,8) **Cross-Validation** - **Method**: calibration/test split - **Evaluation type**: within_session **Performance (Original Study)** - **Accuracy**: 94.8% - **Mean spelling rate**: 1.43 symbols/min - **Trial duration (116 ms SOA)**: 46.5 s - **Trial duration (83 ms SOA)**: 36.6 s **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: ERP **Documentation** - **DOI**: 10.1016/j.clinph.2012.12.050 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Laura Acqualagna, Benjamin Blankertz - **Senior author**: Benjamin Blankertz - **Contact**: [laura.acqualagna@tu-berlin.de](mailto:laura.acqualagna@tu-berlin.de); [benjamin.blankertz@tu-berlin.de](mailto:benjamin.blankertz@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Laboratory; Neurotechnology Group - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2013 - **Funding**: BMBF grant; DFG grant - **Ethics approval**: Study performed in accordance with the Declaration of Helsinki - **Keywords**: Brain Computer Interfaces, RSVP, ERPs, Speller, P300, N2, gaze-independent **Abstract** A Brain Computer Interface (BCI) speller using rapid serial visual presentation (RSVP) paradigm for gaze-independent mental typewriting. Twelve healthy participants successfully operated the RSVP speller with mean online spelling rate of 1.43 symb/min and mean symbol selection accuracy of 94.8%.
The RSVP speller does not require gaze shifts and can be operated by non-spatial visual attention, making it suitable for patients with impaired oculo-motor control. **Methodology** Three experimental conditions tested (NoColor 116ms, Color 116ms, Color 83ms SOA). Each condition included calibration, copy-spelling, and free-spelling phases. Vocabulary of 30 symbols presented one-by-one at screen center in pseudo-random order. EEG recorded at 1000 Hz with 63 channels, downsampled to 200 Hz for ERP analysis. Classification using LDA with shrinkage on spatio-temporal features from 5 individually selected time windows. Symbol selection based on averaged classifier output across 10 sequences. **References** Acqualagna, L., & Blankertz, B. (2013). Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP). Clinical Neurophysiology, 124(5), 901-908. [https://doi.org/10.1016/j.clinph.2012.12.050](https://doi.org/10.1016/j.clinph.2012.12.050) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000194` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-010 RSVP P300 dataset | | Author (year) | `Acqualagna2015` | | Canonical | `BNCI2015` | | Importable as | `NM000194`, `Acqualagna2015`, `BNCI2015` | | Year | 2013 | | Authors | Laura Acqualagna, Benjamin Blankertz | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000194) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) | [Source URL](https://nemar.org/dataexplorer/detail/nm000194) | ## Technical Details - Subjects: 12 - Recordings: 24 - Tasks: 1 - Channels: 63 (in 22 recordings), 61 (in 2 recordings) - Sampling rate (Hz): 200.0 - Duration (hours): 16.16 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 2.1 GB - File count: 24 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000194](https://openneuro.org/datasets/nm000194) - NeMAR: [nm000194](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) ## API Reference Use the `NM000194` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-010 RSVP P300 dataset * **Study:** `nm000194` (NeMAR) * **Author (year):** `Acqualagna2015` * **Canonical:** `BNCI2015` Also importable as: `NM000194`, `Acqualagna2015`, `BNCI2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`.
Subjects: 12; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000194](https://openneuro.org/datasets/nm000194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000194](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) ### Examples ```pycon >>> from eegdash.dataset import NM000194 >>> dataset = NM000194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000194) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000195: eeg dataset, 12 subjects *Mixture of LLP and EM for a visual matrix speller (ERP) dataset from* Access recordings and metadata through EEGDash. **Citation:** David Hübner, Thibault Verhoeven, Klaus-Robert Müller, Pieter-Jan Kindermans, Michael Tangermann (2018). *Mixture of LLP and EM for a visual matrix speller (ERP) dataset from*. Modality: eeg Subjects: 12 Recordings: 360 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000195 dataset = NM000195(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000195(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000195( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000195, title = {Mixture of LLP and EM for a visual matrix speller (ERP) dataset from}, author = {David Hübner and Thibault Verhoeven and Klaus-Robert Müller and Pieter-Jan Kindermans and Michael Tangermann}, } ``` ## About This Dataset **Mixture of LLP and EM for a visual matrix speller (ERP) dataset from** Mixture of LLP and EM for a visual matrix speller (ERP) dataset from Hübner et al. 2018 [1]. **Dataset Overview** - **Code**: Huebner2018 - **Paradigm**: p300 - **DOI**: 10.1109/MCI.2018.2807039 ### View full README **Mixture of LLP and EM for a visual matrix speller (ERP) dataset from** Mixture of LLP and EM for a visual matrix speller (ERP) dataset from Hübner et al. 2018 [1]. **Dataset Overview** - **Code**: Huebner2018 - **Paradigm**: p300 - **DOI**: 10.1109/MCI.2018.2807039 - **Subjects**: 12 - **Sessions per subject**: 3 - **Events**: Target=10002, NonTarget=10001 - **Trial interval**: [-0.2, 0.7] s - **Session IDs**: 0, 1, 2 - **File format**: BrainVision **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31, misc=6 - **Channel names**: C3, C4, CP1, CP2, CP5, CP6, Cz, EOGvu, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, P10, P3, P4, P7, P8, P9, Pz, T7, T8, x_EMGl, x_GSR, x_Optic, x_Pulse, x_Respi - **Montage**: extended 10-20 - **Hardware**: BrainAmp DC - **Software**: BBCI toolbox - **Reference**: nose - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Impedance threshold**: 20.0 kOhm - **Cap manufacturer**: EasyCap **Participants** - **Number of subjects**: 12 - **Health status**: healthy - **Age**: mean=26, min=19, max=31 - **Gender distribution**: female=8, male=4 - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 17.0 s - **Tasks**: copy-spelling - **Study design**: Visual ERP copy-spelling
task using a modified 6x6 grid extended with 10 # symbols as visual blanks, using flexible highlighting scheme with two interleaved sequences to enable unsupervised learning methods (EM, LLP, MIX) - **Feedback type**: visual - **Stimulus type**: modified matrix speller with flexible highlighting - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online - **Instructions**: copy-spelling task - spell German sentence ‘Franzy jagt im Taxi quer durch das’ - **Stimulus presentation**: soa_ms=250, stimulus_duration_ms=100, isi_ms=150, highlighting_type=combination of brightness enhancement, rotation, enlargement and trichromatic grid overlay, distance_to_screen_cm=80, screen_size_inches=24 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 46 - **Inter-stimulus interval**: 150.0 ms - **Stimulus onset asynchrony**: 250.0 ms **Data Structure** - **Trials**: 35 - **Blocks per session**: 3 - **Trials context**: 35 characters per block (one trial = spelling one character), 3 blocks per session (one block per unsupervised algorithm: EM, LLP, MIX in pseudo-randomized order) **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: EM (Expectation-Maximization), LLP (Learning from Label Proportions), MIX (mixture of EM and LLP), shrinkage-regularized LDA (Ledoit-Wolf), Bayesian least square regression - **Feature extraction**: mean amplitudes in six temporal intervals per channel **Cross-Validation** - **Method**: leave-one-character-out for offline analysis; online sequential testing - **Evaluation type**: online, 
within_session, unsupervised_learning **Performance (Original Study)** - **Accuracy**: 80.0% - **MIX AUC after 7 characters**: 80.0 - **Time to 80% accuracy**: 168 s - **Epochs to 80% accuracy**: 476 - **Characters to 80% accuracy**: 7 **BCI Application** - **Applications**: speller, communication - **Environment**: controlled laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.5281/zenodo.192684 - **Associated paper DOI**: 10.1109/MCI.2018.2807039 - **License**: CC-BY-4.0 - **Investigators**: David Hübner, Thibault Verhoeven, Klaus-Robert Müller, Pieter-Jan Kindermans, Michael Tangermann - **Contact**: [p.kindermans@tu-berlin.de](mailto:p.kindermans@tu-berlin.de); [michael.tangermann@blbt.uni-freiburg.de](mailto:michael.tangermann@blbt.uni-freiburg.de) - **Institution**: University of Freiburg - **Department**: Brain State Decoding Lab - **Address**: Brain State Decoding Lab, University of Freiburg, Freiburg, GERMANY - **Country**: DE - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/record/5831879](https://zenodo.org/record/5831879) - **Publication year**: 2018 - **Funding**: BrainLinks-BrainTools Cluster of Excellence funded by the German Research Foundation (DFG), grant number EXC 1086; bwHPC initiative, grant INST 39/963-1 FUGG; European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement NO 657679; Special Research Fund of Ghent University; DFG (DFG SPP 1527, MU 987/14-1); Federal Ministry for Education and Research (BMBF No.
2017-0-00451); Brain Korea 21 Plus Program by the Institute for Information & Communications Technology Promotion (IITP) grant (1IS14013A) funded by the Korean government - **Ethics approval**: University Medical Center Freiburg ethics committee - **Keywords**: unsupervised learning, brain-computer interface, event-related potentials, P300 speller, expectation-maximization, learning from label proportions, MIX method, EEG **Abstract** One of the fundamental challenges in brain-computer interfaces (BCIs) is to tune a brain signal decoder to reliably detect a user’s intention. While information about the decoder can partially be transferred between subjects or sessions, optimal decoding performance can only be reached with novel data from the current session. Thus, it is preferable to learn from unlabeled data gained from the actual usage of the BCI application instead of conducting a calibration recording prior to BCI usage. We review such unsupervised machine learning methods for BCIs based on event-related potentials of the electroencephalogram. We present results of an online study with twelve healthy participants controlling a visual speller. Online performance is reported for three completely unsupervised learning methods: (1) learning from label proportions, (2) an expectation-maximization approach and (3) MIX, which combines the strengths of the two other methods. After a short ramp-up, we observed that the MIX method not only defeats its two unsupervised competitors but even performs on par with a state-of-the-art regularized linear discriminant analysis trained on the same number of data points and with full label access. With this online study, we deliver the best possible proof in BCI that an unsupervised decoding method can in practice render a supervised method unnecessary. This is possible despite skipping the calibration, without losing much performance and with the prospect of continuous improvement over a session. 
Thus, our findings pave the way for a transition from supervised to unsupervised learning methods in BCIs based on event-related potentials. **Methodology** Online study comparing three unsupervised learning methods (EM, LLP, MIX) for P300 speller. Twelve healthy volunteers (8 female, 4 male, mean age 26, range 19-31 years) participated in a single session each. Subjects spelled the German sentence ‘Franzy jagt im Taxi quer durch das’ (35 characters) in three blocks, each using a different unsupervised algorithm in pseudo-randomized order. Each trial (spelling one character) consisted of 68 highlighting events with 250 ms SOA and 100 ms stimulus duration (ISI=150 ms). The speller used a modified 6x6 grid with 36 normal characters extended with 10 # symbols as visual blanks (total 46 symbols). Two interleaved highlighting sequences were used: S1 highlighted only normal characters, S2 highlighted both normal characters and # symbols, creating different known target-to-non-target ratios to enable learning from label proportions. Highlighting consisted of brightness enhancement, rotation, enlargement and trichromatic grid overlay. Classifiers were randomly initialized at block start and updated after each trial. No labeled data was provided during online session. Participants sat 80 cm from a 24-inch screen. EEG was recorded from 31 passive Ag/AgCl electrodes (EasyCap) placed according to extended 10-20 system, with impedances kept below 20 kOhm. Signals were recorded and amplified by BrainAmp DC at 1 kHz sampling rate using BBCI toolbox in Matlab. Data was bandpass filtered (0.5-8 Hz, 3rd order Chebyshev Type II), downsampled to 100 Hz, epoched to [-200, 700] ms relative to stimulus onset, and baseline corrected using [-200, 0] ms interval. Features were mean amplitudes of six time intervals ([50-120], [121-200], [201-280], [281-380], [381-530], [531-700] ms post-stimulus) per channel. No artifact rejection was applied; participants were instructed to avoid artifacts. 
Performance metrics: spelling accuracy and AUC for target vs. non-target discrimination. Results showed MIX method achieved ~80% accuracy after ~7 characters (168 seconds, 476 epochs) and performed comparably to supervised regularized LDA trained on same amount of labeled data after 10+ characters. Ethics approval was obtained from University Medical Center Freiburg. Participants were compensated 8 Euros per hour for the ~3 hour session (including EEG setup). **References** Huebner, D., Verhoeven, T., Mueller, K. R., Kindermans, P. J., & Tangermann, M. (2018). Unsupervised learning for brain-computer interfaces based on event-related potentials: Review and online comparison [research frontier]. IEEE Computational Intelligence Magazine, 13(2), 66-77. [https://doi.org/10.1109/MCI.2018.2807039](https://doi.org/10.1109/MCI.2018.2807039) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000195` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mixture of LLP and EM for a visual matrix speller (ERP) dataset from | | Author (year) | `Hubner2018` | | Canonical | `Huebner2018` | | Importable as | `NM000195`, `Hubner2018`, `Huebner2018` | | Year | 2018 | | Authors | David Hübner, Thibault Verhoeven, Klaus-Robert Müller, Pieter-Jan Kindermans, Michael Tangermann | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000195) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000195) | [Source URL](https://nemar.org/dataexplorer/detail/nm000195) | ## Technical Details - Subjects: 12 - Recordings: 360 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 15.3207975 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 4.8 GB - File count: 360 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000195](https://openneuro.org/datasets/nm000195) - NeMAR: [nm000195](https://nemar.org/dataexplorer/detail?dataset_id=nm000195) ## API Reference Use the `NM000195` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mixture of LLP and EM for a visual matrix speller (ERP) dataset from * **Study:** `nm000195` (NeMAR) * **Author (year):** `Hubner2018` * **Canonical:** `Huebner2018` Also importable as: `NM000195`, `Hubner2018`, `Huebner2018`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
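The query-merging contract described in the notes above (user filters ANDed with the dataset filter, `dataset` key reserved) can be illustrated with a tiny sketch. `merge_query` is a hypothetical helper written for this page, not part of the eegdash API; the real merging happens inside `EEGDashDataset`:

```python
def merge_query(dataset_id, user_query=None):
    # Hypothetical helper mimicking the documented contract;
    # not the library's actual implementation.
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # AND-combine the dataset filter with any MongoDB-style conditions.
    return {"dataset": dataset_id, **user_query}

print(merge_query("nm000195", {"subject": {"$in": ["01", "02"]}}))
# {'dataset': 'nm000195', 'subject': {'$in': ['01', '02']}}
```

Passing a query that already contains `dataset` raises, matching the constraint documented for the `query` parameter.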
### References

OpenNeuro dataset: [https://openneuro.org/datasets/nm000195](https://openneuro.org/datasets/nm000195)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000195](https://nemar.org/dataexplorer/detail?dataset_id=nm000195)

### Examples

```pycon
>>> from eegdash.dataset import NM000195
>>> dataset = NM000195(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/nm000195)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000195)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# NM000196: eeg dataset, 12 subjects

*c-VEP dataset from Thielen et al. (2015)* Access recordings and metadata through EEGDash. **Citation:** Jordy Thielen, Philip van den Broek, Jason Farquhar, Peter Desain (2015). *c-VEP dataset from Thielen et al. (2015)*.
Modality: eeg Subjects: 12 Recordings: 36 License: CC0-1.0 Source: nemar Metadata: Complete (90%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000196

dataset = NM000196(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = NM000196(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000196(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{nm000196,
  title = {c-VEP dataset from Thielen et al. (2015)},
  author = {Jordy Thielen and Philip van den Broek and Jason Farquhar and Peter Desain},
}
```

## About This Dataset

**c-VEP dataset from Thielen et al. (2015)** c-VEP dataset from Thielen et al. (2015) **Dataset Overview** - **Code**: Thielen2015 - **Paradigm**: cvep - **DOI**: 10.34973/1ecz-1232

### View full README

**c-VEP dataset from Thielen et al. (2015)** c-VEP dataset from Thielen et al.
(2015) **Dataset Overview** - **Code**: Thielen2015 - **Paradigm**: cvep - **DOI**: 10.34973/1ecz-1232 - **Subjects**: 12 - **Sessions per subject**: 1 - **Events**: 1.0=101, 0.0=100 - **Trial interval**: (0, 0.3) s - **Runs per session**: 3 - **File format**: mat - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 2048.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: standard_1020 - **Hardware**: Biosemi ActiveTwo - **Reference**: CMS/DRL - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Electrode type**: active **Participants** - **Number of subjects**: 12 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=24.0, std=2.3 - **Gender distribution**: male=4, female=8 - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: cvep - **Number of classes**: 2 - **Class labels**: 1.0, 0.0 - **Trial duration**: 4.2 s - **Study design**: 6x6 matrix speller BCI using modulated Gold codes for visual stimulation; participants focused on target symbols while cells flashed according to pseudo-random bit-sequences - **Feedback type**: visual - **Stimulus type**: pseudo-random noise-code - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: False - **Instructions**: participants visually attended cells containing target symbols during stimulation **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text 1.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ 
Label/intensity_1_0 0.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0_0 ``` **Paradigm-Specific Parameters** - **Detected paradigm**: cvep - **Code type**: modulated Gold codes - **Code length**: 126 - **Number of targets**: 36 **Data Structure** - **Trials**: 108 - **Trials context**: 108 total per subject: 3 fixed-length copy-spelling runs x 36 trials per run, each trial 4.2 seconds (4 code cycles) **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: downsampling from 2048 Hz to 360 Hz, linear de-trending, common average referencing, spectral filtering - **Highpass filter**: 5 Hz - **Lowpass filter**: 100 Hz - **Bandpass filter**: {‘band1’: [5, 48], ‘band2’: [52, 100]} - **Re-reference**: car - **Downsampled to**: 360.0 Hz **Signal Processing** - **Classifiers**: template matching, CCA - **Feature extraction**: correlation - **Spatial filters**: Canonical Correlation Analysis **Cross-Validation** - **Method**: training-testing split - **Evaluation type**: within-subject **Performance (Original Study)** - **Accuracy Fixed Length**: 86.0 - **Itr Fixed Length**: 38.12 - **Spm Fixed Length**: 6.93 - **Accuracy Early Stopping**: 86.0 - **Itr Early Stopping**: 48.37 - **Spm Early Stopping**: 8.99 **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0133797 - **License**: CC0-1.0 - **Investigators**: Jordy Thielen, Philip van den Broek, Jason Farquhar, Peter Desain - **Senior author**: Peter Desain - **Contact**: [jordy.thielen@gmail.com](mailto:jordy.thielen@gmail.com); [info@donders.ru.nl](mailto:info@donders.ru.nl) - **Institution**: Radboud University Nijmegen - **Department**: Donders Center for Cognition - **Country**: NL - **Repository**: GitHub - **Data URL**: 
[https://public.data.ru.nl/dcc/DSC_2018.00047_553_v3](https://public.data.ru.nl/dcc/DSC_2018.00047_553_v3) - **Publication year**: 2015 - **Funding**: BrainGain Smart Mix Program of the Netherlands Ministry of Economic Affairs; Netherlands Ministry of Education, Culture and Science (SSM06011) - **Ethics approval**: Ethical Committee of the Faculty of Social Sciences at the Radboud University Nijmegen - **Keywords**: Brain-Computer Interface, BCI, Broad-Band Visually Evoked Potentials, BBVEP, Gold codes, reconvolution, speller, visual stimulation **Abstract** Brain-Computer Interfaces (BCIs) allow users to control devices and communicate by using brain activity only. BCIs based on broad-band visual stimulation can outperform BCIs using other stimulation paradigms. Visual stimulation with pseudo-random bit-sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs) that can be reliably used in BCI for high-speed communication in speller applications. In this study, we report a novel paradigm for a BBVEP-based BCI that utilizes a generative framework to predict responses to broad-band stimulation sequences. In this study we designed a BBVEP-based BCI using modulated Gold codes to mark cells in a visual speller BCI. We defined a linear generative model that decomposes full responses into overlapping single-flash responses. These single-flash responses are used to predict responses to novel stimulation sequences, which in turn serve as templates for classification. The linear generative model explains on average 50% and up to 66% of the variance of responses to both seen and unseen sequences. In an online experiment, 12 participants tested a 6 × 6 matrix speller BCI. On average, an online accuracy of 86% was reached with trial lengths of 3.21 seconds. This corresponds to an Information Transfer Rate of 48 bits per minute (approximately 9 symbols per minute). This study indicates the potential to model and predict responses to broad-band stimulation. 
These predicted responses are proven to be well-suited as templates for a BBVEP-based BCI, thereby enabling communication and control by brain activity only. **Methodology** The study implements a novel BBVEP-based BCI using modulated Gold codes with a reconvolution approach for template generation. The reconvolution model decomposes responses into single-flash responses (short and long pulses) and predicts responses to unseen sequences. Two sets of Gold codes were used: set V for training (65 sequences) and set U for testing (65 sequences). Each sequence had 126 bits with a duration of 1.05 s. The classifier uses template matching with correlation, combined with Canonical Correlation Analysis for spatial filtering. Subset optimization (Platinum subset) selects the most distinguishable codes, and layout optimization arranges codes on the 6x6 grid to minimize cross-talk. An early stopping algorithm was implemented to reduce trial duration. Online experiments were conducted with 12 participants using a synchronous BCI paradigm. **References** Thielen, J. (Jordy), Jason Farquhar, Desain, P.W.M. (Peter) (2023): Broad-Band Visually Evoked Potentials: Re(con)volution in Brain-Computer Interfacing. Version 2. Radboud University. (dataset). DOI: [https://doi.org/10.34973/1ecz-1232](https://doi.org/10.34973/1ecz-1232) Thielen, J., Van Den Broek, P., Farquhar, J., & Desain, P. (2015). Broad-Band visually evoked potentials: re(con)volution in brain-computer interfacing. PLOS ONE, 10(7), e0133797. DOI: [https://doi.org/10.1371/journal.pone.0133797](https://doi.org/10.1371/journal.pone.0133797) Notes: Added in version 1.0.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896.
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb)

## Dataset Information

| Dataset ID | `NM000196` |
|----------------|------|
| Title | c-VEP dataset from Thielen et al. (2015) |
| Author (year) | `Thielen2015` |
| Canonical | — |
| Importable as | `NM000196`, `Thielen2015` |
| Year | 2015 |
| Authors | Jordy Thielen, Philip van den Broek, Jason Farquhar, Peter Desain |
| License | CC0-1.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000196) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000196) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000196) |

## Technical Details

- Subjects: 12
- Recordings: 36
- Tasks: 1
- Channels: 64
- Sampling rate (Hz): 2048.0
- Duration (hours): 2.6154667154947915
- Pathology: Healthy
- Modality: Visual
- Type: Attention
- Size on disk: 3.5 GB
- File count: 36
- Format: BIDS
- License: CC0-1.0
- DOI: —
- Source: nemar
- OpenNeuro: [nm000196](https://openneuro.org/datasets/nm000196)
- NeMAR: [nm000196](https://nemar.org/dataexplorer/detail?dataset_id=nm000196)

## API Reference

Use the `NM000196` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2015) * **Study:** `nm000196` (NeMAR) * **Author (year):** `Thielen2015` * **Canonical:** — Also importable as: `NM000196`, `Thielen2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
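As background for working with this dataset, the reconvolution idea from the original study (see the methodology in the README above) can be sketched in a few lines: a predicted EEG template is the linear superposition of single-flash responses aligned to the "1" bits of a code. This is an illustrative simplification with toy numbers; the actual model distinguishes short and long pulses and fits the kernels from training data:

```python
import numpy as np

def predict_template(code_bits, flash_response):
    # Reconvolution sketch: superimpose one copy of the single-flash
    # response at every "1" bit of the stimulation code.
    # Single-kernel simplification of the study's short/long-pulse model.
    return np.convolve(code_bits, flash_response)[: len(code_bits)]

code = np.array([1, 0, 0, 1, 0, 0])    # toy stimulation sequence
kernel = np.array([0.5, 1.0, 0.25])    # toy single-flash response
print(predict_template(code, kernel))  # [0.5  1.   0.25 0.5  1.   0.25]
```

Templates predicted this way for unseen codes are then matched against measured responses by correlation, as in the study's classifier.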
### References

OpenNeuro dataset: [https://openneuro.org/datasets/nm000196](https://openneuro.org/datasets/nm000196)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000196](https://nemar.org/dataexplorer/detail?dataset_id=nm000196)

### Examples

```pycon
>>> from eegdash.dataset import NM000196
>>> dataset = NM000196(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/nm000196)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000196)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# NM000197: eeg dataset, 21 subjects

*BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects)*.
Modality: eeg Subjects: 21 Recordings: 420 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000197

dataset = NM000197(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = NM000197(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000197(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{nm000197,
  title = {BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects)},
  author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins},
}
```

## About This Dataset

**BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects)** BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects). **Dataset Overview** - **Code**: Mainsah2025-M - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86

### View full README

**BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects)** BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects).
**Dataset Overview** - **Code**: Mainsah2025-M - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 21 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 21 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb)

## Dataset Information

| Dataset ID | `NM000197` |
|----------------|------|
| Title | BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects) |
| Author (year) | `Mainsah2025_BigP3BCI_M` |
| Canonical | `BigP3BCI_StudyM`, `BigP3BCI_M` |
| Importable as | `NM000197`, `Mainsah2025_BigP3BCI_M`, `BigP3BCI_StudyM`, `BigP3BCI_M` |
| Year | 2019 |
| Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins |
| License | CC-BY-4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000197) \| [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000197) \| [Source URL](https://nemar.org/dataexplorer/detail/nm000197) |

## Technical Details

- Subjects: 21
- Recordings: 420
- Tasks: 1
- Channels: 16
- Sampling rate (Hz): 256.0
- Duration (hours): 11.218919270833334
- Pathology: Other
- Modality: Visual
- Type: Attention
- Size on disk: 491.6 MB
- File count: 420
- Format: BIDS
- License: CC-BY-4.0
- DOI: —
- Source: nemar
- OpenNeuro: [nm000197](https://openneuro.org/datasets/nm000197)
- NeMAR: [nm000197](https://nemar.org/dataexplorer/detail?dataset_id=nm000197)

## API Reference

Use the `NM000197` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000197(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects) * **Study:** `nm000197` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_M` * **Canonical:** `BigP3BCI_StudyM`, `BigP3BCI_M` Also importable as: `NM000197`, `Mainsah2025_BigP3BCI_M`, `BigP3BCI_StudyM`, `BigP3BCI_M`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 21; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
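As a sketch of how the [0, 1.0] s trial interval listed above translates into epoch extraction for P300 analysis, the following cuts fixed-length windows from a continuous (channels × samples) array. `extract_epochs` and the event positions are illustrative, not part of eegdash; in practice MNE-Python's epoching would be used:

```python
import numpy as np

def extract_epochs(signal, event_samples, sfreq, tmin=0.0, tmax=1.0):
    # Cut one fixed-length window per stimulus onset, matching the
    # [0, 1.0] s trial interval. Illustrative sketch only.
    start = int(tmin * sfreq)
    n = int((tmax - tmin) * sfreq)
    return np.stack([signal[:, s + start : s + start + n] for s in event_samples])

sfreq = 256.0                  # dataset sampling rate
sig = np.zeros((16, 2560))     # 16 channels, 10 s of placeholder data
events = [256, 768, 1280]      # hypothetical stimulus onsets (in samples)
epochs = extract_epochs(sig, events, sfreq)
print(epochs.shape)            # (3, 16, 256)
```

Averaging such epochs separately for Target and NonTarget events yields the ERP contrast used for P300 detection.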
### References

OpenNeuro dataset: [https://openneuro.org/datasets/nm000197](https://openneuro.org/datasets/nm000197)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000197](https://nemar.org/dataexplorer/detail?dataset_id=nm000197)

### Examples

```pycon
>>> from eegdash.dataset import NM000197
>>> dataset = NM000197(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

#### save(path, overwrite=False)

Save the dataset to disk.

* **Parameters:**
  * **path** (*str* *or* *Path*) – Destination file path.
  * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file.
* **Return type:** None

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/nm000197)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000197)
* [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md)
* [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md)
* [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md)
* [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md)
* [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md)

# NM000198: eeg dataset, 13 subjects

*BNCI 2015-008 Center Speller P300 dataset* Access recordings and metadata through EEGDash. **Citation:** M S Treder, N M Schmidt, B Blankertz (2011). *BNCI 2015-008 Center Speller P300 dataset*.
Modality: eeg Subjects: 13 Recordings: 26 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000198

dataset = NM000198(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = NM000198(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000198(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{nm000198,
  title = {BNCI 2015-008 Center Speller P300 dataset},
  author = {M S Treder and N M Schmidt and B Blankertz},
}
```

## About This Dataset

**BNCI 2015-008 Center Speller P300 dataset** BNCI 2015-008 Center Speller P300 dataset. **Dataset Overview** - **Code**: BNCI2015-008 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/8/6/066003

### View full README

**BNCI 2015-008 Center Speller P300 dataset** BNCI 2015-008 Center Speller P300 dataset.
**Dataset Overview** - **Code**: BNCI2015-008 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/8/6/066003 - **Subjects**: 13 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 1.0] s - **Runs per session**: 2 - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 63 - **Channel types**: eeg=63 - **Channel names**: Fp2, AF3, AF4, Fz, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, FCz, FC1, FC2, FC3, FC4, FC5, FC6, T7, T8, Cz, C1, C2, C3, C4, C5, C6, TP7, TP8, CPz, CP1, CP2, CP3, CP4, CP5, CP6, Pz, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, POz, PO3, PO4, PO7, PO8, PO9, PO10, Oz, O1, O2, Iz, I1, I2 - **Montage**: 10-10 - **Hardware**: Brain Products actiCAP - **Reference**: left mastoid - **Ground**: forehead - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: 0.016-250 Hz bandpass - **Impedance threshold**: 20.0 kOhm - **Cap manufacturer**: Brain Products **Participants** - **Number of subjects**: 13 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=27.0, min=16.0, max=45.0 - **Gender distribution**: male=8, female=5 - **Handedness**: {‘right’: 12, ‘left’: 1} - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 30.0 s - **Study design**: Two-stage visual speller using covert spatial attention and non-spatial feature attention (color and form). Three speller variants tested: Hex-o-Spell (6 discs with size enhancement and unique colors), Cake Speller (6 triangular faces with unique colors), Center Speller (sequential presentation of 6 geometric shapes with unique colors and forms). 
- **Feedback type**: none - **Stimulus type**: visual_flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Participants had to strictly fixate the center of the screen and covertly attend to the target symbol. They were instructed to silently count the number of intensifications of the target symbol. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 30 - **Number of repetitions**: 10 - **Stimulus onset asynchrony**: 200.0 ms **Data Structure** - **Trials**: 60 intensifications per stage (10 sequences × 6 elements) - **Trials context**: per_stage **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: downsampling, lowpass filter, baseline correction - **Highpass filter**: 0.016 Hz - **Lowpass filter**: 49.0 Hz - **Bandpass filter**: {‘low_cutoff_hz’: 0.016, ‘high_cutoff_hz’: 250.0} - **Filter type**: Chebyshev - **Re-reference**: linked mastoids - **Downsampled to**: 250.0 Hz - **Epoch window**: [-200.0, 800.0] - **Notes**: For offline ERP analysis: downsampled to 250 Hz, lowpass filtered below 49 Hz using Chebyshev filter (passbands/stopbands: 42/49 Hz). For online classification: downsampled to 100 Hz, no software filter applied. Baseline correction using -200 ms prestimulus interval. 
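The baseline-correction step described above (subtracting the mean of the -200 ms prestimulus interval) amounts to a one-liner on an epochs array. A sketch with made-up shapes and our own parameter names, not the original pipeline code:

```python
import numpy as np

def baseline_correct(epochs, sfreq, tmin, baseline=(-0.2, 0.0)):
    # Subtract the mean of the prestimulus interval from each epoch,
    # mirroring the -200 ms baseline correction described above.
    i0 = int((baseline[0] - tmin) * sfreq)
    i1 = int((baseline[1] - tmin) * sfreq)
    return epochs - epochs[..., i0:i1].mean(axis=-1, keepdims=True)

sfreq, tmin = 250.0, -0.2            # epoch window starts at -200 ms
epochs = np.ones((60, 63, 250))      # trials x channels x samples (toy data)
corrected = baseline_correct(epochs, sfreq, tmin)
print(np.allclose(corrected, 0.0))   # constant signal is zeroed out
```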
**Signal Processing** - **Classifiers**: LDA, SLDA - **Feature extraction**: ERP components, P300, P3 - **Spatial filters**: shrinkage covariance **Cross-Validation** - **Method**: calibration-test split - **Evaluation type**: within_session **Performance (Original Study)** - **Accuracy**: 92.0% - **Hex O Spell Accuracy**: 88.0 - **Cake Speller Accuracy**: 90.0 - **Center Speller Accuracy**: 97.0 - **Communication Rate Symbols Per Min**: 2.3 **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: ERP, P300 **Documentation** - **DOI**: 10.1088/1741-2560/8/6/066003 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: M S Treder, N M Schmidt, B Blankertz - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Laboratory - **Country**: Germany - **Repository**: GitHub - **Data URL**: [https://github.com/bbci/bbci_public/blob/master/doc/index.markdown](https://github.com/bbci/bbci_public/blob/master/doc/index.markdown) - **Publication year**: 2011 - **Keywords**: P300, ERP, BCI, speller, covert attention, feature attention, gaze-independent **References** Treder, M. S., Schmidt, N. M., & Blankertz, B. (2011). Gaze-independent brain-computer interfaces based on covert attention and feature attention. Journal of Neural Engineering, 8(6), 066003. [https://doi.org/10.1088/1741-2560/8/6/066003](https://doi.org/10.1088/1741-2560/8/6/066003) Notes: Added in version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C.
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000198` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-008 Center Speller P300 dataset | | Author (year) | `Treder2015_P300` | | Canonical | `BNCI2015_P300`, `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller` | | Importable as | `NM000198`, `Treder2015_P300`, `BNCI2015_P300`, `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller` | | Year | 2011 | | Authors | M S Treder, N M Schmidt, B Blankertz | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000198) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) | [Source URL](https://nemar.org/dataexplorer/detail/nm000198) | ## Technical Details - Subjects: 13 - Recordings: 26 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 250.0 - Duration (hours): 19.37079333333333 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 3.1 GB - File count: 26 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000198](https://openneuro.org/datasets/nm000198) - NeMAR: [nm000198](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) ## API Reference Use the `NM000198` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000198(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-008 Center Speller P300 dataset * **Study:** `nm000198` (NeMAR) * **Author (year):** `Treder2015_P300` * **Canonical:** `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller` Also importable as: `NM000198`, `Treder2015_P300`, `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000198](https://openneuro.org/datasets/nm000198) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000198](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) ### Examples ```pycon >>> from eegdash.dataset import NM000198 >>> dataset = NM000198(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000198) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000199: eeg dataset, 13 subjects *Learning from label proportions for a visual matrix speller (ERP)* Access recordings and metadata through EEGDash. **Citation:** David Hübner, Thibault Verhoeven, Konstantin Schmid, Klaus-Robert Müller, Michael Tangermann, Pieter-Jan Kindermans (2017). *Learning from label proportions for a visual matrix speller (ERP)*. 
Modality: eeg Subjects: 13 Recordings: 342 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000199 dataset = NM000199(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000199(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000199( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000199, title = {Learning from label proportions for a visual matrix speller (ERP)}, author = {David Hübner and Thibault Verhoeven and Konstantin Schmid and Klaus-Robert Müller and Michael Tangermann and Pieter-Jan Kindermans}, } ``` ## About This Dataset **Learning from label proportions for a visual matrix speller (ERP)** Learning from label proportions for a visual matrix speller (ERP) dataset from Hübner et al. (2017) [1]. **Dataset Overview** - **Code**: Huebner2017 - **Paradigm**: p300 - **DOI**: 10.1371/journal.pone.0175856 ### View full README **Learning from label proportions for a visual matrix speller (ERP)** Learning from label proportions for a visual matrix speller (ERP) dataset from Hübner et al. (2017) [1].
**Dataset Overview** - **Code**: Huebner2017 - **Paradigm**: p300 - **DOI**: 10.1371/journal.pone.0175856 - **Subjects**: 13 - **Sessions per subject**: 3 - **Events**: Target=10002, NonTarget=10001 - **Trial interval**: [-0.2, 0.7] s - **Runs per session**: 9 - **Session IDs**: session_1 - **File format**: BrainVision **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31, misc=6 - **Channel names**: C3, C4, CP1, CP2, CP5, CP6, Cz, EOGvu, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, P10, P3, P4, P7, P8, P9, Pz, T7, T8, x_EMGl, x_GSR, x_Optic, x_Pulse, x_Respi - **Montage**: standard_1020 - **Hardware**: BrainAmp DC - **Reference**: nose - **Ground**: FCz - **Sensor type**: passive Ag/AgCl - **Line frequency**: 50.0 Hz - **Impedance threshold**: 20.0 kOhm - **Cap manufacturer**: EasyCap - **Auxiliary channels**: EOG (1 ch, vertical), pulse, respiration **Participants** - **Number of subjects**: 13 - **Health status**: healthy - **Age**: mean=26.0, std=1.5 - **Gender distribution**: female=5, male=8 - **BCI experience**: mostly naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 25.0 s - **Study design**: Visual ERP speller copy-spelling task using a 6x7 grid with learning from label proportions (LLP) classifier. Two sequences with different target/non-target ratios: sequence 1 (3 targets/8 stimuli), sequence 2 (2 targets/18 stimuli). Unsupervised calibrationless approach. 
- **Feedback type**: visual - **Stimulus type**: character matrix - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: False - **Instructions**: Copy-spelling task: subjects spelled the sentence ‘FRANZY JAGT IM KOMPLETT VERWAHRLOSTEN TAXI QUER DURCH FREIBURG’ three times - **Stimulus presentation**: soa_ms=250, stimulus_duration_ms=100, grid_size=6x7, highlighting_method=salient (brightness enhancement, rotation, enlargement, trichromatic grid overlay), viewing_distance_cm=80, screen_size_inches=24 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 42 - **Stimulus onset asynchrony**: 250.0 ms **Data Structure** - **Trials**: 12852 - **Trials context**: 68 highlighting events per character, 63 characters per sentence, 3 sentences = 68\*63\*3 = 12852 EEG epochs per subject. Each epoch is a Target (10002) or NonTarget (10001) event. 
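The trial structure above (68 highlighting events per character, epochs cut on the [-0.2, 0.7] s trial interval, Target=10002 / NonTarget=10001) can be illustrated with a minimal NumPy epoching sketch. The data and marker positions are synthetic, and this is not the EEGDash loading path, just the slicing arithmetic:

```python
import numpy as np

fs = 1000.0                          # dataset sampling rate (Hz)
tmin, tmax = -0.2, 0.7               # trial interval from the overview
n_pre = int(round(-tmin * fs))       # 200 samples before each marker
n_post = int(round(tmax * fs))       # 700 samples after each marker

rng = np.random.default_rng(0)
data = rng.standard_normal((31, 60_000))   # 31 EEG channels, 60 s synthetic

# Synthetic event list: (sample index, code); 10002 = Target, 10001 = NonTarget
events = [(5_000, 10002), (10_000, 10001), (15_000, 10001)]

epochs = np.stack([data[:, s - n_pre : s + n_post] for s, _ in events])
labels = np.array([code == 10002 for _, code in events])
print(epochs.shape)  # trials x channels x samples
```

At 68 events per character, 63 characters per sentence, and 3 sentences, the same slicing yields the 12852 epochs per subject quoted above.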
**Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: LLP (Learning from Label Proportions), shrinkage-LDA, EM-algorithm - **Feature extraction**: mean amplitude per time interval - **Frequency bands**: analyzed=[0.5, 8.0] Hz **Cross-Validation** - **Method**: 5-fold chronological cross-validation - **Folds**: 5 - **Evaluation type**: within_subject **Performance (Original Study)** - **Accuracy**: 84.5% - **Auc**: 0.975 - **Online Spelling Accuracy**: 84.5 - **Post Hoc Spelling Accuracy**: 95.0 - **Accuracy After Rampup**: 90.2 - **Supervised Auc**: 0.975 - **Max Spelling Speed Chars Per Min**: 2.4 **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.1371/journal.pone.0175856 - **License**: CC-BY-4.0 - **Investigators**: David Hübner, Thibault Verhoeven, Konstantin Schmid, Klaus-Robert Müller, Michael Tangermann, Pieter-Jan Kindermans - **Senior author**: Michael Tangermann - **Contact**: [david.huebner@blbt.uni-freiburg.de](mailto:david.huebner@blbt.uni-freiburg.de); [michael.tangermann@blbt.uni-freiburg.de](mailto:michael.tangermann@blbt.uni-freiburg.de); [p.kindermans@tu-berlin.de](mailto:p.kindermans@tu-berlin.de) - **Institution**: Albert-Ludwigs-University - **Department**: Brain State Decoding Lab, Cluster of Excellence BrainLinks-BrainTools, Department of Computer Science - **Address**: Freiburg, Germany - **Country**: DE - **Repository**: Zenodo - **Data URL**: [http://doi.org/10.5281/zenodo.192684](http://doi.org/10.5281/zenodo.192684) - **Publication year**: 2017 - **Funding**: BrainLinks-BrainTools Cluster of Excellence funded by the German Research Foundation (DFG), grant number EXC 1086; bwHPC initiative, grant INST 39/963-1 FUGG; European Union’s Horizon 2020 research and innovation programme under the Marie 
Sklodowska-Curie grant agreement No 657679; Special Research Fund from Ghent University; BK21 program funded by Korean National Research Foundation grant No. 2012-005741 - **Ethics approval**: Ethics Committee of the University Medical Center Freiburg; Declaration of Helsinki - **Keywords**: brain-computer interface, BCI, event-related potentials, ERP, P300, learning from label proportions, LLP, unsupervised learning, calibrationless, visual speller **Abstract** Using traditional approaches, a brain-computer interface (BCI) requires the collection of calibration data for new subjects prior to online use. This work introduces learning from label proportions (LLP) to the BCI community as a new unsupervised, and easy-to-implement classification approach for ERP-based BCIs. The LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We present a visual ERP speller to meet the requirements of LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with 13 subjects performing a copy-spelling task. Theoretical considerations show that LLP is guaranteed to minimize the loss function similar to a corresponding supervised classifier. LLP performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. **Methodology** The experiment used a modified visual ERP speller with a 6×7 grid. Two distinct stimulus sequences with different target/non-target ratios were used: sequence 1 had 3 targets in 8 stimuli, sequence 2 had 2 targets in 18 stimuli. Each trial consisted of 4 sequences of length 8 and 2 sequences of length 18, totaling 68 highlighting events per character. The LLP algorithm exploited these known proportions to reconstruct mean target and non-target ERP responses without requiring labeled data. 
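The label-proportion trick described above reduces to a small linear system: with known target proportions p1 = 3/8 (sequence 1) and p2 = 2/18 (sequence 2), each observable sequence mean is a known mixture of the target and non-target class means, so inverting a 2x2 mixing matrix recovers both class means without any labels. A minimal NumPy sketch with synthetic ERPs (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 310                           # e.g. channels x time intervals
mu_t = rng.standard_normal(n_features)     # true target mean (unknown online)
mu_n = rng.standard_normal(n_features)     # true non-target mean

p1, p2 = 3 / 8, 2 / 18                     # known target proportions per sequence
mu_seq1 = p1 * mu_t + (1 - p1) * mu_n      # observable sequence-1 mean
mu_seq2 = p2 * mu_t + (1 - p2) * mu_n      # observable sequence-2 mean

# Invert the 2x2 mixing matrix to recover class means without labels.
A = np.array([[p1, 1 - p1], [p2, 1 - p2]])
est_t, est_n = np.linalg.solve(A, np.stack([mu_seq1, mu_seq2]))

print(np.allclose(est_t, mu_t), np.allclose(est_n, mu_n))
```

In practice the sequence means are estimated from noisy epochs, so the recovered class means converge only as more stimuli accumulate; the noiseless case here shows why the two distinct proportions are required (equal proportions would make the mixing matrix singular).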
The classifier was reset at the start of each sentence and retrained after each character. Subjects spelled a German pangram sentence three times. One subject (S2) had prior EEG experience; others were naive. Sessions lasted about 3 hours including setup. Participants were compensated 8 Euros per hour. **References** Hübner, D., Verhoeven, T., Schmid, K., Müller, K. R., Tangermann, M., & Kindermans, P. J. (2017). Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees. PLOS ONE 12(4): e0175856. [https://doi.org/10.1371/journal.pone.0175856](https://doi.org/10.1371/journal.pone.0175856) Added in version 0.4.5. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000199` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Learning from label proportions for a visual matrix speller (ERP) | | Author (year) | `Hubner2017` | | Canonical | `Huebner2017` | | Importable as | `NM000199`, `Hubner2017`, `Huebner2017` | | Year | 2017 | | Authors | David Hübner, Thibault Verhoeven, Konstantin Schmid, Klaus-Robert Müller, Michael Tangermann, Pieter-Jan Kindermans | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000199) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) | [Source URL](https://nemar.org/dataexplorer/detail/nm000199) | ## Technical Details - Subjects: 13 - Recordings: 342 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 1000.0 - Duration (hours): 16.410199166666665 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 5.1 GB - File count: 342 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000199](https://openneuro.org/datasets/nm000199) - NeMAR: [nm000199](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) ## API Reference Use the `NM000199` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000199(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Learning from label proportions for a visual matrix speller (ERP) * **Study:** `nm000199` (NeMAR) * **Author (year):** `Hubner2017` * **Canonical:** `Huebner2017` Also importable as: `NM000199`, `Hubner2017`, `Huebner2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 342; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000199](https://openneuro.org/datasets/nm000199) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000199](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) ### Examples ```pycon >>> from eegdash.dataset import NM000199 >>> dataset = NM000199(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000199) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000200: eeg dataset, 13 subjects *BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)*. 
Modality: eeg Subjects: 13 Recordings: 265 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000200 dataset = NM000200(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000200(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000200( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000200, title = {BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)** BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-I - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects)** BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects). 
**Dataset Overview** - **Code**: Mainsah2025-I - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 13 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 13 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000200` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects) | | Author (year) | `Mainsah2025_BigP3BCI_I` | | Canonical | `BigP3BCI_StudyI`, `BigP3BCI_I` | | Importable as | `NM000200`, `Mainsah2025_BigP3BCI_I`, `BigP3BCI_StudyI`, `BigP3BCI_I` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000200) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) | [Source URL](https://nemar.org/dataexplorer/detail/nm000200) | ## Technical Details - Subjects: 13 - Recordings: 265 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 7.403184678819445 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 324.4 MB - File count: 265 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000200](https://openneuro.org/datasets/nm000200) - NeMAR: [nm000200](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) ## API Reference Use the `NM000200` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects) * **Study:** `nm000200` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_I` * **Canonical:** `BigP3BCI_StudyI`, `BigP3BCI_I` Also importable as: `NM000200`, `Mainsah2025_BigP3BCI_I`, `BigP3BCI_StudyI`, `BigP3BCI_I`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 265; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000200](https://openneuro.org/datasets/nm000200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000200](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) ### Examples ```pycon >>> from eegdash.dataset import NM000200 >>> dataset = NM000200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000200) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000201: eeg dataset, 24 subjects *ERP paradigm of the Mobile BCI dataset* Access recordings and metadata through EEGDash. **Citation:** Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee (2019). *ERP paradigm of the Mobile BCI dataset*. 
Modality: eeg Subjects: 24 Recordings: 113 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000201 dataset = NM000201(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000201(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000201( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000201, title = {ERP paradigm of the Mobile BCI dataset}, author = {Young-Eun Lee and Gi-Hwan Shin and Minji Lee and Seong-Whan Lee}, } ``` ## About This Dataset **ERP paradigm of the Mobile BCI dataset** ERP paradigm of the Mobile BCI dataset. **Dataset Overview** - **Code**: Lee2021Mobile-ERP - **Paradigm**: p300 - **DOI**: 10.1038/s41597-021-01094-4 ### View full README **ERP paradigm of the Mobile BCI dataset** ERP paradigm of the Mobile BCI dataset. 
**Dataset Overview** - **Code**: Lee2021Mobile-ERP - **Paradigm**: p300 - **DOI**: 10.1038/s41597-021-01094-4 - **Subjects**: 24 - **Sessions per subject**: 5 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s - **File format**: BrainVision **Acquisition** - **Sampling rate**: 100.0 Hz - **Number of channels**: 73 - **Channel types**: eeg=73 - **Montage**: standard_1005 - **Hardware**: BrainAmp (Brain Product GmbH) - **Reference**: FCz - **Ground**: Fpz - **Sensor type**: Ag/AgCl - **Line frequency**: 60.0 Hz - **Impedance threshold**: 50 kOhm - **Electrode material**: Ag/AgCl - **Auxiliary channels**: EOG (4 ch, vertical, horizontal) **Participants** - **Number of subjects**: 24 - **Health status**: healthy - **Age**: mean=24.5, std=2.9, min=19, max=32 - **Gender distribution**: male=14, female=10 **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: BCI during motion (standing/walking/running) - **Stimulus type**: visual oddball - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **BCI Application** - **Environment**: mobile - **Online feedback**: False **Tags** - **Pathology**: healthy - **Modality**: visual - **Type**: perception **Documentation** - **DOI**: 10.1038/s41597-021-01094-4 - **License**: CC BY 4.0 - **Investigators**: Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee - **Senior author**: Seong-Whan Lee - **Institution**: Korea University - **Country**: KR - **Repository**: OSF - **Data URL**: 
[https://osf.io/r7s9b/](https://osf.io/r7s9b/) - **Publication year**: 2021 - **Funding**: IITP No. 2017-0-00451; IITP No. 2015-0-00185; IITP No. 2019-0-00079 - **Ethics approval**: Institutional Review Board of Korea University, KUIRB-2019-0194-01 - **Keywords**: SSVEP, ERP, mobile BCI, ear-EEG, locomotion **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000201` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | ERP paradigm of the Mobile BCI dataset | | Author (year) | `Lee2021_ERP` | | Canonical | — | | Importable as | `NM000201`, `Lee2021_ERP` | | Year | 2019 | | Authors | Young-Eun Lee, Gi-Hwan Shin, Minji Lee, Seong-Whan Lee | | License | CC BY 4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000201) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) | [Source URL](https://nemar.org/dataexplorer/detail/nm000201) | ## Technical Details - Subjects: 24 - 
Recordings: 113 - Tasks: 1 - Channels: 48 (108), 73 (5) - Sampling rate (Hz): 500.0 (108), 100.0 (5) - Duration (hours): 22.13410722222222 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 5.2 GB - File count: 113 - Format: BIDS - License: CC BY 4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000201](https://openneuro.org/datasets/nm000201) - NeMAR: [nm000201](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) ## API Reference Use the `NM000201` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000201(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP paradigm of the Mobile BCI dataset * **Study:** `nm000201` (NeMAR) * **Author (year):** `Lee2021_ERP` * **Canonical:** — Also importable as: `NM000201`, `Lee2021_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 113; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000201](https://openneuro.org/datasets/nm000201) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000201](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) ### Examples ```pycon >>> from eegdash.dataset import NM000201 >>> dataset = NM000201(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000201) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000204: eeg dataset, 14 subjects *Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)* Access recordings and metadata through EEGDash. **Citation:** Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim (2019). *Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)*. 
Modality: eeg Subjects: 14 Recordings: 420 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000204 dataset = NM000204(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000204(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000204( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000204, title = {Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)}, author = {Jongmin Lee and Minju Kim and Dojin Heo and Jongsu Kim and Min-Ki Kim and Taejun Lee and Jongwoo Park and HyunYoung Kim and Minho Hwang and Laehyun Kim and Sung-Phil Kim}, } ``` ## About This Dataset **Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)** Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-BS - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 ### View full README **Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch)** Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch). 
**Dataset Overview** - **Code**: Lee2024-BS - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 - **Subjects**: 14 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: MATLAB **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31 - **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT9, FC5, FC1, FC2, FC6, FT10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: actiCHamp (Brain Products) - **Reference**: linked mastoids - **Sensor type**: active - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 14 - **Health status**: healthy - **Age**: mean=22.64, std=3.08 - **Gender distribution**: male=9, female=5 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: P300 BCI for BS home appliance control; 6-class oddball; LCD display - **Feedback type**: visual - **Stimulus type**: flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 750.0 ms **Data Structure** - **Trials**: 50 training + 30 testing blocks per subject - **Trials context**: per_subject **BCI Application** - **Applications**: home_appliance_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**:
10.3389/fnhum.2024.1320457 - **License**: CC-BY-4.0 - **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim - **Institution**: Ulsan National Institute of Science and Technology - **Country**: KR - **Data URL**: [https://github.com/jml226/Home-Appliance-Control-Dataset](https://github.com/jml226/Home-Appliance-Control-Dataset) - **Publication year**: 2024 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000204` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch) | | Author (year) | `Lee2024_Bluetooth_speaker_14` | | Canonical | — | | Importable as | `NM000204`, `Lee2024_Bluetooth_speaker_14` | | Year | 2019 | | Authors | Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000204) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) | [Source URL](https://nemar.org/dataexplorer/detail/nm000204) | ## Technical Details - Subjects: 14 - Recordings: 420 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 500.0 - Duration (hours): 1.95331 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 323.0 MB - File count: 420 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000204](https://openneuro.org/datasets/nm000204) - NeMAR: [nm000204](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) ## API Reference Use the `NM000204` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000204(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch) * **Study:** `nm000204` (NeMAR) * **Author (year):** `Lee2024_Bluetooth_speaker_14` * **Canonical:** — Also importable as: `NM000204`, `Lee2024_Bluetooth_speaker_14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
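The MongoDB-style filtering described in the notes above can be illustrated with a small stand-alone matcher. This is a simplified sketch of the operator semantics only (equality and `$in`), not EEGDash's internal query engine; the record dictionaries are synthetic stand-ins for metadata records:

```python
# Simplified sketch of MongoDB-style query matching, as used in queries
# like {"subject": {"$in": ["01", "02"]}}. Illustrative only -- this is
# NOT EEGDash's internal matcher; it supports equality and $in only.

def matches(record: dict, query: dict) -> bool:
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):  # operator form, e.g. {"$in": [...]}
            if "$in" in condition and value not in condition["$in"]:
                return False
        elif value != condition:  # plain equality form
            return False
    return True

# Synthetic metadata records (stand-ins for real EEGDash records).
records = [
    {"dataset": "nm000204", "subject": "01"},
    {"dataset": "nm000204", "subject": "02"},
    {"dataset": "nm000204", "subject": "03"},
]
# The dataset filter is ANDed with the user query, as described above.
query = {"dataset": "nm000204", "subject": {"$in": ["01", "02"]}}
selected = [r["subject"] for r in records if matches(r, query)]
print(selected)  # ['01', '02']
```

In the real library the user-supplied `query` must not contain the `dataset` key; the class adds that filter itself before matching.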
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000204](https://openneuro.org/datasets/nm000204) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000204](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) ### Examples ```pycon >>> from eegdash.dataset import NM000204 >>> dataset = NM000204(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000204) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000205: eeg dataset, 14 subjects *RSVP collaborative BCI dataset from Zheng et al 2020* Access recordings and metadata through EEGDash. **Citation:** Li Zheng, Sen Sun, Hongze Zhao, Weihua Pei, Hongda Chen, Xiaorong Gao, Lijian Zhang, Yijun Wang (2020). *RSVP collaborative BCI dataset from Zheng et al 2020*. 
Modality: eeg Subjects: 14 Recordings: 84 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000205 dataset = NM000205(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000205(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000205( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000205, title = {RSVP collaborative BCI dataset from Zheng et al 2020}, author = {Li Zheng and Sen Sun and Hongze Zhao and Weihua Pei and Hongda Chen and Xiaorong Gao and Lijian Zhang and Yijun Wang}, } ``` ## About This Dataset **RSVP collaborative BCI dataset from Zheng et al 2020** RSVP collaborative BCI dataset from Zheng et al 2020. **Dataset Overview** - **Code**: Zheng2020 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.579469 ### View full README **RSVP collaborative BCI dataset from Zheng et al 2020** RSVP collaborative BCI dataset from Zheng et al 2020. 
**Dataset Overview** - **Code**: Zheng2020 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.579469 - **Subjects**: 14 - **Sessions per subject**: 2 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **Runs per session**: 3 - **File format**: MATLAB **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 62 - **Channel types**: eeg=62 - **Channel names**: FP1, FPz, FP2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO5, PO3, POz, PO4, PO6, PO8, O1, CB1, Oz, O2, CB2 - **Montage**: standard_1020 - **Hardware**: Neuroscan Synamps2 - **Reference**: vertex (Cz) - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 14 - **Health status**: healthy - **Age**: mean=24.9, min=23, max=29 - **Gender distribution**: female=10, male=4 - **Handedness**: all right-handed - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: RSVP target detection (human vs non-human images); 14 subjects in 7 pairs, synchronized EEG recording - **Feedback type**: visual - **Stimulus type**: RSVP images - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 100.0 ms **Data Structure** - **Trials**: target=168, nontarget=4032 - **Trials context**: per subject across both sessions
**Signal Processing** - **Classifiers**: HDCA - **Feature extraction**: SIM, CSP, TRCA, PCA - **Frequency bands**: bandpass=[2.0, 30.0] Hz - **Spatial filters**: SIM, CSP, PCA, CAR, TRCA **Cross-Validation** - **Method**: holdout - **Evaluation type**: within_subject, cross_session **BCI Application** - **Applications**: target_image_detection, collaborative_BCI - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: RSVP **Documentation** - **DOI**: 10.3389/fnins.2020.579469 - **License**: CC-BY-4.0 - **Investigators**: Li Zheng, Sen Sun, Hongze Zhao, Weihua Pei, Hongda Chen, Xiaorong Gao, Lijian Zhang, Yijun Wang - **Institution**: Chinese Academy of Sciences - **Country**: CN - **Data URL**: [https://figshare.com/articles/dataset/12824771](https://figshare.com/articles/dataset/12824771) - **Publication year**: 2020 **References** Zheng, L., Sun, S., Zhao, H., et al. (2020). A Cross-Session Dataset for Collaborative Brain-Computer Interfaces Based on Rapid Serial Visual Presentation. Frontiers in Neuroscience, 14, 579469. [https://doi.org/10.3389/fnins.2020.579469](https://doi.org/10.3389/fnins.2020.579469) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000205` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | RSVP collaborative BCI dataset from Zheng et al 2020 | | Author (year) | `Zheng2020` | | Canonical | — | | Importable as | `NM000205`, `Zheng2020` | | Year | 2020 | | Authors | Li Zheng, Sen Sun, Hongze Zhao, Weihua Pei, Hongda Chen, Xiaorong Gao, Lijian Zhang, Yijun Wang | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000205) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) | [Source URL](https://nemar.org/dataexplorer/detail/nm000205) | ## Technical Details - Subjects: 14 - Recordings: 84 - Tasks: 1 - Channels: 62 - Sampling rate (Hz): 1000.0 - Duration (hours): 8.461972777777778 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 5.3 GB - File count: 84 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000205](https://openneuro.org/datasets/nm000205) - NeMAR: [nm000205](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) ## API Reference Use the `NM000205` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000205(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP collaborative BCI dataset from Zheng et al 2020 * **Study:** `nm000205` (NeMAR) * **Author (year):** `Zheng2020` * **Canonical:** — Also importable as: `NM000205`, `Zheng2020`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000205](https://openneuro.org/datasets/nm000205) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000205](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) ### Examples ```pycon >>> from eegdash.dataset import NM000205 >>> dataset = NM000205(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000205) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000206: eeg dataset, 15 subjects *Neuroergonomic 2021 dataset* Access recordings and metadata through EEGDash. **Citation:** Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy (2023). *Neuroergonomic 2021 dataset*. Modality: eeg Subjects: 15 Recordings: 30 License: CC-BY-SA-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000206 dataset = NM000206(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000206(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000206( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000206, title = {Neuroergonomic 2021 dataset}, author = {Marcel F. Hinss and Emilie S. Jahanpour and Bertille Somon and Lou Pluchon and Frédéric Dehais and Raphaëlle N. Roy}, } ``` ## About This Dataset **Neuroergonomic 2021 dataset** Neuroergonomic 2021 dataset. 
**Dataset Overview** - **Code**: Hinss2021 - **Paradigm**: rstate - **DOI**: 10.1038/s41597-022-01898-y ### View full README **Neuroergonomic 2021 dataset** Neuroergonomic 2021 dataset. **Dataset Overview** - **Code**: Hinss2021 - **Paradigm**: rstate - **DOI**: 10.1038/s41597-022-01898-y - **Subjects**: 15 - **Sessions per subject**: 2 - **Events**: rs=1, easy=2, medium=3, diff=4 - **Trial interval**: [0, 2] s - **File format**: set **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 62 - **Channel types**: eeg=62 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: standard_1020 - **Hardware**: ActiCHamp (Brain Products Gmbh) - **Reference**: Fpz - **Sensor type**: active Ag/AgCl - **Line frequency**: 50.0 Hz - **Impedance threshold**: 25 kOhm - **Auxiliary channels**: ecg **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=23.9 - **Gender distribution**: female=11, male=18 **Experimental Protocol** - **Paradigm**: rstate - **Number of classes**: 4 - **Class labels**: rs, easy, medium, diff - **Study design**: Passive BCI neuroergonomics dataset with resting state and 3 difficulty levels of MATB-II task (easy, medium, difficult). The MOABB loader provides resting state and MATB conditions only. 
- **Feedback type**: none - **Stimulus type**: visual display - **Training/test split**: False **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text rs ├─ Experiment-structure └─ Rest ``` ```text easy ├─ Experiment-structure └─ Label/easy ``` ```text medium ├─ Experiment-structure └─ Label/medium ``` ```text diff ├─ Experiment-structure └─ Label/difficult ``` **Paradigm-Specific Parameters** - **Detected paradigm**: resting_state **Data Structure** - **Trials**: 90 - **Trials context**: total **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: MDM, Riemannian - **Feature extraction**: Bandpower, Covariance/Riemannian, ICA - **Frequency bands**: alpha=[8.0, 13.0] Hz; theta=[4.0, 8.0] Hz **Cross-Validation** - **Method**: 5-fold - **Folds**: 5 - **Evaluation type**: cross_subject, cross_session, transfer_learning **Performance (Original Study)** - **Accuracy**: 70.67% **BCI Application** - **Applications**: neuroergonomics, mental_workload_estimation - **Environment**: laboratory **Tags** - **Pathology**: Healthy - **Modality**: Cognitive - **Type**: Research **Documentation** - **DOI**: 10.1038/s41597-022-01898-y - **License**: CC-BY-SA-4.0 - **Investigators**: Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N.
Roy - **Contact**: [marcel.hinss@isae-supaero.fr](mailto:marcel.hinss@isae-supaero.fr) - **Institution**: ISAE-SUPAERO, Université de Toulouse - **Department**: Department of Information Processing and Systems - **Address**: Toulouse, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.6874128](https://doi.org/10.5281/zenodo.6874128) - **Publication year**: 2023 - **Funding**: ERASMUS program; ANITI (Artificial and Natural Intelligence Toulouse Institute) - **Ethics approval**: Comité d’Éthique de la Recherche (CER), Université de Toulouse (CER number 2021-342) - **Acknowledgements**: This research was supported in part by the ERASMUS program (which funded Mr Hinss’ internship), and by ANITI (Artificial and Natural Intelligence Toulouse Institute), Toulouse, France. - **How to acknowledge**: Please cite: Hinss et al. (2023). Open multi-session and multi-task EEG cognitive dataset for passive brain-computer interface applications. Scientific Data, 10, 85. [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) **References** [Hinss2021] M. Hinss, B. Somon, F. Dehais & R. N. Roy (2021) Open EEG Datasets for Passive Brain-Computer Interface Applications: Lacks and Perspectives. IEEE Neural Engineering Conference. [Hinss2023] M. F. Hinss, et al. (2023) An EEG dataset for cross-session mental workload estimation: Passive BCI competition of the Neuroergonomics Conference 2021. Scientific Data, 10, 85. [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). 
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000206` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Neuroergonomic 2021 dataset | | Author (year) | `Hinss2021_Neuroergonomic` | | Canonical | `Hinss2021` | | Importable as | `NM000206`, `Hinss2021_Neuroergonomic`, `Hinss2021` | | Year | 2023 | | Authors | Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy | | License | CC-BY-SA-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000206) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) | [Source URL](https://nemar.org/dataexplorer/detail/nm000206) | ## Technical Details - Subjects: 15 - Recordings: 30 - Tasks: 1 - Channels: 61 - Sampling rate (Hz): 500.0 - Duration (hours): 3.974983333333333 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.2 GB - File count: 30 - Format: BIDS - License: CC-BY-SA-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000206](https://openneuro.org/datasets/nm000206) - NeMAR: [nm000206](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) ## API Reference Use the `NM000206` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000206(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroergonomic 2021 dataset * **Study:** `nm000206` (NeMAR) * **Author (year):** `Hinss2021_Neuroergonomic` * **Canonical:** `Hinss2021` Also importable as: `NM000206`, `Hinss2021_Neuroergonomic`, `Hinss2021`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000206](https://openneuro.org/datasets/nm000206) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000206](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) ### Examples ```pycon >>> from eegdash.dataset import NM000206 >>> dataset = NM000206(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000206) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000207: eeg dataset, 15 subjects *Class for Kojima2024B dataset management. P300 dataset* Access recordings and metadata through EEGDash. **Citation:** Simon Kojima, Shin’ichiro Kanoh (2024). *Class for Kojima2024B dataset management. P300 dataset*. 
Modality: eeg Subjects: 15 Recordings: 180 License: CC0-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000207 dataset = NM000207(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000207(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000207( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000207, title = {Class for Kojima2024B dataset management. P300 dataset}, author = {Simon Kojima and Shin'ichiro Kanoh}, } ``` ## About This Dataset **Class for Kojima2024B dataset management. P300 dataset** Class for Kojima2024B dataset management. P300 dataset. **Dataset Overview** - **Code**: Kojima2024B - **Paradigm**: p300 - **DOI**: 10.7910/DVN/1UJDV6 ### View full README **Class for Kojima2024B dataset management. P300 dataset** Class for Kojima2024B dataset management. P300 dataset. 
**Dataset Overview** - **Code**: Kojima2024B - **Paradigm**: p300 - **DOI**: 10.7910/DVN/1UJDV6 - **Subjects**: 15 - **Sessions per subject**: 1 - **Events**: Target=[111, 112, 113, 114], NonTarget=[101, 102, 103, 104] - **Trial interval**: [-0.5, 1.2] s - **Runs per session**: 12 - **File format**: BrainVision - **Number of contributing labs**: 1 **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64, eog=2 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, hEOG, vEOG - **Montage**: standard_1020 - **Hardware**: BrainAmp - **Reference**: right mastoid - **Ground**: left mastoid - **Sensor type**: EEG - **Line frequency**: 50.0 Hz - **Cap manufacturer**: EasyCap - **Electrode type**: passive Ag/AgCl - **Electrode material**: Ag/AgCl - **Auxiliary channels**: EOG (2 ch, vertical, horizontal) **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=22.8, min=21.0, max=24.0 - **Gender distribution**: male=13, female=2 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: auditory stream segregation with oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 90.0 s - **Tasks**: ASME-4stream, ASME-2stream - **Study design**: within-subject comparison - **Study domain**: auditory BCI - **Feedback type**: none - **Stimulus type**: auditory tones - **Stimulus modalities**: auditory - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: focus selectively on deviant stimuli in one of the streams and count target deviant stimuli **HED Event Annotations** Schema: HED 8.4.0 | Browse: 
[https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 4 - **Number of repetitions**: 15 - **Stimulus onset asynchrony**: {'ASME-4stream_overall': 150.0, 'ASME-2stream_overall': 300.0, 'within_stream': 600.0} ms **Data Structure** - **Trials**: {'ASME-4stream': '600 stimuli per trial (4 trials per run, 6 runs)', 'ASME-2stream': '300 stimuli per trial (4 trials per run, 6 runs)'} - **Blocks per session**: 12 - **Block duration**: 90.0 s - **Trials context**: 12 runs alternating between ASME-4stream and ASME-2stream, 4 trials per run **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: Linear Discriminant Analysis (LDA), shrinkage-LDA - **Feature extraction**: mean amplitudes in 10 intervals (0.1s non-overlapping, 0-1.0s) - **Frequency bands**: analyzed=[0.1, 8.0] Hz **Cross-Validation** - **Method**: 3-fold chronological cross-validation (BCI simulation); 4-fold chronological cross-validation (binary classification) - **Evaluation type**: offline simulation **Performance (Original Study)** - **ASME-4stream accuracy**: 0.83 - **ASME-2stream accuracy**: 0.86 **BCI Application** - **Applications**: communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: auditory - **Type**: ERP, P300 **Documentation** - **Description**: Four-class ASME BCI investigation comparing two strategies for multiclassing: ASME-4stream (four streams with single target stimulus each) vs ASME-2stream (two streams with two target stimuli each) - **DOI**: 10.3389/fnhum.2024.1461960 - **Associated paper DOI**: 10.3389/fnhum.2024.1461960 - 
**License**: CC0-1.0 - **Investigators**: Simon Kojima, Shin’ichiro Kanoh - **Senior author**: Shin’ichiro Kanoh - **Contact**: [simon.kojima@ieee.org](mailto:simon.kojima@ieee.org) - **Institution**: Shibaura Institute of Technology - **Department**: Graduate School of Engineering and Science (Simon Kojima); College of Engineering (Shin’ichiro Kanoh) - **Address**: Tokyo, Japan - **Country**: JP - **Repository**: Harvard dataverse - **Data URL**: [https://doi.org/10.7910/DVN/1UJDV6](https://doi.org/10.7910/DVN/1UJDV6) - **Publication year**: 2024 - **Funding**: JSPS KAKENHI (Grant Number JP23K11811 to Shin’ichiro Kanoh) - **Ethics approval**: Review Board on Bioengineering Research Ethics of the Shibaura Institute of Technology - **Keywords**: brain-computer interface, electroencephalogram, event-related potential, auditory scene analysis, stream segregation, machine learning, NASA-TLX **Abstract** The ASME (Auditory Stream segregation Multiclass ERP) paradigm is used for an auditory brain-computer interface (BCI). Two approaches for achieving four-class ASME were investigated: ASME-4stream (four streams with a single target stimulus each) and ASME-2stream (two streams with two target stimuli each). Fifteen healthy subjects participated. ERPs were analyzed, and binary classification and BCI simulation were conducted offline using linear discriminant analysis. Average accuracies were 0.83 (ASME-4stream) and 0.86 (ASME-2stream). The ASME-2stream paradigm showed shorter latency and larger amplitude of P300, higher binary classification accuracy, and smaller workload. Both paradigms achieved sufficiently high accuracy (over 80%) for practical auditory BCI. **Methodology** Subjects performed 12 runs alternating between ASME-4stream and ASME-2stream paradigms. Each run contained 4 trials with ~90s duration. ASME-4stream presented 4 streams (SOA=0.15s, 600 stimuli/trial, ratio 9:1 standard:deviant). 
ASME-2stream presented 2 streams with 2 deviant stimuli each (SOA=0.3s, 300 stimuli/trial, ratio 8:1:1). EEG recorded at 1000 Hz from 64 channels. EOG artifacts removed using ICA on 15 PCs. Data filtered (1-40 Hz for ERP, 0.1-8 Hz for classification), epoched (-0.1 to 1.2s), downsampled to 250 Hz. Classification used shrinkage-LDA with mean amplitudes from 10 intervals (0-1.0s) as features. Performance evaluated using 4-fold chronological cross-validation. Usability assessed via NASA-TLX questionnaire. **References** Kojima, S. (2024). Replication Data for: Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing. Harvard Dataverse, V1. DOI: [https://doi.org/10.7910/DVN/1UJDV6](https://doi.org/10.7910/DVN/1UJDV6) Kojima, S. & Kanoh, S. (2024). Four-class ASME BCI: investigation of the feasibility and comparison of two strategies for multiclassing. Frontiers in Human Neuroscience 18:1461960. DOI: [https://doi.org/10.3389/fnhum.2024.1461960](https://doi.org/10.3389/fnhum.2024.1461960) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000207` | |----------------|----------------| | Title | Class for Kojima2024B dataset management. P300 dataset | | Author (year) | `Kojima2024B_P300` | | Canonical | — | | Importable as | `NM000207`, `Kojima2024B_P300` | | Year | 2024 | | Authors | Simon Kojima, Shin’ichiro Kanoh | | License | CC0-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000207) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) · [Source URL](https://nemar.org/dataexplorer/detail/nm000207) | ## Technical Details - Subjects: 15 - Recordings: 180 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 21.63 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 13.9 GB - File count: 180 - Format: BIDS - License: CC0-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000207](https://openneuro.org/datasets/nm000207) - NeMAR: [nm000207](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) ## API Reference Use the `NM000207` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Class for Kojima2024B dataset management. P300 dataset * **Study:** `nm000207` (NeMAR) * **Author (year):** `Kojima2024B_P300` * **Canonical:** — Also importable as: `NM000207`, `Kojima2024B_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 15; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000207](https://openneuro.org/datasets/nm000207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000207](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) ### Examples ```pycon >>> from eegdash.dataset import NM000207 >>> dataset = NM000207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000207) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000208: eeg dataset, 14 subjects *Door lock control experiment (15 subjects, 4 classes, 31 EEG ch)* Access recordings and metadata through EEGDash. **Citation:** Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim (2019). *Door lock control experiment (15 subjects, 4 classes, 31 EEG ch)*. Modality: eeg Subjects: 14 Recordings: 434 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000208 dataset = NM000208(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000208(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000208( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000208, title = {Door lock control experiment (15 subjects, 4 classes, 31 EEG ch)}, author = {Jongmin Lee and Minju Kim and Dojin Heo and Jongsu Kim and Min-Ki Kim and Taejun Lee and Jongwoo Park and HyunYoung Kim and Minho Hwang and Laehyun Kim and Sung-Phil Kim}, } ``` ## About This Dataset **Door lock control experiment (15 subjects, 4 classes, 31 EEG ch)** Door lock control experiment (15 subjects, 4 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-DL - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 ### View full README **Door lock control experiment (15 subjects, 4 classes, 31 EEG ch)** Door lock control experiment (15 subjects, 4 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-DL - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 - **Subjects**: 15 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: MATLAB **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31 - **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT9, FC5, FC1, FC2, FC6, FT10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: actiCHamp (Brain Products) - **Reference**: linked mastoids - **Sensor type**: active - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=22.87, std=2.07 - **Gender distribution**: male=12, female=3 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: P300 BCI for DL home appliance control; 4-class oddball; LCD display - **Feedback type**: visual - **Stimulus type**: flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: 
[https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 750.0 ms **Data Structure** - **Trials**: 50 training + 30 testing blocks per subject - **Trials context**: per_subject **BCI Application** - **Applications**: home_appliance_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**: 10.3389/fnhum.2024.1320457 - **License**: CC-BY-4.0 - **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim - **Institution**: Ulsan National Institute of Science and Technology - **Country**: KR - **Data URL**: [https://github.com/jml226/Home-Appliance-Control-Dataset](https://github.com/jml226/Home-Appliance-Control-Dataset) - **Publication year**: 2024 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000208` | |----------------|----------------| | Title | Door lock control experiment (15 subjects, 4 classes, 31 EEG ch) | | Author (year) | `Lee2024_Door_lock_control` | | Canonical | — | | Importable as | `NM000208`, `Lee2024_Door_lock_control` | | Year | 2019 | | Authors | Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000208) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) · [Source URL](https://nemar.org/dataexplorer/detail/nm000208) | ## Technical Details - Subjects: 14 - Recordings: 434 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 500.0 - Duration (hours): 3.67 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 609.6 MB - File count: 434 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000208](https://openneuro.org/datasets/nm000208) - NeMAR: [nm000208](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) ## API Reference Use the `NM000208` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000208(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Door lock control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000208` (NeMAR) * **Author (year):** `Lee2024_Door_lock_control` * **Canonical:** — Also importable as: `NM000208`, `Lee2024_Door_lock_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 434; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
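The attributes above document that `data_dir` resolves to `cache_dir / dataset_id`. A small `pathlib` sketch of that layout (illustrative only; `data_dir_for` is a hypothetical helper, and nothing is downloaded here):

```python
# Illustration of the documented cache layout: each dataset's files
# land under cache_dir / dataset_id. Hypothetical helper, not EEGDash code.
from pathlib import Path

def data_dir_for(cache_dir: str, dataset_id: str) -> Path:
    """Local cache directory for one dataset (cache_dir / dataset_id)."""
    return Path(cache_dir) / dataset_id

print(data_dir_for("./data", "nm000208"))  # on POSIX prints: data/nm000208
```

Knowing the layout helps when pre-seeding a cache or deleting a single dataset without touching the rest of `cache_dir`.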
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000208](https://openneuro.org/datasets/nm000208) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000208](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) ### Examples ```pycon >>> from eegdash.dataset import NM000208 >>> dataset = NM000208(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000208) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000209: eeg dataset, 25 subjects *Motor imagery + spatial attention dataset from Forenzo & He 2023* Access recordings and metadata through EEGDash. **Citation:** Dylan Forenzo, Yixuan Liu, Jeehyun Kim, Yidan Ding, Taehyung Yoon, Bin He (2024). *Motor imagery + spatial attention dataset from Forenzo & He 2023*. 
Modality: eeg Subjects: 25 Recordings: 150 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000209 dataset = NM000209(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000209(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000209( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000209, title = {Motor imagery + spatial attention dataset from Forenzo \& He 2023}, author = {Dylan Forenzo and Yixuan Liu and Jeehyun Kim and Yidan Ding and Taehyung Yoon and Bin He}, } ``` ## About This Dataset **Motor imagery + spatial attention dataset from Forenzo & He 2023** Motor imagery + spatial attention dataset from Forenzo & He 2023. **Dataset Overview** - **Code**: Forenzo2023 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2023.3298957 ### View full README **Motor imagery + spatial attention dataset from Forenzo & He 2023** Motor imagery + spatial attention dataset from Forenzo & He 2023. 
**Dataset Overview** - **Code**: Forenzo2023 - **Paradigm**: imagery - **DOI**: 10.1109/TBME.2023.3298957 - **Subjects**: 25 - **Sessions per subject**: 5 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 4] s - **Runs per session**: 3 - **File format**: MAT **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64 - **Montage**: standard_1005 - **Hardware**: Neuroscan Quik-Cap 64-ch, SynAmps 2/RT - **Reference**: between Cz and CPz - **Sensor type**: Ag/AgCl - **Line frequency**: 60.0 Hz - **Online filters**: {'lowpass': 200, 'notch_hz': 60} **Participants** - **Number of subjects**: 25 - **Health status**: healthy - **Age**: mean=25.5 - **Gender distribution**: female=10, male=15 - **Handedness**: right-handed (24 of 25) - **BCI experience**: mixed (19 naive, 6 experienced) - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 6.0 s - **Study design**: 5-session BCI study with motor imagery (MI), overt spatial attention (OSA), and combined (MIOSA) tasks - **Feedback type**: cursor - **Stimulus type**: continuous pursuit - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Imagery duration**: 6.0 s **Data Structure** - **Trials**: 1875 - **Trials context**: 25 subjects x 5 sessions x 3 MI runs x 5 trials 
**Signal Processing** - **Classifiers**: linear_classifier - **Feature extraction**: AR_spectral_estimation, alpha_bandpower - **Frequency bands**: alpha=[8.0, 13.0] Hz - **Spatial filters**: Laplacian **Cross-Validation** - **Evaluation type**: within_subject **BCI Application** - **Applications**: cursor_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1109/TBME.2023.3298957 - **License**: CC-BY-4.0 - **Investigators**: Dylan Forenzo, Yixuan Liu, Jeehyun Kim, Yidan Ding, Taehyung Yoon, Bin He - **Institution**: Carnegie Mellon University - **Department**: Department of Biomedical Engineering - **Country**: US - **Data URL**: [https://kilthub.cmu.edu/articles/dataset/23677098](https://kilthub.cmu.edu/articles/dataset/23677098) - **Publication year**: 2023 **References** Forenzo, D., & He, B. (2024). Integrating simultaneous motor imagery and spatial attention for EEG-BCI control. IEEE Trans. Biomed. Eng., 71(1), 282-294. [https://doi.org/10.1109/TBME.2023.3298957](https://doi.org/10.1109/TBME.2023.3298957) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000209` | |----------------|----------------| | Title | Motor imagery + spatial attention dataset from Forenzo & He 2023 | | Author (year) | `Forenzo2023` | | Canonical | — | | Importable as | `NM000209`, `Forenzo2023` | | Year | 2024 | | Authors | Dylan Forenzo, Yixuan Liu, Jeehyun Kim, Yidan Ding, Taehyung Yoon, Bin He | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000209) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) · [Source URL](https://nemar.org/dataexplorer/detail/nm000209) | ## Technical Details - Subjects: 25 - Recordings: 150 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 7.57 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 4.9 GB - File count: 150 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000209](https://openneuro.org/datasets/nm000209) - NeMAR: [nm000209](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) ## API Reference Use the `NM000209` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000209(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery + spatial attention dataset from Forenzo & He 2023 * **Study:** `nm000209` (NeMAR) * **Author (year):** `Forenzo2023` * **Canonical:** — Also importable as: `NM000209`, `Forenzo2023`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 150; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000209](https://openneuro.org/datasets/nm000209) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000209](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) ### Examples ```pycon >>> from eegdash.dataset import NM000209 >>> dataset = NM000209(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000209) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000210: eeg dataset, 15 subjects *BCIAUT-P300 dataset for autism from Simoes et al 2020* Access recordings and metadata through EEGDash. **Citation:** Marco Simoes, Davide Borra, Eduardo Santamaria-Vazquez, Mayra Bittencourt-Villalpando, Dominik Krzeminski, Aleksandar Miladinovic, Carlos Amaral, Bruno Direito, Miguel Castelo-Branco (2020). *BCIAUT-P300 dataset for autism from Simoes et al 2020*. Modality: eeg Subjects: 15 Recordings: 210 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000210 dataset = NM000210(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000210(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000210( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000210, title = {BCIAUT-P300 dataset for autism from Simoes et al 2020}, author = {Marco Simoes and Davide Borra and Eduardo Santamaria-Vazquez and Mayra Bittencourt-Villalpando and Dominik Krzeminski and Aleksandar Miladinovic and Carlos Amaral and Bruno Direito and Miguel Castelo-Branco}, } ``` ## About This Dataset **BCIAUT-P300 dataset for autism from Simoes et al 2020** BCIAUT-P300 dataset for autism from Simoes et al 2020. **Dataset Overview** - **Code**: Simoes2020 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.568104 ### View full README **BCIAUT-P300 dataset for autism from Simoes et al 2020** BCIAUT-P300 dataset for autism from Simoes et al 2020. **Dataset Overview** - **Code**: Simoes2020 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.568104 - **Subjects**: 15 - **Sessions per subject**: 7 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.2] s - **Runs per session**: 2 - **File format**: MATLAB (epoched) - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: C3, Cz, C4, CPz, P3, Pz, P4, POz - **Montage**: standard_1020 - **Hardware**: g.Nautilus (g.tec, wireless) - **Reference**: right ear - **Ground**: AFz - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 15 - **Health status**: patients - **Clinical population**: autism spectrum disorder (ASD) - **Age**: mean=22.17, std=5.5, min=16, max=38 - **Gender distribution**: male=15 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.2 s - **Study design**: P300 BCI joint-attention training in virtual environment; 8 flashing objects; 15 ASD subjects across 7 sessions (clinical trial NCT02445625) - **Feedback type**: visual - **Stimulus type**: object flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: 
online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Data Structure** - **Trials**: 1600 train + 400×K test per session (K=3-10) - **Trials context**: per_session **Signal Processing** - **Classifiers**: EEGNet, LDA, SVM, MLP - **Feature extraction**: temporal_features, deep_learning - **Frequency bands**: bandpass=[2.0, 30.0] Hz **Cross-Validation** - **Method**: calibration_vs_online - **Evaluation type**: within_subject, cross_session, cross_subject **BCI Application** - **Applications**: joint_attention_training - **Environment**: clinical - **Online feedback**: True **Tags** - **Pathology**: Autism - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**: 10.3389/fnins.2020.568104 - **License**: CC-BY-4.0 - **Investigators**: Marco Simoes, Davide Borra, Eduardo Santamaria-Vazquez, Mayra Bittencourt-Villalpando, Dominik Krzeminski, Aleksandar Miladinovic, Carlos Amaral, Bruno Direito, Miguel Castelo-Branco - **Institution**: University of Coimbra - **Country**: PT - **Data URL**: [https://zenodo.org/records/19005186](https://zenodo.org/records/19005186) - **Publication year**: 2020 **References** Simoes, M., Borra, D., Santamaria-Vazquez, E., et al. (2020). BCIAUT-P300: A Multi-Session and Multi-Subject Benchmark Dataset on Autism for P300-Based Brain-Computer Interfaces. Frontiers in Neuroscience, 14, 568104.
[https://doi.org/10.3389/fnins.2020.568104](https://doi.org/10.3389/fnins.2020.568104) Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000210` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BCIAUT-P300 dataset for autism from Simoes et al 2020 | | Author (year) | `Simoes2020` | | Canonical | `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT` | | Importable as | `NM000210`, `Simoes2020`, `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT` | | Year | 2020 | | Authors | Marco Simoes, Davide Borra, Eduardo Santamaria-Vazquez, Mayra Bittencourt-Villalpando, Dominik Krzeminski, Aleksandar Miladinovic, Carlos Amaral, Bruno Direito, Miguel Castelo-Branco | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000210) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) | [Source URL](https://nemar.org/dataexplorer/detail/nm000210) | ## Technical Details - Subjects: 15 - Recordings: 210 - Tasks: 1 -
Channels: 8 - Sampling rate (Hz): 250.0 - Duration (hours): 187.43532222222225 - Pathology: Development - Modality: Visual - Type: Clinical/Intervention - Size on disk: 3.8 GB - File count: 210 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000210](https://openneuro.org/datasets/nm000210) - NeMAR: [nm000210](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) ## API Reference Use the `NM000210` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000210(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIAUT-P300 dataset for autism from Simoes et al 2020 * **Study:** `nm000210` (NeMAR) * **Author (year):** `Simoes2020` * **Canonical:** `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT` Also importable as: `NM000210`, `Simoes2020`, `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 15; recordings: 210; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000210](https://openneuro.org/datasets/nm000210) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000210](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) ### Examples ```pycon >>> from eegdash.dataset import NM000210 >>> dataset = NM000210(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000210) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000211: eeg dataset, 15 subjects *RSVP ERP dataset for authentication from Zhang et al 2025* Access recordings and metadata through EEGDash. **Citation:** Yufeng Zhang, Hongxin Zhang, Yixuan Li, Yijun Wang, Xiaorong Gao, Chen Yang (2025). *RSVP ERP dataset for authentication from Zhang et al 2025*. 
Modality: eeg Subjects: 15 Recordings: 240 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000211 dataset = NM000211(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000211(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000211( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000211, title = {RSVP ERP dataset for authentication from Zhang et al 2025}, author = {Yufeng Zhang and Hongxin Zhang and Yixuan Li and Yijun Wang and Xiaorong Gao and Chen Yang}, } ``` ## About This Dataset **RSVP ERP dataset for authentication from Zhang et al 2025** RSVP ERP dataset for authentication from Zhang et al 2025. **Dataset Overview** - **Code**: Zhang2025 - **Paradigm**: p300 - **DOI**: 10.1038/s41597-025-05378-x ### View full README **RSVP ERP dataset for authentication from Zhang et al 2025** RSVP ERP dataset for authentication from Zhang et al 2025. 
**Dataset Overview** - **Code**: Zhang2025 - **Paradigm**: p300 - **DOI**: 10.1038/s41597-025-05378-x - **Subjects**: 15 - **Sessions per subject**: 4 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 0.6] s - **Runs per session**: 4 - **File format**: MATLAB (HDF5) **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 57 - **Channel types**: eeg=57 - **Channel names**: Fpz, Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P3, P4, P5, P6, P7, P8, POz, PO3, PO4, PO7, PO8, Oz, O1, O2 - **Montage**: standard_1020 - **Hardware**: Neuracle Neusen - **Reference**: CPz - **Ground**: AFz - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: min=22, max=26 - **Gender distribution**: female=6, male=9 - **Handedness**: all right-handed - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: RSVP face authentication; self-face vs AI-generated faces; 4 sessions over 200 days (longitudinal) - **Feedback type**: none - **Stimulus type**: RSVP face images - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 100.0 ms **Data Structure** - **Trials**: ~160 target + ~6240 nontarget per session - **Trials context**: per session (4 blocks × 8 sequences × 200
images) **Signal Processing** - **Classifiers**: HDCA - **Feature extraction**: HDCA - **Frequency bands**: ERP_dominant=[0.0, 10.0] Hz **Cross-Validation** - **Evaluation type**: within_subject **BCI Application** - **Applications**: identity_authentication, target_detection - **Environment**: laboratory **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: RSVP **Documentation** - **DOI**: 10.1038/s41597-025-05378-x - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Yufeng Zhang, Hongxin Zhang, Yixuan Li, Yijun Wang, Xiaorong Gao, Chen Yang - **Institution**: Beijing University of Posts and Telecommunications - **Country**: CN - **Data URL**: [https://figshare.com/articles/dataset/27201003](https://figshare.com/articles/dataset/27201003) - **Publication year**: 2025 **References** Zhang, Y., Zhang, H., Li, Y., Wang, Y., Gao, X., & Yang, C. (2025). A longitudinal EEG dataset of event-related potential for EEG-based identity authentication. Scientific Data, 12, 1069. [https://doi.org/10.1038/s41597-025-05378-x](https://doi.org/10.1038/s41597-025-05378-x) Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000211` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | RSVP ERP dataset for authentication from Zhang et al 2025 | | Author (year) | `Zhang2025_RSVP` | | Canonical | `Zhang2025` | | Importable as | `NM000211`, `Zhang2025_RSVP`, `Zhang2025` | | Year | 2025 | | Authors | Yufeng Zhang, Hongxin Zhang, Yixuan Li, Yijun Wang, Xiaorong Gao, Chen Yang | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000211) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) | [Source URL](https://nemar.org/dataexplorer/detail/nm000211) | ## Technical Details - Subjects: 15 - Recordings: 240 - Tasks: 1 - Channels: 57 - Sampling rate (Hz): 1000.0 - Duration (hours): 15.022525 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 8.7 GB - File count: 240 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000211](https://openneuro.org/datasets/nm000211) - NeMAR: [nm000211](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) ## API Reference Use the `NM000211` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000211(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP ERP dataset for authentication from Zhang et al 2025 * **Study:** `nm000211` (NeMAR) * **Author (year):** `Zhang2025_RSVP` * **Canonical:** `Zhang2025` Also importable as: `NM000211`, `Zhang2025_RSVP`, `Zhang2025`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000211](https://openneuro.org/datasets/nm000211) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000211](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) ### Examples ```pycon >>> from eegdash.dataset import NM000211 >>> dataset = NM000211(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000211) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000212: eeg dataset, 16 subjects *BNCI 2015-007 Motion VEP (mVEP) Speller dataset* Access recordings and metadata through EEGDash. **Citation:** Sulamith Schaeff, Matthias Sebastian Treder, Bastian Venthur, Benjamin Blankertz (2012). *BNCI 2015-007 Motion VEP (mVEP) Speller dataset*. Modality: eeg Subjects: 16 Recordings: 32 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000212 dataset = NM000212(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000212(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000212( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000212, title = {BNCI 2015-007 Motion VEP (mVEP) Speller dataset}, author = {Sulamith Schaeff and Matthias Sebastian Treder and Bastian Venthur and Benjamin Blankertz}, } ``` ## About This Dataset **BNCI 2015-007 Motion VEP (mVEP) Speller dataset** BNCI 2015-007 Motion VEP (mVEP) Speller dataset. 
**Dataset Overview** - **Code**: BNCI2015-007 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/9/4/045006 ### View full README **BNCI 2015-007 Motion VEP (mVEP) Speller dataset** BNCI 2015-007 Motion VEP (mVEP) Speller dataset. **Dataset Overview** - **Code**: BNCI2015-007 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/9/4/045006 - **Subjects**: 16 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 0.7] s - **Runs per session**: 2 - **Session IDs**: practice, calibration, copy_spelling, free_spelling - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 100.0 Hz - **Number of channels**: 63 - **Channel types**: eeg=63 - **Channel names**: Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, T7, T8, Cz, C1, C2, C3, C4, C5, C6, TP7, TP8, CPz, CP1, CP2, CP3, CP4, CP5, CP6, Pz, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, POz, PO3, PO4, PO7, PO8, Oz, O1, O2 - **Montage**: 10-10 - **Hardware**: BrainAmp EEG amplifier - **Software**: Pyff, VisionEgg, MATLAB - **Reference**: linked mastoids - **Ground**: forehead - **Sensor type**: active electrode - **Line frequency**: 50.0 Hz - **Online filters**: hardware bandpass filter 0.016–250 Hz - **Impedance threshold**: 10.0 kOhm - **Cap manufacturer**: Brain Products - **Electrode type**: actiCap active electrode system **Participants** - **Number of subjects**: 16 - **Health status**: healthy - **Age**: mean=23.8, min=21, max=30 - **Gender distribution**: male=10, female=6 - **Vision**: normal or corrected-to-normal - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: visual_speller - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 30.0 s - **Study design**: Three different Cake Speller modifications: Overt Cake Speller (gaze toward target),
Covert Cake Speller (central fixation, covert attention), Motion Center Speller (foveal stimulation). Two-level selection (group-level and symbol-level) from 30 symbols. - **Study domain**: gaze-independent communication - **Feedback type**: visual - **Stimulus type**: motion VEP (mVEP) - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Copy-spelling and free-spelling with attention to target symbols. Participants counted moving bar/pattern presentations in target location. - **Stimulus presentation**: soa_ms=200 ms (Cake Spellers) or 266 ms (Motion Center Speller), stimulus_duration_ms=100 ms, isi_ms=100 ms, repetitions=10 per level, total_presentations=120 per selection (2 levels × 10 repetitions × 6 groups/symbols) **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 6 - **Number of repetitions**: 10 - **Inter-stimulus interval**: 100.0 ms - **Stimulus onset asynchrony**: 200.0 ms **Data Structure** - **Trials**: 120 - **Blocks per session**: 4 - **Trials context**: per_selection (2 levels × 10 repetitions × 6 groups/symbols) **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: downsampling, low-pass filter, baseline correction, artifact rejection - **Highpass filter**: 0.016 Hz - **Lowpass filter**: 250.0 Hz - **Bandpass filter**: {'low_cutoff_hz': 0.016, 'high_cutoff_hz': 250.0} - **Filter type**: hardware bandpass, Chebyshev low-pass for offline - **Artifact methods**: min-max criterion (70 μV), variance criterion -
**Re-reference**: linked mastoids - **Downsampled to**: 100.0 Hz - **Epoch window**: [-0.2, 1.0] s - **Notes**: For offline analysis: downsampled to 200 Hz, low-pass filtered (42 Hz passband, 49 Hz stopband). For online: downsampled to 100 Hz. Artifact rejection: min-max ≥70 μV. Nontarget epochs filtered to avoid overlap with targets (3 preceding and 4 following stimuli must be nontargets). **Signal Processing** - **Classifiers**: LDA with shrinkage of covariance matrix - **Feature extraction**: signed square values of point-biserial correlation coefficients - **Frequency bands**: analyzed=[100.0, 800.0] Hz - **Spatial filters**: LDA spatial filter **Cross-Validation** - **Method**: train on calibration, test on copy-spelling and free-spelling - **Evaluation type**: within_session **Performance (Original Study)** - **N200 Latency Overt Ms**: 164.0 - **N200 Latency Covert Ms**: 180.0 - **N200 Latency Motion Center Ms**: 198.0 - **P300 Latency Range Ms**: 300-500 - **N200 Latency Range Ms**: 100-250 **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: P300, VEP **Documentation** - **Description**: Exploring motion VEPs for gaze-independent communication - **DOI**: 10.1088/1741-2560/9/4/045006 - **Associated paper DOI**: 10.1088/1741-2560/11/2/026009 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Sulamith Schaeff, Matthias Sebastian Treder, Bastian Venthur, Benjamin Blankertz - **Senior author**: Benjamin Blankertz - **Contact**: [benjamin.blankertz@tu-berlin.de](mailto:benjamin.blankertz@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Neurotechnology Group - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2012 - **Funding**: DFG grant; BMBF grant - **Ethics approval**: Declaration of Helsinki - **Keywords**: motion visually evoked
potentials, mVEP, BCI, speller, gaze-independent, covert attention, P300, N200 **References** Treder, M. S., Purwins, H., Miklody, D., Sturm, I., & Blankertz, B. (2014). Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification. Journal of Neural Engineering, 11(2), 026009. [https://doi.org/10.1088/1741-2560/11/2/026009](https://doi.org/10.1088/1741-2560/11/2/026009) **Notes** Added in MOABB 1.2.0. **See Also** BNCI2015_008: Center Speller P300 dataset (gaze-independent); BNCI2015_009: AMUSE auditory spatial P300 dataset; BNCI2015_010: RSVP visual speller (gaze-independent visual paradigm). Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000212` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-007 Motion VEP (mVEP) Speller dataset | | Author (year) | `Schaeff2015` | | Canonical | `BNCI2015` | | Importable as | `NM000212`, `Schaeff2015`, `BNCI2015` | | Year | 2012 | | Authors | Sulamith Schaeff, Matthias Sebastian Treder, Bastian Venthur, Benjamin Blankertz | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000212) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) | [Source URL](https://nemar.org/dataexplorer/detail/nm000212) | ## Technical Details - Subjects: 16 - Recordings: 32 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 100.0 - Duration (hours): 19.95450555555556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.3 GB - File count: 32 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000212](https://openneuro.org/datasets/nm000212) - NeMAR: [nm000212](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) ## API Reference Use the `NM000212` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-007 Motion VEP (mVEP) Speller dataset * **Study:** `nm000212` (NeMAR) * **Author (year):** `Schaeff2015` * **Canonical:** `BNCI2015` Also importable as: `NM000212`, `Schaeff2015`, `BNCI2015`.
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000212](https://openneuro.org/datasets/nm000212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000212](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) ### Examples ```pycon >>> from eegdash.dataset import NM000212 >>> dataset = NM000212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000212) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000213: eeg dataset, 30 subjects *Television control experiment (30 subjects, 4 classes, 31 EEG ch)* Access recordings and metadata through EEGDash. **Citation:** Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim (2019). *Television control experiment (30 subjects, 4 classes, 31 EEG ch)*. Modality: eeg Subjects: 30 Recordings: 2300 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000213 dataset = NM000213(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000213(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000213( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000213, title = {Television control experiment (30 subjects, 4 classes, 31 EEG ch)}, author = {Jongmin Lee and Minju Kim and Dojin Heo and Jongsu Kim and Min-Ki Kim and Taejun Lee and Jongwoo Park and HyunYoung Kim and Minho Hwang and Laehyun Kim and Sung-Phil Kim}, } ``` ## About This Dataset **Television control experiment (30 subjects, 4 classes, 31 EEG ch)** Television control experiment (30 subjects, 4 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-TV - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 ### View full README **Television control experiment (30 subjects, 4 classes, 31 EEG ch)** Television control experiment (30 subjects, 4 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-TV - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 - **Subjects**: 30 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: MATLAB **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31 - **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT9, FC5, FC1, FC2, FC6, FT10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: actiCHamp (Brain Products) - **Reference**: linked mastoids - **Sensor type**: active - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 30 - **Health status**: healthy - **Age**: mean=21.63, std=2.31 - **Gender distribution**: male=23, female=7 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: P300 BCI for TV home appliance control; 4-class oddball; LCD display - **Feedback type**: visual - **Stimulus type**: flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: 
[https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 750.0 ms **Data Structure** - **Trials**: 50 training + 30 testing blocks per subject - **Trials context**: per_subject **BCI Application** - **Applications**: home_appliance_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**: 10.3389/fnhum.2024.1320457 - **License**: CC-BY-4.0 - **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim - **Institution**: Ulsan National Institute of Science and Technology - **Country**: KR - **Data URL**: [https://github.com/jml226/Home-Appliance-Control-Dataset](https://github.com/jml226/Home-Appliance-Control-Dataset) - **Publication year**: 2024 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000213` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Television control experiment (30 subjects, 4 classes, 31 EEG ch) | | Author (year) | `Lee2024_Television_control_30` | | Canonical | — | | Importable as | `NM000213`, `Lee2024_Television_control_30` | | Year | 2019 | | Authors | Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000213) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) | [Source URL](https://nemar.org/dataexplorer/detail/nm000213) | ## Technical Details - Subjects: 30 - Recordings: 2300 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 500.0 - Duration (hours): 8.477633333333333 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.4 GB - File count: 2300 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000213](https://openneuro.org/datasets/nm000213) - NeMAR: [nm000213](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) ## API Reference Use the `NM000213` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000213(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Television control experiment (30 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000213` (NeMAR) * **Author (year):** `Lee2024_Television_control_30` * **Canonical:** — Also importable as: `NM000213`, `Lee2024_Television_control_30`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 2300; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
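The MongoDB-style filter semantics described in the notes can be illustrated with plain dictionaries. The sketch below is for intuition only, under the assumption that a query is a mapping of field names to either literal values or operator clauses such as `{"$in": [...]}`; `matches` is a hypothetical helper written for this example, not part of the eegdash API:

```python
# Minimal sketch of MongoDB-style filter matching, for intuition only.
# `matches` is a hypothetical helper, NOT part of eegdash; the library
# evaluates queries against its metadata database.
def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every clause in `query`."""
    for field, condition in query.items():
        value = record.get(field)
        if isinstance(condition, dict):
            # Operator clause, e.g. {"$in": [...]} or {"$eq": ...}
            for op, arg in condition.items():
                if op == "$in" and value not in arg:
                    return False
                if op == "$eq" and value != arg:
                    return False
        elif value != condition:
            # Bare value means exact equality
            return False
    return True

records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
# `selected` keeps subjects "01" and "02" only
```

All clauses are ANDed together, which mirrors how a user-supplied `query` is combined with the dataset filter.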
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000213](https://openneuro.org/datasets/nm000213) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000213](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) ### Examples ```pycon >>> from eegdash.dataset import NM000213 >>> dataset = NM000213(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000213) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000214: eeg dataset, 30 subjects *c-VEP dataset from Thielen et al. (2021)* Access recordings and metadata through EEGDash. **Citation:** J Thielen, P Marsman, J Farquhar, P Desain (2021). *c-VEP dataset from Thielen et al. (2021)*. 
Modality: eeg Subjects: 30 Recordings: 150 License: CC0-1.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000214 dataset = NM000214(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000214(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000214( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000214, title = {c-VEP dataset from Thielen et al. (2021)}, author = {J Thielen and P Marsman and J Farquhar and P Desain}, } ``` ## About This Dataset **c-VEP dataset from Thielen et al. (2021)** c-VEP dataset from Thielen et al. (2021) **Dataset Overview** - **Code**: Thielen2021 - **Paradigm**: cvep - **DOI**: 10.34973/9txv-z787 ### View full README **c-VEP dataset from Thielen et al. (2021)** c-VEP dataset from Thielen et al. 
(2021) **Dataset Overview** - **Code**: Thielen2021 - **Paradigm**: cvep - **DOI**: 10.34973/9txv-z787 - **Subjects**: 30 - **Sessions per subject**: 1 - **Events**: 1.0=101, 0.0=100 - **Trial interval**: (0, 0.3) s - **Runs per session**: 5 - **File format**: gdf - **Contributing labs**: MindAffect, Radboud University **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 8 - **Channel types**: eeg=8 - **Channel names**: Fpz, Iz, O1, O2, Oz, POz, T7, T8 - **Montage**: custom - **Hardware**: Biosemi ActiveTwo - **Reference**: CMS/DRL - **Sensor type**: sintered Ag/AgCl active electrodes - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 30 - **Health status**: healthy - **Age**: mean=25.0, min=19, max=62 - **Gender distribution**: female=17, male=13 **Experimental Protocol** - **Paradigm**: cvep - **Number of classes**: 2 - **Class labels**: 1.0, 0.0 - **Trial duration**: 31.5 s - **Study design**: Code-modulated visual evoked potentials BCI task where participants fixated on target cells in a calculator grid (offline) or keyboard layout (online) while all cells flashed with unique pseudo-random Gold code modulated bit-sequences - **Feedback type**: none - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Participants maintained fixation at the target cell which was cued in green for 1 s before trial onset. No feedback was given after trials in the offline experiment. 
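The stimulation timing for this dataset is internally consistent, which a quick arithmetic check makes explicit. The 126-bit Gold code length and 60 Hz presentation rate are taken from the methodology notes further down; this is an illustrative cross-check, not library code:

```python
# Cross-check of the c-VEP stimulation timing reported for this dataset.
code_length_bits = 126     # modulated Gold code length (methodology notes)
presentation_rate_hz = 60  # display presentation rate (methodology notes)
trial_duration_s = 31.5    # trial duration stated in the protocol

code_cycle_s = code_length_bits / presentation_rate_hz  # 2.1 s per code cycle
cycles_per_trial = trial_duration_s / code_cycle_s      # ~15 full repetitions
```

So each 31.5 s trial contains fifteen full repetitions of the 2.1 s code sequence.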
**HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text 1.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_1_0 0.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0_0 ``` **Paradigm-Specific Parameters** - **Detected paradigm**: cvep - **Code type**: modulated Gold codes - **Code length**: 126 - **Number of targets**: 20 **Data Structure** - **Trials**: 100 - **Blocks per session**: 5 - **Trials context**: per_subject (5 blocks × 20 trials each) **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: template-matching, reconvolution, CCA - **Feature extraction**: encoding model, event responses, spatio-temporal - **Spatial filters**: CCA **Cross-Validation** - **Method**: cross-validation - **Folds**: 5 - **Evaluation type**: within_session, transfer_learning, zero_training **Performance (Original Study)** - **High Communication Rates**: achieved in online spelling task **BCI Application** - **Applications**: speller - **Environment**: indoor - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.1088/1741-2552/abecef - **Associated paper DOI**: 10.1088/1741-2552/ab4057 - **License**: CC0-1.0 - **Investigators**: J Thielen, P Marsman, J Farquhar, P Desain - **Senior author**: P Desain - **Contact**: [jordy.thielen@donders.ru.nl](mailto:jordy.thielen@donders.ru.nl) - **Institution**: Radboud University - **Department**: Donders Institute for Brain, Cognition and Behaviour - **Country**: NL - **Repository**: Radboud - **Data URL**: [https://doi.org/10.34973/9txv-z787](https://doi.org/10.34973/9txv-z787) - **Publication year**: 2021 - **Funding**: NWO/TTW Takeoff Grant No. 
14054; International ALS Association and Dutch ALS Foundation Grant Nos. ATC20610 and 2017-57 - **Ethics approval**: Approved by the local ethical committee of the Faculty of Social Sciences of Radboud University - **Keywords**: brain–computer interface (BCI), electroencephalography (EEG), code-modulated visual evoked potentials (cVEPs), reconvolution, zero training, spread spectrum communication **External Links** - **Source**: [https://doi.org/10.34973/9txv-z787](https://doi.org/10.34973/9txv-z787) **Abstract** Objective. Typically, a brain–computer interface (BCI) is calibrated using user- and session-specific data because of the individual idiosyncrasies and the non-stationary signal properties of the electroencephalogram (EEG). Therefore, it is normal for BCIs to undergo a time-consuming passive training stage that prevents users from directly operating them. In this study, we systematically reduce the training data set in a stepwise fashion, to ultimately arrive at a calibration-free method for a code-modulated visually evoked potential (cVEP)-based BCI to fully eliminate the tedious training stage. Approach. In an extensive offline analysis, we compare our sophisticated encoding model with a traditional event-related potential (ERP) technique. We calibrate the encoding model in a standard way, with data limited to a single class while generalizing to all others and without any data. In addition, we investigate the feasibility of the zero-training cVEP BCI in an online setting. Main results. By adopting the encoding model, the training data can be reduced substantially, while maintaining both the classification performance as well as the explained variance of the ERP method. Moreover, with data from only one class or even no data at all, it still shows excellent performance. In addition, the zero-training cVEP BCI achieved high communication rates in an online spelling task, proving its feasibility for practical use. Significance. 
To date, this is the fastest zero-training cVEP BCI in the field, allowing high communication speeds without calibration while using only a few non-invasive water-based EEG electrodes. This allows us to skip the training stage altogether and spend all the valuable time on direct operation. This minimizes the session time and opens up new exciting directions for practical plug-and-play BCI. Fundamentally, these results validate that the adopted neural encoding model compresses data into event responses without the loss of explanatory power compared to using full ERPs as a template. **Methodology** The study compared four training regimes: (1) e-train: traditional ERP template-matching with data from all classes, (2) n-train: encoding model (reconvolution) with data from all n classes, (3) 1-train: encoding model with data from only one class while generating templates for all sequences, (4) 0-train: zero-training encoding model requiring no calibration data. Offline experiment: 30 participants completed 5 blocks of 20 trials each (100 trials total), with 31.5 s trials using a 4×5 calculator grid (n=20 symbols). Stimuli were luminance-modulated pseudo-random Gold codes (126-bit sequences, 2.1 s duration) presented on an iPad Pro at 60 Hz. Online experiment: 11 participants (9 analyzed) used a keyboard layout (n=29 symbols) with dynamic stopping rule for spelling tasks. EEG recorded at 512 Hz from 8 electrodes, preprocessed with 2-30 Hz Butterworth filtering and downsampled to 120 Hz. Classification used template-matching with reconvolution encoding model that decomposes responses to sequences into linear sums of individual event responses. **References** Thielen, J. (Jordy), Pieter Marsman, Jason Farquhar, Desain, P.W.M. (Peter) (2023): From full calibration to zero training for a code-modulated visual evoked potentials brain computer interface. Version 3. Radboud University. (dataset). 
DOI: [https://doi.org/10.34973/9txv-z787](https://doi.org/10.34973/9txv-z787) Thielen, J., Marsman, P., Farquhar, J., & Desain, P. (2021). From full calibration to zero training for a code-modulated visual evoked potentials brain–computer interface. Journal of Neural Engineering, 18(5), 056007. DOI: [https://doi.org/10.1088/1741-2552/abecef](https://doi.org/10.1088/1741-2552/abecef) Ahmadi, S., Borhanazad, M., Tump, D., Farquhar, J., & Desain, P. (2019). Low channel count montages using sensor tying for VEP-based BCI. Journal of Neural Engineering, 16(6), 066038. DOI: [https://doi.org/10.1088/1741-2552/ab4057](https://doi.org/10.1088/1741-2552/ab4057) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000214` | |----------------|------------------------------------------------------------------------------------------| | Title | c-VEP dataset from Thielen et al. 
(2021) | | Author (year) | `Thielen2021` | | Canonical | — | | Importable as | `NM000214`, `Thielen2021` | | Year | 2021 | | Authors | J Thielen, P Marsman, J Farquhar, P Desain | | License | CC0-1.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000214) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) | [Source URL](https://nemar.org/dataexplorer/detail/nm000214) | ## Technical Details - Subjects: 30 - Recordings: 150 - Tasks: 1 - Channels: 8 - Sampling rate (Hz): 512.0 - Duration (hours): 27.764727105034723 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 1.5 GB - File count: 150 - Format: BIDS - License: CC0-1.0 - DOI: — - Source: nemar - OpenNeuro: [nm000214](https://openneuro.org/datasets/nm000214) - NeMAR: [nm000214](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) ## API Reference Use the `NM000214` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000214(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2021) * **Study:** `nm000214` (NeMAR) * **Author (year):** `Thielen2021` * **Canonical:** — Also importable as: `NM000214`, `Thielen2021`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 150; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000214](https://openneuro.org/datasets/nm000214) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000214](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) ### Examples ```pycon >>> from eegdash.dataset import NM000214 >>> dataset = NM000214(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000214) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000215: eeg dataset, 38 subjects *P300 dataset BI2014b from a “Brain Invaders” experiment* Access recordings and metadata through EEGDash. 
**Citation:** Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo (2019). *P300 dataset BI2014b from a “Brain Invaders” experiment*. Modality: eeg Subjects: 38 Recordings: 38 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000215 dataset = NM000215(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000215(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000215( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000215, title = {P300 dataset BI2014b from a "Brain Invaders" experiment}, author = {Louis Korczowski and Ekaterina Ostaschenko and Anton Andreev and Grégoire Cattan and Pedro Luiz Coelho Rodrigues and Violette Gautheret and Marco Congedo}, } ``` ## About This Dataset **P300 dataset BI2014b from a “Brain Invaders” experiment** P300 dataset BI2014b from a “Brain Invaders” experiment. **Dataset Overview** - **Code**: BrainInvaders2014b - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3267301](https://doi.org/10.5281/zenodo.3267301) ### View full README **P300 dataset BI2014b from a “Brain Invaders” experiment** P300 dataset BI2014b from a “Brain Invaders” experiment. 
**Dataset Overview** - **Code**: BrainInvaders2014b - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3267301](https://doi.org/10.5281/zenodo.3267301) - **Subjects**: 38 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: mat and csv **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Channel names**: Fp1, Fp2, AFz, F7, F3, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, O1, Oz, O2, PO8, PO9, PO10 - **Montage**: standard_1010 - **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria) - **Software**: OpenVibe - **Reference**: right earlobe - **Ground**: Fz - **Sensor type**: wet electrodes - **Line frequency**: 50.0 Hz - **Cap manufacturer**: g.tec - **Cap model**: g.GAMMAcap - **Electrode type**: wet - **Electrode material**: Ag/AgCl **Participants** - **Number of subjects**: 38 - **Health status**: healthy - **Age**: mean=24.1, std=3.09 - **Gender distribution**: male=24, female=14 - **BCI experience**: not naïve users - selected on the basis of their individual score during a preliminary session of Brain Invaders - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: multi-user/hyperscanning experiment with three randomized conditions (Solo1, Solo2, Collaboration). Subjects played in pairs. Solo conditions used a control design where non-playing participant focused on unanimated cross to prevent stimulus observation while EEG was recorded (to correct for fake inter-brain synchrony). 
- **Study domain**: inter-brain synchrony in collaborative BCI - **Feedback type**: visual - **Stimulus type**: visual flashes - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: False - **Instructions**: destroy the target alien symbol as fast as possible. Up to eight attempts per level. If all attempts missed, level restarted. - **Stimulus presentation**: repetition_structure=12 flashes per repetition of pseudo-random groups of 6 symbols, such that each symbol flashes exactly twice per repetition, target_ratio=1:5 (Target vs Non-Target), flash_groups=6 rows and 6 columns (pseudo-random groups, not physical arrangement), animation=aliens slowly and regularly moved according to predefined path with constant inter-distance **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 1 **Data Structure** - **Trials**: variable per session (9 levels, up to 8 attempts per level) - **Blocks per session**: 9 - **Block duration**: variable, average ~33 seconds per level (5 minutes total for 9 levels) s - **Trials context**: 9 levels per game session, each with unique predefined spatial configuration of 36 aliens. Up to 8 attempts to destroy target per level. 
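The flash bookkeeping in the stimulus-presentation description above can be verified with a few lines of arithmetic (36 symbols in the grid, flashed in pseudo-random groups of 6, each symbol flashing exactly twice per repetition); this is an illustrative sanity check only:

```python
# Sanity check of the Brain Invaders (BI2014b) flash structure.
n_symbols = 36          # 6x6 grid: 1 Target, 35 Non-Target
flashes_per_symbol = 2  # each symbol flashes exactly twice per repetition
group_size = 6          # symbols flashed together in one group flash

# Total symbol-flashes per repetition, divided into group flashes:
group_flashes = n_symbols * flashes_per_symbol // group_size  # 12 per repetition

# The single target symbol appears in 2 of those 12 group flashes:
target_flashes = flashes_per_symbol
nontarget_flashes = group_flashes - target_flashes
# 2:10 reduces to the 1:5 Target:Non-Target ratio stated above
```

The 12 group flashes per repetition and the 1:5 Target:Non-Target ratio quoted in the protocol both fall out of these three numbers.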
**Preprocessing** - **Data state**: raw EEG with no digital filter applied, synchronized experimental tags using USB analog-to-digital converter to reduce jitter - **Preprocessing applied**: False - **Notes**: Experimental tags produced by Brain Invaders 2 were synchronized with EEG signals using USB analog-to-digital converter connected to g.USBamp trigger channel. This tagging procedure allows consistent tagging latency and jitter. **Signal Processing** - **Classifiers**: RMDM (Riemannian Minimum Distance to Mean), Riemannian - **Feature extraction**: Covariance/Riemannian **Cross-Validation** - **Evaluation type**: cross_session **Performance (Original Study)** - **Classifier**: real-time adaptive RMDM classifier (calibration-free procedure) **BCI Application** - **Applications**: gaming - **Environment**: small room with 24’ screen, subjects sitting side by side at ~125cm distance, experimenter in adjacent room with one-way glass window - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: EEG recordings of 38 subjects playing in pairs to the multi-user version of Brain Invaders P300-based BCI. Contains three conditions: Solo1, Solo2, and Collaboration. 
- **DOI**: 10.5281/zenodo.3267301 - **Associated paper DOI**: hal-02173958 - **License**: CC-BY-4.0 - **Investigators**: Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo - **Senior author**: Marco Congedo - **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.3267301](https://doi.org/10.5281/zenodo.3267301) - **Publication year**: 2019 - **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) - **Acknowledgements**: At the end of the experiment two tickets of cinema were offered to each subject, for a total value of 15 euros per subject. - **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface (BCI), Experiment, Collaboration, Multi-User, Hyperscanning **Abstract** We describe the experimental procedures for a dataset containing electroencephalographic (EEG) recordings of 38 subjects playing in pairs to the multi-user version of a visual P300-based Brain-Computer Interface (BCI) named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit a P300 response. EEG data were recorded using 32 active wet electrodes per subject (total: 64 electrodes) during three randomised conditions (Solo1, Solo2, Collaboration). The experiment took place at GIPSA-lab, Grenoble, France, in 2014. **Methodology** Multi-user hyperscanning P300 BCI experiment designed to study inter-brain synchrony. Participants played Brain Invaders 2 in three conditions: Solo1 (player1 plays, player2 watches cross), Solo2 (roles reversed), and Collaboration (4 game sessions with both players). 
Each game session consisted of 9 levels with predefined alien configurations. A repetition used 12 flashes of pseudo-random groups of 6 symbols, ensuring each symbol flashed twice per repetition (1:5 Target:Non-Target ratio). A real-time adaptive RMDM classifier provided online feedback. A control condition (non-playing participant) allowed correction for spurious inter-brain synchrony. **References** Korczowski, L., Ostaschenko, E., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders Solo versus Collaboration: Multi-User P300-Based Brain-Computer Interface Dataset (BI2014b). [https://hal.archives-ouvertes.fr/hal-02173958](https://hal.archives-ouvertes.fr/hal-02173958) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000215` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P300 dataset BI2014b from a “Brain Invaders” experiment | | Author (year) | `Korczowski2014_P300` | | Canonical | `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b` | | Importable as | `NM000215`, `Korczowski2014_P300`, `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b` | | Year | 2019 | | Authors | Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000215) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) | [Source URL](https://nemar.org/dataexplorer/detail/nm000215) | ## Technical Details - Subjects: 38 - Recordings: 38 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 512.0 - Duration (hours): 2.362566189236111 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 401.8 MB - File count: 38 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000215](https://openneuro.org/datasets/nm000215) - NeMAR: [nm000215](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) ## API Reference Use the `NM000215` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000215(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014b from a “Brain Invaders” experiment * **Study:** `nm000215` (NeMAR) * **Author (year):** `Korczowski2014_P300` * **Canonical:** `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b` Also importable as: `NM000215`, `Korczowski2014_P300`, `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 38; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
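The note above says the user-supplied `query` is combined with the dataset filter and must not contain the key `dataset`. A minimal sketch of that merge logic, with a hypothetical `merge_query` helper that only illustrates the documented behavior (the actual `EEGDashDataset` internals may differ):

```python
def merge_query(dataset_id, user_query=None):
    """Illustrative sketch: AND a user MongoDB-style query with the dataset filter."""
    user_query = user_query or {}
    if "dataset" in user_query:
        # The class docs forbid 'dataset' in the user-supplied query.
        raise ValueError("query must not contain the key 'dataset'")
    # MongoDB treats sibling keys in one document as an implicit AND.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("nm000215", {"subject": {"$in": ["01", "02"]}})
# merged -> {"dataset": "nm000215", "subject": {"$in": ["01", "02"]}}
```

Passing the same filter as `NM000215(cache_dir="./data", query={"subject": {"$in": ["01", "02"]}})` selects only those subjects within this dataset.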
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000215](https://openneuro.org/datasets/nm000215) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000215](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) ### Examples ```pycon >>> from eegdash.dataset import NM000215 >>> dataset = NM000215(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000215) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000216: eeg dataset, 43 subjects *P300 dataset BI2015a from a “Brain Invaders” experiment* Access recordings and metadata through EEGDash. **Citation:** Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo (2019). *P300 dataset BI2015a from a “Brain Invaders” experiment*. 
Modality: eeg Subjects: 43 Recordings: 129 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000216 dataset = NM000216(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000216(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000216( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000216, title = {P300 dataset BI2015a from a "Brain Invaders" experiment}, author = {Louis Korczowski and Martine Cederhout and Anton Andreev and Grégoire Cattan and Pedro Luiz Coelho Rodrigues and Violette Gautheret and Marco Congedo}, } ``` ## About This Dataset **P300 dataset BI2015a from a “Brain Invaders” experiment** P300 dataset BI2015a from a “Brain Invaders” experiment. **Dataset Overview** - **Code**: BrainInvaders2015a - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3266929](https://doi.org/10.5281/zenodo.3266929) ### View full README **P300 dataset BI2015a from a “Brain Invaders” experiment** P300 dataset BI2015a from a “Brain Invaders” experiment. 
**Dataset Overview** - **Code**: BrainInvaders2015a - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3266929](https://doi.org/10.5281/zenodo.3266929) - **Subjects**: 43 - **Sessions per subject**: 3 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: mat and csv **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Channel names**: Fp1, Fp2, AFz, F7, F3, F4, F8, FC5, FC1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, O1, Oz, O2, PO8, PO9, PO10 - **Montage**: 10-10 - **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria) - **Software**: OpenVibe - **Reference**: right earlobe - **Ground**: Fz - **Sensor type**: wet electrodes - **Line frequency**: 50.0 Hz - **Online filters**: no digital filter applied - **Cap manufacturer**: g.tec - **Cap model**: g.GAMMAcap - **Electrode type**: wet - **Electrode material**: Silver/Silver Chloride **Participants** - **Number of subjects**: 43 - **Health status**: healthy - **Age**: mean=23.7, std=3.19 - **Gender distribution**: male=31, female=12 - **BCI experience**: mostly students and young researchers **Experimental Protocol** - **Paradigm**: p300 - **Task type**: target detection - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: calibration-less P300-based BCI with modulation of flash duration; three game sessions (9 levels each) with different flash durations (110ms, 80ms, 50ms); resting state and eyes closed recorded before and after sessions; subjects instructed to limit eye blinks, head movements and face muscular contractions - **Feedback type**: visual (game interface with real-time adaptive Riemannian RMDM classifier) - **Stimulus type**: oddball paradigm on grid of 36 symbols (1 Target, 35 Non-Target) flashed pseudo-randomly - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test 
split**: False - **Instructions**: destroy target symbol within 8 attempts; aliens move slowly and regularly according to predefined path to maintain attention - **Stimulus presentation**: SoftwareName=OpenViBE **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 1 - **Number of repetitions**: 12 **Data Structure** - **Trials**: variable per subject (up to 8 attempts per level, 9 levels per session, 3 sessions) - **Blocks per session**: 3 - **Trials context**: 9 levels per session with variable duration (average ~5 minutes per session, max 10 minutes) **Preprocessing** - **Data state**: raw EEG with synchronized USB tagging (reduced jitter using USB digital-to-analog converter) - **Preprocessing applied**: False - **Notes**: no digital filter applied during acquisition; tags synchronized with EEG signals to reduce jitter; consistent tagging latency across Brain Invaders databases **Signal Processing** - **Classifiers**: Riemannian Minimum Distance to Mean (RMDM), adaptive - **Feature extraction**: Covariance/Riemannian **Cross-Validation** - **Evaluation type**: cross_session **BCI Application** - **Applications**: gaming - **Environment**: small room (4 square meters) with 24 inch screen - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **DOI**: 10.5281/zenodo.3266930 - **Associated paper DOI**: hal-02172347 - **License**: CC-BY-4.0 - **Investigators**: Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo - **Senior author**: Marco 
Congedo - **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.3266930](https://doi.org/10.5281/zenodo.3266930) - **Publication year**: 2019 - **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) - **How to acknowledge**: Korczowski, L., Cederhout, M., Andreev, A., Cattan, G., Rodrigues, P.L.C., Gautheret, V., Congedo, M. (2019). Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (bi2015a). Technical Report, GIPSA-lab. - **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment **Abstract** This dataset contains electroencephalographic (EEG) recordings of 50 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. EEG data were recorded using 32 active wet electrodes with three conditions: flash duration 50ms, 80ms or 110ms. The experiment took place at GIPSA-lab, Grenoble, France, in 2015. **Methodology** The experiment was designed to study the influence of the flash duration on a calibration-less P300-based BCI system with wet electrodes and as a screening session for potential candidates for a broader multi-user BCI study. The visual P300 is an event-related potential (ERP) elicited by an expected but unpredictable target visual stimulation (oddball paradigm), peaking in amplitude 240-600 ms after stimulus onset. During the experiment, the output of a real-time adaptive Riemannian Minimum Distance to Mean (RMDM) classifier was used to assess the participants’ commands. This scheme allows for a calibration-free classifier. 
Before and after the three game sessions, around one minute of resting state and eyes closed conditions were recorded. The interface of Brain Invaders is composed of 36 aliens. In the Brain Invaders P300 paradigm, a repetition is composed of 12 flashes of pseudo-random groups of six symbols chosen in such a way that after each repetition every symbol has flashed exactly twice. A game session comprised nine levels, each consisting of a unique, predefined configuration of the 36 symbols of the interface. Aliens moved slowly and regularly along a predefined path, keeping the distance between adjacent aliens constant, to maintain the player’s attention during the whole experiment. **References** Korczowski, L., Cederhout, M., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders calibration-less P300-based BCI with modulation of flash duration Dataset (BI2015a) [https://hal.archives-ouvertes.fr/hal-02172347](https://hal.archives-ouvertes.fr/hal-02172347) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000216` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P300 dataset BI2015a from a “Brain Invaders” experiment | | Author (year) | `Korczowski2015_P300` | | Canonical | `BrainInvaders2015a`, `BI2015a` | | Importable as | `NM000216`, `Korczowski2015_P300`, `BrainInvaders2015a`, `BI2015a` | | Year | 2019 | | Authors | Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000216) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) | [Source URL](https://nemar.org/dataexplorer/detail/nm000216) | ## Technical Details - Subjects: 43 - Recordings: 129 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 512.0 - Duration (hours): 11.664148763020831 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 1.9 GB - File count: 129 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000216](https://openneuro.org/datasets/nm000216) - NeMAR: [nm000216](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) ## API Reference Use the `NM000216` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015a from a “Brain Invaders” experiment * **Study:** `nm000216` (NeMAR) * **Author (year):** `Korczowski2015_P300` * **Canonical:** `BrainInvaders2015a`, `BI2015a` Also importable as: `NM000216`, `Korczowski2015_P300`, `BrainInvaders2015a`, `BI2015a`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 129; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
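The protocol above specifies event codes Target=2 and NonTarget=1, a trial interval of [0, 1] s, and a 512 Hz sampling rate. A minimal epoching sketch on synthetic data shows how those numbers translate into a trials × channels × samples array (the event sample indices here are made up for illustration; with real recordings you would use the events from the BIDS annotations, e.g. via MNE):

```python
import numpy as np

sfreq = 512                                    # sampling rate from the protocol
n_channels, n_samples = 32, sfreq * 10         # 10 s of fake EEG, 32 channels
data = np.random.randn(n_channels, n_samples)

# (sample index, event code) pairs -- illustrative stand-ins for real events
events = [(512, 2), (1536, 1), (2560, 1)]      # Target=2, NonTarget=1

win = int(1.0 * sfreq)                         # [0, 1] s trial window
epochs = np.stack([data[:, s:s + win] for s, _ in events])
labels = np.array([code for _, code in events])
print(epochs.shape)                            # (3, 32, 512)
```

This yields one 1-second epoch per event, ready for a classifier such as the Riemannian RMDM pipeline described above.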
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000216](https://openneuro.org/datasets/nm000216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000216](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) ### Examples ```pycon >>> from eegdash.dataset import NM000216 >>> dataset = NM000216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000216) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000217: eeg dataset, 44 subjects *P300 dataset BI2015b from a “Brain Invaders” experiment* Access recordings and metadata through EEGDash. **Citation:** Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo (2019). *P300 dataset BI2015b from a “Brain Invaders” experiment*. 
Modality: eeg Subjects: 44 Recordings: 176 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000217 dataset = NM000217(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000217(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000217( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000217, title = {P300 dataset BI2015b from a "Brain Invaders" experiment}, author = {Louis Korczowski and Martine Cederhout and Anton Andreev and Grégoire Cattan and Pedro Luiz Coelho Rodrigues and Violette Gautheret and Marco Congedo}, } ``` ## About This Dataset **P300 dataset BI2015b from a “Brain Invaders” experiment** P300 dataset BI2015b from a “Brain Invaders” experiment. **Dataset Overview** - **Code**: BrainInvaders2015b - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3267307](https://doi.org/10.5281/zenodo.3267307) ### View full README **P300 dataset BI2015b from a “Brain Invaders” experiment** P300 dataset BI2015b from a “Brain Invaders” experiment. 
**Dataset Overview** - **Code**: BrainInvaders2015b - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3267307](https://doi.org/10.5281/zenodo.3267307) - **Subjects**: 44 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **Runs per session**: 4 - **File format**: mat and csv - **Contributing labs**: GIPSA-lab **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Channel names**: AFz, C3, C4, CP1, CP2, CP5, CP6, Cz, F3, F4, F7, F8, FC1, FC2, FC5, FC6, Fp1, Fp2, O1, O2, Oz, P3, P4, P7, P8, PO10, PO7, PO8, PO9, Pz, T7, T8 - **Montage**: 10-10 - **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria) - **Software**: OpenVibe - **Reference**: right earlobe - **Ground**: Fz - **Sensor type**: wet Silver/Silver Chloride electrodes - **Line frequency**: 50.0 Hz - **Online filters**: no digital filter applied - **Cap manufacturer**: g.tec - **Cap model**: g.GAMMAcap - **Electrode type**: wet - **Electrode material**: Silver/Silver Chloride **Participants** - **Number of subjects**: 44 - **Health status**: healthy - **Clinical population**: Healthy - **Age**: mean=23.7, std=3.19 - **Gender distribution**: male=36, female=14 - **BCI experience**: mostly students and young researchers - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: Three game sessions with different flash durations (110ms, 80ms, 50ms), with resting state and eyes closed conditions recorded before and after. Subjects were instructed to limit eye blinks, head movements and face muscular contractions. 
- **Feedback type**: visual (game interface with reward screen) - **Stimulus type**: visual flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online - **Training/test split**: False - **Instructions**: Players had up to eight attempts to destroy the target symbol per level. Target symbol identification using oddball paradigm with 36 aliens flashing in pseudo-random groups of six symbols. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 1 - **Number of repetitions**: 12 **Data Structure** - **Trials**: variable per subject (up to 8 attempts per level, 9 levels per session, 3 sessions) - **Blocks per session**: 9 - **Trials context**: per session (9 levels per session, 3 sessions with different flash durations) **Preprocessing** - **Data state**: raw EEG with synchronized hardware tagging via USB digital-to-analog converter (reduced jitter compared to software tagging) - **Preprocessing applied**: False - **Notes**: Data were stored with no digital filter applied. USB digital-to-analog converter connected to the g.USBamp trigger channel was used to synchronize experimental tags produced by Brain Invaders with EEG signals to reduce jitter. 
**Signal Processing** - **Classifiers**: Riemannian Minimum Distance to Mean (RMDM), xDAWN, Riemannian MDM - **Feature extraction**: Covariance/Riemannian, xDAWN **Cross-Validation** - **Evaluation type**: cross_session **Performance (Original Study)** - **Note**: Real-time adaptive classifier used during experiment, performance variable per subject **BCI Application** - **Applications**: gaming - **Environment**: small room of four square meters containing a 24-inch screen - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: EEG recordings of 50 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. Three conditions: flash duration 50ms, 80ms or 110ms. - **DOI**: 10.5281/zenodo.3266930 - **Associated paper DOI**: hal-02172347 - **License**: CC-BY-4.0 - **Investigators**: Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo - **Senior author**: Marco Congedo - **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: France - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.3266930](https://doi.org/10.5281/zenodo.3266930) - **Publication year**: 2019 - **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) - **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment **Abstract** We describe the experimental procedures for an experiment dataset that we have made publicly available at 
[https://doi.org/10.5281/zenodo.3266930](https://doi.org/10.5281/zenodo.3266930) in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 50 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. EEG data were recorded using 32 active wet electrodes with three conditions: flash duration 50ms, 80ms or 110ms. The experiment took place at GIPSA-lab, Grenoble, France, in 2015. **Methodology** The experiment consisted of three game sessions of Brain Invaders of 9 levels each with different flash durations (110ms, 80ms, 50ms). Before and after the three game sessions, around one minute of resting state and eyes closed conditions were recorded. The interface is composed of 36 aliens. A repetition is composed of 12 flashes of pseudo-random groups of six symbols chosen in such a way that after each repetition each symbol has flashed exactly two times. The ratio of Target versus non-Target is one-to-five. During the experiment, the output of a real-time adaptive Riemannian Minimum Distance to Mean (RMDM) classifier was used to assess the participants’ commands. This scheme allows for a calibration-free classifier. **References** Korczowski, L., Cederhout, M., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders Cooperative versus Competitive: Multi-User P300-based Brain-Computer Interface Dataset (BI2015b) [https://hal.archives-ouvertes.fr/hal-02172347](https://hal.archives-ouvertes.fr/hal-02172347) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000217` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P300 dataset BI2015b from a “Brain Invaders” experiment | | Author (year) | `Korczowski2015_P300_BI2015b` | | Canonical | `BrainInvaders2015b`, `BI2015b` | | Importable as | `NM000217`, `Korczowski2015_P300_BI2015b`, `BrainInvaders2015b`, `BI2015b` | | Year | 2019 | | Authors | Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000217) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) | [Source URL](https://nemar.org/dataexplorer/detail/nm000217) | ## Technical Details - Subjects: 44 - Recordings: 176 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 512.0 - Duration (hours): 26.080008680555554 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 4.3 GB - File count: 176 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000217](https://openneuro.org/datasets/nm000217) - NeMAR: 
[nm000217](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) ## API Reference Use the `NM000217` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000217(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015b from a “Brain Invaders” experiment * **Study:** `nm000217` (NeMAR) * **Author (year):** `Korczowski2015_P300_BI2015b` * **Canonical:** `BrainInvaders2015b`, `BI2015b` Also importable as: `NM000217`, `Korczowski2015_P300_BI2015b`, `BrainInvaders2015b`, `BI2015b`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 176; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000217](https://openneuro.org/datasets/nm000217) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000217](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) ### Examples ```pycon >>> from eegdash.dataset import NM000217 >>> dataset = NM000217(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000217) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000218: eeg dataset, 16 subjects *BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects)*. 
Modality: eeg Subjects: 16 Recordings: 372 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000218 dataset = NM000218(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000218(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000218( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000218, title = {BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects)** BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-H - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects)** BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects). 
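Conceptually, the advanced `$in` query above selects records whose field value appears in the given list. A toy stdlib-only evaluator of that filter shape (illustrative only; the real matching happens in EEGDash's metadata backend):

```python
def matches(record: dict, query: dict) -> bool:
    """Check a record against a simple MongoDB-style query.

    Supports exact equality and the $in operator, which is all the
    quickstart examples above use.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True


records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print(selected)  # [{'subject': '01'}, {'subject': '02'}]
```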
**Dataset Overview** - **Code**: Mainsah2025-H - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 16 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 16 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000218` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects) | | Author (year) | `Mainsah2025_BigP3BCI_H` | | Canonical | `BigP3BCI_StudyH`, `BigP3BCI_H` | | Importable as | `NM000218`, `Mainsah2025_BigP3BCI_H`, `BigP3BCI_StudyH`, `BigP3BCI_H` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000218) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) | [Source URL](https://nemar.org/dataexplorer/detail/nm000218) | ## Technical Details - Subjects: 16 - Recordings: 372 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 7.428207465277778 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 326.5 MB - File count: 372 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000218](https://openneuro.org/datasets/nm000218) - NeMAR: [nm000218](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) ## API Reference Use the `NM000218` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects) * **Study:** `nm000218` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_H` * **Canonical:** `BigP3BCI_StudyH`, `BigP3BCI_H` Also importable as: `NM000218`, `Mainsah2025_BigP3BCI_H`, `BigP3BCI_StudyH`, `BigP3BCI_H`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 372; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000218](https://openneuro.org/datasets/nm000218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000218](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) ### Examples ```pycon >>> from eegdash.dataset import NM000218 >>> dataset = NM000218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000218) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000219: eeg dataset, 18 subjects *BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset* Access recordings and metadata through EEGDash. **Citation:** Christoph Reichert, Igor Fabian Tellez Ceja, Catherine M. Sweeney-Reed, Hans-Jochen Heinze, Hermann Hinrichs, Stefan Dürschmid (2020). *BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset*. 
Modality: eeg Subjects: 18 Recordings: 18 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000219 dataset = NM000219(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000219(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000219( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000219, title = {BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset}, author = {Christoph Reichert and Igor Fabian Tellez Ceja and Catherine M. Sweeney-Reed and Hans-Jochen Heinze and Hermann Hinrichs and Stefan Dürschmid}, } ``` ## About This Dataset **BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset** BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset. **Dataset Overview** - **Code**: BNCI2020-002 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.591777 ### View full README **BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset** BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset. 
**Dataset Overview** - **Code**: BNCI2020-002 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2020.591777 - **Subjects**: 18 - **Sessions per subject**: 1 - **Events**: NonTarget=1, Target=2 - **Trial interval**: [0, 16] s - **File format**: MAT **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 30 - **Channel types**: eeg=30, eog=2 - **Channel names**: C3, C4, CP1, CP2, Cz, F3, F4, F7, F8, FC1, FC2, Fp1, Fp2, Fz, HEOG, IZ, LMAST, O10, O9, Oz, P3, P4, P7, P8, PO3, PO4, PO7, PO8, Pz, T7, T8, VEOG - **Montage**: extended 10-20 - **Hardware**: BrainAmp DC Amplifier - **Reference**: right mastoid - **Sensor type**: Ag/AgCl electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 0.1 Hz highpass - **Cap manufacturer**: Brain Products GmbH - **Auxiliary channels**: EOG (2 ch, horizontal, vertical) **Participants** - **Number of subjects**: 18 - **Health status**: healthy - **Age**: mean=27.0, min=19.0, max=38.0 - **Gender distribution**: male=8, female=10 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: binary decision - **Number of classes**: 2 - **Class labels**: NonTarget, Target - **Feedback type**: visual (yes/no text) - **Stimulus type**: colored crosses (green + and red x) - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Respond to yes/no questions by shifting attention to green cross (yes) or red cross (no) while maintaining central gaze fixation - **Stimulus presentation**: duration_ms=250, soa_ms=850 (jittered by 0-250 ms), stimuli_per_trial=10 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target 
``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 2 - **Number of repetitions**: 10 - **Stimulus onset asynchrony**: 850.0 ms **Data Structure** - **Trials**: 24 - **Blocks per session**: 7 - **Trials context**: per_block **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False - **Steps**: re-referenced to average of left and right mastoid, 4th order zero-phase IIR Butterworth bandpass filter (1.0-12.5 Hz), resampled to 50 Hz, epoched from stimulus onset to 750 ms after - **Highpass filter**: 1.0 Hz - **Lowpass filter**: 12.5 Hz - **Bandpass filter**: [1.0, 12.5] - **Filter type**: Butterworth IIR - **Filter order**: 4 - **Re-reference**: average of left and right mastoid - **Downsampled to**: 50.0 Hz - **Epoch window**: [0.0, 0.75] **Signal Processing** - **Classifiers**: Canonical Correlation Analysis (CCA) - **Feature extraction**: N2pc, ERP, Canonical difference waves - **Spatial filters**: CCA spatial filters **Cross-Validation** - **Method**: leave-one-out cross-validation (LOOCV) - **Evaluation type**: within_subject **Performance (Original Study)** - **Accuracy**: 88.5% - **Itr**: 3.02 bits/min - **Std Accuracy**: 7.8 - **Min Accuracy**: 70.8 - **Max Accuracy**: 90.3 **BCI Application** - **Applications**: communication, binary decision - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Attention **Documentation** - **Description**: Gaze-independent brain-computer interface based on covert spatial attention shifts for binary (yes/no) communication - **DOI**: 10.3389/fnins.2020.591777 - **Associated paper DOI**: 10.3389/fnins.2020.591777 - **License**: CC-BY-4.0 - **Investigators**: Christoph Reichert, Igor Fabian Tellez Ceja, Catherine M. 
Sweeney-Reed, Hans-Jochen Heinze, Hermann Hinrichs, Stefan Dürschmid - **Senior author**: Stefan Dürschmid - **Contact**: [christoph.reichert@lin-magdeburg.de](mailto:christoph.reichert@lin-magdeburg.de) - **Institution**: Leibniz Institute for Neurobiology - **Department**: Department of Behavioral Neurology - **Address**: Magdeburg, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Data URL**: [http://bnci-horizon-2020.eu/database/data-sets](http://bnci-horizon-2020.eu/database/data-sets) - **Publication year**: 2020 - **Funding**: German Ministry of Education and Research (BMBF) within the Research Campus STIMULATE under grant number 13GW0095D - **Ethics approval**: Ethics Committee of the Otto-von-Guericke University, Magdeburg - **Keywords**: visual spatial attention, brain-computer interface, stimulus features, N2pc, canonical correlation analysis, gaze-independent, BCI **References** Reichert, C., Tellez Ceja, I. F., Sweeney-Reed, C. M., Heinze, H.-J., Hinrichs, H., & Dürschmid, S. (2020). Impact of Stimulus Features on the Performance of a Gaze-Independent Brain-Computer Interface Based on Covert Spatial Attention Shifts. Frontiers in Neuroscience, 14, 591777. [https://doi.org/10.3389/fnins.2020.591777](https://doi.org/10.3389/fnins.2020.591777) Notes .. versionadded:: 1.3.0 This dataset uses a covert spatial attention paradigm with N2pc ERP detection, which is different from traditional P300 or motor imagery paradigms. The paradigm is designed for gaze-independent BCI control, making it suitable for users who cannot control eye movements. 
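The original-study performance above pairs an accuracy with an information transfer rate. ITRs like this are conventionally derived from the Wolpaw formula; a sketch, assuming the standard formula (the study's exact ITR definition may differ, e.g. in how it accounts for trial timing):

```python
import math


def wolpaw_bits_per_selection(p: float, n_classes: int) -> float:
    """Wolpaw ITR in bits per selection for accuracy p over n_classes."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    bits = math.log2(n_classes)
    if p < 1.0:  # the p*log2(p) terms vanish in the limit p -> 1
        bits += p * math.log2(p)
        bits += (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits


# Binary yes/no decision at the reported 88.5% accuracy:
print(wolpaw_bits_per_selection(0.885, 2))  # ~0.485 bits per selection
```

Multiplying by the number of selections per minute (set by the 16 s trial interval plus any pauses) converts this to a bits/min figure.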
See Also BNCI2015_009 : AMUSE auditory spatial P300 paradigm BNCI2015_010 : RSVP visual P300 paradigm Examples >>> from moabb.datasets import BNCI2020_002 >>> dataset = BNCI2020_002() >>> dataset.subject_list [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000219` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset | | Author (year) | `Reichert2020` | | Canonical | `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention` | | Importable as | `NM000219`, `Reichert2020`, `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention` | | Year | 2020 | | Authors | Christoph Reichert, Igor Fabian Tellez Ceja, Catherine M. 
Sweeney-Reed, Hans-Jochen Heinze, Hermann Hinrichs, Stefan Dürschmid | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000219) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) | [Source URL](https://nemar.org/dataexplorer/detail/nm000219) | ## Technical Details - Subjects: 18 - Recordings: 18 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 250.0 - Duration (hours): 13.226646666666667 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1023.6 MB - File count: 18 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000219](https://openneuro.org/datasets/nm000219) - NeMAR: [nm000219](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) ## API Reference Use the `NM000219` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000219(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset * **Study:** `nm000219` (NeMAR) * **Author (year):** `Reichert2020` * **Canonical:** `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention` Also importable as: `NM000219`, `Reichert2020`, `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000219](https://openneuro.org/datasets/nm000219) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000219](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) ### Examples ```pycon >>> from eegdash.dataset import NM000219 >>> dataset = NM000219(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000219) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000221: eeg dataset, 19 subjects *Alphawaves dataset* Access recordings and metadata through EEGDash. 
**Citation:** Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Marco Congedo (2018). *Alphawaves dataset*. Modality: eeg Subjects: 19 Recordings: 19 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000221 dataset = NM000221(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000221(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000221( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000221, title = {Alphawaves dataset}, author = {Grégoire Cattan and Pedro Luiz Coelho Rodrigues and Marco Congedo}, } ``` ## About This Dataset **Alphawaves dataset** Alphawaves dataset **Dataset Overview** - **Code**: Rodrigues2017 - **Paradigm**: rstate - **DOI**: [https://doi.org/10.5281/zenodo.2348892](https://doi.org/10.5281/zenodo.2348892) ### View full README **Alphawaves dataset** Alphawaves dataset **Dataset Overview** - **Code**: Rodrigues2017 - **Paradigm**: rstate - **DOI**: [https://doi.org/10.5281/zenodo.2348892](https://doi.org/10.5281/zenodo.2348892) - **Subjects**: 19 - **Sessions per subject**: 1 - **Events**: closed=1, open=2 - **Trial interval**: [0, 10] s **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Channel names**: Cz, Fc5, Fc6, Fp1, Fp2, Fz, O1, O2, Oz, P3, P4, P7, P8, Pz, T7, T8 - **Montage**: standard_1010 - **Hardware**: g.tec g.USBamp - **Software**: OpenViBE - **Reference**: right earlobe - **Sensor type**: wet electrodes - **Line frequency**: 50.0 Hz - **Online 
filters**: no digital filter **Participants** - **Number of subjects**: 19 - **Health status**: healthy - **Age**: mean=25.8 - **Gender distribution**: female=7, male=13 **Experimental Protocol** - **Paradigm**: rstate - **Number of classes**: 2 - **Class labels**: closed, open - **Trial duration**: 10 s - **Study design**: Subjects alternated between keeping eyes closed (condition 1) and eyes open (condition 2) while EEG was recorded **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text closed ``` ```text ├─ Experiment-structure └─ Rest └─ Close, Eye open ``` ```text ├─ Experiment-structure └─ Rest └─ Open, Eye ``` **Paradigm-Specific Parameters** - **Detected paradigm**: resting_state **Data Structure** - **Trials**: 10 **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Feature extraction**: ERS - **Frequency bands**: alpha=[8, 12] Hz **Tags** - **Pathology**: Healthy - **Modality**: Resting State - **Type**: Resting-state **Documentation** - **DOI**: 10.5281/zenodo.2348891 - **Associated paper DOI**: hal-02086581 - **License**: CC-BY-4.0 - **Investigators**: Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Marco Congedo - **Senior author**: Marco Congedo - **Contact**: [pedro-luiz.coelho-rodrigues@grenoble-inp.fr](mailto:pedro-luiz.coelho-rodrigues@grenoble-inp.fr) - **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Department**: GIPSA-lab - **Address**: 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.2348891](https://doi.org/10.5281/zenodo.2348891) - **Publication year**: 2018 - **Ethics approval**: All participants provided written informed consent - **How to acknowledge**: Please cite: Cattan, Rodrigues & Congedo (2018). EEG Alpha Waves Dataset. GIPSA-lab Research Report. 
[https://hal.science/hal-02086581](https://hal.science/hal-02086581) **References** G. Cattan, P. L. Coelho Rodrigues, and M. Congedo, ‘EEG Alpha Waves Dataset’, 2018. Available: [https://hal.archives-ouvertes.fr/hal-02086581](https://hal.archives-ouvertes.fr/hal-02086581) Rodrigues PLC. Alpha-Waves-Dataset [Internet]. Grenoble: GIPSA-lab; 2018. Available from: [https://github.com/plcrodrigues/Alpha-Waves-Dataset](https://github.com/plcrodrigues/Alpha-Waves-Dataset) Notes .. versionadded:: 1.1.0 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000221` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Alphawaves dataset | | Author (year) | `Cattan2017` | | Canonical | `Alphawaves`, `Rodrigues2017`, `AlphaWaves` | | Importable as | `NM000221`, `Cattan2017`, `Alphawaves`, `Rodrigues2017`, `AlphaWaves` | | Year | 2018 | | Authors | Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000221) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) | [Source URL](https://nemar.org/dataexplorer/detail/nm000221) | ## Technical Details - Subjects: 19 - Recordings: 19 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 0.9618820529513888 - Pathology: Healthy - Modality: Resting State - Type: Resting-state - Size on disk: 81.7 MB - File count: 19 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000221](https://openneuro.org/datasets/nm000221) - NeMAR: [nm000221](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) ## API Reference Use the `NM000221` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphawaves dataset * **Study:** `nm000221` (NeMAR) * **Author (year):** `Cattan2017` * **Canonical:** `Alphawaves`, `Rodrigues2017`, `AlphaWaves` Also importable as: `NM000221`, `Cattan2017`, `Alphawaves`, `Rodrigues2017`, `AlphaWaves`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
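The alpha-band entries above (frequency band 8–12 Hz, eyes-closed vs eyes-open rest) suggest the classic analysis for this dataset: compare alpha power between the two conditions. A stdlib-only toy on synthetic signals — a naive per-bin DFT keeps the sketch dependency-free, and 128 Hz is used for brevity even though the dataset itself is sampled at 512 Hz; real analyses would use MNE/NumPy PSDs:

```python
import math


def bin_power(x: list[float], k: int) -> float:
    """Squared magnitude of the k-th DFT bin (naive O(n) per bin)."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(x))
    return re * re + im * im


def alpha_power(x: list[float], sfreq: float, lo: float = 8.0, hi: float = 12.0) -> float:
    """Total power in the DFT bins covering the alpha band."""
    n = len(x)
    k_lo = math.ceil(lo * n / sfreq)
    k_hi = math.floor(hi * n / sfreq)
    return sum(bin_power(x, k) for k in range(k_lo, k_hi + 1))


# Synthetic 2 s trials: a 10 Hz "eyes-closed" rhythm vs a 25 Hz distractor
sfreq = 128
t = [i / sfreq for i in range(2 * sfreq)]
eyes_closed = [math.sin(2 * math.pi * 10 * ti) for ti in t]
eyes_open = [0.5 * math.sin(2 * math.pi * 25 * ti) for ti in t]
print(alpha_power(eyes_closed, sfreq) > alpha_power(eyes_open, sfreq))  # True
```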
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000221](https://openneuro.org/datasets/nm000221) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000221](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) ### Examples ```pycon >>> from eegdash.dataset import NM000221 >>> dataset = NM000221(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000221) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000222: eeg dataset, 10 subjects *Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch)* Access recordings and metadata through EEGDash. **Citation:** Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim (2019). *Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch)*. 
Modality: eeg Subjects: 10 Recordings: 305 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000222 dataset = NM000222(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000222(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000222( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000222, title = {Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch)}, author = {Jongmin Lee and Minju Kim and Dojin Heo and Jongsu Kim and Min-Ki Kim and Taejun Lee and Jongwoo Park and HyunYoung Kim and Minho Hwang and Laehyun Kim and Sung-Phil Kim}, } ``` ## About This Dataset **Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch)** Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch). **Dataset Overview** - **Code**: Lee2024-AC - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 ### View full README **Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch)** Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch). 
**Dataset Overview** - **Code**: Lee2024-AC - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: MATLAB **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 25 - **Channel types**: eeg=25 - **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P3, Pz, P4, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: actiCHamp (Brain Products) - **Reference**: linked mastoids - **Sensor type**: active - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 10 - **Health status**: healthy - **Age**: mean=22.4, std=2.59 - **Gender distribution**: male=6, female=4 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: P300 BCI for AC home appliance control; 4-class oddball; LCD display - **Feedback type**: visual - **Stimulus type**: flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 750.0 ms **Data Structure** - **Trials**: 50 training + 30 testing blocks per subject - **Trials context**: per_subject **BCI Application** - **Applications**: home_appliance_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**: 10.3389/fnhum.2024.1320457 - 
**License**: CC-BY-4.0 - **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim - **Institution**: Ulsan National Institute of Science and Technology - **Country**: KR - **Data URL**: [https://github.com/jml226/Home-Appliance-Control-Dataset](https://github.com/jml226/Home-Appliance-Control-Dataset) - **Publication year**: 2024 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000222` | |----------------|------------------| | Title | Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch) | | Author (year) | `Lee2024_Air_conditioner_control` | | Canonical | — | | Importable as | `NM000222`, `Lee2024_Air_conditioner_control` | | Year | 2019 | | Authors | Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000222) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) | [Source URL](https://nemar.org/dataexplorer/detail/nm000222) | ## Technical Details - Subjects: 10 - Recordings: 305 - Tasks: 1 - Channels: 25 - Sampling rate (Hz): 500.0 - Duration (hours): 3.19 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 415.3 MB - File count: 305 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000222](https://openneuro.org/datasets/nm000222) - NeMAR: [nm000222](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) ## API Reference Use the `NM000222` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch) * **Study:** `nm000222` (NeMAR) * **Author (year):** `Lee2024_Air_conditioner_control` * **Canonical:** — Also importable as: `NM000222`, `Lee2024_Air_conditioner_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 305; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
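The Notes above say the user-supplied `query` is ANDed with the fixed dataset filter and must not contain the key `dataset`. A minimal sketch of that merge semantics; the helper name is hypothetical and not part of eegdash:

```python
def merge_query(dataset_id, user_query=None):
    """AND a MongoDB-style user query with a fixed dataset filter
    (hypothetical illustration of the documented semantics)."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

# merge_query("nm000222", {"subject": {"$in": ["01", "02"]}})
# -> {"dataset": "nm000222", "subject": {"$in": ["01", "02"]}}
```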
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000222](https://openneuro.org/datasets/nm000222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000222](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) ### Examples ```pycon >>> from eegdash.dataset import NM000222 >>> dataset = NM000222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000222) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000223: eeg dataset, 15 subjects *Electric light control experiment (15 subjects, 4 classes, 31 EEG ch)* Access recordings and metadata through EEGDash. **Citation:** Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim (2019). *Electric light control experiment (15 subjects, 4 classes, 31 EEG ch)*. 
Modality: eeg Subjects: 15 Recordings: 465 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000223 dataset = NM000223(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000223(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000223( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000223, title = {Electric light control experiment (15 subjects, 4 classes, 31 EEG ch)}, author = {Jongmin Lee and Minju Kim and Dojin Heo and Jongsu Kim and Min-Ki Kim and Taejun Lee and Jongwoo Park and HyunYoung Kim and Minho Hwang and Laehyun Kim and Sung-Phil Kim}, } ``` ## About This Dataset **Electric light control experiment (15 subjects, 4 classes, 31 EEG ch)** Electric light control experiment (15 subjects, 4 classes, 31 EEG ch). **Dataset Overview** - **Code**: Lee2024-EL - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 ### View full README **Electric light control experiment (15 subjects, 4 classes, 31 EEG ch)** Electric light control experiment (15 subjects, 4 classes, 31 EEG ch). 
**Dataset Overview** - **Code**: Lee2024-EL - **Paradigm**: p300 - **DOI**: 10.3389/fnhum.2024.1320457 - **Subjects**: 15 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: MATLAB **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 31 - **Channel types**: eeg=31 - **Channel names**: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, FT9, FC5, FC1, FC2, FC6, FT10, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: standard_1020 - **Hardware**: actiCHamp (Brain Products) - **Reference**: linked mastoids - **Sensor type**: active - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=22.13, std=2.2 - **Gender distribution**: male=10, female=5 - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: P300 BCI for EL home appliance control; 4-class oddball; LCD display - **Feedback type**: visual - **Stimulus type**: flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Stimulus onset asynchrony**: 750.0 ms **Data Structure** - **Trials**: 50 training + 30 testing blocks per subject - **Trials context**: per_subject **BCI Application** - **Applications**: home_appliance_control - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: ERP - **Type**: P300 **Documentation** - **DOI**: 
10.3389/fnhum.2024.1320457 - **License**: CC-BY-4.0 - **Investigators**: Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim - **Institution**: Ulsan National Institute of Science and Technology - **Country**: KR - **Data URL**: [https://github.com/jml226/Home-Appliance-Control-Dataset](https://github.com/jml226/Home-Appliance-Control-Dataset) - **Publication year**: 2024 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000223` | |----------------|------------------| | Title | Electric light control experiment (15 subjects, 4 classes, 31 EEG ch) | | Author (year) | `Lee2024_Electric_light_control` | | Canonical | — | | Importable as | `NM000223`, `Lee2024_Electric_light_control` | | Year | 2019 | | Authors | Jongmin Lee, Minju Kim, Dojin Heo, Jongsu Kim, Min-Ki Kim, Taejun Lee, Jongwoo Park, HyunYoung Kim, Minho Hwang, Laehyun Kim, Sung-Phil Kim | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000223) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) | [Source URL](https://nemar.org/dataexplorer/detail/nm000223) | ## Technical Details - Subjects: 15 - Recordings: 465 - Tasks: 1 - Channels: 31 - Sampling rate (Hz): 500.0 - Duration (hours): 3.90 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 632.4 MB - File count: 465 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000223](https://openneuro.org/datasets/nm000223) - NeMAR: [nm000223](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) ## API Reference Use the `NM000223` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000223(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electric light control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000223` (NeMAR) * **Author (year):** `Lee2024_Electric_light_control` * **Canonical:** — Also importable as: `NM000223`, `Lee2024_Electric_light_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 465; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
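The README above gives a trial interval of [0, 1] s at a 500 Hz sampling rate, i.e. 500 samples per P300 trial. A minimal sketch of that conversion; the helper name is hypothetical:

```python
def trial_window_samples(tmin, tmax, sfreq):
    """Convert a trial interval in seconds to a (start, stop) sample window."""
    return int(round(tmin * sfreq)), int(round(tmax * sfreq))

# For this dataset's [0, 1] s interval at 500 Hz:
# trial_window_samples(0.0, 1.0, 500.0) -> (0, 500)
```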
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000223](https://openneuro.org/datasets/nm000223) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000223](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) ### Examples ```pycon >>> from eegdash.dataset import NM000223 >>> dataset = NM000223(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000223) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000225: eeg dataset, 1983 subjects *PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training)* Access recordings and metadata through EEGDash. **Citation:** Mohammad M. Ghassemi, Benjamin E. Moody, Li-wei H. Lehman, Christopher Song, Qiao Li, Haoqi Sun, Roger G. Mark, M. Brandon Westover, Gari D. Clifford (2018). *PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training)*. 
[10.13026/6phb-r450](https://doi.org/10.13026/6phb-r450) Modality: eeg Subjects: 1983 Recordings: 1983 License: Open Data Commons Attribution License v1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000225 dataset = NM000225(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000225(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000225( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000225, title = {PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training)}, author = {Mohammad M. Ghassemi and Benjamin E. Moody and Li-wei H. Lehman and Christopher Song and Qiao Li and Haoqi Sun and Roger G. Mark and M. Brandon Westover and Gari D. Clifford}, doi = {10.13026/6phb-r450}, url = {https://doi.org/10.13026/6phb-r450}, } ``` ## About This Dataset **You Snooze You Win: PhysioNet/CinC Challenge 2018 PSG** **Overview** 1,983 overnight polysomnographic (PSG) recordings from subjects monitored at the Massachusetts General Hospital (MGH) sleep laboratory for sleep disorder diagnosis. The dataset was created for the PhysioNet/Computing in Cardiology Challenge 2018 on automatic arousal detection. ### View full README **You Snooze You Win: PhysioNet/CinC Challenge 2018 PSG** **Overview** 1,983 overnight polysomnographic (PSG) recordings from subjects monitored at the Massachusetts General Hospital (MGH) sleep laboratory for sleep disorder diagnosis. 
The dataset was created for the PhysioNet/Computing in Cardiology Challenge 2018 on automatic arousal detection. - Training set: 994 subjects (with expert annotations) - Test set: 989 subjects (PSG signals only, no annotations) - Demographics: mean age 55 +/- 14 years (range 18-93), 65% male, 35% female - Clinical population: subjects with suspected obstructive sleep apnea **Channels (13 total, all at 200 Hz)** - EEG (6): F3-M2, F4-M1, C3-M2, C4-M1, O1-M2, O2-M1, a referential montage against the contralateral mastoids (M1/M2) - EOG (1): E1-M2 (left electrooculogram) - EMG (1): Chin1-Chin2 (submental chin electromyogram) - Respiratory (3): ABD (abdominal effort), CHEST (thoracic effort), AIRFLOW (nasal/oral airflow) - SpO2 (1): SaO2 (pulse oximetry, resampled to 200 Hz) - ECG (1): ECG (single-lead electrocardiogram) **Annotations (training set only, in events.tsv)** Sleep staging (AASM standard, 30-second contiguous epochs): Wake, N1, N2, N3, REM. Respiratory events (with onset and duration): - resp_obstructiveapnea — complete upper airway obstruction - resp_centralapnea — absent respiratory effort - resp_mixedapnea — combined obstructive + central - resp_hypopnea — partial airway obstruction (>=30% flow reduction) Arousal events: - arousal_rera — respiratory effort-related arousal - arousal_spontaneous — spontaneous cortical arousal - arousal_snore — snoring-related arousal - arousal_plm — periodic leg movement arousal **Participants metadata (in participants.tsv)** Per-subject: age, sex, split (training/test), recording duration, sleep architecture (epoch counts per stage), and respiratory/arousal event counts. 
**Sessions** - ses-training: 994 subjects with PSG + annotations - ses-test: 989 subjects with PSG only (no annotations) **Notes** - Original format: WFDB (.mat + .hea + .arousal) - All signals originally at 200 Hz; SaO2 was resampled to match - Annotators: certified sleep technologists at MGH, following AASM manual - Updated arousal annotations (new-arousals.zip) supersede originals **Reference** Ghassemi, M.M., Moody, B.E., Lehman, L.H., Song, C., Li, Q., Sun, H., Mark, R.G., Westover, M.B. & Clifford, G.D. (2018). You Snooze, You Win: the PhysioNet/Computing in Cardiology Challenge 2018. Computing in Cardiology, 45, 1-4. doi:10.22489/CinC.2018.049 Goldberger, A. et al. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23), e215-e220. [https://physionet.org/content/challenge-2018/1.0.0/](https://physionet.org/content/challenge-2018/1.0.0/) **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. 
Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `NM000225` | |----------------|------------------| | Title | PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training) | | Author (year) | `Ghassemi2018` | | Canonical | — | | Importable as | `NM000225`, `Ghassemi2018` | | Year | 2018 | | Authors | Mohammad M. Ghassemi, Benjamin E. Moody, Li-wei H. Lehman, Christopher Song, Qiao Li, Haoqi Sun, Roger G. Mark, M. Brandon Westover, Gari D. Clifford | | License | Open Data Commons Attribution License v1.0 | | Citation / DOI | [doi:10.13026/6phb-r450](https://doi.org/10.13026/6phb-r450) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000225) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) | [Source URL](https://nemar.org/dataexplorer/detail/nm000225) | ### Copy-paste BibTeX ```bibtex @dataset{nm000225, title = {PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training)}, author = {Mohammad M. Ghassemi and Benjamin E. Moody and Li-wei H. Lehman and Christopher Song and Qiao Li and Haoqi Sun and Roger G. Mark and M. Brandon Westover and Gari D. Clifford}, doi = {10.13026/6phb-r450}, url = {https://doi.org/10.13026/6phb-r450}, } ``` ## Technical Details - Subjects: 1983 - Recordings: 1983 - Tasks: 1 - Channels: 13 - Sampling rate (Hz): 200 - Duration (hours): 15261.23 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 401.1 GB - File count: 1983 - Format: BIDS - License: Open Data Commons Attribution License v1.0 - DOI: 10.13026/6phb-r450 - Source: nemar - OpenNeuro: [nm000225](https://openneuro.org/datasets/nm000225) - NeMAR: [nm000225](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) ## API Reference Use the `NM000225` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000225(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training) * **Study:** `nm000225` (NeMAR) * **Author (year):** `Ghassemi2018` * **Canonical:** — Also importable as: `NM000225`, `Ghassemi2018`. Modality: `eeg`. Subjects: 1983; recordings: 1983; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
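The README above describes sleep staging as contiguous 30-second epochs annotated in events.tsv. A minimal sketch of turning stage annotations into per-stage epoch counts; the `(stage, onset_s, duration_s)` tuple layout and helper name are assumptions for illustration:

```python
from collections import Counter

def stage_epoch_counts(annotations, epoch_len=30.0):
    """Count 30-second epochs per sleep stage from
    (stage, onset_s, duration_s) tuples (hypothetical input format)."""
    counts = Counter()
    for stage, _onset, duration in annotations:
        counts[stage] += int(round(duration / epoch_len))
    return dict(counts)

# stage_epoch_counts([("N2", 0.0, 60.0), ("Wake", 60.0, 30.0)])
# -> {"N2": 2, "Wake": 1}
```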
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000225](https://openneuro.org/datasets/nm000225) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000225](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) DOI: [https://doi.org/10.13026/6phb-r450](https://doi.org/10.13026/6phb-r450) ### Examples ```pycon >>> from eegdash.dataset import NM000225 >>> dataset = NM000225(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000225) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000226: eeg dataset, 4 subjects *Zhou2016* Access recordings and metadata through EEGDash. **Citation:** Bangyan Zhou, Xiaopei Wu, Zongtan Lv, Lei Zhang, Xiaojin Guo (2016). *Zhou2016*. 
[10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) Modality: eeg Subjects: 4 Recordings: 24 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000226 dataset = NM000226(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000226(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000226( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000226, title = {Zhou2016}, author = {Bangyan Zhou and Xiaopei Wu and Zongtan Lv and Lei Zhang and Xiaojin Guo}, doi = {10.82901/nemar.nm000115}, url = {https://doi.org/10.82901/nemar.nm000115}, } ``` ## About This Dataset **Data Availability and Regeneration Instructions** This is a derivative dataset. If any data are missing, you can use the instructions in the code folder to download the raw data and regenerate the derivatives. README **Introduction** ### View full README **Data Availability and Regeneration Instructions** This is a derivative dataset. If any data are missing, you can use the instructions in the code folder to download the raw data and regenerate the derivatives. README **Introduction** This dataset contains EEG recordings from four subjects performing motor imagery tasks (left hand, right hand, and feet), originally published by Zhou et al. (2016). 
The data was reformatted into BIDS from its Zenodo version ([https://zenodo.org/records/16534752](https://zenodo.org/records/16534752)), which was itself generated by MOABB (Mother of All BCI Benchmarks, [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb)). The original study investigated a fully automated trial selection method for optimization of motor imagery based brain-computer interfaces. **Overview of the experiment** Four participants each completed three recording sessions separated by days to months. Each session contained two consecutive runs with inter-run breaks. Each run comprised 75 trials (25 per class: left hand, right hand, and feet imagery), for a total of 450 trials per subject across all sessions. Trials began with an auditory cue, followed by a 5-second visual arrow stimulus indicating the motor imagery task to perform, then a 4-second rest period. EEG was recorded from 14 channels placed according to the extended 10/20 system (Fp1, Fp2, FC3, FCz, FC4, C3, Cz, C4, CP3, CPz, CP4, O1, Oz, O2) at a sampling frequency of 250 Hz with a 50 Hz power line frequency. **Dataset structure** - 4 subjects (sub-1 through sub-4) - 3 sessions per subject (ses-0, ses-1, ses-2) - 2 runs per session (run-0, run-1) - 24 EEG recordings total in EDF format - 14 EEG channels, 250 Hz sampling rate - 3 event types: left_hand (value=2), right_hand (value=3), feet (value=1) - Electrode positions in CapTrak coordinate system **Preprocessing** The data distributed here has undergone minimal preprocessing by MOABB prior to BIDS conversion: - Extraction of the 14 EEG channels from the original recordings - Annotation of motor imagery events (left_hand, right_hand, feet) with 5-second durations - Resampling to 250 Hz - Export to EDF format **Original and related datasets** This dataset was reformatted into BIDS from the Zenodo archive at [https://zenodo.org/records/16534752](https://zenodo.org/records/16534752). 
That archive was generated by MOABB v1.2.0 from the original data accompanying the publication. The original study and data are described in: Zhou B, Wu X, Lv Z, Zhang L, Guo X (2016). A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface. PLoS ONE 11(9): e0162657. [https://doi.org/10.1371/journal.pone.0162657](https://doi.org/10.1371/journal.pone.0162657) **References** Zhou B, Wu X, Lv Z, Zhang L, Guo X (2016). A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface. PLoS ONE 11(9): e0162657. [https://doi.org/10.1371/journal.pone.0162657](https://doi.org/10.1371/journal.pone.0162657) Appelhoff S, Sanderson M, Brooks T, et al. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet CR, Appelhoff S, Gorgolewski KJ, et al. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Data curator for NEMAR version: Arnaud Delorme (UCSD, La Jolla, CA, USA) ## Dataset Information | Dataset ID | `NM000226` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Zhou2016 | | Author (year) | `Zhou2016_226` | | Canonical | `Zhou2016_NEMAR` | | Importable as | `NM000226`, `Zhou2016_226`, `Zhou2016_NEMAR` | | Year | 2016 | | Authors | Bangyan Zhou, Xiaopei Wu, Zongtan Lv, Lei Zhang, Xiaojin Guo | | License | CC-BY-4.0 | | Citation / DOI | [10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000226) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) | [Source URL](https://nemar.org/dataexplorer/detail/nm000226) | ### Copy-paste BibTeX ```bibtex @dataset{nm000226, title = {Zhou2016}, author = {Bangyan Zhou and Xiaopei Wu and Zongtan Lv and Lei Zhang and Xiaojin Guo}, doi = {10.82901/nemar.nm000115}, url = {https://doi.org/10.82901/nemar.nm000115}, } ``` ## Technical Details - Subjects: 4 - Recordings: 24 - Tasks: 1 - Channels: 14 - Sampling rate (Hz): 100 - Duration (hours): 6.271044444444445 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 528.3 MB - File count: 24 - Format: BIDS - License: CC-BY-4.0 - DOI: 10.82901/nemar.nm000115 - Source: nemar - OpenNeuro: [nm000226](https://openneuro.org/datasets/nm000226) - NeMAR: [nm000226](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) ## API Reference Use the `NM000226` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000226(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000226` (NeMAR) * **Author (year):** `Zhou2016_226` * **Canonical:** `Zhou2016_NEMAR` Also importable as: `NM000226`, `Zhou2016_226`, `Zhou2016_NEMAR`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
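The notes above say `query` accepts MongoDB-style filters that are combined (AND) with the dataset filter. As a rough illustration of those matching semantics only — not EEGDash's actual implementation, which evaluates queries against the metadata database — here is a tiny stand-alone matcher over made-up records:

```python
# Illustrative only: a minimal matcher for the MongoDB-style filters that
# `query` accepts (exact values and the `$in` operator). The record dicts
# below are made up; real filtering happens in the EEGDash metadata layer.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

records = [
    {"dataset": "nm000226", "subject": "01", "session": "0"},
    {"dataset": "nm000226", "subject": "02", "session": "1"},
    {"dataset": "nm000226", "subject": "03", "session": "0"},
]

# Selection equivalent to query={"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, {"subject": {"$in": ["01", "02"]}})]
print([r["subject"] for r in selected])  # -> ['01', '02']
```

The same shape of filter is what the "Advanced query" example above passes to the `NM000226` constructor.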
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000226](https://openneuro.org/datasets/nm000226) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000226](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000226 >>> dataset = NM000226(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000226) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000227: eeg dataset, 31 subjects *Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025* Access recordings and metadata through EEGDash. **Citation:** Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu (2019). *Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025*. 
Modality: eeg Subjects: 31 Recordings: 63 License: CC0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000227 dataset = NM000227(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000227(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000227( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000227, title = {Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, } ``` ## About This Dataset **Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025** Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025.
**Dataset Overview** - **Code**: GuttmannFlury2025-ME - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-04861-9 - **Subjects**: 31 - **Sessions per subject**: 3 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 4] s - **File format**: BDF **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 66 - **Channel types**: eeg=64, eog=1, stim=1 - **Channel names**: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2 - **Montage**: standard_1005 - **Hardware**: Neuroscan Quik-Cap 65-ch, SynAmps2 - **Reference**: right mastoid (M1) - **Ground**: forehead - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {‘highpass_time_constant_s’: 10} **Participants** - **Number of subjects**: 31 - **Health status**: healthy - **Age**: mean=28.3, min=20.0, max=57.0 - **Gender distribution**: female=11, male=20 - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 7.5 s - **Study design**: Multi-paradigm BCI (MI/ME/SSVEP/P300). MI and ME: 2-class hand grasping, 40 trials/session, up to 3 sessions per subject. 
- **Feedback type**: none - **Stimulus type**: visual rectangle cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
left_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Left, Hand

right_hand
├─ Sensory-event, Experimental-stimulus, Visual-presentation
└─ Agent-action
   └─ Imagine
      ├─ Move
      └─ Right, Hand
```

**Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 2.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 2520 - **Trials context**: 63 sessions x 40 trials = 2520 (MI only, default) **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1038/s41597-025-04861-9 - **License**: CC0 - **Investigators**: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu - **Institution**: Shanghai Jiao Tong University - **Country**: CN - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000227` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025 | | Author (year) | `GuttmannFlury2025_Eye` | | Canonical | `GuttmannFlury2025_ME` | | Importable as | `NM000227`, `GuttmannFlury2025_Eye`, `GuttmannFlury2025_ME` | | Year | 2019 | | Authors | Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu | | License | CC0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000227) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) | [Source URL](https://nemar.org/dataexplorer/detail/nm000227) | ## Technical Details - Subjects: 31 - Recordings: 63 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 7.093593611111111 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 4.7 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: — - Source: nemar - OpenNeuro: [nm000227](https://openneuro.org/datasets/nm000227) - NeMAR: [nm000227](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) ## API Reference Use the `NM000227` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000227(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025 * **Study:** `nm000227` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye` * **Canonical:** `GuttmannFlury2025_ME` Also importable as: `NM000227`, `GuttmannFlury2025_Eye`, `GuttmannFlury2025_ME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000227](https://openneuro.org/datasets/nm000227) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000227](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) ### Examples ```pycon >>> from eegdash.dataset import NM000227 >>> dataset = NM000227(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000227) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000228: eeg dataset, 356 subjects *Nieuwland et al. 2018: Multi-site N400 Replication Study* Access recordings and metadata through EEGDash. **Citation:** Mante S. Nieuwland, Stephen Politzer-Ahles, Evelien Heyselaar, Katrien Segaert, Emily Darley, Nina Kazanina, Sarah Von Grebmer Zu Wolfsthurn, Federica Bartolozzi, Vita Kogan, Aine Ito, Diane Mézière, Dale J. Barr, Guillaume A. Rousselet, Heather J. Ferguson, Simon Busch-Moreno, Xiao Fu, Jyrki Tuomainen, Eugenia Kulakova, E. Matthew Husband, David I. Donaldson, Zdenko Kohút, Shirley-Ann Rueschemeyer, Falk Huettig (2005). *Nieuwland et al. 2018: Multi-site N400 Replication Study*. 
[10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) Modality: eeg Subjects: 356 Recordings: 397 License: CC-BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000228 dataset = NM000228(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000228(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000228( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000228, title = {Nieuwland et al. 2018: Multi-site N400 Replication Study}, author = {Mante S. Nieuwland and Stephen Politzer-Ahles and Evelien Heyselaar and Katrien Segaert and Emily Darley and Nina Kazanina and Sarah Von Grebmer Zu Wolfsthurn and Federica Bartolozzi and Vita Kogan and Aine Ito and Diane Mézière and Dale J. Barr and Guillaume A. Rousselet and Heather J. Ferguson and Simon Busch-Moreno and Xiao Fu and Jyrki Tuomainen and Eugenia Kulakova and E. Matthew Husband and David I. Donaldson and Zdenko Kohút and Shirley-Ann Rueschemeyer and Falk Huettig}, doi = {10.7554/eLife.33468}, url = {https://doi.org/10.7554/eLife.33468}, } ``` ## About This Dataset **Nieuwland et al.
2018: Multi-site N400 Replication Study** **Overview** This is a large-scale (N=356) multi-laboratory replication of DeLong, Urbach & Kutas (2005), testing whether readers pre-activate the phonological form of upcoming nouns during sentence comprehension. Participants read sentences word-by-word (RSVP, 2 words per second) that contained indefinite articles (a/an) preceding either highly expected or unexpected nouns (based on cloze probability), while EEG was recorded. Nine laboratories in the UK collected data following a pre-registered replication protocol ([https://osf.io/eyzaq](https://osf.io/eyzaq)). The original study by DeLong et al. reported N400-like effects on the indefinite articles (larger negativity for unexpected articles). Nieuwland et al. found reliable N400 effects on the target nouns but no statistically significant effect on the preceding articles, challenging strong prediction accounts. **Participants** - 356 total participants (222 women / 134 men) - All right-handed, native English speakers - Age 18–35 years (mean 19.8) - Normal or corrected-to-normal vision - Free from known language or learning disorders - 89 reported a left-handed parent or sibling After excluding subjects below the paper's quality threshold (fewer than 60 of 80 usable article or noun trials), 334 subjects were retained in the statistical analyses. This BIDS release includes ALL subjects for whom raw data are available, with an `included_in_paper` flag in participants.tsv so users can apply the filter themselves.
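The `included_in_paper` flag makes it straightforward to reproduce the paper's subject selection. A minimal sketch using the column names documented below; the sample rows are made up, and a real analysis would read the dataset's own participants.tsv instead:

```python
# Sketch: filter subjects by the `included_in_paper` flag, as described
# in this README. Column names follow participants.tsv; rows are made up.
import csv
import io

sample_tsv = (
    "participant_id\tlab\tincluded_in_paper\tn_article_trials\n"
    "sub-001\tbirm\tTrue\t74\n"
    "sub-002\tbris\tFalse\t41\n"
    "sub-003\tedin\tTrue\t80\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
# TSV cells are strings, so the boolean flag compares against "True"
kept = [r["participant_id"] for r in rows if r["included_in_paper"] == "True"]
print(kept)  # -> ['sub-001', 'sub-003']
```

Applied to the full participants.tsv, this selection should recover the 334 subjects used in the paper's statistical analyses.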
**Laboratories**

```text
| Lab (paper #) | Institution               | Format        | Sfreq   | Channels           |
|---------------|---------------------------|---------------|---------|--------------------|
| BIRM (1)      | University of Birmingham  | BrainVision   | 500 Hz  | 64 EEG             |
| BRIS (2)      | University of Bristol     | BrainVision   | 1000 Hz | 32 EEG             |
| EDIN (3)      | University of Edinburgh   | BioSemi BDF   | 512 Hz  | 64 EEG + 8 EXG     |
| GLAS (4)      | University of Glasgow     | BioSemi BDF   | 512 Hz  | 128 EEG + 8 EXG    |
| KENT (5)      | University of Kent        | BrainVision   | 500 Hz  | 64 EEG + HEOG/VEOG |
| LOND (6)      | University College London | BioSemi BDF   | 512 Hz  | 32 EEG + 8 EXG     |
| OXFO (7)      | University of Oxford      | BioSemi BDF   | 2048 Hz | 64 EEG + 8 EXG     |
| STIR (8)      | University of Stirling    | Neuroscan CNT | 250 Hz  | 64 EEG + EOG       |
| YORK (9)      | University of York        | BrainVision   | 500 Hz  | 64 EEG + HEOG/VEOG |
```

**Paradigm** - Word-by-word RSVP: 200 ms word duration + 300 ms blank (2 words/sec) - 80 DeLong replication sentences + 80 control sentences - Comprehension questions on a subset of trials (yes/no button response) - Two counter-balanced stimulus lists (list 1 / list 2) **Tasks** - `task-delong`: Main experiment (all subjects, all labs) - `task-control`: Control grammaticality experiment (BRIS subjects, LOND 1-2) **Events (trial_type values)**

Delong experiment:
- a_expected — article “a”, expected (high cloze) context
- an_expected — article “an”, expected (high cloze) context
- a_unexpected — article “a”, unexpected (low cloze) context
- an_unexpected — article “an”, unexpected (low cloze) context
- noun_expected — target noun, expected condition
- noun_unexpected — target noun, unexpected condition
- final_expected — sentence-final word, expected condition
- final_unexpected — sentence-final word, unexpected condition

Control experiment:
- control_correct — grammatically correct article
- control_incorrect — grammatically incorrect article

General:
- cloze_marker — cloze probability marker (trigger 1-100 or 200)
- item_marker — stimulus item marker (trigger 101-180)
- question — comprehension question onset
- filler_word — any other (non-critical) word in sentence
- unknown_trigger — trigger code not matched to any known category

**Event enrichment** Each event in `events.tsv` is enriched (when applicable) with:
- sequence_id, item_number, list, task_type, condition
- expected_article / unexpected_article (a or an)
- expected_noun / unexpected_noun (strings)
- expected_cloze / unexpected_cloze (0-100)
- plausibility_expected / plausibility_unexpected (1-7 Likert)
- sentence_context / sentence_ending (strings)
- has_question, question_text, question_answer

These come from the authors’ REPLICATION_ITEMS.xlsx file on OSF.

**participants.tsv columns**
- participant_id — sub-
- lab — birm/bris/edin/glas/kent/lond/oxfo/stir/york
- lab_number — 1-9 (paper’s numbering)
- institution — full institution name
- list — stimulus list (1 or 2)
- accuracy — % correct on comprehension questions (from OSF)
- n_article_trials — article trials kept (out of 80)
- n_noun_trials — noun trials kept (out of 80)
- included_in_paper — True if >=60/80 trials (paper’s threshold)
- exclusion_note — e.g. “random_answers”, “non_native”, “low_trials”
- hand — R (all right-handed)
- age_range — 18-35 (all participants)
- native_language — English (all participants)
- recording_system — manufacturer + model

**Notes** - Original raw data is kept — no filtering, no resampling, no artifact rejection - Channel types: EEG, EOG, and misc (peripheral) channels are labeled - For BDF labs, channels EXG1-8, GSR1/2, Erg1/2, Resp, Plet, Temp are marked misc - GLAS has a 128-channel BioSemi montage (biosemi128) - STIR data is read with a custom Neuroscan CNT parser (MNE’s built-in reader has a bug with the corrupted total_samples header field) - OXFO has 3 subjects recorded with BrainVision instead of BDF **Reference** Nieuwland, M.S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., …, Huettig, F. (2018).
Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7, e33468. [https://doi.org/10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) ## Dataset Information | Dataset ID | `NM000228` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Nieuwland et al. 2018: Multi-site N400 Replication Study | | Author (year) | `Nieuwland2018` | | Canonical | — | | Importable as | `NM000228`, `Nieuwland2018` | | Year | 2005 | | Authors | Mante S. Nieuwland, Stephen Politzer-Ahles, Evelien Heyselaar, Katrien Segaert, Emily Darley, Nina Kazanina, Sarah Von Grebmer Zu Wolfsthurn, Federica Bartolozzi, Vita Kogan, Aine Ito, Diane Mézière, Dale J. Barr, Guillaume A. Rousselet, Heather J. Ferguson, Simon Busch-Moreno, Xiao Fu, Jyrki Tuomainen, Eugenia Kulakova, E. Matthew Husband, David I.
Donaldson, Zdenko Kohút, Shirley-Ann Rueschemeyer, Falk Huettig | | License | CC-BY 4.0 | | Citation / DOI | [doi:10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000228) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) | [Source URL](https://nemar.org/dataexplorer/detail/nm000228) | ### Copy-paste BibTeX ```bibtex @dataset{nm000228, title = {Nieuwland et al. 2018: Multi-site N400 Replication Study}, author = {Mante S. Nieuwland and Stephen Politzer-Ahles and Evelien Heyselaar and Katrien Segaert and Emily Darley and Nina Kazanina and Sarah Von Grebmer Zu Wolfsthurn and Federica Bartolozzi and Vita Kogan and Aine Ito and Diane Mézière and Dale J. Barr and Guillaume A. Rousselet and Heather J. Ferguson and Simon Busch-Moreno and Xiao Fu and Jyrki Tuomainen and Eugenia Kulakova and E. Matthew Husband and David I. Donaldson and Zdenko Kohút and Shirley-Ann Rueschemeyer and Falk Huettig}, doi = {10.7554/eLife.33468}, url = {https://doi.org/10.7554/eLife.33468}, } ``` ## Technical Details - Subjects: 356 - Recordings: 397 - Tasks: 2 - Channels: 66 (81), 32 (78), 73 (77), 65 (43), 41 (40), 64 (38), 144 (37), 138 (3) - Sampling rate (Hz): 500 (122), 512 (116), 1000 (78), 2048 (41), 250 (40) - Duration (hours): 232.39741483832464 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 102.7 GB - File count: 397 - Format: BIDS - License: CC-BY 4.0 - DOI: doi:10.7554/eLife.33468 - Source: nemar - OpenNeuro: [nm000228](https://openneuro.org/datasets/nm000228) - NeMAR: [nm000228](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) ## API Reference Use the `NM000228` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000228(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nieuwland et al. 
2018: Multi-site N400 Replication Study * **Study:** `nm000228` (NeMAR) * **Author (year):** `Nieuwland2018` * **Canonical:** — Also importable as: `NM000228`, `Nieuwland2018`. Modality: `eeg`. Subjects: 356; recordings: 397; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000228](https://openneuro.org/datasets/nm000228) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000228](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) DOI: [https://doi.org/10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) ### Examples ```pycon >>> from eegdash.dataset import NM000228 >>> dataset = NM000228(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. 
* **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000228) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000229: eeg dataset, 29 subjects *Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing* Access recordings and metadata through EEGDash. **Citation:** Laura Gwilliams, Graham Flick, Alec Marantz, Liina Pylkkänen, David Poeppel, Jean-Rémi King (2019). *Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing*. [10.1038/s41597-023-02752-5](https://doi.org/10.1038/s41597-023-02752-5) Modality: eeg Subjects: 29 Recordings: 1360 License: CC0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000229 dataset = NM000229(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000229(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000229( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000229, title = {Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing}, author = {Laura Gwilliams and Graham Flick and Alec Marantz and Liina Pylkkänen and David Poeppel and Jean-Rémi King}, doi = {10.1038/s41597-023-02752-5}, url = {https://doi.org/10.1038/s41597-023-02752-5}, } ``` ## About This Dataset **MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing.** Laura Gwilliams, Graham Flick, Alec Marantz, Liina Pylkkänen, David Poeppel, Jean-Rémi King - [Paper](https://arxiv.org) - [Data](https://osf.io/rguwj/) - [Code](https://github.com/kingjr/meg-masc) **Abstract** The “MEG-MASC” dataset provides a curated set of raw magnetoencephalography (MEG) recordings of 27 English speakers who listened to two hours of naturalistic stories. Each participant performed two identical sessions, involving listening to four fictional stories from the Manually Annotated Sub-Corpus (MASC) intermixed with random word lists and comprehension questions. We time-stamp the onset and offset of each word and phoneme in the metadata of the recording, and organize the dataset according to the ‘Brain Imaging Data Structure’ (BIDS). This data collection provides a suitable benchmark for large-scale encoding and decoding analyses of temporally-resolved brain responses to speech. We provide the Python code to replicate several validation analyses of the MEG evoked related fields such as the temporal decoding of phonetic features and word frequency.
All code and MEG, audio and text data are publicly available, in keeping with best practices for transparent and reproducible research.

**Please cite**

```bibtex
@article{gwilliams2022neural,
  title     = {Neural dynamics of phoneme sequences reveal position-invariant code for content and order},
  author    = {Gwilliams, Laura and King, Jean-Remi and Marantz, Alec and Poeppel, David},
  journal   = {Nature Communications},
  volume    = {13},
  number    = {1},
  pages     = {1--14},
  year      = {2022},
  publisher = {Nature Publishing Group}
}
```

**Task organisation** Each subject listened to four unique stories:

- task-0: ‘lw1’
- task-1: ‘cable_spool_fort’
- task-2: ‘easy_money’
- task-3: ‘The_Black_Widow’

Stories were presented in a different order to each participant:

| participant_id | task_order |
|----------------|--------------|
| sub-01 | [0, 1, 2, 3] |
| sub-02 | [0, 1, 3, 2] |
| sub-03 | [0, 2, 3, 1] |
| sub-04 | [3, 0, 1, 2] |
| sub-05 | [2, 3, 1, 0] |
| sub-06 | [0, 2, 1, 3] |
| sub-07 | [0, 3, 1, 2] |
| sub-08 | [3, 1, 0, 2] |
| sub-09 | [2, 1, 3, 0] |
| sub-10 | [1, 2, 3, 0] |
| sub-11 | [1, 3, 2, 0] |
| sub-12 | [2, 0, 3, 1] |
| sub-13 | [1, 3, 0, 2] |
| sub-14 | [1, 0, 3, 2] |
| sub-15 | [2, 1, 0, 3] |
| sub-16 | [3, 0, 2, 1] |
| sub-17 | [1, 2, 3, 0] |
| sub-18 | [2, 0, 1, 3] |
| sub-19 | [0, 3, 2, 1] |
| sub-20 | [2, 3, 0, 1] |
| sub-21 | [1, 2, 3, 0] |
| sub-22 | [1, 0, 2, 3] |
| sub-23 | [0, 2, 3, 1] |
| sub-24 | [3, 1, 2, 0] |
| sub-25 | [0, 1, 3, 2] |
| sub-26 | [3, 1, 0, 2] |
| sub-27 | [1, 2, 3, 0] |

**Stimulus timestamps** The timing of each phoneme and each word is provided in the sub-\*_ses-\*_task-\*_events.tsv file for each subject, session and task. These timings link the MEG recording to the relevant speech moments of that story.
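The task organisation above maps cleanly to a small lookup table. The helper below is hypothetical (it is not part of EEGDash); the index-to-story mapping is taken directly from this README.

```python
# Hypothetical helper; the index-to-story mapping comes from the README above.
TASK_STORIES = {
    0: "lw1",
    1: "cable_spool_fort",
    2: "easy_money",
    3: "The_Black_Widow",
}

def stories_in_order(task_order):
    """Map a participant's task_order (e.g. sub-02's [0, 1, 3, 2]) to story names."""
    return [TASK_STORIES[i] for i in task_order]

print(stories_in_order([0, 1, 3, 2]))
```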
Each events file contains the following columns:

- onset (float): onset time of event in seconds
- duration (float): duration of event in seconds
- trial_type (dict): dictionary of key:value pairs providing information about the event
- sample (int): onset time of event in MEG samples

**Stories.** Each participant listened to four fictional stories over the course of two ~1h-long MEG sessions, with the exception of 5 subjects who only underwent 1 session. The stories were played in different orders across participants. These stories were originally selected because they had been annotated for their syntactic structures (MASC). The corresponding text files can be found in stimuli/text/\*.txt

**Word lists and pseudo-words.** To potentially investigate MEG responses to words independently of their narrative context, the text of these stories has been supplemented with word lists. Specifically, a random word list consisting of the unique content words (nouns, proper nouns, verbs, adverbs and adjectives) selected from the preceding text segment was added in a random order. In addition, a small fraction (<1%) of non-words were inserted into the natural sentences of the stories. The corresponding text files can be found in stimuli/text_with_wordlist/\*.txt. For simplicity, the brain responses to these word lists and to these pseudo-words are fully discarded from the present study.

**Audio synthesis.** Each of these stories was synthesized with Mac OS Mojave © version 10.14 text-to-speech. Voices (n=3 female) and speech rates (145-205 words per minute) varied every 5-20 sentences. The inter-sentence interval randomly varied between 0 and 1,000 ms. Both speech rate and inter-sentence intervals were sampled from a uniform distribution. Each `text_with_wordlist` file was divided into ~3 min sound files, which can be found in stimuli/audio/\*.wav.
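The events files described above are plain TSV and can be read with the standard library alone. The row values in this sketch are invented for illustration only.

```python
import csv
import io

# Invented example row mirroring the events.tsv columns described above.
sample_tsv = (
    "onset\tduration\ttrial_type\tsample\n"
    "1.250\t0.080\t{'kind': 'phoneme'}\t1250\n"
)

with io.StringIO(sample_tsv) as f:
    events = list(csv.DictReader(f, delimiter="\t"))

onset_s = float(events[0]["onset"])    # onset in seconds
onset_smp = int(events[0]["sample"])   # onset in MEG samples
```

With real data, replace `io.StringIO(...)` with `open(path)` on a sub-\*_ses-\*_task-\*_events.tsv file.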
**Forced Alignment.** The timing of words and phonemes was inferred from the forced alignment between the wav and text files, using the ‘gentle aligner’ from the Python module lowerquality ([https://github.com/lowerquality/gentle](https://github.com/lowerquality/gentle)). We discarded the words that did not get a forced alignment through this procedure. Analysis of the Mel spectrogram and of the phonetic decoding led to better results when using gentle than when using the Penn Forced Aligner originally used in Gwilliams et al. MASC. The timing of each word and phoneme can be found in the events.tsv of each individual recording session.

**Verification.** To verify that the forced alignment did not have a systematic bias, we systematically checked the MEG decoding of phonetic features for each sound file separately.

**References**

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. [https://doi.org/10.1038/sdata.2018.110](https://doi.org/10.1038/sdata.2018.110)

## Dataset Information

| Dataset ID | `NM000229` |
|----------------|------------|
| Title | Gwilliams et al.
2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing | | Author (year) | `Gwilliams2023` | | Canonical | `MASC_MEG`, `MEG_MASC` | | Importable as | `NM000229`, `Gwilliams2023`, `MASC_MEG`, `MEG_MASC` | | Year | 2019 | | Authors | Laura Gwilliams, Graham Flick, Alec Marantz, Liina Pylkkänen, David Poeppel, Jean-Rémi King | | License | CC0 | | Citation / DOI | [doi:10.1038/s41597-023-02752-5](https://doi.org/10.1038/s41597-023-02752-5) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000229) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) | [Source URL](https://github.com/nemarDatasets/nm000229) | ### Copy-paste BibTeX ```bibtex @dataset{nm000229, title = {Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing}, author = {Laura Gwilliams and Graham Flick and Alec Marantz and Liina Pylkkänen and David Poeppel and Jean-Rémi King}, doi = {10.1038/s41597-023-02752-5}, url = {https://doi.org/10.1038/s41597-023-02752-5}, } ``` ## Technical Details - Subjects: 29 - Recordings: 1360 - Tasks: 79 - Channels: 208 - Sampling rate (Hz): 1000 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: 1360 - Format: BIDS - License: CC0 - DOI: doi:10.1038/s41597-023-02752-5 - Source: nemar - OpenNeuro: [nm000229](https://openneuro.org/datasets/nm000229) - NeMAR: [nm000229](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) ## API Reference Use the `NM000229` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gwilliams et al. 
2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing * **Study:** `nm000229` (NeMAR) * **Author (year):** `Gwilliams2023` * **Canonical:** `MASC_MEG`, `MEG_MASC` Also importable as: `NM000229`, `Gwilliams2023`, `MASC_MEG`, `MEG_MASC`. Modality: `eeg`. Subjects: 29; recordings: 1360; tasks: 79. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000229](https://openneuro.org/datasets/nm000229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000229](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) DOI: [https://doi.org/10.1038/s41597-023-02752-5](https://doi.org/10.1038/s41597-023-02752-5) ### Examples ```pycon >>> from eegdash.dataset import NM000229 >>> dataset = NM000229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000229) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000230: eeg dataset, 30 subjects *Lower-limb MI dataset for knee pain patients from Zuo et al. 2025* Access recordings and metadata through EEGDash. **Citation:** Chongwen Zuo, Yi Yin, Haochong Wang, Zhiyang Zheng, Xiaoyan Ma, Yuan Yang, Jue Wang, Shan Wang, Zi-gang Huang, Chaoqun Ye (2025). *Lower-limb MI dataset for knee pain patients from Zuo et al. 2025*. 
Modality: eeg Subjects: 30 Recordings: 118 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000230 dataset = NM000230(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000230(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000230( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000230, title = {Lower-limb MI dataset for knee pain patients from Zuo et al. 2025}, author = {Chongwen Zuo and Yi Yin and Haochong Wang and Zhiyang Zheng and Xiaoyan Ma and Yuan Yang and Jue Wang and Shan Wang and Zi-gang Huang and Chaoqun Ye}, } ``` ## About This Dataset **Lower-limb MI dataset for knee pain patients from Zuo et al. 2025** Lower-limb MI dataset for knee pain patients from Zuo et al. 2025. **Dataset Overview** - **Code**: Zuo2025 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-05767-2 ### View full README **Lower-limb MI dataset for knee pain patients from Zuo et al. 2025** Lower-limb MI dataset for knee pain patients from Zuo et al. 2025. 
**Dataset Overview** - **Code**: Zuo2025 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-05767-2 - **Subjects**: 30 - **Sessions per subject**: 5 - **Events**: left_leg=1, right_leg=2 - **Trial interval**: [0, 4] s - **File format**: MAT **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 30 - **Channel types**: eeg=30 - **Channel names**: Fp1, Fp2, Fz, F3, F4, F7, F8, FCz, FC3, FC4, FT7, FT8, Cz, C3, C4, T3, T4, CPz, CP3, CP4, TP7, TP8, Pz, P3, P4, T5, T6, Oz, O1, O2 - **Montage**: standard_1005 - **Hardware**: ZhenTec EEG system - **Reference**: CPz - **Ground**: FPz - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 30 - **Health status**: knee pain patients - **Clinical population**: knee_pain - **Age**: mean=33.5, min=24, max=45 - **Gender distribution**: female=12, male=18 - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_leg, right_leg - **Trial duration**: 4.0 s - **Study design**: 2-class lower-limb MI (left/right leg flexion/extension). 5 sessions, 100 trials per session. 
- **Feedback type**: none - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: cue-based - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_leg ``` ```text ├─ Sensory-event └─ Label/left_leg right_leg ``` ```text ├─ Sensory-event └─ Label/right_leg ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_leg, right_leg - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 500 - **Trials per class**: left_leg=250, right_leg=250 - **Trials context**: 5 sessions x 100 trials (50 left + 50 right) **Signal Processing** - **Classifiers**: CSP+LDA, FBCSP+SVM, EEGNet, OTFWRGD - **Feature extraction**: CSP, FBCSP, deep_learning, Riemannian_geometry - **Frequency bands**: alpha_mu=[8.0, 15.0] Hz; beta=[15.0, 30.0] Hz - **Spatial filters**: CSP, FBCSP **Cross-Validation** - **Method**: 10-fold - **Folds**: 10 - **Evaluation type**: within_subject **BCI Application** - **Applications**: rehabilitation - **Environment**: clinical - **Online feedback**: False **Tags** - **Pathology**: Knee Pain - **Modality**: Motor - **Type**: Clinical, Motor Imagery **Documentation** - **DOI**: 10.1038/s41597-025-05767-2 - **License**: CC-BY-4.0 - **Investigators**: Chongwen Zuo, Yi Yin, Haochong Wang, Zhiyang Zheng, Xiaoyan Ma, Yuan Yang, Jue Wang, Shan Wang, Zi-gang Huang, Chaoqun Ye - **Institution**: Air Force Medical Center, Beijing - **Country**: CN - **Data URL**: [https://figshare.com/articles/dataset/28740260](https://figshare.com/articles/dataset/28740260) - **Publication year**: 2025 **References** Zuo, C., Yin, Y., Wang, H., et al. (2025). Enhancing classification of a large lower-limb motor imagery EEG dataset for BCI in knee pain patients. Scientific Data, 12, 1451. 
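The CSP spatial filtering listed among the original study's methods can be sketched with NumPy/SciPy on synthetic data shaped like these trials (30 channels, 4 s at 500 Hz). Real epochs would come from NM000230; this is not the authors' code, just a minimal illustration of the technique.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Synthetic stand-ins for left/right-leg MI trials: trials x channels x samples
X_left = rng.standard_normal((50, 30, 2000))
X_right = rng.standard_normal((50, 30, 2000))

def mean_cov(X):
    # Trace-normalized spatial covariance, averaged over trials
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

C_l, C_r = mean_cov(X_left), mean_cov(X_right)
# CSP solves the generalized eigenproblem C_l w = lambda (C_l + C_r) w;
# eigenvalues lie in [0, 1], with the extremes being most discriminative.
evals, evecs = eigh(C_l, C_l + C_r)
W = np.hstack([evecs[:, :3], evecs[:, -3:]])          # 6 CSP spatial filters
log_var = np.log(np.var(W.T @ X_left[0], axis=1))     # per-trial CSP features
```

The log-variance features would then feed a linear classifier such as LDA, as in the CSP+LDA baseline above.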
[https://doi.org/10.1038/s41597-025-05767-2](https://doi.org/10.1038/s41597-025-05767-2)

**Notes** Added in version 1.2.0.

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8)

Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb)

## Dataset Information

| Dataset ID | `NM000230` |
|----------------|------------|
| Title | Lower-limb MI dataset for knee pain patients from Zuo et al.
2025 | | Author (year) | `Zuo2025` | | Canonical | — | | Importable as | `NM000230`, `Zuo2025` | | Year | 2025 | | Authors | Chongwen Zuo, Yi Yin, Haochong Wang, Zhiyang Zheng, Xiaoyan Ma, Yuan Yang, Jue Wang, Shan Wang, Zi-gang Huang, Chaoqun Ye | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000230) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) | [Source URL](https://nemar.org/dataexplorer/detail/nm000230) | ## Technical Details - Subjects: 30 - Recordings: 118 - Tasks: 1 - Channels: 30 - Sampling rate (Hz): 500.0 - Duration (hours): 38.07771222222222 - Pathology: Other - Modality: Visual - Type: Motor - Size on disk: 5.8 GB - File count: 118 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000230](https://openneuro.org/datasets/nm000230) - NeMAR: [nm000230](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) ## API Reference Use the `NM000230` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000230(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lower-limb MI dataset for knee pain patients from Zuo et al. 2025 * **Study:** `nm000230` (NeMAR) * **Author (year):** `Zuo2025` * **Canonical:** — Also importable as: `NM000230`, `Zuo2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 30; recordings: 118; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000230](https://openneuro.org/datasets/nm000230) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000230](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) ### Examples ```pycon >>> from eegdash.dataset import NM000230 >>> dataset = NM000230(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000230) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000231: eeg dataset, 8 subjects *P300 dataset from Hoffmann et al 2008* Access recordings and metadata through EEGDash. 
**Citation:** Ulrich Hoffmann, Jean-Marc Vesin, Touradj Ebrahimi, Karin Diserens (2019). *P300 dataset from Hoffmann et al 2008*. Modality: eeg Subjects: 8 Recordings: 192 License: — Source: nemar Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000231 dataset = NM000231(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000231(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000231( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000231, title = {P300 dataset from Hoffmann et al 2008}, author = {Ulrich Hoffmann and Jean-Marc Vesin and Touradj Ebrahimi and Karin Diserens}, } ``` ## About This Dataset **P300 dataset from Hoffmann et al 2008** P300 dataset from Hoffmann et al 2008. **Dataset Overview** - **Code**: EPFLP300 - **Paradigm**: p300 - **DOI**: 10.1016/j.jneumeth.2007.03.005 ### View full README **P300 dataset from Hoffmann et al 2008** P300 dataset from Hoffmann et al 2008. 
**Dataset Overview** - **Code**: EPFLP300 - **Paradigm**: p300 - **DOI**: 10.1016/j.jneumeth.2007.03.005 - **Subjects**: 8 - **Sessions per subject**: 4 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **Runs per session**: 6 - **File format**: MATLAB **Acquisition** - **Sampling rate**: 2048.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32, misc=2 - **Channel names**: AF3, AF4, C3, C4, CP1, CP2, CP5, CP6, Cz, F3, F4, F7, F8, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, MA1, MA2, O1, O2, Oz, P3, P4, P7, P8, PO3, PO4, Pz, T7, T8 - **Montage**: standard_1020 - **Hardware**: Biosemi ActiveTwo - **Sensor type**: active - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 8 - **Health status**: mixed - **Clinical population**: 4 disabled (cerebral palsy, multiple sclerosis, late-stage amyotrophic lateral sclerosis, traumatic brain and spinal-cord injury C4 level), 4 able-bodied - **Age**: mean=38.4, min=30, max=56 - **Gender distribution**: male=7, female=1 - **BCI experience**: no training required - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 1.0 s - **Study design**: Subjects counted silently how often a prescribed image (one of six: television, telephone, lamp, door, window, radio) was flashed while images were flashed in random sequences - **Feedback type**: none - **Stimulus type**: image_flash - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline - **Instructions**: Subjects were asked to count silently how often a prescribed image was flashed - **Stimulus presentation**: flash_duration=100ms, isi=400ms, display=six images (television, telephone, lamp, door, window, radio) **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ 
Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 6 - **Inter-stimulus interval**: 400.0 ms - **Stimulus onset asynchrony**: 400.0 ms **Data Structure** - **Trials**: {‘target’: 135, ‘non-target’: 675} - **Trials per class**: target=135, non-target=675 - **Trials context**: per_session **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False **Signal Processing** - **Classifiers**: BLDA, FLDA - **Feature extraction**: temporal samples from selected electrodes - **Frequency bands**: analyzed=[1.0, 12.0] Hz **Cross-Validation** - **Method**: leave-one-session-out - **Folds**: 4 - **Evaluation type**: session-based **Performance (Original Study)** - **Accuracy**: 100.0% - **Itr**: 28.8 bits/min - **Max Bitrate Disabled Avg**: 19.0 - **Max Bitrate Able Bodied Avg**: 38.6 - **Max Bitrate Overall Avg**: 28.8 **BCI Application** - **Applications**: environment_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy, Cerebral palsy, Multiple sclerosis, Amyotrophic lateral sclerosis, Traumatic brain injury, Post-anoxic encephalopathy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.1016/j.jneumeth.2007.03.005 - **License**: Unknown - **Investigators**: Ulrich Hoffmann, Jean-Marc Vesin, Touradj Ebrahimi, Karin Diserens - **Senior author**: Karin Diserens - **Contact**: [ulrich.hoffmann@epfl.ch](mailto:ulrich.hoffmann@epfl.ch) - **Institution**: Ecole Polytechnique Fédérale de Lausanne - **Department**: Signal Processing Institute - **Address**: Signal Processing Institute, CH-1015 Lausanne, Switzerland - **Country**: CH - **Repository**: [http://bci.epfl.ch/p300](http://bci.epfl.ch/p300) - **Publication year**: 2008 - **Funding**: Swiss National Science Foundation Grant No. 
200020-112313 - **Keywords**: Brain–computer interface, P300, Disabled subjects, Fisher’s linear discriminant analysis, Bayesian linear discriminant analysis

**References**

Hoffmann, U., Vesin, J-M., Ebrahimi, T., Diserens, K., 2008. An efficient P300-based brain-computer interface for disabled subjects. Journal of Neuroscience Methods. [https://doi.org/10.1016/j.jneumeth.2007.03.005](https://doi.org/10.1016/j.jneumeth.2007.03.005)

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000231` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P300 dataset from Hoffmann et al 2008 | | Author (year) | `Hoffmann2008` | | Canonical | `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset` | | Importable as | `NM000231`, `Hoffmann2008`, `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset` | | Year | 2019 | | Authors | Ulrich Hoffmann, Jean-Marc Vesin, Touradj Ebrahimi, Karin Diserens | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000231) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) | [Source URL](https://nemar.org/dataexplorer/detail/nm000231) | ## Technical Details - Subjects: 8 - Recordings: 192 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 2048.0 - Duration (hours): 2.9408072916666668 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 1.9 GB - File count: 192 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: [nm000231](https://openneuro.org/datasets/nm000231) - NeMAR: [nm000231](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) ## API Reference Use the `NM000231` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000231(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset from Hoffmann et al 2008 * **Study:** `nm000231` (NeMAR) * **Author (year):** `Hoffmann2008` * **Canonical:** `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset` Also importable as: `NM000231`, `Hoffmann2008`, `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 192; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
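As a minimal illustration of the trial structure described above (1 s trials at 2048 Hz, with a 135/675 Target/NonTarget imbalance), a class-wise evoked difference can be sketched on synthetic stand-in data. This helper is not part of EEGDash; real trials would be epoched from NM000231 recordings.

```python
import numpy as np

sfreq = 2048.0              # sampling rate from the acquisition details above
n_samp = int(sfreq * 1.0)   # 1 s trial interval, as described above
rng = np.random.default_rng(7)

# Synthetic stand-ins mirroring the Target/NonTarget trial counts (135 vs 675)
target = rng.standard_normal((135, n_samp))
nontarget = rng.standard_normal((675, n_samp))

# P300 analyses start from class-wise evoked averages; the difference wave
# highlights the target-specific response around ~300 ms post-stimulus.
diff_wave = target.mean(axis=0) - nontarget.mean(axis=0)
```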
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000231](https://openneuro.org/datasets/nm000231) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000231](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) ### Examples ```pycon >>> from eegdash.dataset import NM000231 >>> dataset = NM000231(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000231) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000232: eeg dataset, 10 subjects *THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition* Access recordings and metadata through EEGDash. **Citation:** Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy (2022). *THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition*. 
[10.17605/OSF.IO/3JK45](https://doi.org/10.17605/OSF.IO/3JK45) Modality: eeg Subjects: 10 Recordings: 638 License: CC-BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000232 dataset = NM000232(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000232(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000232( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000232, title = {THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition}, author = {Alessandro T. Gifford and Kshitij Dwivedi and Gemma Roig and Radoslaw M. Cichy}, doi = {10.17605/OSF.IO/3JK45}, url = {https://doi.org/10.17605/OSF.IO/3JK45}, } ``` ## About This Dataset THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition **Overview** EEG dataset of 10 subjects who viewed 16,540 distinct training images and 200 test images (each repeated ~80 times) using rapid serial visual presentation (RSVP) at 5 Hz, recorded on a BrainVision actiCHamp system at 1000 Hz. The source files store 63 EEG channels (the online reference electrode is not stored). Stimuli are drawn from the THINGS database (Hebart et al. 2019). 
Each subject completed 4 separate sessions; each session contained: ### View full README THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition **Overview** EEG dataset of 10 subjects who viewed 16,540 distinct training images and 200 test images (each repeated ~80 times) using rapid serial visual presentation (RSVP) at 5 Hz, recorded on a BrainVision actiCHamp system at 1000 Hz. The source files store 63 EEG channels (the online reference electrode is not stored). Stimuli are drawn from the THINGS database (Hebart et al. 2019). Each subject completed 4 separate sessions; each session contained: > - 5 training runs (~3,360 trials each) covering ~16,540 unique images > - 1 test run (~4,080 trials) of 200 images repeated 20× per session > - 2 resting-state runs (one before, one after the main experiment) Total: ~32,540 training trials + ~16,000 test trials per subject across 4 sessions. **Recording setup** - Manufacturer: Brain Products (actiCHamp) - 63 EEG channels (one electrode served as online reference and is not stored in the source files) - 10-10 cap layout - Sampling rate: 1000 Hz - Online band-pass: 0.01-100 Hz - Triggers recorded as BrainVision stimulus annotations (not as a dedicated stim channel) **Tasks (BIDS labels)** - task-train: training run (RSVP of unique images) - task-test: test run (RSVP of repeated test images) - task-rest: resting state (eyes open, fixation cross) **Run numbering** - task-train: run-01..run-05 per session (5 training parts) - task-test: single run per session - task-rest: run-01 (before main task) and run-02 (after main task) **Events** events.tsv columns: : onset, duration, sample, value, trial_type tot_img_number - global image ID (1-16540 for train; 1-200 for test;
> 'n/a' for target catch trials)
img_category - integer category index category_name - human-readable category, e.g. “01175_roller_coaster” block, sequence - hierarchical position within the run img_in_sequence - image position within its 20-image sequence soa - actual stimulus onset asynchrony (~200 ms) trial_type values: : image - normal training/test image presentation target - random catch trial (subject must press a button) rest_marker - resting-state start/end marker **Subject information** participants.tsv contains age and sex (both extracted from the behavioural .mat files in the source data). **Folder layout** /sub-XX/ses-YY/eeg/ - main BIDS data (BDF + sidecars) /sourcedata/ - original BrainVision .eeg/.vhdr/.vmrk and > behavioural .mat files /derivatives/preprocessed_eeg/ - authors’ preprocessed train/test epochs /derivatives/resting_state/ - authors’ preprocessed resting state /stimuli/ - image set (training_images.zip, test_images.zip) > plus image_metadata.npy /code/ - this conversion script **Reference** Gifford, A.T., Dwivedi, K., Roig, G., & Cichy, R.M. (2022). A large and rich EEG dataset for modeling human visual object recognition. NeuroImage, 264, 119754. [https://doi.org/10.1016/j.neuroimage.2022.119754](https://doi.org/10.1016/j.neuroimage.2022.119754) Code: [https://github.com/gifale95/eeg_encoding](https://github.com/gifale95/eeg_encoding) OSF: [https://osf.io/3jk45/](https://osf.io/3jk45/) ## Dataset Information | Dataset ID | `NM000232` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition | | Author (year) | `Gifford2019` | | Canonical | — | | Importable as | `NM000232`, `Gifford2019` | | Year | 2022 | | Authors | Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. 
Cichy | | License | CC-BY 4.0 | | Citation / DOI | [doi:10.17605/OSF.IO/3JK45](https://doi.org/10.17605/OSF.IO/3JK45) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000232) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) | [Source URL](https://nemar.org/dataexplorer/detail/nm000232) | ### Copy-paste BibTeX ```bibtex @dataset{nm000232, title = {THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition}, author = {Alessandro T. Gifford and Kshitij Dwivedi and Gemma Roig and Radoslaw M. Cichy}, doi = {10.17605/OSF.IO/3JK45}, url = {https://doi.org/10.17605/OSF.IO/3JK45}, } ``` ## Technical Details - Subjects: 10 - Recordings: 638 - Tasks: 5 - Channels: 63 - Sampling rate (Hz): 1000 - Duration (hours): 87.2788263888889 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 203.9 GB - File count: 638 - Format: BIDS - License: CC-BY 4.0 - DOI: doi:10.17605/OSF.IO/3JK45 - Source: nemar - OpenNeuro: [nm000232](https://openneuro.org/datasets/nm000232) - NeMAR: [nm000232](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) ## API Reference Use the `NM000232` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000232(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition * **Study:** `nm000232` (NeMAR) * **Author (year):** `Gifford2019` * **Canonical:** — Also importable as: `NM000232`, `Gifford2019`. Modality: `eeg`. Subjects: 10; recordings: 638; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000232](https://openneuro.org/datasets/nm000232) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000232](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) DOI: [https://doi.org/10.17605/OSF.IO/3JK45](https://doi.org/10.17605/OSF.IO/3JK45) ### Examples ```pycon >>> from eegdash.dataset import NM000232 >>> dataset = NM000232(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000232) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000234: eeg dataset, 21 subjects *BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset* Access recordings and metadata through EEGDash. **Citation:** Martijn Schreuder, Benjamin Blankertz, Michael Tangermann (2011). *BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset*. Modality: eeg Subjects: 21 Recordings: 42 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000234 dataset = NM000234(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000234(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000234( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000234, title = {BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset}, author = {Martijn Schreuder and Benjamin Blankertz and Michael Tangermann}, } ``` ## About This Dataset **BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset** BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset. 
**Dataset Overview** - **Code**: BNCI2015-009 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2011.00112 ### View full README **BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset** BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset. **Dataset Overview** - **Code**: BNCI2015-009 - **Paradigm**: p300 - **DOI**: 10.3389/fnins.2011.00112 - **Subjects**: 21 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [0, 0.8] s - **Runs per session**: 2 - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 250.0 Hz - **Number of channels**: 60 - **Channel types**: eeg=60, eog=2 - **Montage**: 10-20 - **Hardware**: Brain Products 128-channel amplifier - **Software**: Matlab - **Reference**: nose - **Sensor type**: Ag/AgCl electrodes - **Line frequency**: 50.0 Hz - **Online filters**: 0.1-250 Hz analog bandpass - **Auxiliary channels**: EOG (2 ch, bipolar) **Participants** - **Number of subjects**: 21 - **Health status**: patients - **Clinical population**: Healthy - **Age**: mean=30.3, min=22, max=55 - **Gender distribution**: male=6, female=4 - **Handedness**: unknown - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 0.8 s - **Tasks**: spatial_auditory_oddball - **Study design**: Offline auditory oddball task using spatial location of auditory stimuli as discriminating cue. Frontal five speakers used (speakers 1,2,3,7,8) with 45 degree spacing. Three conditions tested: C300 (300ms ISI), C175 (175ms ISI), C300s (300ms ISI, single speaker). Each stimulus was unique 40ms complex sound from bandpass filtered white noise with tone overlay. 
- **Study domain**: BCI - **Feedback type**: none - **Stimulus type**: auditory_spatial - **Stimulus modalities**: auditory - **Primary modality**: auditory - **Synchronicity**: synchronous - **Mode**: offline - **Training/test split**: False - **Instructions**: Subjects asked to mentally count target stimulations or respond by keypress (condition Cr). Minimize eye movements and muscle contractions. Target direction indicated prior to each block visually and by presenting stimulus from that location. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 5 - **Number of repetitions**: 15 - **Inter-stimulus interval**: 300.0 ms **Data Structure** - **Trials**: varied by condition - **Blocks per session**: 50 - **Trials context**: BCI experiments: C300 (50 trials × 75 subtrials = 3750 subtrials), C175 (40 trials × 75 subtrials = 3000 subtrials), C300s (20 trials × 75 subtrials = 1500 subtrials). 
Physiological experiments: C1000 (32 trials × 80 subtrials = 2560 subtrials), Cr (576-768 subtrials) **Preprocessing** - **Data state**: filtered - **Preprocessing applied**: True - **Steps**: bandpass filter, notch filter, downsampling, artifact rejection - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 250.0 Hz - **Bandpass filter**: {'low_cutoff_hz': 0.1, 'high_cutoff_hz': 250.0} - **Notch filter**: [50] Hz - **Filter type**: Chebyshev II order 8 (for visual inspection: 30 Hz pass, 42 Hz stop, 50 dB damping) - **Artifact methods**: threshold-based artifact rejection - **Re-reference**: nose - **Downsampled to**: 100.0 Hz - **Epoch window**: [-0.15, 0.8] s - **Notes**: Raw data acquired at 1000 Hz. For visual inspection: low-pass filtered with an order-8 Chebyshev II filter (30 Hz pass, 42 Hz stop, 50 dB damping) applied forward and backward to minimize phase shifts, then downsampled to 100 Hz. For classification: the same filter applied causally (forward only) for online portability. Artifact rejection used a simple threshold method: subtrials with a deflection >70 µV over ocular channels compared to baseline were rejected.
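The threshold rejection described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumed array shapes (epochs in µV at the downsampled 100 Hz, the two EOG channels assumed last), not the authors' original Matlab pipeline:

```python
import numpy as np

# Assumed layout: epochs of shape (n_subtrials, n_channels, n_samples) in microvolts,
# sampled at 100 Hz over the [-0.15, 0.8] s window (95 samples), EOG as last 2 channels.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 5.0, size=(50, 62, 95))
epochs[3, -2:, 20:] += 120.0          # simulate an ocular artifact in subtrial 3

eog_idx = [-2, -1]                    # assumed positions of the ocular channels
baseline = epochs[:, :, :15].mean(axis=2, keepdims=True)  # first 150 ms as baseline

# Reject any subtrial whose EOG deflection from baseline exceeds 70 µV
deflection = np.abs(epochs - baseline)[:, eog_idx, :].max(axis=(1, 2))
keep = deflection <= 70.0
clean = epochs[keep]
print(f"{clean.shape[0]} of {epochs.shape[0]} subtrials kept")
```

The same mask could then be used to drop the corresponding rows from an events table so trials and labels stay aligned.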
**Signal Processing** - **Classifiers**: LDA - **Feature extraction**: ROC-separability-index - **Frequency bands**: analyzed=[0.1, 250.0] Hz **Cross-Validation** - **Method**: cross-validation - **Evaluation type**: offline **Performance (Original Study)** - **Accuracy**: 90.0% - **Itr**: 17.39 bits/min - **Best Subject Itr**: 25.2 - **Best Subject Accuracy**: 100.0 - **C300S Accuracy**: 70.0 **BCI Application** - **Applications**: speller, communication - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Auditory - **Type**: P300 **Documentation** - **Description**: A new auditory multi-class brain-computer interface paradigm using spatial hearing as an informative cue - **DOI**: 10.1371/journal.pone.0009813 - **Associated paper DOI**: 10.3389/fnins.2011.00112 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Martijn Schreuder, Benjamin Blankertz, Michael Tangermann - **Senior author**: Michael Tangermann - **Contact**: [martijn@cs.tu-berlin.de](mailto:martijn@cs.tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Department - **Address**: Berlin, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2010 - **Funding**: European ICT Programme Project FP7-224631; European ICT Programme Project FP7-216886; Deutsche Forschungsgemeinschaft (DFG) MU 987/3-1; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01IB001A; Bundesministerium für Bildung und Forschung (BMBF) FKZ 01GQ0850; FP7-ICT PASCAL2 Network of Excellence ICT-216886 - **Ethics approval**: Ethics Committee of the Charité University Hospital (number EA4/073/09) - **Keywords**: auditory BCI, P300, spatial hearing, multi-class, oddball paradigm **References** Schreuder, M., Rost, T., & Tangermann, M. (2011). Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in neuroscience, 5, 112. 
[https://doi.org/10.3389/fnins.2011.00112](https://doi.org/10.3389/fnins.2011.00112) Notes: added in MOABB version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000234` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset | | Author (year) | `Schreuder2015_ERP` | | Canonical | `BNCI2015_ERP` | | Importable as | `NM000234`, `Schreuder2015_ERP`, `BNCI2015_ERP` | | Year | 2011 | | Authors | Martijn Schreuder, Benjamin Blankertz, Michael Tangermann | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000234) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) | [Source URL](https://nemar.org/dataexplorer/detail/nm000234) | ## Technical Details - Subjects: 21 - Recordings: 42 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 250.0 - Duration (hours): 30.17912 - Pathology: Healthy - Modality: Auditory - 
Type: Attention - Size on disk: 4.6 GB - File count: 42 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000234](https://openneuro.org/datasets/nm000234) - NeMAR: [nm000234](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) ## API Reference Use the `NM000234` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset * **Study:** `nm000234` (NeMAR) * **Author (year):** `Schreuder2015_ERP` * **Canonical:** `BNCI2015_ERP` Also importable as: `NM000234`, `Schreuder2015_ERP`, `BNCI2015_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
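With the event coding listed above (Target=1, NonTarget=2) and the [0, 0.8] s trial interval at 250 Hz, epoching and ERP averaging reduce to simple array slicing. A hedged NumPy sketch — the array names, shapes, and event layout are assumptions for illustration, not the EEGDash API:

```python
import numpy as np

fs = 250                                  # Hz, dataset sampling rate
win = int(0.8 * fs)                       # [0, 0.8] s trial window -> 200 samples
rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1.0, size=(60, fs * 60))   # assumed continuous data (channels, samples)
events = np.array([[s, 1 if i % 6 == 0 else 2]   # (sample, code): Target=1, NonTarget=2
                   for i, s in enumerate(range(500, 13000, 250))])

def epoch(code):
    starts = events[events[:, 1] == code, 0]
    return np.stack([eeg[:, s:s + win] for s in starts])  # (n_trials, n_channels, win)

erp_diff = epoch(1).mean(axis=0) - epoch(2).mean(axis=0)  # target-minus-nontarget ERP
print(erp_diff.shape)
```

In practice the event samples would come from the recording's annotations rather than a synthetic grid as here.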
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000234](https://openneuro.org/datasets/nm000234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000234](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) ### Examples ```pycon >>> from eegdash.dataset import NM000234 >>> dataset = NM000234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000234) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000235: eeg dataset, 31 subjects *Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025* Access recordings and metadata through EEGDash. **Citation:** Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu (2025). *Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025*. 
Modality: eeg Subjects: 31 Recordings: 63 License: CC0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000235 dataset = NM000235(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000235(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000235( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000235, title = {Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, } ``` ## About This Dataset **Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025** Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025. **Dataset Overview** - **Code**: GuttmannFlury2025-MI - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-04861-9 ### View full README **Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025** Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025. 
**Dataset Overview** - **Code**: GuttmannFlury2025-MI - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-04861-9 - **Subjects**: 31 - **Sessions per subject**: 3 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 4] s - **File format**: BDF **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 66 - **Channel types**: eeg=64, eog=1, stim=1 - **Channel names**: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2 - **Montage**: standard_1005 - **Hardware**: Neuroscan Quik-Cap 65-ch, SynAmps2 - **Reference**: right mastoid (M1) - **Ground**: forehead - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {'highpass_time_constant_s': 10} **Participants** - **Number of subjects**: 31 - **Health status**: healthy - **Age**: mean=28.3, min=20.0, max=57.0 - **Gender distribution**: female=11, male=20 - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 7.5 s - **Study design**: Multi-paradigm BCI (MI/ME/SSVEP/P300). MI and ME: 2-class hand grasping, 40 trials/session, up to 3 sessions per subject. 
- **Feedback type**: none - **Stimulus type**: visual rectangle cue - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 2.0 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 2520 - **Trials context**: 63 sessions × 40 trials = 2520 (MI only, default) **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1038/s41597-025-04861-9 - **License**: CC0 - **Investigators**: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu - **Institution**: Shanghai Jiao Tong University - **Country**: CN - **Publication year**: 2025 **References** Guttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. 
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000235` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025 | | Author (year) | `GuttmannFlury2025_Eye_BCI` | | Canonical | `GuttmannFlury2025_MIME` | | Importable as | `NM000235`, `GuttmannFlury2025_Eye_BCI`, `GuttmannFlury2025_MIME` | | Year | 2025 | | Authors | Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu | | License | CC0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000235) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) | [Source URL](https://nemar.org/dataexplorer/detail/nm000235) | ## Technical Details - Subjects: 31 - Recordings: 63 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 6.996371388888889 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 4.6 GB - File count: 63 - Format: BIDS - License: CC0 - DOI: — - Source: nemar - OpenNeuro: [nm000235](https://openneuro.org/datasets/nm000235) - NeMAR: [nm000235](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) ## API Reference Use the `NM000235` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000235(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025 * **Study:** `nm000235` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye_BCI` * **Canonical:** `GuttmannFlury2025_MIME` Also importable as: `NM000235`, `GuttmannFlury2025_Eye_BCI`, `GuttmannFlury2025_MIME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
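For motor-imagery trials like these ([0, 4] s windows at 1000 Hz, left vs right hand), a classic first feature is mu-band (8-13 Hz) power over sensorimotor channels. A small NumPy sketch under assumed shapes — the injected 10 Hz rhythm and the channel index are synthetic, purely to illustrate the computation:

```python
import numpy as np

fs = 1000                                   # Hz, dataset sampling rate
rng = np.random.default_rng(2)
trial = rng.normal(0.0, 1.0, size=(64, 4 * fs))   # one assumed 4 s MI epoch (channels, samples)
# Inject a 10 Hz mu rhythm into an arbitrary "sensorimotor" channel for demonstration
trial[28] += 3.0 * np.sin(2 * np.pi * 10 * np.arange(4 * fs) / fs)

freqs = np.fft.rfftfreq(trial.shape[1], d=1 / fs)
psd = np.abs(np.fft.rfft(trial, axis=1)) ** 2 / trial.shape[1]
mu = (freqs >= 8) & (freqs <= 13)
mu_power = psd[:, mu].mean(axis=1)          # per-channel mean mu-band power
print(int(mu_power.argmax()))               # channel with strongest mu activity
```

On real data one would average such band powers across trials per class; mu-band desynchronization over contralateral motor cortex is the standard discriminative signature for left- vs right-hand imagery.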
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000235](https://openneuro.org/datasets/nm000235) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000235](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) ### Examples ```pycon >>> from eegdash.dataset import NM000235 >>> dataset = NM000235(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000235) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000236: eeg dataset, 21 subjects *Dataset of an EEG-based BCI experiment in Virtual Reality using P300* Access recordings and metadata through EEGDash. **Citation:** Grégoire Cattan, Anton Andreev, Pedro Luiz Coelho Rodrigues, Marco Congedo (2019). *Dataset of an EEG-based BCI experiment in Virtual Reality using P300*. 
Modality: eeg Subjects: 21 Recordings: 2520 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000236 dataset = NM000236(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000236(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000236( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000236, title = {Dataset of an EEG-based BCI experiment in Virtual Reality using P300}, author = {Grégoire Cattan and Anton Andreev and Pedro Luiz Coelho Rodrigues and Marco Congedo}, } ``` ## About This Dataset **Dataset of an EEG-based BCI experiment in Virtual Reality using P300** Dataset of an EEG-based BCI experiment in Virtual Reality using P300. **Dataset Overview** - **Code**: Cattan2019-VR - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.2605204](https://doi.org/10.5281/zenodo.2605204) ### View full README **Dataset of an EEG-based BCI experiment in Virtual Reality using P300** Dataset of an EEG-based BCI experiment in Virtual Reality using P300. 
**Dataset Overview** - **Code**: Cattan2019-VR - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.2605204](https://doi.org/10.5281/zenodo.2605204) - **Subjects**: 21 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s - **Runs per session**: 60 - **Session IDs**: PC, VR - **File format**: mat, csv - **Contributing labs**: GIPSA-lab **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Channel names**: Fp1, Fp2, Fc5, Fz, Fc6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: 10-10 - **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria) - **Software**: OpenVibe - **Reference**: right earlobe - **Ground**: AFZ - **Sensor type**: wet electrodes - **Line frequency**: 50.0 Hz - **Online filters**: no digital filter applied - **Cap manufacturer**: EasyCap - **Cap model**: EC20 **Participants** - **Number of subjects**: 21 - **Health status**: healthy - **Age**: mean=26.38, std=5.78, min=19.0, max=44.0 - **Gender distribution**: male=14, female=7 - **BCI experience**: varied gaming experience: some played video games occasionally, some played First Person Shooters; varied VR experience from none to repetitive **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: randomized session order (PC vs VR); limit eye blinks, head movements and face muscular contractions - **Feedback type**: visual - **Stimulus type**: flashing white crosses in 6x6 matrix - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline - **Training/test split**: False - **Instructions**: focus on a red-squared target symbol while groups of six symbols flash - **Stimulus presentation**: description=6x6 matrix of white crosses; groups of 6 symbols flash; each symbol flashes exactly 2 times per repetition, platform=Unity engine exported to PC and VR **HED Event Annotations** Schema: HED 
8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 1 - **Number of repetitions**: 12 **Data Structure** - **Trials**: {‘target’: 120, ‘non_target’: 600} - **Trials per class**: target=120, non_target=600 - **Blocks per session**: 12 - **Trials context**: per session: 12 blocks × 5 repetitions × 12 flashes per repetition (2 target, 10 non-target) **Preprocessing** - **Data state**: raw EEG with software tagging via USB (note: tagging introduces jitter and latency - mean 38ms in PC, 117ms in VR) - **Preprocessing applied**: False - **Notes**: mean tagging latency: ~38 ms in PC, ~117 ms in VR due to different hardware/software setup; these latencies should be used to correct ERPs **Signal Processing** - **Classifiers**: xDAWN, Riemannian - **Feature extraction**: Covariance/Riemannian, xDAWN **Cross-Validation** - **Evaluation type**: cross_session **BCI Application** - **Applications**: speller - **Environment**: PC and Virtual Reality (VRElegiant HMD with Huawei Ascend Mate 7 smartphone) - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: EEG recordings of 21 subjects doing a visual P300 experiment on PC and VR to compare BCI performance and user experience - **DOI**: 10.5281/zenodo.2605204 - **Associated paper DOI**: hal-02078533v3 - **License**: CC-BY-4.0 - **Investigators**: Grégoire Cattan, Anton Andreev, Pedro Luiz Coelho Rodrigues, Marco Congedo - **Senior author**: Marco Congedo - **Institution**: GIPSA-lab - **Department**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Address**: GIPSA-lab, 11 rue des 
Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.2605204](https://doi.org/10.5281/zenodo.2605204) - **Publication year**: 2019 - **Funding**: IHMTEK Company (Interaction Homme-Machine Technologie) - **Ethics approval**: Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) - **Acknowledgements**: promoted by the IHMTEK Company - **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface (BCI), Virtual Reality (VR), experiment **Abstract** The dataset contains electroencephalographic recordings of 21 subjects doing a visual P300 experiment on PC and in VR. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240–600 ms after stimulus onset. The experiment compares a P300-based BCI on PC and on a VR headset (passive HMD with smartphone) with respect to physiological, subjective and performance aspects. EEG was recorded with 16 electrodes. The experiment was conducted at GIPSA-lab in 2018. **Methodology** Two randomized sessions (PC and VR). Each session: 12 blocks of 5 repetitions. Each repetition: 12 flashes of groups of 6 symbols, ensuring each symbol flashes exactly 2 times. The target flashes twice per repetition; non-targets flash 10 times. Random feedback was given after each repetition (70% expected accuracy). P300 interface: 6x6 matrix of white flashing crosses with a red-squared target. VR used a passive HMD (VRElegiant) with a Huawei Mate 7 smartphone; the IMU was deactivated to prevent drift. The Unity engine was used for identical visual stimulation across PC and VR. **References** G. Cattan, A. Andreev, P. L. C. Rodrigues, and M. Congedo (2019). Dataset of an EEG-based BCI experiment in Virtual Reality and on a Personal Computer. Research Report, GIPSA-lab; IHMTEK. [https://doi.org/10.5281/zenodo.2605204](https://doi.org/10.5281/zenodo.2605204) *Added in version 0.5.0.* Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000236` | |----------------|----------------| | Title | Dataset of an EEG-based BCI experiment in Virtual Reality using P300 | | Author (year) | `Cattan2019_P300` | | Canonical | — | | Importable as | `NM000236`, `Cattan2019_P300` | | Year | 2019 | | Authors | Grégoire Cattan, Anton Andreev, Pedro Luiz Coelho Rodrigues, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000236) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) | [Source URL](https://nemar.org/dataexplorer/detail/nm000236) | ## Technical Details - Subjects: 21 - Recordings: 2520 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 4.10 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 373.3 MB - File count: 2520 - Format: BIDS - License: CC-BY-4.0 - DOI: 
— - Source: nemar - OpenNeuro: [nm000236](https://openneuro.org/datasets/nm000236) - NeMAR: [nm000236](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) ## API Reference Use the `NM000236` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000236(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of an EEG-based BCI experiment in Virtual Reality using P300 * **Study:** `nm000236` (NeMAR) * **Author (year):** `Cattan2019_P300` * **Canonical:** — Also importable as: `NM000236`, `Cattan2019_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 2520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000236](https://openneuro.org/datasets/nm000236) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000236](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) ### Examples ```pycon >>> from eegdash.dataset import NM000236 >>> dataset = NM000236(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000236) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000237: eeg dataset, 20 subjects *7-day motor imagery BCI EEG dataset from Zhou et al 2021* Access recordings and metadata through EEGDash. **Citation:** Qing Zhou, Jiafan Lin, Lin Yao, Yueming Wang, Yan Han, Kedi Xu (2021). *7-day motor imagery BCI EEG dataset from Zhou et al 2021*. 
Modality: eeg Subjects: 20 Recordings: 833 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000237 dataset = NM000237(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000237(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000237( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000237, title = {7-day motor imagery BCI EEG dataset from Zhou et al 2021}, author = {Qing Zhou and Jiafan Lin and Lin Yao and Yueming Wang and Yan Han and Kedi Xu}, } ``` ## About This Dataset **7-day motor imagery BCI EEG dataset from Zhou et al 2021** 7-day motor imagery BCI EEG dataset from Zhou et al 2021. **Dataset Overview** - **Code**: Zhou2020 - **Paradigm**: imagery - **DOI**: 10.3389/fnhum.2021.701091 ### View full README **7-day motor imagery BCI EEG dataset from Zhou et al 2021** 7-day motor imagery BCI EEG dataset from Zhou et al 2021. 
**Dataset Overview** - **Code**: Zhou2020 - **Paradigm**: imagery - **DOI**: 10.3389/fnhum.2021.701091 - **Subjects**: 20 - **Sessions per subject**: 7 - **Events**: left_hand=1, right_hand=2, feet=3, rest=4 - **Trial interval**: [0, 5] s - **Runs per session**: 6 - **File format**: NPZ - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 500.0 Hz - **Number of channels**: 41 - **Channel types**: eeg=41 - **Channel names**: F3, F1, Fz, F2, F4, FC5, FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6 - **Montage**: standard_1005 - **Hardware**: Neuroscan SynAmps2 - **Reference**: vertex (Cz) - **Ground**: AFz - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {‘bandpass’: [0.5, 100], ‘notch_hz’: 50} **Participants** - **Number of subjects**: 20 - **Health status**: healthy - **Age**: mean=23.2, std=1.47, min=21, max=27 - **Gender distribution**: female=9, male=11 - **Handedness**: right-handed - **BCI experience**: mixed - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 4 - **Class labels**: left_hand, right_hand, feet, rest - **Trial duration**: 5.0 s - **Study design**: 7-day longitudinal MI-BCI study without feedback training. 
4 classes: left hand, right hand, both feet, idle - **Feedback type**: none - **Stimulus type**: arrow cues - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand feet ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Foot rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand, feet, rest - **Imagery duration**: 5.0 s **Data Structure** - **Trials**: 33600 - **Trials context**: 20 subjects x 7 sessions x 6 runs x 40 trials = 33600 **Signal Processing** - **Classifiers**: SVM - **Feature extraction**: CSP - **Frequency bands**: classification=[8.0, 30.0] Hz - **Spatial filters**: CSP **Cross-Validation** - **Method**: 10-fold - **Folds**: 10 - **Evaluation type**: within_session **BCI Application** - **Applications**: research - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.3389/fnhum.2021.701091 - **License**: CC-BY-4.0 - **Investigators**: Qing Zhou, Jiafan Lin, Lin Yao, Yueming Wang, Yan Han, Kedi Xu - **Institution**: Zhejiang University - **Country**: CN - **Repository**: Zenodo - **Data URL**: [https://zenodo.org/records/18988317](https://zenodo.org/records/18988317) - **Publication year**: 2021 **References** Zhou, Q., Lin, J., Yao, L., Wang, Y., Han, Y., Xu, K. 
(2021). Relative Power Correlates With the Decoding Performance of Motor Imagery Both Across Time and Subjects. Frontiers in Human Neuroscience, 15, 701091. [https://doi.org/10.3389/fnhum.2021.701091](https://doi.org/10.3389/fnhum.2021.701091) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000237` | |----------------|----------------| | Title | 7-day motor imagery BCI EEG dataset from Zhou et al 2021 | | Author (year) | `Zhou2021` | | Canonical | — | | Importable as | `NM000237`, `Zhou2021` | | Year | 2021 | | Authors | Qing Zhou, Jiafan Lin, Lin Yao, Yueming Wang, Yan Han, Kedi Xu | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000237) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) | [Source URL](https://nemar.org/dataexplorer/detail/nm000237) | ## Technical Details - Subjects: 20 - Recordings: 833 - Tasks: 1 - Channels: 41 (506), 26 (327) - Sampling rate (Hz): 500.0 - Duration (hours): 90.07 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 16.0 GB - File count: 833 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000237](https://openneuro.org/datasets/nm000237) - NeMAR: [nm000237](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) ## API Reference Use the `NM000237` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000237(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 7-day motor imagery BCI EEG dataset from Zhou et al 2021 * **Study:** `nm000237` (NeMAR) * **Author (year):** `Zhou2021` * **Canonical:** — Also importable as: `NM000237`, `Zhou2021`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 833; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
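The data-structure notes above report 20 subjects × 7 sessions × 6 runs × 40 trials = 33600 trials, evaluated with within-session 10-fold cross-validation. A quick sketch of how those numbers fit together; the interleaved fold assignment is illustrative, not the authors' exact split:

```python
subjects, sessions, runs, trials_per_run = 20, 7, 6, 40

# Total trial count reported in the dataset notes.
total_trials = subjects * sessions * runs * trials_per_run
print(total_trials)  # 33600

# Within-session CV operates on one session's trials: 6 runs x 40 trials.
per_session = runs * trials_per_run  # 240
folds = [list(range(i, per_session, 10)) for i in range(10)]
print(len(folds), len(folds[0]))  # 10 24
```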
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000237](https://openneuro.org/datasets/nm000237) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000237](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) ### Examples ```pycon >>> from eegdash.dataset import NM000237 >>> dataset = NM000237(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000237) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000238: eeg dataset, 87 subjects *SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants* Access recordings and metadata through EEGDash. **Citation:** Bernd Accou, Lies Bollens, Marlies Gillis, Wendy Verheijen, Hugo Van hamme, Tom Francart (2024). *SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants*. [10.48804/K3VSND](https://doi.org/10.48804/K3VSND) Modality: eeg Subjects: 87 Recordings: 4088 License: Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) for the EEG data. Stimuli can only be used for non-commercial purposes. 
Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000238 dataset = NM000238(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000238(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000238( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000238, title = {SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants}, author = {Bernd Accou and Lies Bollens and Marlies Gillis and Wendy Verheijen and Hugo Van hamme and Tom Francart}, doi = {10.48804/K3VSND}, url = {https://doi.org/10.48804/K3VSND}, } ``` ## About This Dataset **IMPORTANT — RESTRICTED SUBJECTS EXCLUDED FROM NEMAR RE-HOST** IMPORTANT — 5 of the 85 original subjects (sub-019, sub-020, sub-021, sub-022, sub-026) are EXCLUDED from this NEMAR re-host because their raw EEG files are access-restricted on the KU Leuven Dataverse (HTTP 403 on download without a data-use agreement). Researchers who need these subjects should email [sparrkulee@kuleuven.be](mailto:sparrkulee@kuleuven.be) to request access and download the data directly from [https://rdr.kuleuven.be/dataset.xhtml?persistentId=doi:10.48804/K3VSND](https://rdr.kuleuven.be/dataset.xhtml?persistentId=doi:10.48804/K3VSND) (DOI 10.48804/K3VSND). The re-host therefore contains 80 of the original 85 subjects, covering all 11 session types (shortstories01, varyingStories01..10). 
Excluded subjects: sub-019, sub-020, sub-021, sub-022, sub-026 **Cohort demographics** ### View full README **IMPORTANT — RESTRICTED SUBJECTS EXCLUDED FROM NEMAR RE-HOST** IMPORTANT — 5 of the 85 original subjects (sub-019, sub-020, sub-021, sub-022, sub-026) are EXCLUDED from this NEMAR re-host because their raw EEG files are access-restricted on the KU Leuven Dataverse (HTTP 403 on download without a data-use agreement). Researchers who need these subjects should email [sparrkulee@kuleuven.be](mailto:sparrkulee@kuleuven.be) to request access and download the data directly from [https://rdr.kuleuven.be/dataset.xhtml?persistentId=doi:10.48804/K3VSND](https://rdr.kuleuven.be/dataset.xhtml?persistentId=doi:10.48804/K3VSND) (DOI 10.48804/K3VSND). The re-host therefore contains 80 of the original 85 subjects, covering all 11 session types (shortstories01, varyingStories01..10). Excluded subjects: sub-019, sub-020, sub-021, sub-022, sub-026 **Cohort demographics** Cohort demographics (from Accou et al., Data 2024, 9, 94, Section 2.1): 85 original participants, 74 female / 11 male, aged 21.4 ± 1.9 years (mean ± SD), inclusion window 18-30 years, all normal-hearing (≤30 dB HL, 125-8000 Hz), native Dutch/Flemish speakers. Per-subject numeric ages are not published by the SparrKULee authors for privacy reasons; `participants.tsv` only ships 3-year binned ages in the `age_range` column (see `participants.json` for details). **How to cite** Please cite the original SparrKULee data descriptor when using this dataset: Accou, B., Bollens, L., Gillis, M., Verheijen, W., Van hamme, H., & Francart, T. (2024). SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants. Data, 9(8), 94. 
[https://doi.org/10.3390/data9080094](https://doi.org/10.3390/data9080094) **Where extra metadata lives (after NEMAR preparation)** > * `/code/task-listeningActive_eeg.json` — full recording-level EEG metadata (SamplingFrequency, Manufacturer, EEGChannelCount, EEGReference, PowerLineFrequency, …). Relocated from the dataset root because the validator does not match the orphan top-level sidecar against the `.bdf.gz` data files. > * `/code/remarks/` — per-session free-form recording notes (`.txt` and `.docx`) originally placed under `sub-XX/ses-YY/remarks/`. Relocated so the validator does not see an arbitrary `remarks/` folder inside BIDS session directories. > * `/code/convert_accou2023.py` — the exact script that was run to produce this NEMAR re-host. **README** \_\_SparrKULee_\_: A Speech-evoked Auditory Response Repository of the KU Leuven, containing EEG of 85 participants **Overview** An overview of the dataset including details about the filetypes, methods and technical validation can be found in [our paper]() **Notes** Code to download, preprocess and validate the data can be found at [https://github.com/exporl/auditory-eeg-dataset](https://github.com/exporl/auditory-eeg-dataset). Due to mistakes during recording, following recordings do not have an adequate number of triggers and can therefore not be accurately aligned with the stimulus: 1. sub-006/ses-shortstories01/eeg/sub-006_ses-shortstories01_task-listeningActive_run-06_eeg.bdf.gz 2. sub-017/ses-shortstories01/eeg/sub-017_ses-shortstories01_task-listeningActive_run-03_eeg.bdf.gz 3. 
sub-048/ses-varyingStories05/eeg/sub-048_ses-varyingStories05_task-listeningActive_run-04_eeg.bdf.gz ## Dataset Information | Dataset ID | `NM000238` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants | | Author (year) | `Accou2024` | | Canonical | — | | Importable as | `NM000238`, `Accou2024` | | Year | 2024 | | Authors | Bernd Accou, Lies Bollens, Marlies Gillis, Wendy Verheijen, Hugo Van hamme, Tom Francart | | License | Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) for the EEG data. Stimuli can only be used for non-commercial purposes. | | Citation / DOI | [doi:10.48804/K3VSND](https://doi.org/10.48804/K3VSND) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000238) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) | [Source URL](https://github.com/nemarDatasets/nm000238) | ### Copy-paste BibTeX ```bibtex @dataset{nm000238, title = {SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants}, author = {Bernd Accou and Lies Bollens and Marlies Gillis and Wendy Verheijen and Hugo Van hamme and Tom Francart}, doi = {10.48804/K3VSND}, url = {https://doi.org/10.48804/K3VSND}, } ``` ## Technical Details - Subjects: 87 - Recordings: 4088 - Tasks: 366 - Channels: 64 - Sampling rate (Hz): 8192 - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: 4088 - Format: BIDS - License: Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) for the EEG data. Stimuli can only be used for non-commercial purposes. 
- DOI: doi:10.48804/K3VSND - Source: nemar - OpenNeuro: [nm000238](https://openneuro.org/datasets/nm000238) - NeMAR: [nm000238](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) ## API Reference Use the `NM000238` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000238(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants * **Study:** `nm000238` (NeMAR) * **Author (year):** `Accou2024` * **Canonical:** — Also importable as: `NM000238`, `Accou2024`. Modality: `eeg`. Subjects: 87; recordings: 4088; tasks: 366. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
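The README above lists three recordings whose triggers cannot be aligned with the stimulus. When iterating over downloaded files, one simple way to skip them is a denylist keyed on the BIDS basename — a sketch using the paths from the README; the second name in the usage example is illustrative:

```python
# Recordings the SparrKULee README flags as lacking adequate triggers
# for stimulus alignment (BIDS basenames taken from the README).
BAD_RECORDINGS = {
    "sub-006_ses-shortstories01_task-listeningActive_run-06",
    "sub-017_ses-shortstories01_task-listeningActive_run-03",
    "sub-048_ses-varyingStories05_task-listeningActive_run-04",
}

def usable(bids_basename: str) -> bool:
    """Return False for recordings that cannot be aligned with the stimulus."""
    return bids_basename not in BAD_RECORDINGS

names = [
    "sub-006_ses-shortstories01_task-listeningActive_run-06",
    "sub-006_ses-shortstories01_task-listeningActive_run-05",  # illustrative
]
print([usable(n) for n in names])  # [False, True]
```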
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000238](https://openneuro.org/datasets/nm000238) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000238](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) DOI: [https://doi.org/10.48804/K3VSND](https://doi.org/10.48804/K3VSND) ### Examples ```pycon >>> from eegdash.dataset import NM000238 >>> dataset = NM000238(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000238) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000239: eeg dataset, 16 subjects *P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)* Access recordings and metadata through EEGDash. **Citation:** Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Sergio Pérez-Velasco, Diego Marcos-Martínez, Selene Moreno-Calderón, Roberto Hornero (2023). *P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)*. 
Modality: eeg Subjects: 16 Recordings: 640 License: CC-BY-NC-SA-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000239 dataset = NM000239(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000239(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000239( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000239, title = {P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)}, author = {Víctor Martínez-Cagigal and Eduardo Santamaría-Vázquez and Sergio Pérez-Velasco and Diego Marcos-Martínez and Selene Moreno-Calderón and Roberto Hornero}, } ``` ## About This Dataset **P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)** P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023) **Dataset Overview** - **Code**: MartinezCagigal2023Parycvep - **Paradigm**: cvep - **DOI**: [https://doi.org/10.71569/025s-eq10](https://doi.org/10.71569/025s-eq10) ### View full README **P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023)** P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. 
(2023) **Dataset Overview** - **Code**: MartinezCagigal2023Parycvep - **Paradigm**: cvep - **DOI**: [https://doi.org/10.71569/025s-eq10](https://doi.org/10.71569/025s-eq10) - **Subjects**: 16 - **Sessions per subject**: 5 - **Events**: 0.0=100, 1.0=101, 2.0=102, 3.0=103, 4.0=104, 5.0=105, 6.0=106, 7.0=107, 8.0=108, 9.0=109, 10.0=110 - **Trial interval**: (0, 1) s - **Runs per session**: 8 **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1005 - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 16 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: cvep - **Number of classes**: 11 - **Class labels**: 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
0.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_0_0

1.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_1_0

2.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_2_0

3.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_3_0

4.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_4_0

5.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_5_0

6.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_6_0

7.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_7_0

8.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_8_0

9.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_9_0

10.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_10_0
```

**Documentation** - **DOI**: 10.71569/025s-eq10 - **Associated paper DOI**: 10.1016/j.eswa.2023.120815 - **License**: CC-BY-NC-SA-4.0 - **Investigators**: Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Sergio Pérez-Velasco, Diego Marcos-Martínez, Selene Moreno-Calderón, Roberto Hornero - **Senior author**: Roberto Hornero - **Contact**: [victor.martinez@gib.tel.uva.es](mailto:victor.martinez@gib.tel.uva.es) - **Institution**: University of Valladolid - **Department**: Biomedical Engineering Group, ETSIT - **Address**: Paseo de Belén, 15, 47011, Valladolid, Spain - **Country**: ES - **Repository**: U Valladolid - **Data URL**: [https://doi.org/10.71569/025s-eq10](https://doi.org/10.71569/025s-eq10) - **Publication year**: 2023 - **Funding**: Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación and ERDF (TED2021-129915B-I00, RTC2019-007350-1, PID2020-115468RB-I00); CIBER-BBN through Instituto de Salud Carlos III - **Ethics approval**: Approved by the local ethics committee; all participants provided informed consent - **Acknowledgements**: This study was partially funded by Ministerio de Ciencia e Innovación/Agencia Estatal de Investigación and ERDF, and CIBER-BBN through Instituto de Salud Carlos III. - **How to acknowledge**: Please cite: Martínez-Cagigal et al. (2023). Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. Expert Systems With Applications, 232, 120815. [https://doi.org/10.1016/j.eswa.2023.120815](https://doi.org/10.1016/j.eswa.2023.120815) **References** Martínez-Cagigal, V., Santamaría-Vázquez, E., Pérez-Velasco, S., Marcos-Martínez, D., Moreno-Calderón, S., & Hornero, R. (2023). Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. *Expert Systems with Applications, 232*, 120815.
[https://doi.org/10.1016/j.eswa.2023.120815](https://doi.org/10.1016/j.eswa.2023.120815) Martínez-Cagigal, V., Thielen, J., Santamaría-Vázquez, E., Pérez-Velasco, S., Desain, P., & Hornero, R. (2021). Brain-computer interfaces based on code-modulated visual evoked potentials (c-VEP): A literature review. *Journal of Neural Engineering*, 18(6), 061002. [https://doi.org/10.1088/1741-2552/ac38cf](https://doi.org/10.1088/1741-2552/ac38cf) Martínez-Cagigal, V. (2025). Dataset: Non-binary m-sequences for more comfortable brain-computer interfaces based on c-VEPs. [https://doi.org/10.35376/10324/70945](https://doi.org/10.35376/10324/70945) Santamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain-computer interface and cognitive neuroscience research. *Computer Methods and Programs in Biomedicine, 230*, 107357. [https://doi.org/10.1016/j.cmpb.2023.107357](https://doi.org/10.1016/j.cmpb.2023.107357) **Notes** Although the dataset was recorded in a single session, each condition is stored as a separate session to match the MOABB structure. Within each session, eight runs are available (six for training, two for testing). Added in version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000239` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023) | | Author (year) | `MartinezCagigal2023` | | Canonical | — | | Importable as | `NM000239`, `MartinezCagigal2023` | | Year | 2023 | | Authors | Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Sergio Pérez-Velasco, Diego Marcos-Martínez, Selene Moreno-Calderón, Roberto Hornero | | License | CC-BY-NC-SA-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000239) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) | [Source URL](https://nemar.org/dataexplorer/detail/nm000239) | ## Technical Details - Subjects: 16 - Recordings: 640 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 (608), 600.0 (32) - Duration (hours): 15.095158796296298 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 783.0 MB - File count: 640 - Format: BIDS - License: CC-BY-NC-SA-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000239](https://openneuro.org/datasets/nm000239) - NeMAR: [nm000239](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) ## API Reference Use the `NM000239` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000239(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. 
(2023) * **Study:** `nm000239` (NeMAR) * **Author (year):** `MartinezCagigal2023` * **Canonical:** — Also importable as: `NM000239`, `MartinezCagigal2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 640; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000239](https://openneuro.org/datasets/nm000239) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000239](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) ### Examples ```pycon >>> from eegdash.dataset import NM000239 >>> dataset = NM000239(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. 
* **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000239) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000240: eeg dataset, 16 subjects *Checkerboard m-sequence-based c-VEP dataset from* Access recordings and metadata through EEGDash. **Citation:** Álvaro Fernández-Rodríguez, Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Ricardo Ron-Angevin, Roberto Hornero (2025). *Checkerboard m-sequence-based c-VEP dataset from*. Modality: eeg Subjects: 16 Recordings: 383 License: CC-BY-NC-SA-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000240 dataset = NM000240(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000240(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000240( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000240, title = {Checkerboard m-sequence-based c-VEP dataset from}, author = {Álvaro Fernández-Rodríguez and Víctor Martínez-Cagigal and Eduardo Santamaría-Vázquez and Ricardo Ron-Angevin and Roberto Hornero}, } ``` ## About This Dataset **Checkerboard m-sequence-based c-VEP dataset from** Checkerboard m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2025) and Fernández-Rodríguez et al. (2023). **Dataset Overview** - **Code**: MartinezCagigal2023Checkercvep - **Paradigm**: cvep - **DOI**: [https://doi.org/10.71569/7c67-v596](https://doi.org/10.71569/7c67-v596) ### View full README **Checkerboard m-sequence-based c-VEP dataset from** Checkerboard m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2025) and Fernández-Rodríguez et al. (2023). **Dataset Overview** - **Code**: MartinezCagigal2023Checkercvep - **Paradigm**: cvep - **DOI**: [https://doi.org/10.71569/7c67-v596](https://doi.org/10.71569/7c67-v596) - **Subjects**: 16 - **Sessions per subject**: 8 - **Events**: 0.0=100, 1.0=101 - **Trial interval**: (0, 1) s - **Runs per session**: 3 **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1005 - **Line frequency**: 50.0 Hz **Participants** - **Number of subjects**: 16 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: cvep - **Number of classes**: 2 - **Class labels**: 0.0, 1.0 **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
0.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_0_0

1.0
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Label/intensity_1_0
```

**Documentation** - **DOI**: 10.71569/7c67-v596 - **Associated paper DOI**: 10.3389/fnhum.2023.1288438 - **License**: CC-BY-NC-SA-4.0 - **Investigators**: Álvaro Fernández-Rodríguez,
Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Ricardo Ron-Angevin, Roberto Hornero - **Senior author**: Roberto Hornero - **Contact**: [victor.martinez@gib.tel.uva.es](mailto:victor.martinez@gib.tel.uva.es) - **Institution**: University of Valladolid - **Department**: Biomedical Engineering Group, ETSIT - **Address**: Paseo de Belén, 15, 47011, Valladolid, Spain - **Country**: ES - **Repository**: U Valladolid - **Data URL**: [https://doi.org/10.71569/7c67-v596](https://doi.org/10.71569/7c67-v596) - **Publication year**: 2023 - **Ethics approval**: Approved by the local ethics committee; all participants provided informed consent - **How to acknowledge**: Please cite: Fernández-Rodríguez et al. (2023). Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. Frontiers in Human Neuroscience, 17, 1288438. [https://doi.org/10.3389/fnhum.2023.1288438](https://doi.org/10.3389/fnhum.2023.1288438) **References** Martínez-Cagigal, V. (2025). Dataset: Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. [https://doi.org/10.71569/7c67-v596](https://doi.org/10.71569/7c67-v596) Fernández-Rodríguez, Á., Martínez-Cagigal, V., Santamaría-Vázquez, E., Ron-Angevin, R., & Hornero, R. (2023). Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience. Frontiers in Human Neuroscience, 17, 1288438. [https://doi.org/10.3389/fnhum.2023.1288438](https://doi.org/10.3389/fnhum.2023.1288438) Santamaría-Vázquez, E., Martínez-Cagigal, V., Marcos-Martínez, D., Rodríguez-González, V., Pérez-Velasco, S., Moreno-Calderón, S., & Hornero, R. (2023). MEDUSA©: A novel Python-based software ecosystem to accelerate brain–computer interface and cognitive neuroscience research. Computer Methods and Programs in Biomedicine, 230, 107357.
[https://doi.org/10.1016/j.cmpb.2023.107357](https://doi.org/10.1016/j.cmpb.2023.107357) **Notes** Although the dataset was recorded in a single session, each condition is stored as a separate session to match the MOABB structure. Within each session, three runs are available (two for training, one for testing). Added in version 1.2.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000240` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Checkerboard m-sequence-based c-VEP dataset from | | Author (year) | `FernandezRodriguez2025` | | Canonical | `FernandezRodriguez2023` | | Importable as | `NM000240`, `FernandezRodriguez2025`, `FernandezRodriguez2023` | | Year | 2025 | | Authors | Álvaro Fernández-Rodríguez, Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez, Ricardo Ron-Angevin, Roberto Hornero | | License | CC-BY-NC-SA-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000240) |
[NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) | [Source URL](https://nemar.org/dataexplorer/detail/nm000240) | ## Technical Details - Subjects: 16 - Recordings: 383 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 13.408473307291668 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 637.7 MB - File count: 383 - Format: BIDS - License: CC-BY-NC-SA-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000240](https://openneuro.org/datasets/nm000240) - NeMAR: [nm000240](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) ## API Reference Use the `NM000240` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000240(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Checkerboard m-sequence-based c-VEP dataset from * **Study:** `nm000240` (NeMAR) * **Author (year):** `FernandezRodriguez2025` * **Canonical:** `FernandezRodriguez2023` Also importable as: `NM000240`, `FernandezRodriguez2025`, `FernandezRodriguez2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 383; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000240](https://openneuro.org/datasets/nm000240) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000240](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) ### Examples ```pycon >>> from eegdash.dataset import NM000240 >>> dataset = NM000240(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000240) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000241: ieeg dataset, 2 subjects *CerebroVoice: Bilingual sEEG Speech Dataset* Access recordings and metadata through EEGDash. **Citation:** Xueyi Zhang (2019). *CerebroVoice: Bilingual sEEG Speech Dataset*. 
[10.5281/zenodo.13332808](https://doi.org/10.5281/zenodo.13332808) Modality: ieeg Subjects: 2 Recordings: 18 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000241 dataset = NM000241(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000241(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000241( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000241, title = {CerebroVoice: Bilingual sEEG Speech Dataset}, author = {Xueyi Zhang}, doi = {10.5281/zenodo.13332808}, url = {https://doi.org/10.5281/zenodo.13332808}, } ``` ## About This Dataset **CerebroVoice: Bilingual sEEG Speech Dataset** **Overview** Intracranial EEG (sEEG) recordings from 2 epilepsy patients during bilingual speech tasks (Mandarin Chinese, English, and digit reading). Recorded at 1000 Hz with Nihon Kohden EEG-1200, depth electrodes (platinum-iridium). Data distributed as preprocessed NPY derivatives: - LFS: Low-frequency signal - HGA: High-gamma activity - BBS: Broadband signal Tasks: Chinese reading, English reading, digit reading Subjects: SUB1 (114 channels post-filtering), SUB2 (158 channels) Duration: ~73 min (SUB1), ~76 min (SUB2) Source: Zenodo (doi:10.5281/zenodo.13332808) License: CC BY 4.0 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). 
MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `NM000241` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CerebroVoice: Bilingual sEEG Speech Dataset | | Author (year) | `Zhang2019` | | Canonical | — | | Importable as | `NM000241`, `Zhang2019` | | Year | 2019 | | Authors | Xueyi Zhang | | License | CC BY 4.0 | | Citation / DOI | [doi:10.5281/zenodo.13332808](https://doi.org/10.5281/zenodo.13332808) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000241) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) | [Source URL](https://nemar.org/dataexplorer/detail/nm000241) | ### Copy-paste BibTeX ```bibtex @dataset{nm000241, title = {CerebroVoice: Bilingual sEEG Speech Dataset}, author = {Xueyi Zhang}, doi = {10.5281/zenodo.13332808}, url = {https://doi.org/10.5281/zenodo.13332808}, } ``` ## Technical Details - Subjects: 2 - Recordings: 18 - Tasks: 9 - Channels: 158 (6), 114 (6), 228 (3), 316 (3) - Sampling rate (Hz): 200 - Duration (hours): 3.836216666666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.9 GB - File count: 18 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.5281/zenodo.13332808 - Source: nemar - OpenNeuro: [nm000241](https://openneuro.org/datasets/nm000241) - NeMAR: [nm000241](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) ## API
Reference Use the `NM000241` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CerebroVoice: Bilingual sEEG Speech Dataset * **Study:** `nm000241` (NeMAR) * **Author (year):** `Zhang2019` * **Canonical:** — Also importable as: `NM000241`, `Zhang2019`. Modality: `ieeg`. Subjects: 2; recordings: 18; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
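Since each item is a recording whose metadata are exposed via `dataset.description`, a quick sanity check after a query is to tabulate which subjects cover which tasks (this page lists 2 subjects across 9 tasks). A minimal sketch with made-up records; the field names `subject` and `task` mirror the query fields used in the Quickstart, and the sample task labels are hypothetical:

```python
from collections import defaultdict

# Illustration only: hypothetical per-recording metadata records shaped
# like the per-recording entries this page describes. Real field names
# and values come from dataset.description.
records = [
    {"subject": "01", "task": "reading_chinese"},
    {"subject": "02", "task": "reading_chinese"},
    {"subject": "01", "task": "reading_digits"},
]

# Tabulate which subjects cover which tasks.
subjects_by_task = defaultdict(set)
for rec in records:
    subjects_by_task[rec["task"]].add(rec["subject"])

for task in sorted(subjects_by_task):
    print(task, sorted(subjects_by_task[task]))
```

The same loop works on any EEGDash dataset, since every page in this corpus follows the one-item-per-recording convention.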
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000241](https://openneuro.org/datasets/nm000241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000241](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) DOI: [https://doi.org/10.5281/zenodo.13332808](https://doi.org/10.5281/zenodo.13332808) ### Examples ```pycon >>> from eegdash.dataset import NM000241 >>> dataset = NM000241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000241) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # NM000242: eeg dataset, 22 subjects *Visual imagery EEG dataset from Gao et al 2026* Access recordings and metadata through EEGDash. **Citation:** Jing’ao Gao, Yao Liu, Zhengshuang Li, Kaixin Huang, Fan Wang, Jiaping Xu, Lei Zhao, Tianwen Li, Yunfa Fu (2026). *Visual imagery EEG dataset from Gao et al 2026*. 
Modality: eeg Subjects: 22 Recordings: 125 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000242 dataset = NM000242(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000242(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000242( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000242, title = {Visual imagery EEG dataset from Gao et al 2026}, author = {Jing'ao Gao and Yao Liu and Zhengshuang Li and Kaixin Huang and Fan Wang and Jiaping Xu and Lei Zhao and Tianwen Li and Yunfa Fu}, } ``` ## About This Dataset **Visual imagery EEG dataset from Gao et al 2026** Visual imagery EEG dataset from Gao et al 2026. **Dataset Overview** - **Code**: Gao2026 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-06512-5 ### View full README **Visual imagery EEG dataset from Gao et al 2026** Visual imagery EEG dataset from Gao et al 2026. 
**Dataset Overview** - **Code**: Gao2026 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-06512-5 - **Subjects**: 22 - **Sessions per subject**: 2 - **Events**: dog=1, bird=2, fish=3, pentagram=11, square=12, circle=13, scissor=21, watch=22, cup=23, chair=24 - **Trial interval**: [0, 4] s - **Runs per session**: 3 - **File format**: BDF **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Montage**: standard_1005 - **Hardware**: Neuracle NeuSenW32 - **Reference**: CPz - **Ground**: AFz - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {‘sampling_rate’: 1000} **Participants** - **Number of subjects**: 22 - **Health status**: healthy - **Age**: min=20.0, max=23.0 - **Gender distribution**: male=17, female=5 - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 10 - **Class labels**: dog, bird, fish, pentagram, square, circle, scissor, watch, cup, chair - **Trial duration**: 4.0 s - **Study design**: Visual imagery of animals, figures, and objects with simultaneous 32-channel EEG recording - **Feedback type**: none - **Stimulus type**: image cues - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
dog
├─ Sensory-event
└─ Label/dog

bird
├─ Sensory-event
└─ Label/bird

fish
├─ Sensory-event
└─ Label/fish

pentagram
├─ Sensory-event
└─ Label/pentagram

square
├─ Sensory-event
└─ Label/square

circle
├─ Sensory-event
└─ Label/circle

scissor
├─ Sensory-event
└─ Label/scissor

watch
├─ Sensory-event
└─ Label/watch

cup
├─ Sensory-event
└─ Label/cup

chair
├─ Sensory-event
└─ Label/chair
```

**Paradigm-Specific
Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: dog, bird, fish, pentagram, square, circle, scissor, watch, cup, chair **Data Structure** - **Trials**: 16800 - **Trials context**: 20 subjects x 2 sessions x 400 trials + 2 subjects x 1 session x 400 trials = 16800 **Signal Processing** - **Classifiers**: EEGNet, CSP+KNN - **Feature extraction**: CSP, deep_learning - **Frequency bands**: bandpass=[5.0, 30.0] Hz - **Spatial filters**: CSP, CAR **Cross-Validation** - **Method**: train-test split - **Evaluation type**: within_subject **BCI Application** - **Applications**: human_machine_interaction - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Research **Documentation** - **DOI**: 10.1038/s41597-025-06512-5 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Jing’ao Gao, Yao Liu, Zhengshuang Li, Kaixin Huang, Fan Wang, Jiaping Xu, Lei Zhao, Tianwen Li, Yunfa Fu - **Institution**: Kunming University of Science and Technology - **Country**: CN - **Repository**: Figshare - **Data URL**: [https://doi.org/10.6084/m9.figshare.30227503.v1](https://doi.org/10.6084/m9.figshare.30227503.v1) - **Publication year**: 2026 **References** Gao, J., Liu, Y., Li, Z., Huang, K., Wang, F., Xu, J., Zhao, L., Li, T., & Fu, Y. (2026). An EEG Dataset for Visual Imagery-Based Brain-Computer Interface. Scientific Data. [https://doi.org/10.1038/s41597-025-06512-5](https://doi.org/10.1038/s41597-025-06512-5) Gao, J. et al. (2026). EEG Dataset for Visual Imagery. Figshare. [https://doi.org/10.6084/m9.figshare.30227503.v1](https://doi.org/10.6084/m9.figshare.30227503.v1) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. 
Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000242` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Visual imagery EEG dataset from Gao et al 2026 | | Author (year) | `Gao2026_Visual_imagery_et` | | Canonical | `Gao2026` | | Importable as | `NM000242`, `Gao2026_Visual_imagery_et`, `Gao2026` | | Year | 2026 | | Authors | Jing’ao Gao, Yao Liu, Zhengshuang Li, Kaixin Huang, Fan Wang, Jiaping Xu, Lei Zhao, Tianwen Li, Yunfa Fu | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000242) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) | [Source URL](https://nemar.org/dataexplorer/detail/nm000242) | ## Technical Details - Subjects: 22 - Recordings: 125 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 1000.0 - Duration (hours): 98.47829861111111 - Pathology: Healthy - Modality: Visual - Type: Other - Size on disk: 31.7 GB - File count: 125 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000242](https://openneuro.org/datasets/nm000242) - NeMAR: [nm000242](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) ## API Reference Use the `NM000242` class to access this dataset programmatically. 
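As a side note on working with the data itself: with the 4 s trial interval and 1000 Hz sampling rate listed above, each trial spans 4000 samples, so fixed-length epochs can be cut from a continuous array by simple slicing. A numpy sketch on synthetic data with hypothetical event-onset samples (in practice, onsets come from the recording's annotations):

```python
import numpy as np

sfreq = 1000.0           # Hz, from the dataset metadata
tmin, tmax = 0.0, 4.0    # s, the documented trial interval
n_channels = 32

# Synthetic continuous recording and hypothetical event onsets (in samples).
rng = np.random.default_rng(0)
continuous = rng.standard_normal((n_channels, 60 * int(sfreq)))
onsets = np.array([1000, 8000, 15000])

n_times = int((tmax - tmin) * sfreq)  # 4000 samples per trial
epochs = np.stack([continuous[:, s:s + n_times] for s in onsets])
print(epochs.shape)  # (3, 32, 4000)
```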
### *class* eegdash.dataset.NM000242(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual imagery EEG dataset from Gao et al 2026 * **Study:** `nm000242` (NeMAR) * **Author (year):** `Gao2026_Visual_imagery_et` * **Canonical:** `Gao2026` Also importable as: `NM000242`, `Gao2026_Visual_imagery_et`, `Gao2026`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 22; recordings: 125; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000242](https://openneuro.org/datasets/nm000242) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000242](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) ### Examples ```pycon >>> from eegdash.dataset import NM000242 >>> dataset = NM000242(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000242) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000243: eeg dataset, 15 subjects *BNCI 2016-002 Emergency Braking during Simulated Driving dataset* Access recordings and metadata through EEGDash. **Citation:** Stefan Haufe, Matthias S Treder, Manfred F Gugler, Max Sagebaum, Gabriel Curio, Benjamin Blankertz (2011). *BNCI 2016-002 Emergency Braking during Simulated Driving dataset*. 
Modality: eeg Subjects: 15 Recordings: 15 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000243 dataset = NM000243(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000243(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000243( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000243, title = {BNCI 2016-002 Emergency Braking during Simulated Driving dataset}, author = {Stefan Haufe and Matthias S Treder and Manfred F Gugler and Max Sagebaum and Gabriel Curio and Benjamin Blankertz}, } ``` ## About This Dataset **BNCI 2016-002 Emergency Braking during Simulated Driving dataset** BNCI 2016-002 Emergency Braking during Simulated Driving dataset. **Dataset Overview** - **Code**: BNCI2016-002 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/8/5/056001 ### View full README **BNCI 2016-002 Emergency Braking during Simulated Driving dataset** BNCI 2016-002 Emergency Braking during Simulated Driving dataset. 
**Dataset Overview** - **Code**: BNCI2016-002 - **Paradigm**: p300 - **DOI**: 10.1088/1741-2560/8/5/056001 - **Subjects**: 15 - **Sessions per subject**: 1 - **Events**: Target=1, NonTarget=2 - **Trial interval**: [-0.5, 1.0] s - **File format**: .mat - **Data preprocessed**: True - **Contributing labs**: Machine Learning Group, Berlin Institute of Technology, Bernstein Focus Neurotechnology, Berlin, Neurophysics Group, Charité University Medicine Berlin, Intelligent Data Analysis Group, Fraunhofer Institute FIRST **Acquisition** - **Sampling rate**: 200.0 Hz - **Number of channels**: 59 - **Channel types**: eeg=59, emg=1, eog=2, misc=7 - **Channel names**: AF3, AF4, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMGf, EOGh, EOGv, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8, brake, dist_to_lead, gas, lead_brake, lead_gas, wheel_X, wheel_Y - **Montage**: extended 10-20 - **Hardware**: BrainAmp - **Software**: TORCS - **Reference**: nose - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {'highpass_hz': 0.1, 'lowpass_hz': 250} - **Impedance threshold**: {'eeg': 20, 'emg': 50} kOhm - **Cap manufacturer**: Easycap - **Cap model**: Easycap - **Auxiliary channels**: EOG (2 ch, vertical, horizontal), EMG (1 ch), technical_markers **Participants** - **Number of subjects**: 15 - **Health status**: healthy - **Age**: mean=30.6, std=5.4 - **Gender distribution**: male=14, female=4 - **Handedness**: right-handed - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: driving_simulation - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Trial duration**: 3.0 s - **Study design**: Participants drove a virtual racing car using steering wheel and gas/brake pedals, tightly following a computer-controlled lead vehicle at 
100 km/h. The lead vehicle occasionally decelerated abruptly (20-40s inter-stimulus-interval) to 60-80 km/h, requiring immediate emergency braking. Three blocks of 45 min each with 10-15 min rest between blocks. - **Feedback type**: visual (colored circle indicating distance: green <20m, yellow otherwise; brakelight flashing) - **Stimulus type**: emergency_braking_scenario - **Stimulus modalities**: visual, multisensory - **Primary modality**: visual - **Synchronicity**: asynchronous - **Mode**: online - **Training/test split**: True - **Instructions**: Drive a virtual racing car using steering wheel and gas/brake pedals, tightly follow the lead vehicle within 20m at 100 km/h. Perform immediate emergency braking when the lead vehicle decelerates abruptly to avoid a crash. - **Stimulus presentation**: isi_range=20-40 seconds, deceleration_range=60-80 km/h, brakelight=flashing, oncoming_traffic=present, sharp_curves=present **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Data Structure** - **Trials**: ~99 emergency braking events per subject (test set) - **Blocks per session**: 3 - **Block duration**: 2700.0 s - **Trials context**: Emergency braking events with 20-40s inter-stimulus-interval, total ~225 events across 3 blocks per subject **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: lowpass filtering, bandpass filtering, notch filtering, rectification, downsampling/upsampling, baseline correction, synchronization - **Highpass filter**: 0.1 Hz - **Lowpass filter**: 45.0 Hz - **Bandpass filter**: [15.0, 90.0] - **Notch filter**: 50.0 Hz - **Filter type**: 
Chebyshev type II (EEG lowpass), Elliptic (EMG bandpass), digital (notch) - **Filter order**: tenth-order (EEG), sixth-order (EMG), second-order (notch) - **Re-reference**: nose - **Downsampled to**: 200.0 Hz - **Epoch window**: [-0.3, 1.2] - **Notes**: EEG lowpass filtered at 45 Hz (causal). EMG bandpass filtered 15-90 Hz with 50 Hz notch and rectified. All signals synchronized and resampled to 200 Hz. Baseline correction using first 100 ms. **Signal Processing** - **Classifiers**: RLDA, Regularized Linear Discriminant Analysis, Shrinkage LDA - **Feature extraction**: Event-Related Potentials, Spatio-temporal features, Bi-serial correlation, Area Under Curve - **Spatial filters**: Artifact rejection based on spectral power **Cross-Validation** - **Method**: sequential temporal split - **Evaluation type**: temporal_validation **Performance (Original Study)** - **Auc**: 0.5 - **Braking Time Reduction Ms**: 130 - **Braking Distance Reduction M**: 3.66 **BCI Application** - **Applications**: driving_assistance, emergency_braking_detection, neuroergonomics - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual, Multisensory - **Type**: Driving, Neuroergonomics **Documentation** - **Description**: Emergency braking detection during simulated driving using EEG and EMG to predict driver’s braking intention before behavioral response. 
- **DOI**: 10.1088/1741-2560/8/5/056001 - **Associated paper DOI**: 10.1088/1741-2560/8/5/056001 - **License**: CC-BY-NC-ND-4.0 - **Investigators**: Stefan Haufe, Matthias S Treder, Manfred F Gugler, Max Sagebaum, Gabriel Curio, Benjamin Blankertz - **Senior author**: Benjamin Blankertz - **Contact**: [stefan.haufe@tu-berlin.de](mailto:stefan.haufe@tu-berlin.de) - **Institution**: Berlin Institute of Technology - **Department**: Machine Learning Group, Department of Computer Science - **Address**: Franklinstraße 28/29, D-10587 Berlin, Germany - **Country**: Germany - **Repository**: BNCI Horizon - **Publication year**: 2011 - **Funding**: DFG grant; BMBF grant; Bernstein Focus Neurotechnology, Berlin - **Ethics approval**: IRB of Charité University Medicine, Berlin; Declaration of Helsinki; Written informed consent from all participants - **Keywords**: emergency braking, driving simulation, EEG, EMG, brain-computer interface, neuroergonomics, event-related potentials, machine learning, driver assistance **References** Haufe, S., Treder, M. S., Gugler, M. F., Sagebaum, M., Curio, G., & Blankertz, B. (2011). EEG potentials predict upcoming emergency brakings during simulated driving. Journal of Neural Engineering, 8(5), 056001. [https://doi.org/10.1088/1741-2560/8/5/056001](https://doi.org/10.1088/1741-2560/8/5/056001) **Notes** *Added in version 1.3.0.* This dataset is valuable for research on: - Predictive braking assistance systems - Neuroergonomics and driving safety - Real-time detection of emergency intentions - Multimodal biosignal integration (EEG + EMG + vehicle dynamics) The paradigm represents a unique blend of ERP (event-related potential) analysis with ecological validity in a naturalistic driving context. **Data Availability**: Currently 15 of 18 subjects are available. Files are hosted at the BBCI (Berlin Brain-Computer Interface) archive. 
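The epoch window and baseline parameters from the preprocessing description translate directly into sample counts at the published 200 Hz rate. A numpy sketch of that window arithmetic and of a first-100 ms baseline correction (synthetic data; the array shapes are illustrative):

```python
import numpy as np

sfreq = 200.0               # Hz, after downsampling
tmin, tmax = -0.3, 1.2      # s, documented epoch window
baseline_s = 0.100          # s, baseline = first 100 ms of the epoch

n_times = int(round((tmax - tmin) * sfreq))      # 300 samples per epoch
n_baseline = int(round(baseline_s * sfreq))      # 20 samples

rng = np.random.default_rng(0)
epochs = rng.standard_normal((10, 59, n_times))  # (trials, EEG channels, samples)

# Subtract the mean of the first 100 ms from every channel of every epoch.
baseline = epochs[:, :, :n_baseline].mean(axis=2, keepdims=True)
epochs_corrected = epochs - baseline
print(n_times, n_baseline)  # 300 20
```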
License: Creative Commons Attribution Non-Commercial No Derivatives (CC BY-NC-ND 4.0) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000243` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2016-002 Emergency Braking during Simulated Driving dataset | | Author (year) | `Haufe2016` | | Canonical | `BNCI2016`, `BNCI2016002` | | Importable as | `NM000243`, `Haufe2016`, `BNCI2016`, `BNCI2016002` | | Year | 2011 | | Authors | Stefan Haufe, Matthias S Treder, Manfred F Gugler, Max Sagebaum, Gabriel Curio, Benjamin Blankertz | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000243) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) | [Source URL](https://nemar.org/dataexplorer/detail/nm000243) | ## Technical Details - Subjects: 15 - Recordings: 15 - Tasks: 1 - Channels: 59 - Sampling rate (Hz): 200.0 - Duration (hours): 33.74497916666667 - Pathology: 
Healthy - Modality: Visual - Type: Motor - Size on disk: 4.0 GB - File count: 15 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000243](https://openneuro.org/datasets/nm000243) - NeMAR: [nm000243](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) ## API Reference Use the `NM000243` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000243(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2016-002 Emergency Braking during Simulated Driving dataset * **Study:** `nm000243` (NeMAR) * **Author (year):** `Haufe2016` * **Canonical:** `BNCI2016`, `BNCI2016002` Also importable as: `NM000243`, `Haufe2016`, `BNCI2016`, `BNCI2016002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000243](https://openneuro.org/datasets/nm000243) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000243](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) ### Examples ```pycon >>> from eegdash.dataset import NM000243 >>> dataset = NM000243(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000243) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000244: eeg dataset, 64 subjects *P300 dataset BI2014a from a “Brain Invaders” experiment* Access recordings and metadata through EEGDash. **Citation:** Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo (2019). *P300 dataset BI2014a from a “Brain Invaders” experiment*. 
Modality: eeg Subjects: 64 Recordings: 64 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000244 dataset = NM000244(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000244(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000244( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000244, title = {P300 dataset BI2014a from a "Brain Invaders" experiment}, author = {Louis Korczowski and Ekaterina Ostaschenko and Anton Andreev and Grégoire Cattan and Pedro Luiz Coelho Rodrigues and Violette Gautheret and Marco Congedo}, } ``` ## About This Dataset **P300 dataset BI2014a from a “Brain Invaders” experiment** P300 dataset BI2014a from a “Brain Invaders” experiment. **Dataset Overview** - **Code**: BrainInvaders2014a - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3266222](https://doi.org/10.5281/zenodo.3266222) ### View full README **P300 dataset BI2014a from a “Brain Invaders” experiment** P300 dataset BI2014a from a “Brain Invaders” experiment. 
**Dataset Overview** - **Code**: BrainInvaders2014a - **Paradigm**: p300 - **DOI**: [https://doi.org/10.5281/zenodo.3266222](https://doi.org/10.5281/zenodo.3266222) - **Subjects**: 64 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1] s - **File format**: mat and csv **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Channel names**: Fp1, Fp2, F5, AFz, F6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2 - **Montage**: standard_1010 - **Hardware**: g.USBamp (g.tec, Schiedlberg, Austria) - **Software**: OpenVibe - **Reference**: right earlobe - **Ground**: FZ - **Sensor type**: dry electrodes - **Line frequency**: 50.0 Hz - **Online filters**: no digital filter applied - **Cap manufacturer**: g.tec - **Cap model**: g.Sahara - **Electrode type**: dry 8-pins gold-alloy electrodes - **Electrode material**: gold-alloy **Participants** - **Number of subjects**: 64 - **Health status**: healthy - **Age**: mean=23.55, std=3.13 - **Gender distribution**: male=49, female=22 - **Handedness**: not specified - **BCI experience**: 57 were naïve BCI participants - **Species**: human **Experimental Protocol** - **Paradigm**: p300 - **Task type**: oddball - **Number of classes**: 2 - **Class labels**: Target, NonTarget - **Study design**: calibration-less P300-based BCI system with dry electrodes; screening session for potential candidates for a broader multi-user BCI study - **Feedback type**: visual - **Stimulus type**: visual flashes - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: online - **Instructions**: Destroy the target symbol by focusing attention on it. Players had up to eight attempts to destroy the target symbol per level. 
- **Stimulus presentation**: n_symbols=36, n_groups=12, symbols_per_group=6, target_flashes_per_repetition=2, non_target_flashes_per_repetition=10, animation=symbols slowly and regularly moving according to predefined path **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

```text
Target
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
NonTarget
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters** - **Detected paradigm**: p300 - **Number of targets**: 1 - **Number of repetitions**: 12 **Data Structure** - **Trials**: variable; up to 8 attempts per level, 9 levels per session - **Blocks per session**: 9 - **Trials context**: 9 levels per session, up to 8 attempts per level to destroy target **Preprocessing** - **Data state**: raw EEG with hardware tagging (USB digital-to-analog converter for synchronization) - **Preprocessing applied**: False - **Notes**: No digital filter applied during recording. USB digital-to-analog converter used to reduce jitter and synchronize experimental tags with EEG signals. **Signal Processing** - **Classifiers**: Riemannian Minimum Distance to Mean (RMDM), xDAWN, Riemannian - **Feature extraction**: Covariance/Riemannian, xDAWN **Cross-Validation** - **Evaluation type**: cross_session **Performance (Original Study)** - **Note**: Real-time adaptive RMDM classifier used for assessing participants’ command with calibration-free procedure **BCI Application** - **Applications**: gaming - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Visual - **Type**: Perception **Documentation** - **Description**: Dataset contains electroencephalographic (EEG) recordings of 71 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. 
The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. - **DOI**: 10.5281/zenodo.3266223 - **Associated paper DOI**: hal-02171575 - **License**: CC-BY-4.0 - **Investigators**: Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo - **Senior author**: Marco Congedo - **Institution**: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP - **Address**: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.3266223](https://doi.org/10.5281/zenodo.3266223) - **Publication year**: 2019 - **Ethics approval**: Approved by the Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) - **Acknowledgements**: At the end of the experiment, each subject was offered a cinema ticket worth 7.5 euros. - **Keywords**: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment, Collaboration, Multi-User, Hyperscanning **Abstract** We describe the experimental procedures for the bi2014a dataset, which contains electroencephalographic (EEG) recordings of 71 subjects playing a visual P300 Brain-Computer Interface (BCI) videogame named Brain Invaders. The interface uses the oddball paradigm on a grid of 36 symbols (1 Target, 35 Non-Target) that are flashed pseudo-randomly to elicit the P300 response. EEG data were recorded using 16 active dry electrodes with up to three game sessions. The experiment took place at GIPSA-lab, Grenoble, France, in 2014. **Methodology** The experiment was designed to study the viability of a calibration-less P300-based BCI system with dry electrodes. 
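The stimulus-presentation counts listed above (2 Target and 10 Non-Target flashes per repetition, across 12 groups) can be checked with a couple of lines:

```python
target_flashes = 2        # per repetition, from the stimulus-presentation metadata
non_target_flashes = 10   # per repetition

flashes_per_repetition = target_flashes + non_target_flashes
print(flashes_per_repetition)                # 12 flashes per repetition
print(non_target_flashes // target_flashes)  # 5
```

This matches the one-to-five Target/Non-Target epoch ratio reported for the dataset.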
Visual P300 is an event-related potential (ERP) elicited by an expected but unpredictable target visual stimulus (oddball paradigm), with peak amplitude 240-600 ms after stimulus onset. There are two event-related stimuli: Target (P300 expected) and Non-Target (no P300). The experiment used Brain Invaders, an open-source P300-based BCI software. A repetition is composed of 12 flashes (one for each group), of which two include the Target symbol (Target flashes) and 10 do not (non-Target flashes). The ratio of Target versus non-Target epochs in the whole dataset is therefore one-to-five. During the experiment, the output of a real-time adaptive Riemannian Minimum Distance to Mean (RMDM) classifier was used to assess the participants’ command of the interface. A game session comprised nine levels, each consisting of a unique, predefined configuration of the 36 symbols of the interface. Players had up to eight attempts to destroy the target symbol; if the player missed all eight attempts, the level was restarted from the beginning. The nine levels together lasted five minutes on average. The experimenter could end the experiment if the player gained no control over the BCI system after 10 minutes. **References** Korczowski, L., Ostaschenko, E., Andreev, A., Cattan, G., Rodrigues, P. L. C., Gautheret, V., & Congedo, M. (2019). Brain Invaders calibration-less P300-based BCI using dry EEG electrodes Dataset (BI2014a). [https://hal.archives-ouvertes.fr/hal-02171575](https://hal.archives-ouvertes.fr/hal-02171575) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000244` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | P300 dataset BI2014a from a “Brain Invaders” experiment | | Author (year) | `Korczowski2014_P300_BI2014a` | | Canonical | `BrainInvaders2014a`, `BI2014a` | | Importable as | `NM000244`, `Korczowski2014_P300_BI2014a`, `BrainInvaders2014a`, `BI2014a` | | Year | 2019 | | Authors | Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luiz Coelho Rodrigues, Violette Gautheret, Marco Congedo | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000244) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) | [Source URL](https://nemar.org/dataexplorer/detail/nm000244) | ## Technical Details - Subjects: 64 - Recordings: 64 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 12.4046875 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.0 GB - File count: 64 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000244](https://openneuro.org/datasets/nm000244) - NeMAR: [nm000244](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) ## API Reference Use the `NM000244` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000244(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014a from a “Brain Invaders” experiment * **Study:** `nm000244` (NeMAR) * **Author (year):** `Korczowski2014_P300_BI2014a` * **Canonical:** `BrainInvaders2014a`, `BI2014a` Also importable as: `NM000244`, `Korczowski2014_P300_BI2014a`, `BrainInvaders2014a`, `BI2014a`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 64; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000244](https://openneuro.org/datasets/nm000244) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000244](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) ### Examples ```pycon >>> from eegdash.dataset import NM000244 >>> dataset = NM000244(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000244) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000245: eeg dataset, 52 subjects *Motor Imagery dataset from Cho et al 2017* Access recordings and metadata through EEGDash. **Citation:** Hohyun Cho, Minkyu Ahn, Sangtae Ahn, Moonyoung Kwon, Sung Chan Jun (2019). *Motor Imagery dataset from Cho et al 2017*. 
Modality: eeg Subjects: 52 Recordings: 52 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000245 dataset = NM000245(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000245(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000245( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000245, title = {Motor Imagery dataset from Cho et al 2017}, author = {Hohyun Cho and Minkyu Ahn and Sangtae Ahn and Moonyoung Kwon and Sung Chan Jun}, } ``` ## About This Dataset **Motor Imagery dataset from Cho et al 2017** Motor Imagery dataset from Cho et al 2017. **Dataset Overview** - **Code**: Cho2017 - **Paradigm**: imagery - **DOI**: 10.5524/100295 ### View full README **Motor Imagery dataset from Cho et al 2017** Motor Imagery dataset from Cho et al 2017. 
**Dataset Overview** - **Code**: Cho2017 - **Paradigm**: imagery - **DOI**: 10.5524/100295 - **Subjects**: 52 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 3] s - **File format**: .mat (MATLAB) **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 68 - **Channel types**: eeg=64, emg=4 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMG1, EMG2, EMG3, EMG4, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: standard_1005 - **Hardware**: Biosemi ActiveTwo - **Software**: BCI2000 3.0.2 - **Reference**: CMS/DRL - **Sensor type**: active electrodes - **Line frequency**: 60.0 Hz - **Electrode type**: active - **Auxiliary channels**: EMG (4 ch) **Participants** - **Number of subjects**: 52 - **Health status**: healthy - **Age**: mean=24.8, std=3.86 - **Gender distribution**: female=19, male=33 - **Handedness**: right=50, both=2 - **BCI experience**: collected via questionnaire (0 = no, number = how many times) - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 3.0 s - **Study design**: motor imagery - **Feedback type**: none - **Stimulus type**: visual instruction - **Stimulus modalities**: visual - **Primary modality**: visual - **Mode**: offline - **Instructions**: Subjects were asked to imagine kinesthetic finger movements (touching index, middle, ring, and little finger to thumb within 3 seconds) **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` ```text right_hand ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand - **Cue duration**: 3.0 s - **Imagery duration**: 3.0 s **Data Structure** - **Trials**: 100 or 120 per class (200-240 total) - **Blocks per session**: 5 or 6 - **Trials context**: per_class **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False - **Notes**: Bad trial indices provided separately in .mat files (bad_trial_indices); raw EEG data is unfiltered **Signal Processing** - **Classifiers**: FLDA - **Feature extraction**: CSP, ERD, ERS - **Frequency bands**: alpha=[8.0, 14.0] Hz; mu=[8, 12] Hz; analyzed=[8.0, 30.0] Hz **Cross-Validation** - **Method**: random subset selection - **Folds**: 10 - **Evaluation type**: within_session **Performance (Original Study)** - **Accuracy**: 67.46% - **Accuracy Std**: 13.17 - **Discriminative Subjects**: 38 - **Total Subjects**: 50 **BCI Application** - **Applications**: motor_control - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **Description**: EEG datasets for motor imagery brain-computer interface from 52 subjects with psychological and physiological questionnaire, EMG datasets, 3D EEG electrode locations, and non-task-related states - **DOI**: 10.5524/100295 - **Associated paper DOI**: 10.1093/gigascience/gix034 - **License**: CC-BY-4.0 - **Investigators**: Hohyun Cho, Minkyu Ahn, Sangtae Ahn, Moonyoung Kwon, Sung Chan Jun - **Senior author**: Sung Chan Jun - **Contact**: [scjun@gist.ac.kr](mailto:scjun@gist.ac.kr); TEL: +82-62-715-2216; FAX: +82-62-715-2204 - **Institution**: Gwangju Institute of Science and Technology - **Department**: School of Electrical Engineering and Computer Science - **Address**: 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea -
**Country**: KR - **Repository**: GigaDB - **Data URL**: [http://dx.doi.org/10.5524/100295](http://dx.doi.org/10.5524/100295) - **Publication year**: 2017 - **Funding**: GIST Research Institute (GRI) grant funded by the GIST in 2017; Institute for Information & Communication Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451) - **Ethics approval**: Institutional Review Board of Gwangju Institute of Science and Technology - **Keywords**: motor imagery, EEG, brain-computer interface, performance variation, subject-to-subject transfer **Abstract** Motor imagery (MI)-based brain-computer interface (BCI) dataset from 52 subjects with EEG, EMG, psychological and physiological questionnaire, 3D EEG electrode locations, and non-task-related states. The dataset includes 100 or 120 trials per class (left/right hand) with validation showing 73.08% (38 subjects) had discriminative information. Mean accuracy of 67.46% (±13.17%) over 50 subjects (excluding 2 bad subjects). Dataset stored in GigaDB and validated using bad trial percentage, ERD/ERS analysis, and classification analysis. **Methodology** Subjects performed motor imagery of left and right hand finger movements (kinesthetic imagery). Each trial consisted of: 2 seconds fixation cross, 3 seconds instruction (left/right hand), followed by random 4.1-4.8 second break. Five or six runs performed with feedback after each run. Additional data collected: 6 types of non-task-related data (eye blinking, eyeball movements, head movement, jaw clenching, resting state) and 20 trials of real hand movement per class. 3D electrode coordinates measured with Polhemus Fastrak digitizer. Experiments conducted August-September 2011 in four time slots (9:30-12:00, 12:30-15:00, 15:30-18:00, 19:00-21:30) with background noise 37-39 dB. **References** Cho, H., Ahn, M., Ahn, S., Kwon, M. and Jun, S.C., 2017. EEG datasets for motor imagery brain computer interface. GigaScience. 
[https://doi.org/10.1093/gigascience/gix034](https://doi.org/10.1093/gigascience/gix034) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000245` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Motor Imagery dataset from Cho et al 2017 | | Author (year) | `Cho2017` | | Canonical | — | | Importable as | `NM000245`, `Cho2017` | | Year | 2019 | | Authors | Hohyun Cho, Minkyu Ahn, Sangtae Ahn, Moonyoung Kwon, Sung Chan Jun | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000245) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) | [Source URL](https://nemar.org/dataexplorer/detail/nm000245) | ## Technical Details - Subjects: 52 - Recordings: 52 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 512.0 - Duration (hours): 20.45552734375 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 6.7 GB - File count: 52 - Format: BIDS - License: CC-BY-4.0 - 
DOI: — - Source: nemar - OpenNeuro: [nm000245](https://openneuro.org/datasets/nm000245) - NeMAR: [nm000245](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) ## API Reference Use the `NM000245` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000245(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Cho et al 2017 * **Study:** `nm000245` (NeMAR) * **Author (year):** `Cho2017` * **Canonical:** — Also importable as: `NM000245`, `Cho2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
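The documented trial interval of [0, 3] s at the 512 Hz sampling rate corresponds to 1536 samples per trial. A minimal numpy sketch of what epoching does under the hood, on synthetic data (in practice this is handled by `mne.Epochs` on the loaded `raw`; the cue onsets below are hypothetical):

```python
import numpy as np

sfreq = 512.0          # Cho2017 sampling rate
tmin, tmax = 0.0, 3.0  # documented trial interval in seconds
n_times = int((tmax - tmin) * sfreq)  # 1536 samples per trial

# Synthetic continuous recording: 64 EEG channels, 60 s
rng = np.random.default_rng(0)
data = rng.standard_normal((64, int(60 * sfreq)))

# Hypothetical cue onsets in samples (real onsets come from the events)
onsets = np.array([1024, 4096, 8192])
epochs = np.stack([data[:, o : o + n_times] for o in onsets])
print(epochs.shape)  # (3, 64, 1536)
```

The resulting `(n_trials, n_channels, n_times)` array is the shape most decoders (CSP, EEGNet, braindecode models) expect.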
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000245](https://openneuro.org/datasets/nm000245) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000245](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) ### Examples ```pycon >>> from eegdash.dataset import NM000245 >>> dataset = NM000245(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000245) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000246: eeg dataset, 51 subjects *Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025* Access recordings and metadata through EEGDash. **Citation:** Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao (2025). *Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025*. 
Modality: eeg Subjects: 51 Recordings: 153 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000246 dataset = NM000246(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000246(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000246( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000246, title = {Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025}, author = {Banghua Yang and Fenqi Rong and Yunlong Xie and Du Li and Jiayang Zhang and Fu Li and Guangming Shi and Xiaorong Gao}, } ``` ## About This Dataset **Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025** Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025. **Dataset Overview** - **Code**: Yang2025 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-04826-y ### View full README **Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025** Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025. 
**Dataset Overview** - **Code**: Yang2025 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-025-04826-y - **Subjects**: 51 - **Sessions per subject**: 3 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [1.5, 5.5] s - **File format**: BDF **Acquisition** - **Sampling rate**: 1000.0 Hz - **Number of channels**: 59 - **Channel types**: eeg=59, ecg=1, eog=4 - **Channel names**: Fpz, Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P3, P4, P5, P6, P7, P8, POz, PO3, PO4, PO5, PO6, PO7, PO8, Oz, O1, O2 - **Montage**: standard_1005 - **Hardware**: Neuracle NeuSen W - **Sensor type**: Ag/AgCl - **Line frequency**: 50.0 Hz - **Online filters**: {} **Participants** - **Number of subjects**: 51 - **Health status**: healthy - **Age**: min=17.0, max=30.0 - **Gender distribution**: female=18, male=44 - **Handedness**: right-handed - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 7.5 s - **Study design**: Multi-day MI-BCI: 2C (left/right hand, 51 subj) and 3C (left hand, right hand, foot-hooking, 11 subj). 3 sessions per subject on different days. 
- **Feedback type**: none - **Stimulus type**: video cues - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: synchronous - **Mode**: offline **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` ```text right_hand ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: left_hand, right_hand, feet - **Cue duration**: 1.5 s - **Imagery duration**: 4.0 s **Data Structure** - **Trials**: 39600 - **Trials context**: 51 subjects x 3 sessions x 200 trials (2C) + 11 subjects x 3 sessions x 300 trials (3C) = 39600 **Signal Processing** - **Classifiers**: CSP+SVM, FBCSP+SVM, EEGNet, deepConvNet, FBCNet - **Feature extraction**: CSP, FBCSP - **Frequency bands**: bandpass=[0.5, 40.0] Hz - **Spatial filters**: CSP, FBCSP **Cross-Validation** - **Method**: 10-fold - **Folds**: 10 - **Evaluation type**: within_session **BCI Application** - **Applications**: motor_control - **Environment**: laboratory - **Online feedback**: False **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Research **Documentation** - **DOI**: 10.1038/s41597-025-04826-y - **License**: CC-BY-4.0 - **Investigators**: Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao - **Institution**: Shanghai University - **Country**: CN - **Data URL**: [https://plus.figshare.com/articles/dataset/22671172](https://plus.figshare.com/articles/dataset/22671172) - **Publication year**: 2025 **References** Yang, B., Rong, F., Xie, Y., et al. (2025). A multi-day and high-quality EEG dataset for motor imagery brain-computer interface.
Scientific Data, 12, 488. [https://doi.org/10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000246` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025 | | Author (year) | `Yang2025_Multi` | | Canonical | `WBCIC_SHU`, `WBCICSHU` | | Importable as | `NM000246`, `Yang2025_Multi`, `WBCIC_SHU`, `WBCICSHU` | | Year | 2025 | | Authors | Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000246) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) | [Source URL](https://nemar.org/dataexplorer/detail/nm000246) | ## Technical Details - Subjects: 51 - Recordings: 153 - Tasks: 1 - Channels: 59 - Sampling rate (Hz): 1000.0 - Duration (hours): 
98.42606861111108 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 58.4 GB - File count: 153 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000246](https://openneuro.org/datasets/nm000246) - NeMAR: [nm000246](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) ## API Reference Use the `NM000246` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025 * **Study:** `nm000246` (NeMAR) * **Author (year):** `Yang2025_Multi` * **Canonical:** `WBCIC_SHU`, `WBCICSHU` Also importable as: `NM000246`, `Yang2025_Multi`, `WBCIC_SHU`, `WBCICSHU`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
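With three sessions per subject recorded on different days, multi-day analyses typically split recordings by session. A minimal sketch using a mocked metadata table (the real recording-level metadata are exposed via `dataset.description`; the column names and values here are illustrative):

```python
import pandas as pd

# Mocked recording-level metadata; the real table comes from
# ``dataset.description`` and its exact columns depend on the release.
desc = pd.DataFrame({
    "subject": ["01", "01", "01", "02", "02", "02"],
    "session": ["1", "2", "3", "1", "2", "3"],
})

# Count recordings per session, e.g. to build per-day train/test splits
per_session = desc.groupby("session").size().to_dict()
print(per_session)  # {'1': 2, '2': 2, '3': 2}
```

The same grouping index can feed a cross-session evaluation, e.g. training on sessions 1-2 and testing on session 3.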
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000246](https://openneuro.org/datasets/nm000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000246](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) ### Examples ```pycon >>> from eegdash.dataset import NM000246 >>> dataset = NM000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000246) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000247: eeg dataset, 10 subjects *BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects)*. 
Modality: eeg Subjects: 10 Recordings: 120 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000247 dataset = NM000247(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000247(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000247( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000247, title = {BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects)** BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects). **Dataset Overview** - **Code**: Mainsah2025-S1 - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects)** BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects). 
**Dataset Overview** - **Code**: Mainsah2025-S1 - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 10 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 32 - **Channel types**: eeg=32 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 10 - **Health status**: healthy **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` ```text NonTarget ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000247` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects) | | Author (year) | `Mainsah2025_BigP3BCI_S1` | | Canonical | `BigP3BCI_StudyS1`, `BigP3BCI_S1` | | Importable as | `NM000247`, `Mainsah2025_BigP3BCI_S1`, `BigP3BCI_StudyS1`, `BigP3BCI_S1` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000247) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) | [Source URL](https://nemar.org/dataexplorer/detail/nm000247) | ## Technical Details - Subjects: 10 - Recordings: 120 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0000766323896 - Duration (hours): 5.566534792017462 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 477.9 MB - File count: 120 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000247](https://openneuro.org/datasets/nm000247) - NeMAR: [nm000247](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) ## API Reference Use the `NM000247` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects) * **Study:** `nm000247` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_S1` * **Canonical:** `BigP3BCI_StudyS1`, `BigP3BCI_S1` Also importable as: `NM000247`, `Mainsah2025_BigP3BCI_S1`, `BigP3BCI_StudyS1`, `BigP3BCI_S1`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
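For a P300 dataset like this one, the standard first analysis is class-wise averaging of the [0, 1.0] s epochs into Target and NonTarget ERPs. A minimal numpy sketch on synthetic epochs (real epochs would be cut from the loaded recordings; the 10/50 label split below is illustrative, using the documented codes Target=2, NonTarget=1):

```python
import numpy as np

sfreq = 256.0
n_times = int(1.0 * sfreq)  # trial interval [0, 1.0] s -> 256 samples

rng = np.random.default_rng(0)
# Synthetic epochs: 60 trials x 32 channels x 256 samples
epochs = rng.standard_normal((60, 32, n_times))
labels = np.array([2] * 10 + [1] * 50)  # Target=2, NonTarget=1

# Class-wise averages; their difference around ~300 ms carries the P300
target_erp = epochs[labels == 2].mean(axis=0)
nontarget_erp = epochs[labels == 1].mean(axis=0)
print(target_erp.shape)  # (32, 256)
```

The Target/NonTarget ERP difference wave is what the speller's detection pipeline ultimately classifies, one flash at a time.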
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000247](https://openneuro.org/datasets/nm000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000247](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) ### Examples ```pycon >>> from eegdash.dataset import NM000247 >>> dataset = NM000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000247) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000248: eeg dataset, 11 subjects *BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects)* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects)*. 
Modality: eeg Subjects: 11 Recordings: 330 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000248 dataset = NM000248(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000248(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000248( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000248, title = {BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects)}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, } ``` ## About This Dataset **BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects)** BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects). **Dataset Overview** - **Code**: Mainsah2025-L - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 ### View full README **BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects)** BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects). 
**Dataset Overview** - **Code**: Mainsah2025-L - **Paradigm**: p300 - **DOI**: 10.13026/0byy-ry86 - **Subjects**: 11 - **Sessions per subject**: 1 - **Events**: Target=2, NonTarget=1 - **Trial interval**: [0, 1.0] s **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 16 - **Channel types**: eeg=16 - **Montage**: standard_1020 - **Hardware**: g.USBamp (g.tec) - **Line frequency**: 60.0 Hz **Participants** - **Number of subjects**: 11 - **Health status**: patients - **Clinical population**: ALS **Experimental Protocol** - **Paradigm**: p300 - **Number of classes**: 2 - **Class labels**: Target, NonTarget **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text Target ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** - **Detected paradigm**: p300 **Signal Processing** - **Feature extraction**: P300_ERP_detection **Cross-Validation** - **Method**: calibration-then-test - **Evaluation type**: within_subject **BCI Application** - **Applications**: speller - **Environment**: laboratory - **Online feedback**: True **Tags** - **Modality**: visual - **Type**: perception **Documentation** - **Description**: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
- **DOI**: 10.13026/0byy-ry86 - **License**: CC-BY-4.0 - **Investigators**: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins - **Institution**: Duke University; East Tennessee State University - **Country**: US - **Repository**: PhysioNet - **Data URL**: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) - **Publication year**: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000248` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects) | | Author (year) | `Mainsah2025_BigP3BCI_L` | | Canonical | — | | Importable as | `NM000248`, `Mainsah2025_BigP3BCI_L` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000248) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) | [Source URL](https://nemar.org/dataexplorer/detail/nm000248) | ## Technical Details - Subjects: 11 - Recordings: 330 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.00005870719417 (220), 256.0000825640258 (110) - Duration (hours): 18.05797067792248 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 780.5 MB - File count: 330 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000248](https://openneuro.org/datasets/nm000248) - NeMAR: [nm000248](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) ## API Reference Use the `NM000248` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects) * **Study:** `nm000248` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_L` * **Canonical:** — Also importable as: `NM000248`, `Mainsah2025_BigP3BCI_L`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 11; recordings: 330; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
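The page lists event codes Target=2 and NonTarget=1 with a [0, 1.0] s trial interval at a nominal 256 Hz over 16 channels, i.e. 256-sample epochs per event. A minimal numpy sketch of that slicing on synthetic data (the event onsets below are made up for illustration):

```python
import numpy as np

fs = 256                      # nominal sampling rate from this page
n_channels = 16               # channel count from this page
rng = np.random.default_rng(1)
eeg = rng.standard_normal((n_channels, 30 * fs))  # synthetic 30 s recording

# illustrative (made-up) event onsets with Target=2 / NonTarget=1 codes
events = [(2 * fs, 2), (5 * fs, 1), (9 * fs, 2)]

# the [0, 1.0] s trial interval means 256-sample epochs starting at each onset
n_samp = int(1.0 * fs)
epochs = np.stack([eeg[:, s:s + n_samp] for s, _ in events])
labels = np.array([code for _, code in events])
print(epochs.shape)  # (n_events, n_channels, n_samp) = (3, 16, 256)
```

In practice the events come from the BIDS `events.tsv` files loaded by EEGDash/MNE rather than a hand-written list.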
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000248](https://openneuro.org/datasets/nm000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000248](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) ### Examples ```pycon >>> from eegdash.dataset import NM000248 >>> dataset = NM000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000248) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000249: eeg dataset, 13 subjects *BNCI 2022-001 EEG Correlates of Difficulty Level dataset* Access recordings and metadata through EEGDash. **Citation:** Ping-Keng Jao, Ricardo Chavarriaga, Jose del R. Millan (2021). *BNCI 2022-001 EEG Correlates of Difficulty Level dataset*. 
Modality: eeg Subjects: 13 Recordings: 13 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000249 dataset = NM000249(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000249(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000249( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000249, title = {BNCI 2022-001 EEG Correlates of Difficulty Level dataset}, author = {Ping-Keng Jao and Ricardo Chavarriaga and Jose del R. Millan}, } ``` ## About This Dataset **BNCI 2022-001 EEG Correlates of Difficulty Level dataset** BNCI 2022-001 EEG Correlates of Difficulty Level dataset. **Dataset Overview** - **Code**: BNCI2022-001 - **Paradigm**: imagery - **DOI**: 10.1109/THMS.2020.3038339 ### View full README **BNCI 2022-001 EEG Correlates of Difficulty Level dataset** BNCI 2022-001 EEG Correlates of Difficulty Level dataset. 
**Dataset Overview** - **Code**: BNCI2022-001 - **Paradigm**: imagery - **DOI**: 10.1109/THMS.2020.3038339 - **Subjects**: 13 - **Sessions per subject**: 1 - **Events**: trajectory_start=1, waypoint_miss=16, waypoint_hit=48, trajectory_end=255 - **Trial interval**: [0, 90] s - **Session IDs**: offline, online_session_2, online_session_3 - **File format**: gdf - **Data preprocessed**: True **Acquisition** - **Sampling rate**: 256.0 Hz - **Number of channels**: 64 - **Channel types**: eeg=64, eog=3 - **Channel names**: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EOG1, EOG2, EOG3, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, Iz, O1, O2, Oz, P1, P10, P2, P3, P4, P5, P6, P7, P8, P9, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 - **Montage**: 10-10 - **Hardware**: Biosemi ActiveTwo - **Software**: EEGLAB - **Reference**: car - **Sensor type**: active - **Line frequency**: 50.0 Hz - **Auxiliary channels**: EOG (3 ch, horizontal, vertical), ppg **Participants** - **Number of subjects**: 13 - **Health status**: patients - **Clinical population**: normal or corrected-to-normal vision, no history of motor or neurological disease (one subject with history of vasovagal syncope) - **Age**: mean=22.6, std=1.04 - **Gender distribution**: female=8, male=5 - **Handedness**: {‘right’: 12, ‘left’: 1} **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 4 - **Class labels**: trajectory_start, waypoint_miss, waypoint_hit, trajectory_end - **Trial duration**: 90.0 s - **Study design**: Subjects piloted a simulated drone through circular waypoints using a flight joystick, controlling roll and pitch while the drone maintained constant velocity. In offline session: 32 trajectories each with constant difficulty level (v-shape design from level 16 to 1 and back to 16), each trajectory had 32 waypoints and lasted ~90 seconds. 
In online sessions: each condition consisted of 12 trajectories with 33 waypoints and 8 decision points per trajectory. - **Feedback type**: visual - **Stimulus type**: visual - **Stimulus modalities**: visual - **Primary modality**: visual - **Synchronicity**: cue-based - **Mode**: both - **Instructions**: Subjects piloted a simulated drone through a series of circular waypoints. Subjects controlled the roll and pitch while the drone had a constant velocity of 11.8 arbitrary units per second when flying straight. They were instructed to press a button when the current level was easy as a way to collect ground truth for decoding or to proceed with self-paced learning. - **Stimulus presentation**: screen_size=twenty-inch screen, screen_resolution=1680x1050, input_device=Logitech Extreme 3D Pro joystick, waypoint_colors=green (current), blue (next), yellow (decision point), waypoint_distance_pitch=32 A.U. (at least 2.7 seconds), waypoint_distance_roll=24 A.U. (at least 2.0 seconds) **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text trajectory_start ``` ```text ├─ Experiment-structure └─ Label/trajectory_start waypoint_miss ``` ```text ├─ Experiment-structure └─ Label/waypoint_miss waypoint_hit ``` ```text ├─ Experiment-structure └─ Label/waypoint_hit trajectory_end ``` ```text ├─ Experiment-structure └─ Label/trajectory_end ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, left_hand, feet **Data Structure** - **Trials**: {‘offline_session’: ‘32 trajectories of 32 waypoints each (~90 seconds per trajectory)’, ‘online_session_per_condition’: ‘12 trajectories of 33 waypoints each with 8 decision points’} - **Blocks per session**: 2 - **Trials context**: Offline session: v-shape difficulty design (level 16→1→16). 
Online sessions: each condition had 12 trajectories, starting at level 1 for the 1st trajectory, then 4 levels lower than the final level of the previous trajectory. Average 10.3 seconds per decision group (4 waypoints). **Preprocessing** - **Data state**: preprocessed - **Preprocessing applied**: True - **Steps**: downsampling from 2048 Hz to 256 Hz, causal bandpass filtering between 1 and 40 Hz, SPHARA 20th order spatial low-pass filter for interpolation and artifact reduction, common-average re-referencing, ICA for EOG artifact removal, peripheral electrodes removed (25 central channels kept), artifact rejection: windows with peak value > 50 µV rejected - **Highpass filter**: 1.0 Hz - **Lowpass filter**: 40.0 Hz - **Bandpass filter**: [1.0, 40.0] - **Filter type**: Butterworth - **Filter order**: 14 - **Artifact methods**: ICA, SPHARA, amplitude thresholding - **Re-reference**: car - **Downsampled to**: 256.0 Hz - **Notes**: Out of 39 recordings, P2 was removed twice from offline or online sessions due to short-circuit with the CMS or DRL electrode. On average, 15.8 ICA components were returned and 1.07 components were removed during construction of online decoders (correlation > 0.7 with EOG).
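The core of the preprocessing chain above (downsample 2048 → 256 Hz, causal 1–40 Hz Butterworth band-pass, common-average re-reference) can be sketched with scipy on synthetic data. The SPHARA and ICA stages are omitted, and "order 14" is taken at face value from the listing; this is a sketch, not the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfilt, decimate

fs_raw, fs_new = 2048, 256
rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 2 * fs_raw))   # synthetic: 64 channels, 2 s

# 1) downsample 2048 -> 256 Hz (factor 8; decimate applies its own anti-alias filter)
eeg = decimate(raw, fs_raw // fs_new, axis=-1)

# 2) causal 1-40 Hz Butterworth band-pass (order 14 as listed; SOS form for stability)
sos = butter(14, [1.0, 40.0], btype="bandpass", fs=fs_new, output="sos")
eeg = sosfilt(sos, eeg, axis=-1)

# 3) common-average re-reference
eeg = eeg - eeg.mean(axis=0, keepdims=True)
print(eeg.shape)  # (64, 512)
```

`sosfilt` (rather than `sosfiltfilt`) keeps the filter causal, matching what an online decoder would see.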
**Signal Processing** - **Classifiers**: LDA, Generalized Linear Model with elastic net regularization - **Feature extraction**: PSD, ICA, log-PSD - **Frequency bands**: analyzed=[2.0, 28.0] Hz; theta=[4.0, 8.0] Hz; alpha=[10.5, 13.0] Hz - **Spatial filters**: SPHARA, common-average reference **Cross-Validation** - **Method**: leave-one-pair-out cross-validation (4x or 64x depending on class balance) - **Folds**: 4 - **Evaluation type**: within_subject, cross_session **Performance (Original Study)** - **Accuracy**: 76.7% - **Offline Validation Accuracy Mean**: 76.7 - **Offline Validation Accuracy Std**: 5.1 - **Online Session 2 Accuracy Mean**: 56.2 - **Online Session 2 Accuracy Std**: 8.6 - **Online Session 3 Accuracy Mean**: 54.7 - **Online Session 3 Accuracy Std**: 11.0 - **Online Above Chance Recordings**: 16 out of 26 (~62%) **BCI Application** - **Applications**: drone control, adaptive learning, difficulty regulation, visuomotor learning - **Environment**: indoor laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: EEG - **Type**: Experimental/Research **Documentation** - **DOI**: 10.1109/TAFFC.2021.3059688 - **Associated paper DOI**: 10.1109/THMS.2020.3038339 - **License**: CC-BY-4.0 - **Investigators**: Ping-Keng Jao, Ricardo Chavarriaga, Jose del R. Millan - **Senior author**: Jose del R. 
Millan - **Contact**: [ping-keng.jao@alumni.epfl.ch](mailto:ping-keng.jao@alumni.epfl.ch); [ricardo.chavarriaga@zhaw.ch](mailto:ricardo.chavarriaga@zhaw.ch); [jose.millan@austin.utexas.edu](mailto:jose.millan@austin.utexas.edu) - **Institution**: Ecole Polytechnique Federale de Lausanne - **Address**: 1015 Lausanne, Switzerland - **Country**: Switzerland - **Repository**: BNCI Horizon - **Publication year**: 2021 - **Funding**: Swiss National Centres of Competence in Research (NCCR) Robotics - **Acknowledgements**: The authors would like to thank Alexander Cherpillod for his help in the implementation of the simulator and Ruslan Aydarkhanov for his suggestions in designing the protocol. Some figures were drawn with the Gramm MATLAB toolbox. - **Keywords**: EEG, real-time decoding of difficulty, closed-loop adaptation, (simulated) flying, workload, challenge point, brain-machine interface **Abstract** Adaptively increasing the difficulty level in learning was shown to be more beneficial than increasing the level at fixed time intervals. To efficiently adapt the level, we aimed at decoding the subjective difficulty level based on Electroencephalography (EEG) signals. We designed a visuomotor learning task in which one needed to pilot a simulated drone through a series of waypoints of different sizes to investigate the effectiveness of the EEG decoder. The EEG decoder was compared with another condition in which the subjects decided when to increase the difficulty level. We examined the decoding performance together with behavioral outcomes. The online accuracies were higher than the chance level for 16 out of 26 cases, and the behavioral results, such as task scores, skill curves, and learning patterns, of the EEG condition were similar to those of the condition based on manual regulation of difficulty.
**Methodology** The study compared two conditions for difficulty regulation during a simulated drone piloting task: (1) EEG-based automatic difficulty adjustment using real-time decoding of perceived difficulty, and (2) Manual self-paced adjustment where subjects pressed a button when they found the level easy. Each subject participated in one offline session (for building subject-specific decoders) and two online sessions (each containing both EEG and Manual conditions in counterbalanced order). The task involved piloting a drone through circular waypoints with 16 difficulty levels defined by waypoint radius. Features were extracted using Thomson’s multitaper algorithm with 2-second sliding windows, and classification used generalized linear models with elastic net regularization followed by LDA. The study evaluated both decoding accuracy and behavioral outcomes (task scores, skill curves, learning patterns). **References** Jao, P.-K., Chavarriaga, R., & Millan, J. d. R. (2021). EEG Correlates of Difficulty Levels in Dynamical Transitions of Simulated Flying and Mapping Tasks. IEEE Transactions on Human-Machine Systems, 51(2), 99-108. [https://doi.org/10.1109/THMS.2020.3038339](https://doi.org/10.1109/THMS.2020.3038339) **Notes** *Added in version 1.3.0.* This dataset is designed for cognitive workload assessment and difficulty level detection. Unlike motor imagery datasets, the task involves actual motor control while the cognitive state (perceived difficulty) varies. The public release contains only the first session (offline) data. Additional behavioral data and online sessions with closed-loop difficulty adaptation are not included. The paradigm “imagery” is used for compatibility, though the actual task involves motor execution with cognitive load variations.
**See Also** BNCI2015_004 : Multi-class mental task dataset with imagery and cognitive tasks; BNCI2014_001 : 4-class motor imagery dataset **Examples** ```pycon >>> from moabb.datasets import BNCI2022_001 >>> dataset = BNCI2022_001() >>> dataset.subject_list [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] ``` Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000249` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BNCI 2022-001 EEG Correlates of Difficulty Level dataset | | Author (year) | `Jao2022` | | Canonical | `Jao2020` | | Importable as | `NM000249`, `Jao2022`, `Jao2020` | | Year | 2021 | | Authors | Ping-Keng Jao, Ricardo Chavarriaga, Jose del R.
Millan | | License | CC-BY-4.0 | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000249) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) | [Source URL](https://nemar.org/dataexplorer/detail/nm000249) | ## Technical Details - Subjects: 13 - Recordings: 13 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 256.0 - Duration (hours): 16.191097005208334 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 3.0 GB - File count: 13 - Format: BIDS - License: CC-BY-4.0 - DOI: — - Source: nemar - OpenNeuro: [nm000249](https://openneuro.org/datasets/nm000249) - NeMAR: [nm000249](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) ## API Reference Use the `NM000249` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000249(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2022-001 EEG Correlates of Difficulty Level dataset * **Study:** `nm000249` (NeMAR) * **Author (year):** `Jao2022` * **Canonical:** `Jao2020` Also importable as: `NM000249`, `Jao2022`, `Jao2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000249](https://openneuro.org/datasets/nm000249) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000249](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) ### Examples ```pycon >>> from eegdash.dataset import NM000249 >>> dataset = NM000249(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000249) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000250: eeg dataset, 87 subjects *Class for Dreyer2023 dataset management. MI dataset* Access recordings and metadata through EEGDash. **Citation:** Pauline Dreyer, Aline Roc, Léa Pillette, Sébastien Rimbert, Fabien Lotte (2021). *Class for Dreyer2023 dataset management. MI dataset*. 
Modality: eeg Subjects: 87 Recordings: 520 License: CC-BY-4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000250 dataset = NM000250(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000250(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000250( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000250, title = {Class for Dreyer2023 dataset management. MI dataset}, author = {Pauline Dreyer and Aline Roc and Léa Pillette and Sébastien Rimbert and Fabien Lotte}, } ``` ## About This Dataset **Class for Dreyer2023 dataset management. MI dataset** Class for Dreyer2023 dataset management. MI dataset. **Dataset Overview** - **Code**: Dreyer2023 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-023-02445-z ### View full README **Class for Dreyer2023 dataset management. MI dataset** Class for Dreyer2023 dataset management. MI dataset. 
**Dataset Overview** - **Code**: Dreyer2023 - **Paradigm**: imagery - **DOI**: 10.1038/s41597-023-02445-z - **Subjects**: 87 - **Sessions per subject**: 1 - **Events**: left_hand=1, right_hand=2 - **Trial interval**: [0, 5] s - **Runs per session**: 6 - **Session IDs**: calibration, online_training - **File format**: GDF - **Contributing labs**: Inria Bordeaux **Acquisition** - **Sampling rate**: 512.0 Hz - **Number of channels**: 27 - **Channel types**: eeg=27, emg=2, eog=3 - **Channel names**: C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMGd, EMGg, EOG1, EOG2, EOG3, F3, F4, FC1, FC2, FC3, FC4, FC5, FC6, FCz, Fz, P3, P4, Pz - **Montage**: 10-20 - **Hardware**: g.USBAmp (g.tec, Austria) - **Software**: OpenViBE 2.1.0 (Dataset A) / OpenViBE 2.2.0 (Dataset B and C) - **Reference**: left earlobe - **Ground**: FPz - **Sensor type**: active electrodes - **Line frequency**: 50.0 Hz - **Online filters**: none (raw signals recorded without hardware filters) - **Cap manufacturer**: g.tec - **Auxiliary channels**: EOG (3 ch, horizontal, vertical), EMG (2 ch), gsr **Participants** - **Number of subjects**: 87 - **Health status**: healthy - **Age**: mean=29.0, min=19, max=59 - **Gender distribution**: female=41, male=46 - **Handedness**: right - **BCI experience**: naive - **Species**: human **Experimental Protocol** - **Paradigm**: imagery - **Number of classes**: 2 - **Class labels**: left_hand, right_hand - **Trial duration**: 8.0 s - **Tasks**: right_hand_MI, left_hand_MI, resting_state - **Study design**: Graz protocol - **Feedback type**: continuous visual - **Stimulus type**: blue bar varying in length - **Stimulus modalities**: visual, auditory - **Primary modality**: visual - **Synchronicity**: cue-based - **Mode**: online - **Training/test split**: True - **Instructions**: Participants were encouraged to perform kinesthetic imagination and were left free to choose their mental imagery strategy.
Participants were instructed to try to find the best strategy so that the system would show the longest possible feedback bar. Only positive feedback was provided. **HED Event Annotations** Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) ```text left_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** - **Detected paradigm**: motor_imagery - **Imagery tasks**: right_hand, left_hand - **Cue duration**: 1.25 s - **Imagery duration**: 3.75 s **Data Structure** - **Trials**: 240 - **Trials per class**: right_hand=120, left_hand=120 - **Blocks per session**: 6 - **Block duration**: 420.0 s - **Trials context**: per subject (120 per class) **Preprocessing** - **Data state**: raw - **Preprocessing applied**: False - **Bandpass filter**: [5.0, 35.0] - **Filter type**: Butterworth - **Filter order**: 5 - **Artifact methods**: visual inspection - **Re-reference**: Laplacian (C3, C4 for feature extraction) - **Notes**: The raw signals were recorded without any hardware filters. For online processing, a fifth-order Butterworth filter was applied in a participant-specific discriminant frequency band in the range of 5 Hz to 35 Hz with 0.5 Hz large bins. Impedance could not be measured with active electrodes; EEG signals were visually checked and regularly re-checked to ensure good signal quality. 
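The Laplacian re-reference listed above (applied around C3/C4 for feature extraction) subtracts the mean of a channel's nearest neighbours from that channel. A toy numpy sketch; the neighbour set chosen for C3 below is the conventional 10-20 choice, assumed for illustration rather than taken from the dataset description:

```python
import numpy as np

# toy montage: C3 plus an assumed conventional neighbour set
chs = ["C3", "FC3", "CP3", "C1", "C5", "C4"]
idx = {name: i for i, name in enumerate(chs)}
rng = np.random.default_rng(2)
eeg = rng.standard_normal((len(chs), 512))  # synthetic 1 s at 512 Hz

# small Laplacian: channel minus the mean of its nearest neighbours
neighbours = ["FC3", "CP3", "C1", "C5"]
c3_lap = eeg[idx["C3"]] - np.mean([eeg[idx[n]] for n in neighbours], axis=0)
print(c3_lap.shape)  # (512,)
```

The same operation with C4's neighbours yields the second Laplacian channel used for bandpower features.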
**Signal Processing** - **Classifiers**: LDA - **Feature extraction**: CSP, Bandpower - **Frequency bands**: analyzed=[5.0, 35.0] Hz; alpha=[8.0, 13.0] Hz; mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz - **Spatial filters**: CSP, Laplacian **Cross-Validation** - **Method**: calibration-feedback - **Evaluation type**: within_session **Performance (Original Study)** - **Accuracy**: 63.35% - **Mean Accuracy Std**: 17.36 - **Mean Accuracy R3**: 63.14 - **Mean Accuracy R4**: 64.82 - **Chance Level Individual**: 58.7 - **Chance Level Database**: 51.0 **BCI Application** - **Applications**: rehabilitation, assistive_technology, neurofeedback, user_training - **Environment**: laboratory - **Online feedback**: True **Tags** - **Pathology**: Healthy - **Modality**: Motor - **Type**: Motor Imagery **Documentation** - **Description**: A large EEG database with users’ profile information for motor imagery brain-computer interface research. Contains electroencephalographic signals from 87 human participants, collected during a single day of brain-computer interface (BCI) experiments, organized into 3 datasets (A, B, and C) that were all recorded using the same protocol: right and left hand motor imagery (MI). - **DOI**: 10.1038/s41597-023-02445-z - **Associated paper DOI**: 10.1038/s41597-023-02445-z - **License**: CC-BY-4.0 - **Investigators**: Pauline Dreyer, Aline Roc, Léa Pillette, Sébastien Rimbert, Fabien Lotte - **Senior author**: Fabien Lotte - **Contact**: [fabien.lotte@inria.fr](mailto:fabien.lotte@inria.fr) - **Institution**: Centre Inria de l’université de Bordeaux - **Department**: LaBRI (Univ. 
Bordeaux/CNRS/Bordeaux INP) - **Address**: Talence, 33405, France - **Country**: FR - **Repository**: Zenodo - **Data URL**: [https://doi.org/10.5281/zenodo.8089820](https://doi.org/10.5281/zenodo.8089820) - **Publication year**: 2023 - **Funding**: European Research Council (ERC Starting Grant project BrainConquest, grant ERC-2016-STG-714567) - **Ethics approval**: Inria’s ethics committee, the COERLE (Approval number: 2018-13) - **Keywords**: motor imagery, brain-computer interface, EEG, BCI illiteracy, user training, personality profile, cognitive traits, user profile **Abstract** We present and share a large database containing electroencephalographic signals from 87 human participants, collected during a single day of brain-computer interface (BCI) experiments, organized into 3 datasets (A, B, and C) that were all recorded using the same protocol: right and left hand motor imagery (MI). Each session contains 240 trials (120 per class), which represents more than 20,800 trials, or approximately 70 hours of recording time. It includes the performance of the associated BCI users, detailed information about the demographics, personality profile as well as some cognitive traits and the experimental instructions and codes (executed in the open-source platform OpenViBE). Such database could prove useful for various studies, including but not limited to: (1) studying the relationships between BCI users’ profiles and their BCI performances, (2) studying how EEG signals properties varies for different users’ profiles and MI tasks, (3) using the large number of participants to design cross-user BCI machine learning algorithms or (4) incorporating users’ profile information into the design of EEG signal classification algorithms. **Methodology** Participants performed a Graz protocol MI-BCI task with 6 runs (2 calibration runs with sham feedback, 4 online training runs with real feedback). Each run consisted of 40 trials (20 per MI-task) with 8s trial duration. 
Trial structure: green cross (t=0s), acoustic signal (t=2s), red arrow cue (t=3s, 1.25s duration), continuous visual feedback (t=4.25s, 3.75s duration), inter-trial interval (1.5-3.5s). Signal processing used participant-specific Most Discriminant Frequency Band (MDFB) selection (5-35 Hz range), fifth-order Butterworth filtering, Common Spatial Pattern (CSP) with 3 pairs of spatial filters, and Linear Discriminant Analysis (LDA) classifier trained on calibration data. Participants completed 6 questionnaires assessing demographics, personality (16PF5), cognitive traits, spatial abilities (Mental Rotation test), learning style (ILS), and pre/post-experiment states (NeXT questionnaire). **References** Pillette, L., Roc, A., N’kaoua, B., & Lotte, F. (2021). Experimenters’ influence on mental-imagery based brain-computer interface user training. International Journal of Human-Computer Studies, 149, 102603. Benaroch, C., Yamamoto, M. S., Roc, A., Dreyer, P., Jeunet, C., & Lotte, F. (2022). When should MI-BCI feature optimization include prior knowledge, and which one?. Brain-Computer Interfaces, 9(2), 115-128. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000250` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | A large EEG database with users' profile information for motor imagery brain-computer interface research (Dreyer2023) | | Author (year) | `Dreyer2023` | | Canonical | — | | Importable as | `NM000250`, `Dreyer2023` | | Year | 2023 | | Authors | Pauline Dreyer, Aline Roc, Léa Pillette, Sébastien Rimbert, Fabien Lotte | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1038/s41597-023-02445-z](https://doi.org/10.1038/s41597-023-02445-z) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000250) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) | [Source URL](https://nemar.org/dataexplorer/detail/nm000250) | ## Technical Details - Subjects: 87 - Recordings: 520 - Tasks: 1 - Channels: 27 - Sampling rate (Hz): 512.0 - Duration (hours): 63.4652734375 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 8.8 GB - File count: 520 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1038/s41597-023-02445-z - Source: nemar - OpenNeuro: [nm000250](https://openneuro.org/datasets/nm000250) - NeMAR: [nm000250](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) ## API Reference Use the `NM000250` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000250(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Class for Dreyer2023 dataset management. MI dataset * **Study:** `nm000250` (NeMAR) * **Author (year):** `Dreyer2023` * **Canonical:** — Also importable as: `NM000250`, `Dreyer2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. 
Subjects: 87; recordings: 520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000250](https://openneuro.org/datasets/nm000250) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000250](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) ### Examples ```pycon >>> from eegdash.dataset import NM000250 >>> dataset = NM000250(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000250) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000251: ieeg dataset, 1 subject *He et al. 2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language* Access recordings and metadata through EEGDash. **Citation:** Tianyu He, Mingyi Wei, Ruicong Wang, Renzhi Wang, Shiwei Du, Siqi Cai, Wei Tao, Haizhou Li (2025). *He et al. 2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language*. [10.1038/s41597-025-04741-2](https://doi.org/10.1038/s41597-025-04741-2) Modality: ieeg Subjects: 1 Recordings: 6 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000251 dataset = NM000251(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000251(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000251( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000251, title = {He et al. 
2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language}, author = {Tianyu He and Mingyi Wei and Ruicong Wang and Renzhi Wang and Shiwei Du and Siqi Cai and Wei Tao and Haizhou Li}, doi = {10.1038/s41597-025-04741-2}, url = {https://doi.org/10.1038/s41597-025-04741-2}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `NM000251` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | He et al. 
2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language | | Author (year) | `He2025` | | Canonical | — | | Importable as | `NM000251`, `He2025` | | Year | 2025 | | Authors | Tianyu He, Mingyi Wei, Ruicong Wang, Renzhi Wang, Shiwei Du, Siqi Cai, Wei Tao, Haizhou Li | | License | CC BY 4.0 | | Citation / DOI | [doi:10.1038/s41597-025-04741-2](https://doi.org/10.1038/s41597-025-04741-2) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000251) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) | [Source URL](https://nemar.org/dataexplorer/detail/nm000251) | ### Copy-paste BibTeX ```bibtex @dataset{nm000251, title = {He et al. 2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language}, author = {Tianyu He and Mingyi Wei and Ruicong Wang and Renzhi Wang and Shiwei Du and Siqi Cai and Wei Tao and Haizhou Li}, doi = {10.1038/s41597-025-04741-2}, url = {https://doi.org/10.1038/s41597-025-04741-2}, } ``` ## Technical Details - Subjects: 1 - Recordings: 6 - Tasks: 3 - Channels: 110 - Sampling rate (Hz): 1000 - Duration (hours): 1.130831666666667 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 1.9 GB - File count: 6 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.1038/s41597-025-04741-2 - Source: nemar - OpenNeuro: [nm000251](https://openneuro.org/datasets/nm000251) - NeMAR: [nm000251](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) ## API Reference Use the `NM000251` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000251(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) He et al. 
2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language * **Study:** `nm000251` (NeMAR) * **Author (year):** `He2025` * **Canonical:** — Also importable as: `NM000251`, `He2025`. Modality: `ieeg`. Subjects: 1; recordings: 6; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000251](https://openneuro.org/datasets/nm000251) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000251](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) DOI: [https://doi.org/10.1038/s41597-025-04741-2](https://doi.org/10.1038/s41597-025-04741-2) ### Examples ```pycon >>> from eegdash.dataset import NM000251 >>> dataset = NM000251(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. 
* **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000251) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # NM000253: ieeg dataset, 10 subjects *Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli* Access recordings and metadata through EEGDash. **Citation:** Christopher Wang, Adam Yaari, Aaditya K Singh, Vighnesh Subramaniam, Dana Rosenfarb, Jan DeWitt, Pranav Misra, Joseph R Madsen, Scellig Stone, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu (2019). *Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli*. 
[10.48550/arXiv.2411.08343](https://doi.org/10.48550/arXiv.2411.08343) Modality: ieeg Subjects: 10 Recordings: 26 License: CC BY 4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000253 dataset = NM000253(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000253(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000253( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000253, title = {Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli}, author = {Christopher Wang and Adam Yaari and Aaditya K Singh and Vighnesh Subramaniam and Dana Rosenfarb and Jan DeWitt and Pranav Misra and Joseph R Madsen and Scellig Stone and Gabriel Kreiman and Boris Katz and Ignacio Cases and Andrei Barbu}, doi = {10.48550/arXiv.2411.08343}, url = {https://doi.org/10.48550/arXiv.2411.08343}, } ``` ## About This Dataset **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896).https://doi.org/10.21105/joss.01896 Holdgraf, C., Appelhoff, S., Bickel, S., Bouchard, K., D’Ambrosio, S., David, O., … Hermes, D. (2019). 
iEEG-BIDS, extending the Brain Imaging Data Structure specification to human intracranial electrophysiology. Scientific Data, 6, 102. [https://doi.org/10.1038/s41597-019-0105-7](https://doi.org/10.1038/s41597-019-0105-7) ## Dataset Information | Dataset ID | `NM000253` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli | | Author (year) | `Wang2024_et_al_Brain` | | Canonical | `BrainTreeBank` | | Importable as | `NM000253`, `Wang2024_et_al_Brain`, `BrainTreeBank` | | Year | 2024 | | Authors | Christopher Wang, Adam Yaari, Aaditya K Singh, Vighnesh Subramaniam, Dana Rosenfarb, Jan DeWitt, Pranav Misra, Joseph R Madsen, Scellig Stone, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu | | License | CC BY 4.0 | | Citation / DOI | [doi:10.48550/arXiv.2411.08343](https://doi.org/10.48550/arXiv.2411.08343) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000253) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) | [Source URL](https://nemar.org/dataexplorer/detail/nm000253) | ### Copy-paste BibTeX ```bibtex @dataset{nm000253, title = {Wang et al. 
2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli}, author = {Christopher Wang and Adam Yaari and Aaditya K Singh and Vighnesh Subramaniam and Dana Rosenfarb and Jan DeWitt and Pranav Misra and Joseph R Madsen and Scellig Stone and Gabriel Kreiman and Boris Katz and Ignacio Cases and Andrei Barbu}, doi = {10.48550/arXiv.2411.08343}, url = {https://doi.org/10.48550/arXiv.2411.08343}, } ``` ## Technical Details - Subjects: 10 - Recordings: 26 - Tasks: 1 - Channels: 164 (8), 156 (3), 166 (3), 190 (3), 136 (3), 248 (2), 218 (2), 108, 158 - Sampling rate (Hz): 2048 - Duration (hours): 1.8153209092881943 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 257.3 GB - File count: 26 - Format: BIDS - License: CC BY 4.0 - DOI: doi:10.48550/arXiv.2411.08343 - Source: nemar - OpenNeuro: [nm000253](https://openneuro.org/datasets/nm000253) - NeMAR: [nm000253](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) ## API Reference Use the `NM000253` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli * **Study:** `nm000253` (NeMAR) * **Author (year):** `Wang2024_et_al_Brain` * **Canonical:** `BrainTreeBank` Also importable as: `NM000253`, `Wang2024_et_al_Brain`, `BrainTreeBank`. Modality: `ieeg`. Subjects: 10; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000253](https://openneuro.org/datasets/nm000253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000253](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) DOI: [https://doi.org/10.48550/arXiv.2411.08343](https://doi.org/10.48550/arXiv.2411.08343) ### Examples ```pycon >>> from eegdash.dataset import NM000253 >>> dataset = NM000253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000253) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) * [eegdash.dataset.DS002799](eegdash.dataset.DS002799.md) * [eegdash.dataset.DS003029](eegdash.dataset.DS003029.md) * [eegdash.dataset.DS003078](eegdash.dataset.DS003078.md) * [eegdash.dataset.DS003374](eegdash.dataset.DS003374.md) * [eegdash.dataset.DS003498](eegdash.dataset.DS003498.md) # NM000254: eeg dataset, 22 subjects *Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI* Access recordings and metadata through EEGDash. **Citation:** Qawi K Telesford, Eduardo Gonzalez-Moreira, Ting Xu, Yiwen Tian, Stanley Colcombe, Jessica Cloud, Brian Edward Russ, Arnaud Falchier, Maximilian Nentwich, Jens Madsen, Lucas Parra, Charles Schroeder, Michael Milham, Alexandre Rosa Franco (—). *Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI*. Modality: eeg Subjects: 22 Recordings: 942 License: — Source: nemar Metadata: Good (80%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000254 dataset = NM000254(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000254(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000254( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. 
**BibTeX** ```bibtex @dataset{nm000254, title = {Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI}, author = {Qawi K Telesford and Eduardo Gonzalez-Moreira and Ting Xu and Yiwen Tian and Stanley Colcombe and Jessica Cloud and Brian Edward Russ and Arnaud Falchier and Maximilian Nentwich and Jens Madsen and Lucas Parra and Charles Schroeder and Michael Milham and Alexandre Rosa Franco}, } ``` ## About This Dataset This dataset comprises neuroimaging data collected at the Nathan Kline Institute (NKI). The dataset represents simultaneously collected electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings obtained from 22 individuals between the ages of 23 and 51. EEG data consist of 64-channel recordings made with a customized Brain Products BrainCapMR: 61 cortical channels, two EOG channels placed below (channel 63) and above (channel 64) the left eye, and one ECG channel (channel 32) placed on the back. This dataset also contains eye-tracking and physiological recordings. Eye-tracking recordings were collected inside the scanner using an EyeLink 1000 (SR Research Ltd.), with eye position and pupil dilation recorded by an infrared-based eye tracker. Physiological recordings were collected using a BIOPAC MP150 (BIOPAC Systems, Inc.) with a respiratory transducer belt to monitor breathing. All individuals were consented in accordance with the Institutional Review Board (IRB) at NKI. Individuals provided demographic information and behavioral data. Behavioral data included a survey on the participants' last month of sleep (Pittsburgh Sleep Study), the amount of sleep they had the previous night, and their caffeine intake (if any) before the scan session. The primary goal of this study is to understand the neural underpinnings of brain function by evaluating the correlation between electrical activity and hemodynamic fluctuations derived from neuroimaging data. 
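The channel layout described above (61 cortical channels, ECG on channel 32, EOG on channels 63 and 64) can be captured as a simple index-to-type map before analysis. The helper below is a hypothetical sketch, not part of the EEGDash API:

```python
def braincap_mr_type_map(n_channels=64):
    """Map 1-based channel indices to signal types for the customized
    BrainCapMR layout described above: ECG on channel 32, EOG below and
    above the left eye on channels 63 and 64, EEG everywhere else."""
    type_map = {}
    for idx in range(1, n_channels + 1):
        if idx == 32:
            type_map[idx] = "ecg"
        elif idx in (63, 64):
            type_map[idx] = "eog"
        else:
            type_map[idx] = "eeg"
    return type_map
```

With MNE-Python, an equivalent map keyed by channel name (rather than index) could be passed to `raw.set_channel_types` after loading a recording, so downstream filtering and artifact routines treat the ECG and EOG channels appropriately.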
## Dataset Information | Dataset ID | `NM000254` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI | | Author (year) | `Telesford2024` | | Canonical | — | | Importable as | `NM000254`, `Telesford2024` | | Year | — | | Authors | Qawi K Telesford, Eduardo Gonzalez-Moreira, Ting Xu, Yiwen Tian, Stanley Colcombe, Jessica Cloud, Brian Edward Russ, Arnaud Falchier, Maximilian Nentwich, Jens Madsen, Lucas Parra, Charles Schroeder, Michael Milham, Alexandre Rosa Franco | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000254) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) | [Source URL](https://nemar.org/dataexplorer/detail/nm000254) | ## Technical Details - Subjects: 22 - Recordings: 942 - Tasks: 12 - Channels: 64 - Sampling rate (Hz): 5000 - Duration (hours): 108.65814155555556 - Pathology: Not specified - Modality: — - Type: — - Size on disk: 256.0 GB - File count: 942 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: [nm000254](https://openneuro.org/datasets/nm000254) - NeMAR: [nm000254](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) ## API Reference Use the `NM000254` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000254(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI * **Study:** `nm000254` (NeMAR) * **Author (year):** `Telesford2024` * **Canonical:** — Also importable as: `NM000254`, `Telesford2024`. Modality: `eeg`. 
Subjects: 22; recordings: 942; tasks: 12. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000254](https://openneuro.org/datasets/nm000254) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000254](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) ### Examples ```pycon >>> from eegdash.dataset import NM000254 >>> dataset = NM000254(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000254) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000255: eeg dataset, 30 subjects *The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2* Access recordings and metadata through EEGDash. **Citation:** Jens Madsen, Nikhil Kuppa, Lucas Parra (—). *The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2*. Modality: eeg Subjects: 30 Recordings: 291 License: CC BY 4.0 Source: nemar Metadata: Complete (90%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000255 dataset = NM000255(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000255(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000255( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000255, title = {The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2}, author = {Jens Madsen and Nikhil Kuppa and Lucas Parra}, } ``` ## About This Dataset **The Brain, Body, and Behaviour Dataset - Experiment 2** **Summary:** *Description:* Subjects watched five videos, knowing they’d be tested afterward. 
After each video, they answered 11 to 12 factual multiple-choice questions. Videos and questions were presented in random order. *Subjects:* 31, *Sessions:* 2

1. *Attentive* - Watch videos with focus and answer questions after
2. *Distracted* - Watch videos while counting backwards in your head, no test after watching

### View full README

**The Brain, Body, and Behaviour Dataset - Experiment 2**

**Summary:** *Description:* Subjects watched five videos, knowing they’d be tested afterward. After each video, they answered 11 to 12 factual multiple-choice questions. Videos and questions were presented in random order. *Subjects:* 31, *Sessions:* 2

1. *Attentive* - Watch videos with focus and answer questions after
2. *Distracted* - Watch videos while counting backwards in your head, no test after watching

**Tasks (Stimuli)**

**Experiment 2**

```text
| **Stimulus ID** | **Name**                                 | **URL**                                                 |
|-----------------|------------------------------------------|---------------------------------------------------------|
| Stim-01         | Why are Stars Star-Shaped                | [Watch Here](https://www.youtube.com/embed/VVAKFJ8VVp4) |
| Stim-02         | How Modern Light Bulbs Work              | [Watch Here](https://www.youtube.com/embed/oCEKMEeZXug) |
| Stim-03         | The Immune System Explained – Bacteria   | [Watch Here](https://www.youtube.com/embed/zQGOcOUBi6s) |
| Stim-04         | Who Invented the Internet - And Why      | [Watch Here](https://www.youtube.com/embed/21eFwbb48sE) |
| Stim-05         | Why Do We Have More Boys Than Girls      | [Watch Here](https://www.youtube.com/embed/3IaYhG11ckA) |
```

*Modalities Recorded:*

- Gaze (X, Y)
- Pupil Size
- Blinks
- Saccades
- EOG
- EEG
- ECG

**Experiment Setup:** In experiment 2, subjects watched 5 different informative videos without knowing they would be questioned about the content of the videos, all together, at the end of video-watching.
This experiment was **incidental** learning because the subjects did not know they would be questioned about the contents of the videos. We collected ECG, EEG, EOG, Head Motion, Gaze Coordinates, and Pupil Size.

Sessions:

- Ses-01 contains these signals recorded on subjects watching these videos in an attentive condition. They answered questions pertaining to each video after watching the videos one at a time.
- Ses-02 contains the data recorded on subjects watching all of the videos again in a distracted condition, in the same order as Ses-01 described above. The distraction from the stimuli was to silently count backwards from a random prime number in steps of 7. No questions were asked after this session.

**Questionnaires:** Questionnaires and their answers can be found in the phenotype/ directory.

1. *stimuli_questionnaire:* The stimuli_questionnaire tsv and json files have the questions, answers, and correct answers.
   - “Domain” questions are general domain-knowledge questions asked before the subject watched a video.
   - “Memory” questions are memory-testing questions asked after the subject finishes watching a video, pertaining directly to the video content.
2. *asrs_questionnaire:* These tsv and json files have questions and answers for the ASRS questionnaire, an adult ADHD symptom checklist. The answers are options from 0 (never) to 4 (very often).
   - Two different scales are used to score this test, and the test has 2 parts (Part A, Part B). The screen test involves just the first 6 questions (Part A), while the full test involves the entire 18-question test (Part A + Part B). Despite having fewer questions, the screen test (the first 6 questions) gives a stronger indication of ADHD prevalence than the full test.
   - For this reason, both scoring scales are based on the screen test.
Scale one is out of 6: each question contributes 1 point, depending on the frequency of symptom occurrence, and a score over 4 indicates a high chance of ADHD. The second scale is out of 24: the 5 symptom-frequency options (never, rarely, sometimes, often, very often) are assigned increasing scores from 0 to 4, and each question scores the value of the selected frequency. On this scale, the threshold for high ADHD prevalence is 18 out of 24.

The question numbers for inattentive ADHD and hyperactive ADHD are: `inattentive_questions = [1,2,3,4,7,8,9,10,11,12]`, `hyperactive_questions = [5,6,13,14,15,16,17,18]`.

**General BIDS dataset structure overview for all MEVD experiments**

Each BIDS dataset (one per experiment) has files that describe the dataset, its participants, and related metadata at the root directory - dataset_description.json, participants.tsv, and participants.json - providing essential information about the study and participants to anyone working with the dataset. In the root directory, the raw data of each participant is organized by subject (sub-XX) and then further divided into sessions (ses-XX) to accommodate multi-session data collection (some experiments have 2 sessions: attentive and distracted). Inside each session folder, you’ll find modality-specific subfolders, such as:

- **eeg**: Contains electroencephalogram data files (.bdf), event logs (events.tsv), and additional metadata (.json) that describe the experiment and recording conditions.
- **beh**: Contains physiological recordings like ECG (electrocardiogram) and EOG (electrooculogram) stored in compressed .tsv.gz files, with accompanying metadata in .json files.
- **eyetrack**: Contains physiological recordings like eye-tracking (gaze coordinates and pupil size), and head movement data, stored in compressed .tsv.gz files, with accompanying metadata in .json files.
```text | Modality | Filename Format | Data File Extension | Metadata | Notes | |-----------|---------------------------------------------------------------------|---------------------|----------------------------------------------|--------------------------------------------------------------------------------------------| | EEG | `sub_xx-ses_xx-task-stimxx_{file_of_interest.extension}` | `.bdf` | Exists for each file as a `.json` file | There are event files with a `.tsv` extension that include start and end times in seconds.| | Beh | `sub_xx-ses_xx-task-stimxx_recording-{modality}_physio.{extension}` | `.tsv.gz` | Exists for each file as a `.json` file | | | EyeTrack | `sub_xx-ses_xx-task-stimxx_{modality}_eyetrack.{extension}` | `.tsv.gz` | Exists for each file as a `.json` file | | ``` There is also a **derivatives** directory which contains preprocessed data derived from the raw recordings, such as filtered heart rate data or preprocessed physiological signals, making it easy to work with and apply advanced analyses. Files in this directory are also stored in the BIDS structure (subject-wise → session-wise → modality-wise). A brief overview on what you can expect in the derivatives directory: - **eeg**: Contains filtered electroencephalogram (EEG) data files (.bdf). - **beh**: Contains heart beats (r-peak timestamps synchronized with the stimulus), heart-rates, filtered ECG, breath-rates, and they are stored in compressed .tsv.gz files. - **eyetrack**: Contains saccades (timestamps), saccade-rates, fixations (timestamps), fixation-rates, blinks (timestamps), blink-rates, and filtered pupil and gaze files all stored in compressed .tsv.gz files. **How to Navigate the Dataset** - **Top-Level Files**: Files like dataset_description.json and participants.tsv give you an overview of the study and participants, serving as your starting point when exploring a dataset. 
- **Derivatives Folder**: In this folder, processed data is organized in the same way as the raw data in the BIDS directory.
- **Subject Folders (sub-XX)**: Inside these folders, data is organized by individual participant, providing a separate directory for each person involved in the study.
- **Session Folders (ses-XX)**: For longitudinal or multi-session studies, session folders contain the raw and derived data for each session, making it easy to track and analyze data collected over time.
- **Modality-Specific Subfolders**: Each session is further split into subfolders according to data modality (e.g., EEG, behavior/physio), helping you quickly locate the data of interest, whether it’s brain recordings, heart rate data, or eye movement information.
- **Tasks**: Each subject is exposed to different stimuli during data collection, and BIDS uses “tasks” in filenames to clearly differentiate recordings based on these experimental conditions. This makes it easy to identify, retrieve, and analyze data associated with specific tasks across multiple modalities.
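The folder layout described above can be walked with a few lines of standard-library Python. This is a sketch under stated assumptions: the `sub-XX/ses-XX/<modality>/` layout and the extension map come from the description above, while the helper name and signature are illustrative, not part of EEGDash.

```python
from pathlib import Path

# Illustrative extension map, based on the modality table above.
EXTENSIONS = {"eeg": "*.bdf", "beh": "*.tsv.gz", "eyetrack": "*.tsv.gz"}

def list_recordings(root, subject, session, modality="eeg"):
    """List data files for one subject/session/modality in the
    sub-XX/ses-XX/<modality>/ layout described above."""
    folder = Path(root) / f"sub-{subject}" / f"ses-{session}" / modality
    if not folder.is_dir():
        return []
    return sorted(folder.glob(EXTENSIONS[modality]))
```

The same pattern extends to the derivatives directory, since it mirrors the raw subject/session/modality hierarchy.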
**Dataset Overview**

The table below breaks down the total hours of raw data available for each session across the complete dataset:

```text
| Modality    | Hours  |
|-------------|--------|
| EEG         | 65     |
| ECG         | 93.5   |
| EOG         | 94.35  |
| Head        | 92.46  |
| Gaze        | 110.56 |
| Pupil       | 110.56 |
| Respiration | 44.2   |
```

## Dataset Information

| Dataset ID | `NM000255` |
|----------------|------------|
| Title | The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2 |
| Author (year) | `Madsen2024_E2` |
| Canonical | — |
| Importable as | `NM000255`, `Madsen2024_E2` |
| Year | — |
| Authors | Jens Madsen, Nikhil Kuppa, Lucas Parra |
| License | CC BY 4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000255) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000255) | [Source URL](https://nemar.org/dataexplorer/detail/nm000255) |

## Technical Details

- Subjects: 30
- Recordings: 291
- Tasks: 5
- Channels: 64
- Sampling rate (Hz): 128
- Duration (hours): 0.0404166666666666
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 5.3 GB
- File count: 291
- Format: BIDS
- License: CC BY 4.0
- DOI: —
- Source: nemar
- OpenNeuro: [nm000255](https://openneuro.org/datasets/nm000255)
- NeMAR: [nm000255](https://nemar.org/dataexplorer/detail?dataset_id=nm000255)

## API Reference

Use the `NM000255` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000255(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2 * **Study:** `nm000255` (NeMAR) * **Author (year):** `Madsen2024_E2` * **Canonical:** — Also importable as: `NM000255`, `Madsen2024_E2`. Modality: `eeg`. Subjects: 30; recordings: 291; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000255](https://openneuro.org/datasets/nm000255) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000255](https://nemar.org/dataexplorer/detail?dataset_id=nm000255) ### Examples ```pycon >>> from eegdash.dataset import NM000255 >>> dataset = NM000255(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000255) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000255) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000256: eeg dataset, 29 subjects *The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3* Access recordings and metadata through EEGDash. **Citation:** Jens Madsen, Nikhil Kuppa, Lucas Parra (—). *The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3*. 
Modality: eeg Subjects: 29 Recordings: 332 License: CC BY 4.0 Source: nemar Metadata: Complete (90%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000256

dataset = NM000256(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = NM000256(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000256(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{nm000256,
  title  = {The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3},
  author = {Jens Madsen and Nikhil Kuppa and Lucas Parra},
}
```

## About This Dataset

**The Brain, Body, and Behaviour Dataset - Experiment 3**

**Summary:** *Description:* Subjects watched six videos, knowing they’d be tested afterward. After each video, they answered 11 to 12 factual multiple-choice questions. Videos and questions were presented in random order. *Subjects:* 29, *Sessions:* 2

1. *Attentive* - Watch videos with focus and answer questions after
2. *Distracted* - Watch videos while counting backwards in your head, no test after watching

### View full README

**The Brain, Body, and Behaviour Dataset - Experiment 3**

**Summary:** *Description:* Subjects watched six videos, knowing they’d be tested afterward. After each video, they answered 11 to 12 factual multiple-choice questions. Videos and questions were presented in random order. *Subjects:* 29, *Sessions:* 2

1. *Attentive* - Watch videos with focus and answer questions after
2.
*Distracted* - Watch videos while counting backwards in your head, no test after watching

**Tasks (Stimuli)**

**Experiment 3**

```text
| **Stimulus ID** | **Name**                                             | **URL**                                                 |
|-----------------|------------------------------------------------------|---------------------------------------------------------|
| Stim-01         | What If We Killed All the Mosquitoes                 | [Watch Here](https://www.youtube.com/embed/9w-5wJYVmcw) |
| Stim-02         | Are We All Related                                   | [Watch Here](https://www.youtube.com/embed/mnYSMhR3jCI) |
| Stim-03         | Work and the Work-Energy Principle                   | [Watch Here](https://www.youtube.com/embed/30o4omX5qfo) |
| Stim-04         | Dielectrics in Capacitors Circuits                   | [Watch Here](https://www.youtube.com/embed/rkntp3_cZl4) |
| Stim-05         | How Do People Measure Planets & Suns                 | [Watch Here](https://www.youtube.com/embed/bYgV9nvgJ3E) |
| Stim-06         | Three Factors That May Alter the Action of an Enzyme | [Watch Here](https://www.youtube.com/embed/lkRZKqDdwzU) |
```

*Modalities Recorded:*

- Gaze (X, Y)
- Pupil Size
- Blinks
- Saccades
- Head Position (X, Y, Z)
- EOG
- EEG
- ECG

**Experiment Setup:** In experiment 3, subjects were aware, as each video was presented, that they would be tested on its material (**intentional learning**). We tested 6 informational video stimuli, 2 from each of 3 presentation styles, to understand whether the video design would have an impact on the attention retention of students. The video styles tested were:

1. Presenter and animation
2. Presenter and glassboard
3. Animation and writing hand

We collected ECG, EEG, EOG, Head Motion, Gaze Coordinates, and Pupil Size.

Sessions:

- Ses-01 contains these signals recorded on subjects watching these videos in an attentive condition. They answered questions pertaining to each video after watching the videos one at a time.
- Ses-02 contains the data recorded on subjects watching all of the videos again in a distracted condition, in the same order as Ses-01 described above.
The distraction from the stimuli was to silently count backwards from a random prime number in steps of 7. No questions were asked after this session.

**Questionnaires:** Questionnaires and their answers can be found in the phenotype/ directory.

1. *stimuli_questionnaire:* The stimuli_questionnaire tsv and json files have the questions, answers, and correct answers.
   - “Domain” questions are general domain-knowledge questions asked before the subject watched a video.
   - “Memory” questions are memory-testing questions asked after the subject finishes watching a video, pertaining directly to the video content.
2. *asrs_questionnaire:* These tsv and json files have questions and answers for the ASRS questionnaire, an adult ADHD symptom checklist. The answers are options from 0 (never) to 4 (very often).
   - Two different scales are used to score this test, and the test has 2 parts (Part A, Part B). The screen test involves just the first 6 questions (Part A), while the full test involves the entire 18-question test (Part A + Part B). Despite having fewer questions, the screen test (the first 6 questions) gives a stronger indication of ADHD prevalence than the full test.
   - For this reason, both scoring scales are based on the screen test. Scale one is out of 6: each question contributes 1 point, depending on the frequency of symptom occurrence, and a score over 4 indicates a high chance of ADHD. The second scale is out of 24: the 5 symptom-frequency options (never, rarely, sometimes, often, very often) are assigned increasing scores from 0 to 4, and each question scores the value of the selected frequency. On this scale, the threshold for high ADHD prevalence is 18 out of 24.
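The two scoring rules just described can be sketched in a few lines of Python. This is an illustration only: the function names are ours, and the per-question threshold used by the 6-point screen scale is an assumption, since the README does not spell it out.

```python
# Frequency options and their 0-4 scores, as described in the README.
FREQ_SCORES = {"never": 0, "rarely": 1, "sometimes": 2, "often": 3, "very often": 4}

def asrs_screen_out_of_6(part_a, threshold=2):
    """Scale one: 1 point per Part A question whose frequency meets a
    threshold (threshold=2, i.e. 'sometimes' or above, is an assumption);
    a total over 4 suggests high ADHD prevalence."""
    total = sum(1 for r in part_a if FREQ_SCORES[r] >= threshold)
    return total, total > 4

def asrs_screen_out_of_24(part_a):
    """Scale two: sum the 0-4 frequency scores over the 6 Part A
    questions (max 24); the README's threshold is 18."""
    total = sum(FREQ_SCORES[r] for r in part_a)
    return total, total >= 18
```

For example, six “very often” answers score (6, True) on the first scale and (24, True) on the second.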
The question numbers for inattentive ADHD and hyperactive ADHD are: `inattentive_questions = [1,2,3,4,7,8,9,10,11,12]`, `hyperactive_questions = [5,6,13,14,15,16,17,18]`.

**General BIDS dataset structure overview for all MEVD experiments**

Each BIDS dataset (one per experiment) has files that describe the dataset, its participants, and related metadata at the root directory - dataset_description.json, participants.tsv, and participants.json - providing essential information about the study and participants to anyone working with the dataset. In the root directory, the raw data of each participant is organized by subject (sub-XX) and then further divided into sessions (ses-XX) to accommodate multi-session data collection (some experiments have 2 sessions: attentive and distracted). Inside each session folder, you’ll find modality-specific subfolders, such as:

- **eeg**: Contains electroencephalogram data files (.bdf), event logs (events.tsv), and additional metadata (.json) that describe the experiment and recording conditions.
- **beh**: Contains physiological recordings like ECG (electrocardiogram) and EOG (electrooculogram) stored in compressed .tsv.gz files, with accompanying metadata in .json files.
- **eyetrack**: Contains physiological recordings like eye-tracking (gaze coordinates and pupil size), and head movement data, stored in compressed .tsv.gz files, with accompanying metadata in .json files.
```text | Modality | Filename Format | Data File Extension | Metadata | Notes | |-----------|---------------------------------------------------------------------|---------------------|----------------------------------------------|--------------------------------------------------------------------------------------------| | EEG | `sub_xx-ses_xx-task-stimxx_{file_of_interest.extension}` | `.bdf` | Exists for each file as a `.json` file | There are event files with a `.tsv` extension that include start and end times in seconds.| | Beh | `sub_xx-ses_xx-task-stimxx_recording-{modality}_physio.{extension}` | `.tsv.gz` | Exists for each file as a `.json` file | | | EyeTrack | `sub_xx-ses_xx-task-stimxx_{modality}_eyetrack.{extension}` | `.tsv.gz` | Exists for each file as a `.json` file | | ``` There is also a **derivatives** directory which contains preprocessed data derived from the raw recordings, such as filtered heart rate data or preprocessed physiological signals, making it easy to work with and apply advanced analyses. Files in this directory are also stored in the BIDS structure (subject-wise → session-wise → modality-wise). A brief overview on what you can expect in the derivatives directory: - **eeg**: Contains filtered electroencephalogram (EEG) data files (.bdf). - **beh**: Contains heart beats (r-peak timestamps synchronized with the stimulus), heart-rates, filtered ECG, breath-rates, and they are stored in compressed .tsv.gz files. - **eyetrack**: Contains saccades (timestamps), saccade-rates, fixations (timestamps), fixation-rates, blinks (timestamps), blink-rates, and filtered pupil and gaze files all stored in compressed .tsv.gz files. **How to Navigate the Dataset** - **Top-Level Files**: Files like dataset_description.json and participants.tsv give you an overview of the study and participants, serving as your starting point when exploring a dataset. 
- **Derivatives Folder**: In this folder, processed data is organized in the same way as the raw data in the BIDS directory.
- **Subject Folders (sub-XX)**: Inside these folders, data is organized by individual participant, providing a separate directory for each person involved in the study.
- **Session Folders (ses-XX)**: For longitudinal or multi-session studies, session folders contain the raw and derived data for each session, making it easy to track and analyze data collected over time.
- **Modality-Specific Subfolders**: Each session is further split into subfolders according to data modality (e.g., EEG, behavior/physio), helping you quickly locate the data of interest, whether it’s brain recordings, heart rate data, or eye movement information.
- **Tasks**: Each subject is exposed to different stimuli during data collection, and BIDS uses “tasks” in filenames to clearly differentiate recordings based on these experimental conditions. This makes it easy to identify, retrieve, and analyze data associated with specific tasks across multiple modalities.
**Dataset Overview**

The table below breaks down the total hours of raw data available for each session across the complete dataset:

```text
| Modality    | Hours  |
|-------------|--------|
| EEG         | 65     |
| ECG         | 93.5   |
| EOG         | 94.35  |
| Head        | 92.46  |
| Gaze        | 110.56 |
| Pupil       | 110.56 |
| Respiration | 44.2   |
```

## Dataset Information

| Dataset ID | `NM000256` |
|----------------|------------|
| Title | The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3 |
| Author (year) | `Madsen2024_E3` |
| Canonical | — |
| Importable as | `NM000256`, `Madsen2024_E3` |
| Year | — |
| Authors | Jens Madsen, Nikhil Kuppa, Lucas Parra |
| License | CC BY 4.0 |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/nm000256) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000256) | [Source URL](https://nemar.org/dataexplorer/detail/nm000256) |

## Technical Details

- Subjects: 29
- Recordings: 332
- Tasks: 6
- Channels: 64
- Sampling rate (Hz): 128
- Duration (hours): 0.0461111111111111
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: 7.5 GB
- File count: 332
- Format: BIDS
- License: CC BY 4.0
- DOI: —
- Source: nemar
- OpenNeuro: [nm000256](https://openneuro.org/datasets/nm000256)
- NeMAR: [nm000256](https://nemar.org/dataexplorer/detail?dataset_id=nm000256)

## API Reference

Use the `NM000256` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3 * **Study:** `nm000256` (NeMAR) * **Author (year):** `Madsen2024_E3` * **Canonical:** — Also importable as: `NM000256`, `Madsen2024_E3`. Modality: `eeg`. Subjects: 29; recordings: 332; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000256](https://openneuro.org/datasets/nm000256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000256](https://nemar.org/dataexplorer/detail?dataset_id=nm000256) ### Examples ```pycon >>> from eegdash.dataset import NM000256 >>> dataset = NM000256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000256) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000256) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000259: eeg dataset, 10 subjects *Speier2017* Access recordings and metadata through EEGDash. **Citation:** William Speier, Corey Arnold, Aniket Deshpande, Nader Pouratian (2017). *Speier2017*. 
[10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382)

Modality: eeg Subjects: 10 Recordings: 60 License: CC0 Source: nemar Metadata: Complete (100%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import NM000259

dataset = NM000259(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = NM000259(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = NM000259(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{nm000259,
  title  = {Speier2017},
  author = {William Speier and Corey Arnold and Aniket Deshpande and Nader Pouratian},
  doi    = {10.1371/journal.pone.0175382},
  url    = {https://doi.org/10.1371/journal.pone.0175382},
}
```

## About This Dataset

**Speier2017**

P300 speller dataset from Speier et al. (2017).

**Dataset Overview**

> Code: Speier2017
> Paradigm: p300
> DOI: 10.1371/journal.pone.0175382

### View full README

**Speier2017**

P300 speller dataset from Speier et al. (2017).
**Dataset Overview**

> Code: Speier2017
> Paradigm: p300
> DOI: 10.1371/journal.pone.0175382
> Subjects: 10
> Sessions per subject: 2
> Events: Target=2, NonTarget=1
> Trial interval: [0, 0.8] s
> Runs per session: 3
> File format: BCI2000

**Acquisition**

> Sampling rate: 256.0 Hz
> Number of channels: 32
> Channel types: eeg=32
> Channel names: Fz, FC1, FCz, FC2, FC4, FC6, C4, C6, CP4, CP6, FC3, FC5, C3, C5, CP3, CP5, CP1, P1, Cz, CPz, Pz, POz, CP2, P2, PO7, PO3, O1, Oz, Iz, O2, PO4, PO8
> Montage: standard_1005
> Hardware: g.tec amplifier
> Reference: left ear
> Ground: AFz
> Line frequency: 60.0 Hz

**Participants**

> Number of subjects: 10
> Health status: healthy
> Age: min=20, max=35
> Species: human

**Experimental Protocol**

> Paradigm: p300
> Number of classes: 2
> Class labels: Target, NonTarget
> Trial duration: 1.0 s
> Study design: P300 row-column speller; 2 stimulus conditions (Famous Faces, Inverting); 6x6 character matrix
> Feedback type: visual
> Stimulus type: flash / famous face overlay
> Stimulus modalities: visual
> Primary modality: visual
> Mode: online

**HED Event Annotations**

> Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser)

> Target

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Target
```

> NonTarget

```text
├─ Sensory-event
├─ Experimental-stimulus
├─ Visual-presentation
└─ Non-target
```

**Paradigm-Specific Parameters**

> Detected paradigm: p300
> Inter-stimulus interval: 25.0 ms
> Stimulus onset asynchrony: 125.0 ms

**Data Structure**

> Trials: ~1200 flashes per training run (10 characters x 10 sequences x 12 row/column flashes of the 6x6 matrix)
> Trials context: per_run

**Tags**

> Pathology: Healthy
> Modality: ERP
> Type: P300

**Documentation**

> DOI: 10.1371/journal.pone.0175382
> License: CC0
> Investigators: William Speier, Corey Arnold, Aniket Deshpande, Nader Pouratian
> Institution: University of California, Los Angeles
> Country: US
> Data URL:
[https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PHHHB6](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/PHHHB6)

> Publication year: 2017

**References**

Speier, W., Deshpande, A., & Pouratian, N. (2017). A comparison of stimulus types in online classification of the P300 speller using language models. PLoS ONE, 12(4), e0175382. [https://doi.org/10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382)

Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896)

Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000259` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Speier2017 | | Author (year) | `Speier2017` | | Canonical | — | | Importable as | `NM000259`, `Speier2017` | | Year | 2017 | | Authors | William Speier, Corey Arnold, Aniket Deshpande, Nader Pouratian | | License | CC0 | | Citation / DOI | [doi:10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000259) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) | [Source URL](https://nemar.org/dataexplorer/detail/nm000259) | ### Copy-paste BibTeX ```bibtex @dataset{nm000259, title = {Speier2017}, author = {William Speier and Corey Arnold and Aniket Deshpande and Nader Pouratian}, doi = {10.1371/journal.pone.0175382}, url = {https://doi.org/10.1371/journal.pone.0175382}, } ``` ## Technical Details - Subjects: 10 - Recordings: 60 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 - Duration (hours): 3.3766015625 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 290.2 MB - File count: 60 - Format: BIDS - License: CC0 - DOI: doi:10.1371/journal.pone.0175382 - Source: nemar - OpenNeuro: [nm000259](https://openneuro.org/datasets/nm000259) - NeMAR: [nm000259](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) ## API Reference Use the `NM000259` class to access this dataset programmatically. 
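Before the class reference, a note on trial extraction: the technical details above give a 256 Hz sampling rate and a [0, 0.8] s flash-locked trial interval, so each epoch spans about 205 samples. The following is a small, purely illustrative NumPy sketch of slicing such windows from a continuous array (fake data and fake event onsets; this is not the EEGDash API — MNE-Python's `Epochs` is the standard tool for real recordings):

```python
import numpy as np

# Illustrative epoching over this dataset's stated specs:
# 256 Hz sampling, [0, 0.8] s trial interval after each flash.
SFREQ = 256.0
TMIN, TMAX = 0.0, 0.8
n_samples = int(round((TMAX - TMIN) * SFREQ))  # round(204.8) -> 205

data = np.random.randn(32, 60 * 256)         # fake (n_channels, n_times) signal
event_samples = np.array([512, 1024, 2048])  # fake flash onsets, in samples

# One fixed-length window per event: (n_events, n_channels, n_samples)
epochs = np.stack([data[:, s : s + n_samples] for s in event_samples])
print(epochs.shape)  # (3, 32, 205)
```

In practice the same window would be applied to every Target and NonTarget flash before averaging or classification.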
### *class* eegdash.dataset.NM000259(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Speier2017 * **Study:** `nm000259` (NeMAR) * **Author (year):** `Speier2017` * **Canonical:** — Also importable as: `NM000259`, `Speier2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
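The Notes above say a user-supplied `query` is AND-combined with the dataset's own filter and must not contain the key `dataset`. As a plain-Python illustration of that contract (a hypothetical sketch, not EEGDash's actual implementation — `merge_query` is an invented name):

```python
# Hypothetical sketch of AND-merging a user query with the fixed
# dataset filter, mirroring the behavior described in the Notes.
def merge_query(dataset_id, user_query=None):
    user_query = user_query or {}
    if "dataset" in user_query:
        # The class reserves this key for its own filter.
        raise ValueError("query must not contain the key 'dataset'")
    # MongoDB treats sibling keys as an implicit AND.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("nm000259", {"subject": {"$in": ["01", "02"]}})
print(merged)
# {'dataset': 'nm000259', 'subject': {'$in': ['01', '02']}}
```

The merged dictionary is what a MongoDB-style backend would evaluate: records must match the dataset filter and every user-supplied condition.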
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000259](https://openneuro.org/datasets/nm000259) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000259](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) DOI: [https://doi.org/10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382) ### Examples ```pycon >>> from eegdash.dataset import NM000259 >>> dataset = NM000259(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000259) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000260: eeg dataset, 23 subjects *BrainInvaders2012* Access recordings and metadata through EEGDash. **Citation:** G.F.P. Van Veen, A. Barachant, A. Andreev, G. Cattan, P. Rodrigues, M. Congedo (2019). *BrainInvaders2012*. 
[10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) Modality: eeg Subjects: 23 Recordings: 46 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000260 dataset = NM000260(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000260(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000260( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000260, title = {BrainInvaders2012}, author = {G.F.P. Van Veen and A. Barachant and A. Andreev and G. Cattan and P. Rodrigues and M. Congedo}, doi = {10.5281/zenodo.2649006}, url = {https://doi.org/10.5281/zenodo.2649006}, } ``` ## About This Dataset **BrainInvaders2012** P300 dataset BI2012 from a “Brain Invaders” experiment. **Dataset Overview** > Code: BrainInvaders2012 > Paradigm: p300 > DOI: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) ### View full README **BrainInvaders2012** P300 dataset BI2012 from a “Brain Invaders” experiment. 
**Dataset Overview** > Code: BrainInvaders2012 > Paradigm: p300 > DOI: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) > Subjects: 25 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1] s > Runs per session: 2 > Session IDs: 0 > File format: mat, csv > Contributing labs: GIPSA-lab **Acquisition** > Sampling rate: 128.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Channel names: F7, F3, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2 > Montage: standard_1020 > Hardware: NeXus-32 (MindMedia/TMSi) > Software: OpenVibe > Reference: hardware common average reference > Ground: FZ > Sensor type: EEG > Line frequency: 50.0 Hz > Electrode type: wet > Electrode material: Silver/Silver Chloride **Participants** > Number of subjects: 25 > Health status: healthy > Age: mean=24.4, std=2.76, min=21, max=31 > BCI experience: half played games occasionally (around 4.5 hours a week) > Species: human **Experimental Protocol** > Paradigm: p300 > Task type: brain_invaders > Number of classes: 2 > Class labels: Target, NonTarget > Study design: longitudinal and transversal design with training-test mode of operation > Feedback type: visual (game interface) > Stimulus type: visual flashes of alien groups > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: both > Training/test split: True > Instructions: limit eye blinks, head movements and face muscular contractions; silently count the number of Target flashes > Stimulus presentation: repetition_structure=12 flashes per repetition (2 Target, 10 non-Target), flash_groups=12 groups of 6 aliens (36 total aliens), target_ratio=1:5 (Target to non-Target), screen_distance=75 to 115 cm **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target 
``` > NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 > Number of repetitions: 8 **Data Structure** > Trials: {‘Target’: 128, ‘non-Target’: 640} > Trials per class: Target=128, non-Target=640 > Trials context: per session (Training session); variable in Online session depending on user performance **Preprocessing** > Data state: raw EEG with software tagging (note: tagging introduces jitter and latency) > Preprocessing applied: False > Notes: Software tagging introduces jitter and latency that artificially shift the ERP onsets. Strong drift over time results in higher jitter. When latency is not corrected, ERPs can only be compared within the same experimental conditions. **Signal Processing** > Classifiers: xDAWN, Riemannian > Feature extraction: Covariance/Riemannian, xDAWN > Spatial filters: xDAWN **Performance (Original Study)** **BCI Application** > Applications: gaming > Environment: laboratory > Online feedback: True **Tags** > Pathology: Healthy > Modality: Visual > Type: Perception **Documentation** > Description: EEG recordings of 25 subjects testing the Brain Invaders, a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders > DOI: 10.5281/zenodo.2649006 > Associated paper DOI: 10.5281/zenodo.2649006 > License: CC-BY-4.0 > Investigators: G.F.P. Van Veen, A. Barachant, A. Andreev, G. Cattan, P. Rodrigues, M. Congedo > Senior author: M. Congedo > Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP > Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France > Country: FR > Repository: Zenodo > Data URL: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) > Publication year: 2019 > Acknowledgements: All subjects were volunteers recruited by means of flyers and of the mailing list of the University of Grenoble-Alpes.
All participants provided written informed consent confirming the notification of the experimental process, the data management procedures and the right to withdraw from the experiment at any moment. > Keywords: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment **Abstract** We describe the experimental procedures for a dataset that we have made publicly available at [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (1), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded by 16 electrodes in an experiment that took place in the GIPSA-lab, Grenoble, France, in 2012 (2,3). Python code for manipulating the data is available at [https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA](https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA). The ID of this dataset is BI.EEG.2012-GIPSA. **Methodology** The visual P300 is an event-related potential (ERP) elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. The experiment features a training-test mode of operation and both a longitudinal and transversal design. Training session: Target alien chosen randomly at each repetition, 8 Targets total, 8 repetitions each, resulting in 128 Target trials and 640 non-Target flashes. Online session: consisted of three levels with different distractor configurations, minimum 3.5 minutes per level, counter-balanced order across subjects. Interface: 36 aliens flashing in 12 groups of 6, each repetition has 12 flashes (2 Target, 10 non-Target). P300 peak latency: 240-600 ms post-stimulus. **References** Van Veen, G., Barachant, A., Andreev, A., Cattan, G., Rodrigues, P. C., & Congedo, M. 
(2019). Building Brain Invaders: EEG data of an experimental validation. arXiv preprint arXiv:1905.05182. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) https://github.com/NeuroTechX/moabb ## Dataset Information | Dataset ID | `NM000260` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BrainInvaders2012 | | Author (year) | `BrainInvaders2012` | | Canonical | `BI2012`, `BrainInvaders` | | Importable as | `NM000260`, `BrainInvaders2012`, `BI2012`, `BrainInvaders` | | Year | 2019 | | Authors | G.F.P. Van Veen, A. Barachant, A. Andreev, G. Cattan, P. Rodrigues, M. Congedo | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000260) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) | [Source URL](https://nemar.org/dataexplorer/detail/nm000260) | ### Copy-paste BibTeX ```bibtex @dataset{nm000260, title = {BrainInvaders2012}, author = {G.F.P. Van Veen and A. Barachant and A. Andreev and G. Cattan and P. Rodrigues and M. 
Congedo}, doi = {10.5281/zenodo.2649006}, url = {https://doi.org/10.5281/zenodo.2649006}, } ``` ## Technical Details - Subjects: 23 - Recordings: 46 - Tasks: 1 - Channels: 17 - Sampling rate (Hz): 128.0 - Duration (hours): 7.122400173611111 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 164.8 MB - File count: 46 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.5281/zenodo.2649006 - Source: nemar - OpenNeuro: [nm000260](https://openneuro.org/datasets/nm000260) - NeMAR: [nm000260](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) ## API Reference Use the `NM000260` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2012 * **Study:** `nm000260` (NeMAR) * **Author (year):** `BrainInvaders2012` * **Canonical:** `BI2012`, `BrainInvaders` Also importable as: `NM000260`, `BrainInvaders2012`, `BI2012`, `BrainInvaders`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000260](https://openneuro.org/datasets/nm000260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000260](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) DOI: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) ### Examples ```pycon >>> from eegdash.dataset import NM000260 >>> dataset = NM000260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000260) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000264: eeg dataset, 24 subjects *BrainInvaders2013a* Access recordings and metadata through EEGDash. **Citation:** E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo (2019). *BrainInvaders2013a*. 
[10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) Modality: eeg Subjects: 24 Recordings: 292 License: CC-BY-1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000264 dataset = NM000264(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000264(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000264( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000264, title = {BrainInvaders2013a}, author = {E. Vaineau and A. Barachant and A. Andreev and P. Rodrigues and G. Cattan and M. Congedo}, doi = {10.5281/zenodo.1494163}, url = {https://doi.org/10.5281/zenodo.1494163}, } ``` ## About This Dataset **BrainInvaders2013a** P300 dataset BI2013a from a “Brain Invaders” experiment. **Dataset Overview** > Code: BrainInvaders2013a > Paradigm: p300 > DOI: [https://doi.org/10.5281/zenodo.2669187](https://doi.org/10.5281/zenodo.2669187) ### View full README **BrainInvaders2013a** P300 dataset BI2013a from a “Brain Invaders” experiment. 
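One metric worth understanding before reading the performance notes in the README below: the original study reports balanced accuracy because Target and non-Target flashes are imbalanced roughly 1:5, so plain accuracy rewards always predicting the majority class. A minimal sketch (illustrative only; scikit-learn's `balanced_accuracy_score` is the standard implementation):

```python
# Balanced accuracy = mean of per-class recall, so a majority-class
# classifier scores 0.5 on a binary task regardless of class imbalance.
def balanced_accuracy(y_true, y_pred):
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# With the dataset's ~1:5 Target/non-Target ratio, always predicting
# "NonTarget" is ~83% accurate but only 0.5 balanced accuracy:
y_true = ["Target"] * 2 + ["NonTarget"] * 10
y_pred = ["NonTarget"] * 12
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

This is why balanced accuracy, not raw accuracy, is the meaningful number for P300 flash classification.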
**Dataset Overview** > Code: BrainInvaders2013a > Paradigm: p300 > DOI: [https://doi.org/10.5281/zenodo.2669187](https://doi.org/10.5281/zenodo.2669187) > Subjects: 24 > Sessions per subject: 8 > Events: Target=33285, NonTarget=33286 > Trial interval: [0, 1] s > Runs per session: 2 > File format: mat, csv, gdf > Contributing labs: GIPSA-lab **Acquisition** > Sampling rate: 512.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Channel names: Fp1, Fp2, F5, AFz, F6, T7, Cz, T8, P7, P3, Pz, P4, P8, O1, Oz, O2 > Montage: standard_1020 > Hardware: g.USBamp (g.tec, Schiedlberg, Austria) > Software: OpenVibe > Reference: left earlobe > Ground: FZ > Sensor type: wet Silver/Silver Chloride electrodes > Line frequency: 50.0 Hz > Online filters: no digital filter applied > Cap manufacturer: g.tec > Cap model: g.GAMMAcap > Electrode type: wet > Electrode material: Silver/Silver Chloride **Participants** > Number of subjects: 24 > Health status: healthy > Age: mean=25.96, std=4.46, min=20.0, max=30.0 > Gender distribution: male=12, female=12 > BCI experience: volunteers recruited via flyers and university mailing list > Species: human **Experimental Protocol** > Paradigm: p300 > Task type: visual P300 BCI > Number of classes: 2 > Class labels: Target, NonTarget > Study design: compare P300-based BCI with and without adaptive calibration using Riemannian geometry; randomised order of runs (adaptive vs non-adaptive) > Feedback type: visual (Brain Invaders video game interface) > Stimulus type: visual flashes > Stimulus modalities: visual > Primary modality: visual > Mode: both > Training/test split: True > Instructions: destroy targets in Brain Invaders BCI video game > Stimulus presentation: distance_from_screen=75 to 115 cm, screen=ViewSonic 22 inch, flash_groups=36 symbols distributed in 12 groups **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ 
Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` > NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Data Structure** > Trials: {‘Training_Target’: 80, ‘Training_non-Target’: 400, ‘Online’: ‘variable (depends on user performance)’} > Trials context: per_phase **Preprocessing** > Data state: raw EEG with software tagging via USB (note: tagging introduces jitter and latency) > Preprocessing applied: False > Notes: Tags sent by application to amplifier through USB port and recorded as supplementary channel; tagging process identical in all experimental conditions **Signal Processing** > Classifiers: xDAWN, Riemannian, RMDM (Riemannian Minimum Distance to Mean) > Feature extraction: Covariance/Riemannian, xDAWN, common spatiotemporal pattern **Cross-Validation** > Evaluation type: cross_session **Performance (Original Study)** > Balanced Accuracy: used due to unbalanced classes (1:5 ratio Target to non-Target) **BCI Application** > Applications: gaming > Environment: small room (4 square meters) with one-way glass window for experimenter observation > Online feedback: True **Tags** > Pathology: Healthy > Modality: Visual > Type: Perception **Documentation** > Description: EEG recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment comparing adaptive vs non-adaptive calibration using Riemannian geometry > DOI: 10.5281/zenodo.1494163 > Associated paper DOI: 10.5281/zenodo.2649006 > License: CC-BY-1.0 > Investigators: E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M. Congedo > Senior author: M.
Congedo > Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP > Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France > Country: FR > Repository: Zenodo > Data URL: [https://doi.org/10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) > Publication year: 2019 > Ethics approval: Approved by the Ethical Committee of the University of Grenoble Alpes (Comité d’Ethique pour la Recherche Non-Interventionnelle) > Keywords: Electroencephalography (EEG), P300, Brain-Computer Interface, Experiment, Adaptive, Calibration **Abstract** This dataset contains electroencephalographic (EEG) recordings of 24 subjects doing a visual P300 Brain-Computer Interface experiment on PC. The visual P300 is an event-related potential elicited by visual stimulation, peaking 240-600 ms after stimulus onset. The experiment was designed to compare the use of a P300-based brain-computer interface with and without adaptive calibration using Riemannian geometry. EEG data were recorded using 16 electrodes during an experiment at GIPSA-lab, Grenoble, France, in 2013. **Methodology** Subjects participated in sessions with two runs (Non-Adaptive and Adaptive, randomised order). Each run had Training (calibration) and Online phases. In Non-Adaptive mode, Training data calibrated the MDM classifier for Online phase. In Adaptive mode, classifier initialized with generic class geometric means from previous experiment and continuously adapted using Riemannian method. Brain Invaders interface: 36 symbols in 12 groups, one repetition = 12 flashes (2 Target, 10 non-Target). Training phase: 80 Target and 400 non-Target flashes (fixed). Online phase: variable repetitions based on performance to destroy targets. Subjects blind to mode of operation. **References** Vaineau, E., Barachant, A., Andreev, A., Rodrigues, P. C., Cattan, G. & Congedo, M. (2019). Brain invaders adaptive versus non-adaptive P300 brain-computer interface dataset. 
arXiv preprint arXiv:1904.09111. Barachant A, Congedo M (2014) A Plug & Play P300 BCI using Information Geometry. arXiv:1409.0107. Congedo M, Goyat M, Tarrin N, Ionescu G, Rivet B, Varnet L, Phlypo R, Jrad N, Acquadro M, Jutten C (2011) “Brain Invaders”: a prototype of an open-source P300-based video game working with the OpenViBE platform. Proc. IBCI Conf., Graz, Austria, 280-283. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896 Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8 Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) https://github.com/NeuroTechX/moabb ## Dataset Information | Dataset ID | `NM000264` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | BrainInvaders2013a | | Author (year) | `BrainInvaders2013` | | Canonical | `BrainInvaders2013a`, `BI2013a` | | Importable as | `NM000264`, `BrainInvaders2013`, `BrainInvaders2013a`, `BI2013a` | | Year | 2019 | | Authors | E. Vaineau, A. Barachant, A. Andreev, P. Rodrigues, G. Cattan, M.
Congedo | | License | CC-BY-1.0 | | Citation / DOI | [doi:10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000264) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) | [Source URL](https://nemar.org/dataexplorer/detail/nm000264) | ### Copy-paste BibTeX ```bibtex @dataset{nm000264, title = {BrainInvaders2013a}, author = {E. Vaineau and A. Barachant and A. Andreev and P. Rodrigues and G. Cattan and M. Congedo}, doi = {10.5281/zenodo.1494163}, url = {https://doi.org/10.5281/zenodo.1494163}, } ``` ## Technical Details - Subjects: 24 - Recordings: 292 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 20.632897135416663 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.7 GB - File count: 292 - Format: BIDS - License: CC-BY-1.0 - DOI: doi:10.5281/zenodo.1494163 - Source: nemar - OpenNeuro: [nm000264](https://openneuro.org/datasets/nm000264) - NeMAR: [nm000264](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) ## API Reference Use the `NM000264` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2013a * **Study:** `nm000264` (NeMAR) * **Author (year):** `BrainInvaders2013` * **Canonical:** `BrainInvaders2013a`, `BI2013a` Also importable as: `NM000264`, `BrainInvaders2013`, `BrainInvaders2013a`, `BI2013a`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000264](https://openneuro.org/datasets/nm000264) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000264](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) DOI: [https://doi.org/10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) ### Examples ```pycon >>> from eegdash.dataset import NM000264 >>> dataset = NM000264(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000264) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000265: eeg dataset, 31 subjects *GuttmannFlury2025-MI* Access recordings and metadata through EEGDash. **Citation:** Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu (2025). *GuttmannFlury2025-MI*. [10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Modality: eeg Subjects: 31 Recordings: 126 License: CC0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000265 dataset = NM000265(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000265(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000265( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000265, title = {GuttmannFlury2025-MI}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## About This Dataset **GuttmannFlury2025-MI** Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025. 
**Dataset Overview** > Code: GuttmannFlury2025-MI > Paradigm: imagery > DOI: 10.1038/s41597-025-04861-9 ### View full README **GuttmannFlury2025-MI** Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al. 2025. **Dataset Overview** > Code: GuttmannFlury2025-MI > Paradigm: imagery > DOI: 10.1038/s41597-025-04861-9 > Subjects: 31 > Sessions per subject: 3 > Events: left_hand=1, right_hand=2 > Trial interval: [0, 4] s > File format: BDF **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 66 > Channel types: eeg=64, eog=1, stim=1 > Channel names: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2 > Montage: standard_1005 > Hardware: Neuroscan Quik-Cap 65-ch, SynAmps2 > Reference: right mastoid (M1) > Ground: forehead > Sensor type: Ag/AgCl > Line frequency: 50.0 Hz > Online filters: {'highpass_time_constant_s': 10} **Participants** > Number of subjects: 31 > Health status: healthy > Age: mean=28.3, min=20.0, max=57.0 > Gender distribution: female=11, male=20 > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 7.5 s > Study design: Multi-paradigm BCI (MI/ME/SSVEP/P300). MI and ME: 2-class hand grasping, 40 trials/session, up to 3 sessions per subject. 
> Feedback type: none > Stimulus type: visual rectangle cue > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand > Cue duration: 2.0 s > Imagery duration: 4.0 s **Data Structure** > Trials: 2520 > Trials context: 63 sessions x 40 trials = 2520 (MI only, default) **BCI Application** > Applications: motor_control > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: Motor > Type: Research **Documentation** > DOI: 10.1038/s41597-025-04861-9 > License: CC0 > Investigators: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu > Institution: Shanghai Jiao Tong University > Country: CN > Publication year: 2025 **References** Guttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000265` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | GuttmannFlury2025-MI | | Author (year) | `GuttmannFlury2025_MI` | | Canonical | — | | Importable as | `NM000265`, `GuttmannFlury2025_MI` | | Year | 2025 | | Authors | Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu | | License | CC0 | | Citation / DOI | [doi:10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000265) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) | [Source URL](https://nemar.org/dataexplorer/detail/nm000265) | ### Copy-paste BibTeX ```bibtex @dataset{nm000265, title = {GuttmannFlury2025-MI}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## Technical Details - Subjects: 31 - Recordings: 126 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 14.089965 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 9.2 GB - File count: 126 - Format: BIDS - License: CC0 - DOI: doi:10.1038/s41597-025-04861-9 - Source: nemar - OpenNeuro: [nm000265](https://openneuro.org/datasets/nm000265) - NeMAR: [nm000265](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) ## API Reference Use the `NM000265` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000265(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-MI * **Study:** `nm000265` (NeMAR) * **Author (year):** `GuttmannFlury2025_MI` * **Canonical:** — Also importable as: `NM000265`, `GuttmannFlury2025_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
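As a concrete illustration of the Notes above: `query` takes MongoDB-style operators and is ANDed with the class's own dataset filter, so it must not contain the `dataset` key. A minimal sketch, assuming `subject` and `session` are among the fields in `ALLOWED_QUERY_FIELDS` (the `try`/`except` merely keeps the snippet runnable when eegdash is not installed):

```python
# MongoDB-style filter as described in the Notes above. The field names
# ("subject", "session") are assumed to be in ALLOWED_QUERY_FIELDS; the
# class adds its own {"dataset": ...} clause, so "dataset" must not
# appear here.
query = {
    "subject": {"$in": ["01", "02", "03"]},
    "session": {"$ne": "3"},  # skip the third session
}
assert "dataset" not in query

try:
    from eegdash.dataset import NM000265
    dataset = NM000265(cache_dir="./data", query=query)  # fetches metadata
    print(f"{len(dataset)} matching recordings")
except ImportError:
    print("eegdash not installed; the query dict above is still illustrative")
```

The same dict shape works for any dataset class on this site, since they all forward `query` to `EEGDashDataset`.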
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000265](https://openneuro.org/datasets/nm000265) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000265](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000265 >>> dataset = NM000265(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000265) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000266: eeg dataset, 13 subjects *Sosulski2019* Access recordings and metadata through EEGDash. **Citation:** Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann (2019). *Sosulski2019*. 
[10.48550/arXiv.2109.06011](https://doi.org/10.48550/arXiv.2109.06011) Modality: eeg Subjects: 13 Recordings: 1060 License: CC-BY-SA-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000266 dataset = NM000266(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000266(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000266( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000266, title = {Sosulski2019}, author = {Jan Sosulski and David Hübner and Aaron Klein and Michael Tangermann}, doi = {10.48550/arXiv.2109.06011}, url = {https://doi.org/10.48550/arXiv.2109.06011}, } ``` ## About This Dataset **Sosulski2019** P300 dataset from initial spot study. **Dataset Overview** > Code: Sosulski2019 > Paradigm: p300 > DOI: 10.6094/UNIFR/154576 ### View full README **Sosulski2019** P300 dataset from initial spot study. 
**Dataset Overview** > Code: Sosulski2019 > Paradigm: p300 > DOI: 10.6094/UNIFR/154576 > Subjects: 13 > Sessions per subject: 80 > Events: Target=21, NonTarget=1 > Trial interval: [-0.2, 1] s > File format: brainvision **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 31 > Channel types: eeg=31, eog=1, misc=5 > Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, EOGvu, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, P10, P3, P4, P7, P8, P9, Pz, T7, T8, x_EMGl, x_GSR, x_Optic, x_Pulse, x_Respi > Montage: standard_1020 > Hardware: BrainProducts BrainAmp DC > Reference: nose > Sensor type: passive Ag/AgCl > Line frequency: 50.0 Hz > Auxiliary channels: EOG (1 ch, vertical) **Participants** > Number of subjects: 13 > Health status: healthy > Age: mean=22.7, std=1.64, min=20, max=26 > Gender distribution: male=5, female=8 > Species: human **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget > Study design: Subjects focused attention on target tones (1000 Hz) and ignored non-target tones (500 Hz) presented via speaker at 65 cm distance. One trial consisted of 15 target and 75 non-target stimuli in pseudo-random order with at least two non-target tones between target tones. The experiment was split into optimization and validation parts. > Stimulus type: oddball > Stimulus modalities: auditory > Primary modality: auditory > Synchronicity: synchronous > Mode: online > Instructions: Focus on the target tones (1000 Hz) and ignore the non-target tones (500 Hz). Refrain from blinking and movement as much as possible. 
> Stimulus presentation: target_tone_hz=1000, non_target_tone_hz=500, tone_duration_ms=40, distance_cm=65 **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 > Number of targets: 1 **Data Structure** > Trials: Variable: optimization part used time-limited trials (20 minutes per strategy), validation part used 20 trials per SOA > Trials per class: target=13 per trial (after preprocessing, originally 15), non_target=65 per trial (after preprocessing, originally 75) > Trials context: Each trial consisted of 90 stimuli (15 target, 75 non-target). After preprocessing (removing first and last 6 epochs), 78 data points available per trial: 13 target and 65 non-target epochs. **Signal Processing** > Classifiers: rLDA, Shrinkage LDA > Feature extraction: Mean amplitude in time intervals > Frequency bands: analyzed=[1.5, 40.0] Hz **Cross-Validation** > Method: 13-fold > Folds: 13 > Evaluation type: within_session **Performance (Original Study)** > Auc: 0.701 > Mean Auc Ucb: 0.701 > Mean Auc Rand: 0.704 > Mean Auc P300 Ucb: 0.67 > Mean Auc P300 Rand: 0.681 > Mean Auc Fixed60: 0.517 **BCI Application** > Applications: communication > Online feedback: False **Tags** > Pathology: Healthy > Modality: Auditory > Type: Research **Documentation** > Description: Auditory oddball ERP dataset from 13 healthy subjects. Two sinusoidal tones (target 1000 Hz, non-target 500 Hz) presented at various stimulus onset asynchronies (SOAs, 60-600 ms). 31-channel EEG recorded at 1000 Hz with BrainProducts BrainAmp DC. Raw BrainVision format data. 
> DOI: 10.48550/arXiv.2109.06011 > License: CC-BY-SA-4.0 > Investigators: Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann > Senior author: Michael Tangermann > Contact: [jan.sosulski@blbt.uni-freiburg.de](mailto:jan.sosulski@blbt.uni-freiburg.de); [davhuebn@gmail.com](mailto:davhuebn@gmail.com); [kleinaa@cs.uni-freiburg.de](mailto:kleinaa@cs.uni-freiburg.de); [michael.tangermann@donders.ru.nl](mailto:michael.tangermann@donders.ru.nl) > Institution: University of Freiburg > Country: DE > Repository: FreiDok > Data URL: [https://freidok.uni-freiburg.de/data/154576](https://freidok.uni-freiburg.de/data/154576) > Publication year: 2021 > Funding: Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation (DFG) [grant number EXC 1086]; DFG project SuitAble [grant number TA 1258/1-1]; state of Baden-Württemberg, Germany, through bwHPC and the German Research Foundation (DFG) [grant number INST 39/963-1 FUGG] > Ethics approval: Approved by the ethics committee of the university medical center of Freiburg > Acknowledgements: Experiments were performed according to the Declaration of Helsinki. > Keywords: Bayesian optimization, individual experimental parameters, brain-computer interfaces, learning from small data, auditory event-related potentials, closed-loop parameter optimization **Abstract** The decoding of brain signals recorded via, e.g., an electroencephalogram, using machine learning is key to brain-computer interfaces (BCIs). Stimulation parameters or other experimental settings of the BCI protocol typically are chosen according to the literature. The decoding performance directly depends on the choice of parameters, as they influence the elicited brain signals and optimal parameters are subject-dependent. Thus a fast and automated selection procedure for experimental parameters could greatly improve the usability of BCIs. 
We evaluate a standalone random search and a combined Bayesian optimization with random search into a closed-loop auditory event-related potential protocol. We aimed at finding the individually best stimulation speed—also known as stimulus onset asynchrony (SOA)—that maximizes the classification performance of a regularized linear discriminant analysis. **Methodology** The experiment was divided into two parts: (1) Optimization part: four strategies (AUC-ucb, AUC-rand, P300-ucb, P300-rand) each allocated 20 minutes to find optimal SOA. Strategies alternated to minimize non-stationarity effects. (2) Validation part: evaluated SOAs from each optimization strategy plus fixed 60 ms SOA using 20 trials each (in blocks of 5 trials). Features were mean amplitudes in 5 time intervals ([100, 170], [171, 230], [231, 300], [301, 410], [411, 500] ms) across 31 channels (155 dimensions total). Classification used rLDA with automatic shrinkage regularization and 13-fold cross-validation on single trials. **References** Sosulski, J., Tangermann, M.: Electroencephalogram signals recorded from 13 healthy subjects during an auditory oddball paradigm under different stimulus onset asynchrony conditions. Dataset. DOI: 10.6094/UNIFR/154576 Sosulski, J., Tangermann, M.: Spatial filters for auditory evoked potentials transfer between different experimental conditions. Graz BCI Conference. 2019. Sosulski, J., Hübner, D., Klein, A., Tangermann, M.: Online Optimization of Stimulation Speed in an Auditory Brain-Computer Interface under Time Constraints. arXiv preprint. 2021. Notes: added in MOABB version 0.4.5. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). 
[https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000266` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Sosulski2019 | | Author (year) | `Sosulski2019` | | Canonical | — | | Importable as | `NM000266`, `Sosulski2019` | | Year | 2019 | | Authors | Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann | | License | CC-BY-SA-4.0 | | Citation / DOI | [doi:10.48550/arXiv.2109.06011](https://doi.org/10.48550/arXiv.2109.06011) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000266) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) | [Source URL](https://nemar.org/dataexplorer/detail/nm000266) | ### Copy-paste BibTeX ```bibtex @dataset{nm000266, title = {Sosulski2019}, author = {Jan Sosulski and David Hübner and Aaron Klein and Michael Tangermann}, doi = {10.48550/arXiv.2109.06011}, url = {https://doi.org/10.48550/arXiv.2109.06011}, } ``` ## Technical Details - Subjects: 13 - Recordings: 1060 - Tasks: 1 - Channels: 37 - Sampling rate (Hz): 1000.0 - Duration (hours): 9.793594444444444 - Pathology: Healthy - Modality: Auditory - Type: Attention - Size on disk: 3.7 GB - File count: 1060 - Format: BIDS - License: CC-BY-SA-4.0 - DOI: doi:10.48550/arXiv.2109.06011 - Source: nemar - OpenNeuro: [nm000266](https://openneuro.org/datasets/nm000266) - NeMAR: 
[nm000266](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) ## API Reference Use the `NM000266` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000266(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sosulski2019 * **Study:** `nm000266` (NeMAR) * **Author (year):** `Sosulski2019` * **Canonical:** — Also importable as: `NM000266`, `Sosulski2019`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 1060; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
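Because each item of the dataset is one recording (see Notes above), per-subject analyses typically start by grouping recordings. A minimal sketch using only the `rec.subject` attribute shown in the Quickstart; the stand-in objects at the bottom are hypothetical, so the snippet runs without downloading anything:

```python
from collections import defaultdict
from types import SimpleNamespace

def group_by_subject(recordings):
    """Group recording objects by their `subject` attribute."""
    groups = defaultdict(list)
    for rec in recordings:
        groups[rec.subject].append(rec)
    return dict(groups)

# Hypothetical usage with the real dataset (triggers a metadata fetch):
#   from eegdash.dataset import NM000266
#   by_subject = group_by_subject(NM000266(cache_dir="./data"))

# Stand-in demo so the sketch is self-contained:
recs = [SimpleNamespace(subject=s) for s in ["01", "01", "02"]]
print({subj: len(v) for subj, v in group_by_subject(recs).items()})
# → {'01': 2, '02': 1}
```

With 13 subjects and 1060 recordings, this kind of grouping is usually the first step before within-subject cross-validation.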
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000266](https://openneuro.org/datasets/nm000266) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000266](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) DOI: [https://doi.org/10.48550/arXiv.2109.06011](https://doi.org/10.48550/arXiv.2109.06011) ### Examples ```pycon >>> from eegdash.dataset import NM000266 >>> dataset = NM000266(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000266) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000267: eeg dataset, 29 subjects *Shin2017A* Access recordings and metadata through EEGDash. **Citation:** Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller (2019). *Shin2017A*. 
[10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) Modality: eeg Subjects: 29 Recordings: 174 License: GPL-3.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000267 dataset = NM000267(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000267(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000267( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000267, title = {Shin2017A}, author = {Jaeyoung Shin and Alexander von Lühmann and Benjamin Blankertz and Do-Won Kim and Jichai Jeong and Han-Jeong Hwang and Klaus-Robert Müller}, doi = {10.1109/TNSRE.2016.2628057}, url = {https://doi.org/10.1109/TNSRE.2016.2628057}, } ``` ## About This Dataset **Shin2017A** Motor Imagery Dataset from Shin et al. 2017. **Dataset Overview** > Code: Shin2017A > Paradigm: imagery > DOI: 10.1109/TNSRE.2016.2628057 ### View full README **Shin2017A** Motor Imagery Dataset from Shin et al. 2017. 
**Dataset Overview** > Code: Shin2017A > Paradigm: imagery > DOI: 10.1109/TNSRE.2016.2628057 > Subjects: 29 > Sessions per subject: 6 > Events: left_hand=1, right_hand=2, subtraction=3, rest=4 > Trial interval: [0, 10] s > File format: MATLAB > Data preprocessed: True **Acquisition** > Sampling rate: 200.0 Hz > Number of channels: 30 > Channel types: eeg=30, eog=2 > Channel names: AFF1h, AFF2h, AFF5h, AFF6h, AFp1, AFp2, CCP3h, CCP4h, CCP5h, CCP6h, Cz, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, HEOG, P3, P4, P7, P8, POO1, POO2, PPO1h, PPO2h, Pz, T7, T8, VEOG > Montage: 10-5 > Hardware: BrainAmp > Reference: linked mastoids > Ground: Fz > Sensor type: active electrodes > Line frequency: 50.0 Hz > Cap manufacturer: EASYCAP GmbH > Cap model: custom-made stretchy fabric cap > Auxiliary channels: EOG (4 ch, horizontal, vertical), ecg, respiration **Participants** > Number of subjects: 29 > Health status: healthy > Age: mean=28.5, std=3.7 > Gender distribution: male=14, female=15 > Handedness: {'right': 29, 'left': 1} > BCI experience: naive to MI experiment > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 10.0 s > Study design: Dataset A: left vs right hand motor imagery (kinesthetic imagery of opening and closing hands) > Feedback type: none > Stimulus type: visual arrow and fixation cross > Stimulus modalities: visual, auditory > Primary modality: visual > Synchronicity: cued > Mode: offline > Instructions: Subjects were instructed to perform kinesthetic MI (i.e., to imagine the opening and closing their hands as they were grabbing a ball) to ensure that actual MI, not visual MI, was performed. Subjects were asked to imagine hand gripping (opening and closing their hands) with a 1 Hz pace. 
**HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Leftward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Rightward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand subtraction ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Think └─ Label/subtraction rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Number of repetitions: 20 > Imagery tasks: left_hand, right_hand > Cue duration: 2.0 s > Imagery duration: 10.0 s **Data Structure** > Trials: {'per_session': 20, 'per_class_per_session': 10, 'total_per_class': 30} > Blocks per session: 10 > Trials context: 10 blocks per session, each block containing 2 trials (one left, one right hand MI) randomized **Preprocessing** > Data state: preprocessed > Preprocessing applied: True > Steps: common average reference, bandpass filtering (0.5-50 Hz), ICA-based EOG rejection, downsampling to 200 Hz > Highpass filter: 0.5 Hz > Lowpass filter: 50.0 Hz > Bandpass filter: [0.5, 50.0] > Filter type: Chebyshev type II > Filter order: 4 > Artifact methods: ICA, EOG rejection > Re-reference: car > Downsampled to: 200.0 Hz **Signal Processing** > Classifiers: Shrinkage LDA > Feature extraction: CSP, log-variance > Frequency bands: mu=[8.0, 12.0] Hz; beta=[12.0, 25.0] Hz; analyzed=[8.0, 25.0] Hz > Spatial filters: CSP **Cross-Validation** > Method: 10x5-fold > Folds: 5 > Evaluation type: within_subject **Performance (Original Study)** > Accuracy: 65.6% > Eeg Accuracy: 65.6 > Hbr Accuracy: 66.5 > Hbo Accuracy: 63.5 > Eeg+Hbr+Hbo Accuracy: 74.2 **BCI Application** > Applications: 
motor_control > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: Motor > Type: Imagery **Documentation** > Description: Open access dataset for hybrid brain-computer interfaces (BCIs) using electroencephalography (EEG) and near-infrared spectroscopy (NIRS). Dataset includes two BCI experiments: left versus right hand motor imagery, and mental arithmetic versus resting state. > DOI: 10.1109/TNSRE.2016.2628057 > License: GPL-3.0 > Investigators: Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller > Senior author: Klaus-Robert Müller > Contact: [h2j@kumoh.ac.kr](mailto:h2j@kumoh.ac.kr); [klaus-robert.mueller@tuberlin.de](mailto:klaus-robert.mueller@tuberlin.de) > Institution: Berlin Institute of Technology > Department: Machine Learning Group, Department of Computer Science > Address: 10587 Berlin, Germany > Country: DE > Repository: GitHub > Data URL: [http://doc.ml.tu-berlin.de/hBCI](http://doc.ml.tu-berlin.de/hBCI) > Publication year: 2017 > Funding: Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF2014R1A6A3A03057524); Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032); Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education; Korea University Grant; BMBF (#01GQ0850, Bernstein Focus: Neurotechnology) > Ethics approval: Ethics Committee of the Institute of Psychology and Ergonomics, Technical University of Berlin (approval number: SH_01_20150330); Declaration of Helsinki > Keywords: Brain-computer interface (BCI), electroencephalography (EEG), hybrid BCI, mental arithmetic, motor imagery, near-infrared spectroscopy (NIRS), open access dataset **Abstract** We provide an open access dataset for hybrid brain-computer interfaces (BCIs) using electroencephalography (EEG) and near-infrared spectroscopy (NIRS). 
For this, we conducted two BCI experiments (left versus right hand motor imagery; mental arithmetic versus resting state). The dataset was validated using baseline signal analysis methods, with which classification performance was evaluated for each modality and a combination of both modalities. As already shown in previous literature, the capability of discriminating different mental states can be enhanced by using a hybrid approach, when comparing to single modality analyses. This makes the provided data highly suitable for hybrid BCI investigations. Since our open access dataset also comprises motion artifacts and physiological data, we expect that it can be used in a wide range of future validation approaches in multimodal BCI research. **Methodology** Twenty-nine right-handed and one left-handed healthy subjects participated in motor imagery and mental arithmetic tasks. EEG data was recorded at 1000 Hz using 30 active electrodes with a BrainAmp amplifier, referenced to linked mastoids. NIRS data was collected at 12.5 Hz using NIRScout with 14 sources and 16 detectors resulting in 36 channels. Three sessions were conducted for each paradigm (MI and MA). Each session included 20 trials with 10s task periods and 15-17s rest periods. For MI, subjects performed kinesthetic hand gripping imagery at 1 Hz pace. Visual instructions included arrows for MI and arithmetic problems for MA. Motion artifacts from eye/head movements were also recorded. Signal processing included CSP for spatial filtering, log-variance features, and shrinkage LDA classifier with 10x5-fold cross-validation. **References** Shin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J., Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS single-trial classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), pp.1735-1745. 
GNU General Public License, Version 3 [https://www.gnu.org/licenses/gpl-3.0.txt](https://www.gnu.org/licenses/gpl-3.0.txt) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000267` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Shin2017A | | Author (year) | `Shin2017_Shin2017A` | | Canonical | `Shin2017A` | | Importable as | `NM000267`, `Shin2017_Shin2017A`, `Shin2017A` | | Year | 2019 | | Authors | Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller | | License | GPL-3.0 | | Citation / DOI | [doi:10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000267) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) | [Source URL](https://nemar.org/dataexplorer/detail/nm000267) | ### Copy-paste BibTeX ```bibtex @dataset{nm000267, title = {Shin2017A}, author = {Jaeyoung Shin and 
Alexander von Lühmann and Benjamin Blankertz and Do-Won Kim and Jichai Jeong and Han-Jeong Hwang and Klaus-Robert Müller}, doi = {10.1109/TNSRE.2016.2628057}, url = {https://doi.org/10.1109/TNSRE.2016.2628057}, } ``` ## Technical Details - Subjects: 29 - Recordings: 174 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 200.0 - Duration (hours): 29.03336944444445 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 1.9 GB - File count: 174 - Format: BIDS - License: GPL-3.0 - DOI: doi:10.1109/TNSRE.2016.2628057 - Source: nemar - OpenNeuro: [nm000267](https://openneuro.org/datasets/nm000267) - NeMAR: [nm000267](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) ## API Reference Use the `NM000267` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000267(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017A * **Study:** `nm000267` (NeMAR) * **Author (year):** `Shin2017_Shin2017A` * **Canonical:** `Shin2017A` Also importable as: `NM000267`, `Shin2017_Shin2017A`, `Shin2017A`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000267](https://openneuro.org/datasets/nm000267) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000267](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000267 >>> dataset = NM000267(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000267) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000268: eeg dataset, 29 subjects *Shin2017B* Access recordings and metadata through EEGDash. **Citation:** Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller (2019). *Shin2017B*. 
[10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) Modality: eeg Subjects: 29 Recordings: 174 License: GPL-3.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000268 dataset = NM000268(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000268(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000268( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000268, title = {Shin2017B}, author = {Jaeyoung Shin and Alexander von Lühmann and Benjamin Blankertz and Do-Won Kim and Jichai Jeong and Han-Jeong Hwang and Klaus-Robert Müller}, doi = {10.1109/TNSRE.2016.2628057}, url = {https://doi.org/10.1109/TNSRE.2016.2628057}, } ``` ## About This Dataset **Shin2017B** Mental Arithmetic Dataset from Shin et al 2017. **Dataset Overview** > Code: Shin2017B > Paradigm: imagery > DOI: 10.1109/TNSRE.2016.2628057 ### View full README **Shin2017B** Mental Arithmetic Dataset from Shin et al 2017. 
**Dataset Overview** > Code: Shin2017B > Paradigm: imagery > DOI: 10.1109/TNSRE.2016.2628057 > Subjects: 29 > Sessions per subject: 6 > Events: left_hand=1, right_hand=2, subtraction=3, rest=4 > Trial interval: [0, 10] s > Session IDs: 1arithmetic, 3arithmetic, 5arithmetic > File format: MATLAB > Data preprocessed: True **Acquisition** > Sampling rate: 200.0 Hz > Number of channels: 30 > Channel types: eeg=30, eog=2 > Channel names: AFF1h, AFF2h, AFF5h, AFF6h, AFp1, AFp2, CCP3h, CCP4h, CCP5h, CCP6h, Cz, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, HEOG, P3, P4, P7, P8, POO1, POO2, PPO1h, PPO2h, Pz, T7, T8, VEOG > Montage: 10-5 > Hardware: BrainAmp > Software: MATLAB R2013b > Reference: linked mastoids > Ground: Fz > Sensor type: active electrodes > Line frequency: 50.0 Hz > Cap manufacturer: EASYCAP GmbH > Cap model: custom-made stretchy fabric cap > Auxiliary channels: EOG (4 ch, horizontal, vertical), ecg, respiration **Participants** > Number of subjects: 29 > Health status: healthy > Age: mean=28.5, std=3.7 > Gender distribution: male=14, female=15 > Handedness: {‘right’: 29, ‘left’: 1} > BCI experience: naive to MI experiment > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: subtraction, rest > Trial duration: 10.0 s > Trials per class: subtraction=30, rest=30 > Study design: Dataset B: mental arithmetic (serial subtraction of one-digit number) versus baseline/rest task > Feedback type: none > Stimulus type: visual instruction (subtraction problem and fixation cross) > Stimulus modalities: visual, auditory > Primary modality: visual > Synchronicity: cued-synchronous > Mode: offline > Training/test split: False > Instructions: For the MA task, subjects memorized an initial subtraction (three-digit minus one-digit) displayed for 2s, then repeatedly subtracted the one-digit number from each result. For baseline, subjects rested with no specific thought. 
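The trial structure described above (a 2 s cue, a 10 s task period, and a randomized rest) amounts to slicing the continuous recording into fixed-length task epochs keyed by the event codes. A minimal sketch on synthetic data, assuming the 200 Hz post-downsampling rate and the event codes listed above; the array shapes and event times are invented for illustration and this is not the EEGDash API:

```python
import numpy as np

# Illustrative epoching: cut a continuous recording into 10 s task windows,
# mirroring the trial structure described above. Synthetic data only.
sfreq = 200                       # Hz, post-downsampling rate
n_channels = 30
rng = np.random.default_rng(0)
data = rng.standard_normal((n_channels, 120 * sfreq))  # 2 min of fake EEG

# Hypothetical event onsets (in samples) with the dataset's codes:
# subtraction=3, rest=4
events = [(5 * sfreq, 3), (40 * sfreq, 4), (75 * sfreq, 3)]

task_len = 10 * sfreq             # 10 s task period per trial
epochs = np.stack([data[:, s:s + task_len] for s, _ in events])
labels = np.array([code for _, code in events])

print(epochs.shape)  # (3, 30, 2000)
```

In practice the same slicing is what MNE's `Epochs` performs from an events array; the snippet only makes the window arithmetic explicit.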
**HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand subtraction ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Think └─ Label/subtraction rest ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Number of repetitions: 20 **Data Structure** > Trials: {‘per_session’: 20, ‘per_condition_session’: 10, ‘per_condition_total’: 30} > Trials context: Each session: 1 min pre-experiment rest + 20 trials + 1 min post-experiment rest. Trial: 2s visual instruction + 10s task + 15-17s random rest **Preprocessing** > Data state: preprocessed > Preprocessing applied: True > Steps: common average reference, bandpass filtering (0.5-50 Hz), ICA-based EOG rejection, downsampling to 200 Hz > Highpass filter: 0.5 Hz > Lowpass filter: 50.0 Hz > Bandpass filter: [0.5, 50.0] > Filter type: Chebyshev type II > Filter order: 4 > Artifact methods: EOG correction, ICA > Re-reference: car > Downsampled to: 200.0 Hz **Signal Processing** > Classifiers: LDA, Shrinkage LDA > Feature extraction: CSP, log-variance > Frequency bands: analyzed=[4.0, 35.0] Hz > Spatial filters: CSP **Cross-Validation** > Method: 10x5-fold > Folds: 5 > Evaluation type: within_subject **Performance (Original Study)** > Ma Eeg Max Accuracy: 75.9 > Ma Hbr Max Accuracy: 80.7 > Ma Hbo Max Accuracy: 83.6 **BCI Application** > Applications: hybrid_bci_research > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: Cognitive > Type: Cognitive **Documentation** > Description: Open 
access dataset for hybrid brain-computer interfaces using EEG and NIRS with motor imagery and mental arithmetic tasks > DOI: 10.1109/TNSRE.2016.2628057 > License: GPL-3.0 > Investigators: Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller > Senior author: Klaus-Robert Müller > Contact: [h2j@kumoh.ac.kr](mailto:h2j@kumoh.ac.kr); [klaus-robert.mueller@tu-berlin.de](mailto:klaus-robert.mueller@tu-berlin.de) > Institution: Berlin Institute of Technology > Department: Department of Computer Science, Machine Learning Group > Address: 10587 Berlin, Germany > Country: DE > Repository: GitHub > Data URL: [http://doc.ml.tu-berlin.de/hBCI](http://doc.ml.tu-berlin.de/hBCI) > Publication year: 2017 > Funding: Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A6A3A03057524); Ministry of Science, ICT & Future Planning (NRF-2015R1C1A1A02037032); Brain Korea 21 PLUS Program through the NRF funded by the Ministry of Education; Korea University Grant; BMBF (#01GQ0850, Bernstein Focus: Neurotechnology) > Ethics approval: Ethics Committee of the Institute of Psychology and Ergonomics, Technical University of Berlin (approval number: SH_01_20150330) > Keywords: Brain-computer interface, BCI, electroencephalography, EEG, hybrid BCI, mental arithmetic, motor imagery, near-infrared spectroscopy, NIRS, open access dataset **Abstract** Open access dataset for hybrid brain-computer interfaces using EEG and NIRS. It includes two experiments: (1) left versus right hand motor imagery and (2) mental arithmetic versus resting state. The dataset was validated using baseline signal analysis, showing that a hybrid approach enhances discrimination of mental states. It also includes motion artifacts and physiological data to support a wide range of validation approaches. 
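The CSP plus shrinkage-LDA pipeline used to validate the dataset can be sketched end-to-end on synthetic trials. This is an illustrative re-implementation, not the original analysis code: the CSP here is a minimal textbook variant via a generalized eigendecomposition of the two class covariances, and the data are invented:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trials(n, n_ch=30, n_t=400, boost=None):
    """Synthetic trials; `boost` inflates variance on one channel."""
    x = rng.standard_normal((n, n_ch, n_t))
    if boost is not None:
        x[:, boost, :] *= 3.0
    return x

# Two synthetic classes with class-dependent variance structure
X = np.concatenate([make_trials(40, boost=0), make_trials(40, boost=5)])
y = np.array([0] * 40 + [1] * 40)

def csp_filters(X, y, n_pairs=3):
    # Average per-trial covariance within each class
    covs = [np.mean([t @ t.T / t.shape[1] for t in X[y == c]], axis=0)
            for c in (0, 1)]
    # Generalized eigendecomposition: C0 w = lambda (C0 + C1) w
    _, w = eigh(covs[0], covs[0] + covs[1])
    # Keep the first/last n_pairs components (most discriminative)
    return np.concatenate([w[:, :n_pairs], w[:, -n_pairs:]], axis=1)

W = csp_filters(X, y)                                # (30, 6) spatial filters
filtered = np.einsum("cf,ncs->nfs", W, X)            # project trials
feats = np.log(np.var(filtered, axis=2))             # log-variance features

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, feats, y, cv=5)
print(scores.mean())
```

On this easily separable toy data the cross-validated accuracy is near ceiling; on the real recordings the original study reports the accuracies listed under "Performance" above.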
**Methodology** Thirty subjects performed 6 sessions alternating between motor imagery (dataset A: left/right hand) and mental arithmetic (dataset B: MA vs rest). Each session: 20 trials with 2s cue, 10s task, 15-17s rest. EEG recorded at 1000 Hz with 30 channels, downsampled to 200 Hz. Preprocessing: CAR, 0.5-50 Hz bandpass (4th order Chebyshev II), ICA-based EOG rejection. Feature extraction: CSP with log-variance of first/last 3 components using a 3s moving window (1s step). Classification: shrinkage LDA with 10x5-fold CV. Hybrid analysis combines EEG and NIRS outputs using a meta-classifier. **References** Shin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J., Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS single-trial classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10), pp.1735-1745. GNU General Public License, Version 3 [https://www.gnu.org/licenses/gpl-3.0.txt](https://www.gnu.org/licenses/gpl-3.0.txt) Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000268` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Shin2017B | | Author (year) | `Shin2017_Shin2017B` | | Canonical | `Shin2017B` | | Importable as | `NM000268`, `Shin2017_Shin2017B`, `Shin2017B` | | Year | 2019 | | Authors | Jaeyoung Shin, Alexander von Lühmann, Benjamin Blankertz, Do-Won Kim, Jichai Jeong, Han-Jeong Hwang, Klaus-Robert Müller | | License | GPL-3.0 | | Citation / DOI | [doi:10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000268) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) | [Source URL](https://nemar.org/dataexplorer/detail/nm000268) | ### Copy-paste BibTeX ```bibtex @dataset{nm000268, title = {Shin2017B}, author = {Jaeyoung Shin and Alexander von Lühmann and Benjamin Blankertz and Do-Won Kim and Jichai Jeong and Han-Jeong Hwang and Klaus-Robert Müller}, doi = {10.1109/TNSRE.2016.2628057}, url = {https://doi.org/10.1109/TNSRE.2016.2628057}, } ``` ## Technical Details - Subjects: 29 - Recordings: 174 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 200.0 - Duration (hours): 29.03336944444445 - Pathology: Healthy - Modality: Visual - Type: Memory - Size on disk: 1.9 GB - File count: 174 - Format: BIDS - License: GPL-3.0 - DOI: doi:10.1109/TNSRE.2016.2628057 - Source: nemar - OpenNeuro: [nm000268](https://openneuro.org/datasets/nm000268) - NeMAR: [nm000268](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) ## API Reference Use the `NM000268` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000268(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017B * **Study:** `nm000268` (NeMAR) * **Author (year):** `Shin2017_Shin2017B` * **Canonical:** `Shin2017B` Also importable as: `NM000268`, `Shin2017_Shin2017B`, `Shin2017B`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000268](https://openneuro.org/datasets/nm000268) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000268](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000268 >>> dataset = NM000268(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000268) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000270: eeg dataset, 27 subjects *liu2025 - NEMAR Dataset* Access recordings and metadata through EEGDash. **Citation:** Unknown (—). *liu2025 - NEMAR Dataset*. 
Modality: eeg Subjects: 27 Recordings: 797 License: — Source: nemar Metadata: Partial (60%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000270 dataset = NM000270(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000270(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000270( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000270, title = {liu2025 - NEMAR Dataset}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NM000270` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | liu2025 - NEMAR Dataset | | Author (year) | `Liu2025` | | Canonical | — | | Importable as | `NM000270`, `Liu2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000270) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) | [Source URL](https://github.com/nemarDatasets/nm000270) | ## Technical Details - Subjects: 27 - Recordings: 797 - Tasks: 3 - Channels: 64 - Sampling rate (Hz): 1000 - Duration (hours): 8.997747222222221 - Pathology: Not specified - Modality: — - Type: Motor - Size on disk: — - File count: 797 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: [nm000270](https://openneuro.org/datasets/nm000270) - NeMAR: 
[nm000270](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) ## API Reference Use the `NM000270` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000270(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) liu2025 - NEMAR Dataset * **Study:** `nm000270` (NeMAR) * **Author (year):** `Liu2025` * **Canonical:** — Also importable as: `NM000270`, `Liu2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 27; recordings: 797; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000270](https://openneuro.org/datasets/nm000270) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000270](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) ### Examples ```pycon >>> from eegdash.dataset import NM000270 >>> dataset = NM000270(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000270) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000271: eeg dataset, 28 subjects *chang2025 - NEMAR Dataset* Access recordings and metadata through EEGDash. **Citation:** Unknown (—). *chang2025 - NEMAR Dataset*. 
Modality: eeg Subjects: 28 Recordings: 1245 License: — Source: nemar Metadata: Partial (60%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000271 dataset = NM000271(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000271(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000271( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000271, title = {chang2025 - NEMAR Dataset}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NM000271` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | chang2025 - NEMAR Dataset | | Author (year) | `Chang2025_2` | | Canonical | `Chang2025` | | Importable as | `NM000271`, `Chang2025_2`, `Chang2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000271) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) | [Source URL](https://github.com/nemarDatasets/nm000271) | ## Technical Details - Subjects: 28 - Recordings: 1245 - Tasks: 3 - Channels: 59 - Sampling rate (Hz): 1000 - Duration (hours): 5.824969444444444 - Pathology: Not specified - Modality: Visual - Type: Motor - Size on disk: — - File count: 1245 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: 
[nm000271](https://openneuro.org/datasets/nm000271) - NeMAR: [nm000271](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) ## API Reference Use the `NM000271` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000271(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) chang2025 - NEMAR Dataset * **Study:** `nm000271` (NeMAR) * **Author (year):** `Chang2025_2` * **Canonical:** `Chang2025` Also importable as: `NM000271`, `Chang2025_2`, `Chang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 28; recordings: 1245; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000271](https://openneuro.org/datasets/nm000271) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000271](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) ### Examples ```pycon >>> from eegdash.dataset import NM000271 >>> dataset = NM000271(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000271) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000272: eeg dataset, 22 subjects *romani-bf2025-erp - NEMAR Dataset* Access recordings and metadata through EEGDash. **Citation:** Unknown (—). *romani-bf2025-erp - NEMAR Dataset*. 
Modality: eeg Subjects: 22 Recordings: 1022 License: — Source: nemar Metadata: Partial (60%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000272 dataset = NM000272(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000272(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000272( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000272, title = {romani-bf2025-erp - NEMAR Dataset}, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NM000272` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | romani-bf2025-erp - NEMAR Dataset | | Author (year) | `Romani2025_BF_ERP` | | Canonical | `Romani2025_erp` | | Importable as | `NM000272`, `Romani2025_BF_ERP`, `Romani2025_erp` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000272) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) | [Source URL](https://github.com/nemarDatasets/nm000272) | ## Technical Details - Subjects: 22 - Recordings: 1022 - Tasks: 3 - Channels: 8 - Sampling rate (Hz): 250 - Duration (hours): 6.2782 - Pathology: Not specified - Modality: Visual - Type: Attention - Size on disk: — - File count: 1022 - Format: BIDS - License: See source - DOI: — - Source: nemar - OpenNeuro: 
[nm000272](https://openneuro.org/datasets/nm000272) - NeMAR: [nm000272](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) ## API Reference Use the `NM000272` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000272(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) romani-bf2025-erp - NEMAR Dataset * **Study:** `nm000272` (NeMAR) * **Author (year):** `Romani2025_BF_ERP` * **Canonical:** `Romani2025_erp` Also importable as: `NM000272`, `Romani2025_BF_ERP`, `Romani2025_erp`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 22; recordings: 1022; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000272](https://openneuro.org/datasets/nm000272) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000272](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) ### Examples ```pycon >>> from eegdash.dataset import NM000272 >>> dataset = NM000272(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000272) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000277: eeg dataset, 20 subjects *Mainsah2025-G* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-G*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 20 Recordings: 320 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000277 dataset = NM000277(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000277(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000277( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000277, title = {Mainsah2025-G}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-G** BigP3BCI Study G — 9x8 checkerboard/dynamic (20 healthy subjects). **Dataset Overview** > Code: Mainsah2025-G > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-G** BigP3BCI Study G — 9x8 checkerboard/dynamic (20 healthy subjects). 
**Dataset Overview** > Code: Mainsah2025-G > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 20 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 20 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000277` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-G | | Author (year) | `Mainsah2025_G` | | Canonical | `BigP3BCI_G`, `BigP3BCI_StudyG` | | Importable as | `NM000277`, `Mainsah2025_G`, `BigP3BCI_G`, `BigP3BCI_StudyG` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000277) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) | [Source URL](https://nemar.org/dataexplorer/detail/nm000277) | ### Copy-paste BibTeX ```bibtex @dataset{nm000277, title = {Mainsah2025-G}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 20 - Recordings: 320 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 7.619930555555555 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 333.2 MB - File count: 320 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000277](https://openneuro.org/datasets/nm000277) - NeMAR: [nm000277](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) ## API Reference Use the `NM000277` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000277(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-G * **Study:** `nm000277` (NeMAR) * **Author (year):** `Mainsah2025_G` * **Canonical:** `BigP3BCI_G`, `BigP3BCI_StudyG` Also importable as: `NM000277`, `Mainsah2025_G`, `BigP3BCI_G`, `BigP3BCI_StudyG`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 320; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
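The README above gives the integer event coding for this P300 paradigm (Target=2, NonTarget=1). A minimal, self-contained sketch of turning such codes into class labels, assuming a plain Python list of event codes rather than any EEGDash data structure:

```python
# Event coding from the dataset README: Target=2, NonTarget=1.
EVENT_ID = {"Target": 2, "NonTarget": 1}
CODE_TO_LABEL = {code: name for name, code in EVENT_ID.items()}

# Hypothetical event stream for illustration only.
event_codes = [1, 1, 2, 1, 2, 1]
labels = [CODE_TO_LABEL[c] for c in event_codes]
n_targets = labels.count("Target")
print(n_targets)  # 2
```

In a real P300 speller run, non-targets heavily outnumber targets, so class balancing matters when training a detector on these labels.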
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000277](https://openneuro.org/datasets/nm000277) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000277](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000277 >>> dataset = NM000277(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000277) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000301: eeg dataset, 17 subjects *Mainsah2025-D* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-D*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 17 Recordings: 307 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000301 dataset = NM000301(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000301(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000301( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000301, title = {Mainsah2025-D}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-D** BigP3BCI Study D — 6x6 dynamic/row-column (17 healthy subjects). **Dataset Overview** > Code: Mainsah2025-D > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-D** BigP3BCI Study D — 6x6 dynamic/row-column (17 healthy subjects). 
**Dataset Overview** > Code: Mainsah2025-D > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 17 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 17 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000301` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-D | | Author (year) | `Mainsah2025_D` | | Canonical | — | | Importable as | `NM000301`, `Mainsah2025_D` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000301) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) | [Source URL](https://nemar.org/dataexplorer/detail/nm000301) | ### Copy-paste BibTeX ```bibtex @dataset{nm000301, title = {Mainsah2025-D}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 17 - Recordings: 307 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 - Duration (hours): 8.574389105902778 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 738.1 MB - File count: 307 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000301](https://openneuro.org/datasets/nm000301) - NeMAR: [nm000301](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) ## API Reference Use the `NM000301` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000301(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-D * **Study:** `nm000301` (NeMAR) * **Author (year):** `Mainsah2025_D` * **Canonical:** — Also importable as: `NM000301`, `Mainsah2025_D`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 307; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
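The `records` attribute documented above holds the metadata records used to build the dataset. A quick, self-contained sketch of summarizing such records per subject (the record dicts here are illustrative stand-ins; real records carry many more fields):

```python
from collections import Counter

# Illustrative metadata records shaped like entries of `records`.
records = [
    {"dataset": "nm000301", "subject": "01", "task": "p300"},
    {"dataset": "nm000301", "subject": "01", "task": "p300"},
    {"dataset": "nm000301", "subject": "02", "task": "p300"},
]
per_subject = Counter(r["subject"] for r in records)
print(per_subject["01"])  # 2
```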
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000301](https://openneuro.org/datasets/nm000301) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000301](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000301 >>> dataset = NM000301(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000301) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000303: eeg dataset, 18 subjects *Mainsah2025-O* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-O*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 18 Recordings: 347 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000303 dataset = NM000303(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000303(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000303( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000303, title = {Mainsah2025-O}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-O** BigP3BCI Study O — 9x8 supervised/checkerboard (18 ALS subjects). **Dataset Overview** > Code: Mainsah2025-O > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-O** BigP3BCI Study O — 9x8 supervised/checkerboard (18 ALS subjects). 
**Dataset Overview** > Code: Mainsah2025-O > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 18 > Sessions per subject: 2 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 18 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000303` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-O | | Author (year) | `Mainsah2025_O` | | Canonical | — | | Importable as | `NM000303`, `Mainsah2025_O` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000303) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) | [Source URL](https://nemar.org/dataexplorer/detail/nm000303) | ### Copy-paste BibTeX ```bibtex @dataset{nm000303, title = {Mainsah2025-O}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 18 - Recordings: 347 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 nominal (per-recording values vary only in floating-point precision, e.g. 256.0000930697907 in 162 recordings and exactly 256.0 in 101) - Duration (hours): 11.497953407383674 - Pathology: Other - Modality: Visual - Type: Perception - Size on disk: 992.2 MB - File count: 347 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000303](https://openneuro.org/datasets/nm000303) - NeMAR: [nm000303](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) ## API 
Reference Use the `NM000303` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000303(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-O * **Study:** `nm000303` (NeMAR) * **Author (year):** `Mainsah2025_O` * **Canonical:** — Also importable as: `NM000303`, `Mainsah2025_O`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 18; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
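The README above quotes a trial interval of [0, 1.0] s at a nominal 256 Hz sampling rate. A back-of-envelope, self-contained check of the resulting epoch length in samples (this is plain arithmetic for orientation, not the EEGDash or MNE epoching API; the onset times are hypothetical):

```python
# Epoch length in samples for the [0, 1.0] s trial interval at the
# nominal 256 Hz rate from the README.
sfreq = 256.0
tmin, tmax = 0.0, 1.0
n_samples = int(round((tmax - tmin) * sfreq))
print(n_samples)  # 256

# Sample-index windows for trial onsets given in seconds (illustrative):
onsets_s = [3.5, 12.0]
windows = [(int(round(t * sfreq)), int(round(t * sfreq)) + n_samples)
           for t in onsets_s]
print(windows)  # [(896, 1152), (3072, 3328)]
```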
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000303](https://openneuro.org/datasets/nm000303) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000303](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000303 >>> dataset = NM000303(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000303) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000310: eeg dataset, 11 subjects *GuttmannFlury2025-SSVEP* Access recordings and metadata through EEGDash. **Citation:** Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu (2025). *GuttmannFlury2025-SSVEP*. 
[10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Modality: eeg Subjects: 11 Recordings: 26 License: CC0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000310 dataset = NM000310(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000310(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000310( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000310, title = {GuttmannFlury2025-SSVEP}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## About This Dataset **GuttmannFlury2025-SSVEP** Eye-BCI multimodal SSVEP dataset from Guttmann-Flury et al 2025. **Dataset Overview** > Code: GuttmannFlury2025-SSVEP > Paradigm: ssvep > DOI: 10.1038/s41597-025-04861-9 ### View full README **GuttmannFlury2025-SSVEP** Eye-BCI multimodal SSVEP dataset from Guttmann-Flury et al 2025. 
**Dataset Overview** > Code: GuttmannFlury2025-SSVEP > Paradigm: ssvep > DOI: 10.1038/s41597-025-04861-9 > Subjects: 31 > Sessions per subject: 3 > Events: 10.0=1, 11.0=2, 12.0=3, 13.0=4 > Trial interval: [0, 5] s > File format: BDF **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 66 > Channel types: eeg=64, eog=1, stim=1 > Channel names: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, O1, OZ, O2, CB1, CB2 > Montage: standard_1005 > Hardware: Neuroscan Quik-Cap 65-ch, SynAmps2 > Reference: right mastoid (M1) > Ground: forehead > Sensor type: Ag/AgCl > Line frequency: 50.0 Hz > Online filters: {‘highpass_time_constant_s’: 10} **Participants** > Number of subjects: 31 > Health status: healthy > Age: mean=28.3, min=20.0, max=57.0 > Gender distribution: female=11, male=20 > Species: human **Experimental Protocol** > Paradigm: ssvep > Number of classes: 4 > Class labels: 10.0, 11.0, 12.0, 13.0 > Trial duration: 7.0 s > Study design: Multi-paradigm BCI (MI/ME/SSVEP/P300). SSVEP: 4-class frequency flickering, 48 trials/session, up to 3 sessions per subject. 
> Feedback type: none > Stimulus type: flickering LED > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > 10.0 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/10_0 11.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/11_0 12.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/12_0 13.0 ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/13_0 ``` **Paradigm-Specific Parameters** > Detected paradigm: ssvep > Stimulus frequencies: [8.0, 10.0, 12.0, 15.0] Hz **Data Structure** > Trials: 3024 > Trials context: 63 sessions x 48 trials = 3024 **BCI Application** > Applications: communication > Environment: laboratory **Tags** > Pathology: Healthy > Modality: Visual > Type: Research **Documentation** > DOI: 10.1038/s41597-025-04861-9 > License: CC0 > Investigators: Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu > Institution: Shanghai Jiao Tong University > Country: CN > Publication year: 2025 **References** Guttmann-Flury, E., Sheng, X., & Zhu, X. (2025). Dataset combining EEG, eye-tracking, and high-speed video for ocular activity analysis across BCI paradigms. Scientific Data, 12, 587. [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. 
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000310` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | GuttmannFlury2025-SSVEP | | Author (year) | `GuttmannFlury2025_SSVEP` | | Canonical | — | | Importable as | `NM000310`, `GuttmannFlury2025_SSVEP` | | Year | 2025 | | Authors | Eva Guttmann-Flury, Xinjun Sheng, Xiangyang Zhu | | License | CC0 | | Citation / DOI | [doi:10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000310) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) | [Source URL](https://nemar.org/dataexplorer/detail/nm000310) | ### Copy-paste BibTeX ```bibtex @dataset{nm000310, title = {GuttmannFlury2025-SSVEP}, author = {Eva Guttmann-Flury and Xinjun Sheng and Xiangyang Zhu}, doi = {10.1038/s41597-025-04861-9}, url = {https://doi.org/10.1038/s41597-025-04861-9}, } ``` ## Technical Details - Subjects: 11 - Recordings: 26 - Tasks: 1 - Channels: 65 - Sampling rate (Hz): 1000.0 - Duration (hours): 3.1566594444444447 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 2.1 GB - File count: 26 - Format: BIDS - License: CC0 - DOI: doi:10.1038/s41597-025-04861-9 - Source: nemar - OpenNeuro: [nm000310](https://openneuro.org/datasets/nm000310) - NeMAR: [nm000310](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) ## API Reference Use the 
`NM000310` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000310(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-SSVEP * **Study:** `nm000310` (NeMAR) * **Author (year):** `GuttmannFlury2025_SSVEP` * **Canonical:** — Also importable as: `NM000310`, `GuttmannFlury2025_SSVEP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
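For SSVEP decoding against the stimulation frequencies listed for this dataset (8.0, 10.0, 12.0, and 15.0 Hz at 1000 Hz sampling), CCA-style pipelines correlate each epoch with sinusoidal reference signals. A minimal, library-free sketch of building such references; `ssvep_references` is a hypothetical helper, not part of the EEGDash API:

```python
import math

def ssvep_references(freqs_hz, sfreq, n_samples, n_harmonics=2):
    """Sine/cosine reference signals per stimulation frequency,
    as used by CCA-based SSVEP classifiers (illustrative only)."""
    t = [n / sfreq for n in range(n_samples)]
    refs = {}
    for f in freqs_hz:
        sigs = []
        for h in range(1, n_harmonics + 1):
            # One sine and one cosine per harmonic of the flicker frequency.
            sigs.append([math.sin(2 * math.pi * h * f * ti) for ti in t])
            sigs.append([math.cos(2 * math.pi * h * f * ti) for ti in t])
        refs[f] = sigs
    return refs

refs = ssvep_references([8.0, 10.0, 12.0, 15.0], sfreq=1000.0, n_samples=1000)
print(len(refs), len(refs[10.0]))  # 4 frequencies, 4 reference signals each
```

In practice the classifier picks the frequency whose references yield the highest canonical correlation with the epoch.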
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000310](https://openneuro.org/datasets/nm000310) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000310](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000310 >>> dataset = NM000310(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000310) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000311: eeg dataset, 25 subjects *Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)* Access recordings and metadata through EEGDash. **Citation:** Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee (2020). *Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)*. 
[10.82901/nemar.nm000311](https://doi.org/10.82901/nemar.nm000311) Modality: eeg Subjects: 25 Recordings: 213 License: CC0-1.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000311 dataset = NM000311(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000311(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000311( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000311, title = {Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)}, author = {Ji-Hoon Jeong and Jeong-Hyun Cho and Kyung-Hwan Shim and Byoung-Hee Kwon and Byeong-Hoo Lee and Do-Yeun Lee and Dae-Hyeok Lee and Seong-Whan Lee}, doi = {10.82901/nemar.nm000311}, url = {https://doi.org/10.82901/nemar.nm000311}, } ``` ## About This Dataset [DOI](https://doi.org/10.82901/nemar.nm000311) **Jeong2020** Multimodal MI+ME dataset from Jeong et al 2020. **Dataset Overview** > Code: Jeong2020 > Paradigm: imagery ### View full README [DOI](https://doi.org/10.82901/nemar.nm000311) **Jeong2020** Multimodal MI+ME dataset from Jeong et al 2020. 
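The dataset overview lists 11 integer event codes (reach_forward=1 through twist_supination=11). Converting between codes and class labels, e.g. when building training targets, is a simple dictionary round-trip; this is an illustrative sketch, not an EEGDash helper:

```python
# Event mapping as documented for this dataset.
EVENT_ID = {
    "reach_forward": 1, "reach_backward": 2, "reach_left": 3,
    "reach_right": 4, "reach_up": 5, "reach_down": 6,
    "grasp_cup": 7, "grasp_ball": 8, "grasp_card": 9,
    "twist_pronation": 10, "twist_supination": 11,
}
# Invert once so lookups go code -> label.
CODE_TO_LABEL = {code: label for label, code in EVENT_ID.items()}

def decode_events(codes):
    """Map a sequence of integer event codes to class labels."""
    return [CODE_TO_LABEL[c] for c in codes]

print(decode_events([1, 7, 11]))
# ['reach_forward', 'grasp_cup', 'twist_supination']
```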
**Dataset Overview** > Code: Jeong2020 > Paradigm: imagery > DOI: 10.1093/gigascience/giaa098 > Subjects: 25 > Sessions per subject: 3 > Events: reach_forward=1, reach_backward=2, reach_left=3, reach_right=4, reach_up=5, reach_down=6, grasp_cup=7, grasp_ball=8, grasp_card=9, twist_pronation=10, twist_supination=11 > Trial interval: [0, 4] s > Runs per session: 3 > File format: BrainVision **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 71 > Channel types: eeg=60, eog=4, emg=7 > Channel names: Fp1, AF7, AF3, AFz, F7, F5, F3, F1, Fz, FT7, FC5, FC3, FC1, T7, C5, C3, C1, Cz, TP7, CP5, CP3, CP1, CPz, P7, P5, P3, P1, Pz, PO7, PO3, POz, Fp2, AF4, AF8, F2, F4, F6, F8, FC2, FC4, FC6, FT8, C2, C4, C6, T8, CP2, CP4, CP6, TP8, P2, P4, P6, P8, PO4, PO8, O1, Oz, O2, Iz > Montage: standard_1005 > Hardware: BrainAmp (BrainProducts GmbH) > Reference: FCz > Ground: Fpz > Sensor type: actiCap > Line frequency: 60.0 Hz > Online filters: {‘highpass’: 0.016, ‘lowpass’: 1000} **Participants** > Number of subjects: 25 > Health status: healthy > Age: min=24.0, max=32.0 > Gender distribution: female=10, male=15 > Handedness: right-handed > BCI experience: naive > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 11 > Class labels: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination > Trial duration: 4.0 s > Study design: 11 intuitive upper-limb movement tasks: 6 reaching + 3 grasping + 2 wrist twisting. MI and real movement conditions, 3 sessions. 
> Feedback type: none > Stimulus type: text cues > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > reach_forward ```text ├─ Sensory-event └─ Label/reach_forward ``` reach_backward ```text ├─ Sensory-event └─ Label/reach_backward ``` reach_left ```text ├─ Sensory-event └─ Label/reach_left ``` reach_right ```text ├─ Sensory-event └─ Label/reach_right ``` reach_up ```text ├─ Sensory-event └─ Label/reach_up ``` reach_down ```text ├─ Sensory-event └─ Label/reach_down ``` grasp_cup ```text ├─ Sensory-event └─ Label/grasp_cup ``` grasp_ball ```text ├─ Sensory-event └─ Label/grasp_ball ``` grasp_card ```text ├─ Sensory-event └─ Label/grasp_card ``` twist_pronation ```text ├─ Sensory-event └─ Label/twist_pronation ``` twist_supination ```text ├─ Sensory-event └─ Label/twist_supination ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: reach_forward, reach_backward, reach_left, reach_right, reach_up, reach_down, grasp_cup, grasp_ball, grasp_card, twist_pronation, twist_supination > Imagery duration: 4.0 s **Data Structure** > Trials: 41250 > Trials context: 25 subjects x 3 sessions x 550 trials (300 reaching + 150 grasping + 100 twisting) **Signal Processing** > Classifiers: CSP+RLDA > Feature extraction: CSP > Frequency bands: mu_beta=[8.0, 30.0] Hz > Spatial filters: CSP **Cross-Validation** > Method: 10x10-fold > Folds: 10 > Evaluation type: within_session **BCI Application** > Applications: motor_control, prosthetics > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: Motor > Type: Research **Documentation** > DOI: 10.1093/gigascience/giaa098 > License: CC0-1.0 > Investigators: Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan 
Lee > Institution: Korea University > Country: KR > Data URL: [https://zenodo.org/records/19021436](https://zenodo.org/records/19021436) > Publication year: 2020 **References** Jeong, J.-H., Cho, J.-H., Shim, K.-H., et al. (2020). Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. GigaScience, 9(10), giaa098. [https://doi.org/10.1093/gigascience/giaa098](https://doi.org/10.1093/gigascience/giaa098) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000311` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Multimodal upper-limb MI/ME EEG (Jeong et al. 
2020) | | Author (year) | `Jeong2020` | | Canonical | — | | Importable as | `NM000311`, `Jeong2020` | | Year | 2020 | | Authors | Ji-Hoon Jeong, Jeong-Hyun Cho, Kyung-Hwan Shim, Byoung-Hee Kwon, Byeong-Hoo Lee, Do-Yeun Lee, Dae-Hyeok Lee, Seong-Whan Lee | | License | CC0-1.0 | | Citation / DOI | [10.82901/nemar.nm000311](https://doi.org/10.82901/nemar.nm000311) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000311) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) | [Source URL](https://nemar.org/dataexplorer/detail/nm000311) | ### Copy-paste BibTeX ```bibtex @dataset{nm000311, title = {Multimodal upper-limb MI/ME EEG (Jeong et al. 2020)}, author = {Ji-Hoon Jeong and Jeong-Hyun Cho and Kyung-Hwan Shim and Byoung-Hee Kwon and Byeong-Hoo Lee and Do-Yeun Lee and Dae-Hyeok Lee and Seong-Whan Lee}, doi = {10.82901/nemar.nm000311}, url = {https://doi.org/10.82901/nemar.nm000311}, } ``` ## Technical Details - Subjects: 25 - Recordings: 213 - Tasks: 1 - Channels: 71 - Sampling rate (Hz): 1000.0 - Duration (hours): 124.0643847222222 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 88.6 GB - File count: 213 - Format: BIDS - License: CC0-1.0 - DOI: 10.82901/nemar.nm000311 - Source: nemar - OpenNeuro: [nm000311](https://openneuro.org/datasets/nm000311) - NeMAR: [nm000311](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) ## API Reference Use the `NM000311` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000311(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal upper-limb MI/ME EEG (Jeong et al. 2020) * **Study:** `nm000311` (NeMAR) * **Author (year):** `Jeong2020` * **Canonical:** — Also importable as: `NM000311`, `Jeong2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 213; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000311](https://openneuro.org/datasets/nm000311) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000311](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) DOI: [https://doi.org/10.82901/nemar.nm000311](https://doi.org/10.82901/nemar.nm000311) ### Examples ```pycon >>> from eegdash.dataset import NM000311 >>> dataset = NM000311(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000311) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000313: eeg dataset, 24 subjects *Mainsah2025-S2* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-S2*. [10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 24 Recordings: 288 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000313 dataset = NM000313(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000313(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000313( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000313, title = {Mainsah2025-S2}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-S2** BigP3BCI Study S2 — 9x8 house/tool paradigm (24 healthy subjects). 
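In a row-column speller like the 9x8 house/tool grid used in this study, each stimulation sequence flashes every row and column once, so exactly two flashes per sequence (the target's row and its column) are targets. A small sketch of the resulting class imbalance; `target_ratio` is a hypothetical helper, not an EEGDash API:

```python
def target_ratio(n_rows, n_cols):
    """Fraction of Target flashes per full row/column stimulation
    sequence: one target-row flash plus one target-column flash."""
    return 2 / (n_rows + n_cols)

print(round(target_ratio(9, 8), 3))  # 2 of 17 flashes are targets
print(round(target_ratio(6, 6), 3))  # 2 of 12 for a classic 6x6 grid
```

This imbalance is why P300 pipelines typically reweight classes or average responses over repeated sequences.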
**Dataset Overview** > Code: Mainsah2025-S2 > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-S2** BigP3BCI Study S2 — 9x8 house/tool paradigm (24 healthy subjects). **Dataset Overview** > Code: Mainsah2025-S2 > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 24 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 24 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000313` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-S2 | | Author (year) | `Mainsah2025_S2` | | Canonical | — | | Importable as | `NM000313`, `Mainsah2025_S2` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000313) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) | [Source URL](https://nemar.org/dataexplorer/detail/nm000313) | ### Copy-paste BibTeX ```bibtex @dataset{nm000313, title = {Mainsah2025-S2}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 24 - Recordings: 288 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0000766323896 - Duration (hours): 13.359683500841909 - Pathology: Healthy - Modality: Visual - Type: Perception - Size on disk: 1.1 GB - File count: 288 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000313](https://openneuro.org/datasets/nm000313) - NeMAR: [nm000313](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) ## API Reference Use the `NM000313` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000313(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-S2 * **Study:** `nm000313` (NeMAR) * **Author (year):** `Mainsah2025_S2` * **Canonical:** — Also importable as: `NM000313`, `Mainsah2025_S2`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
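The `save(path, overwrite=False)` method documented for these dataset classes follows a standard overwrite-guard pattern. A minimal standalone sketch of that behavior (hypothetical code, not the EEGDash implementation):

```python
from pathlib import Path

def save(path, payload, overwrite=False):
    """Write payload to path, refusing to clobber an existing
    file unless overwrite=True (hypothetical helper)."""
    path = Path(path)
    if path.exists() and not overwrite:
        raise FileExistsError(f"{path} exists; pass overwrite=True to replace it")
    path.write_text(payload)

# A second save to the same path raises FileExistsError
# unless called with overwrite=True.
```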
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000313](https://openneuro.org/datasets/nm000313) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000313](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000313 >>> dataset = NM000313(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000313) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000321: eeg dataset, 36 subjects *Mainsah2025-Q* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-Q*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 36 Recordings: 360 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000321 dataset = NM000321(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000321(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000321( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000321, title = {Mainsah2025-Q}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-Q** BigP3BCI Study Q — 6x6 color intensification (36 ALS subjects). **Dataset Overview** > Code: Mainsah2025-Q > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-Q** BigP3BCI Study Q — 6x6 color intensification (36 ALS subjects). 
**Dataset Overview** > Code: Mainsah2025-Q > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 36 > Sessions per subject: 3 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 36 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000321` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-Q | | Author (year) | `Mainsah2025_Q` | | Canonical | — | | Importable as | `NM000321`, `Mainsah2025_Q` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000321) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) | [Source URL](https://nemar.org/dataexplorer/detail/nm000321) | ### Copy-paste BibTeX ```bibtex @dataset{nm000321, title = {Mainsah2025-Q}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 36 - Recordings: 360 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 nominal (per-recording estimated rates vary between 256.0 and 256.00013) - Duration (hours): 13.13960498457674 - Pathology: Other - Modality: Visual - Type: Clinical/Intervention - Size on disk: 1.1 GB - File count: 360 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000321](https://openneuro.org/datasets/nm000321) - NeMAR: [nm000321](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) ## API Reference Use 
the `NM000321` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000321(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-Q * **Study:** `nm000321` (NeMAR) * **Author (year):** `Mainsah2025_Q` * **Canonical:** — Also importable as: `NM000321`, `Mainsah2025_Q`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 36; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
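The `query` semantics described above are MongoDB-style: plain values match by equality, operator documents such as `{"$in": [...]}` match by membership, and all conditions are ANDed with the fixed dataset filter. A tiny pure-Python matcher illustrating those semantics (hypothetical, not EEGDash's implementation):

```python
def matches(record, query):
    """Minimal MongoDB-style matcher: equality plus the $in operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator document, e.g. {"$in": [...]}.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "nm000321", "subject": "01"},
    {"dataset": "nm000321", "subject": "03"},
]
# The dataset filter is fixed; user conditions are ANDed with it.
query = {"dataset": "nm000321", "subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```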
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000321](https://openneuro.org/datasets/nm000321) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000321](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000321 >>> dataset = NM000321(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000321) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000323: eeg dataset, 54 subjects *Lee2019-ERP* Access recordings and metadata through EEGDash. **Citation:** Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee (2019). *Lee2019-ERP*. 
[10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) Modality: eeg Subjects: 54 Recordings: 216 License: GPL-3.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000323 dataset = NM000323(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000323(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000323( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000323, title = {Lee2019-ERP}, author = {Min-Ho Lee and O-Yeon Kwon and Yong-Jeong Kim and Hong-Kyung Kim and Young-Eun Lee and John Williamson and Siamac Fazli and Seong-Whan Lee}, doi = {10.1093/gigascience/giz002}, url = {https://doi.org/10.1093/gigascience/giz002}, } ``` ## About This Dataset **Lee2019-ERP** BMI/OpenBMI dataset for P300. **Dataset Overview** > Code: Lee2019-ERP > Paradigm: p300 > DOI: 10.5524/100542 ### View full README **Lee2019-ERP** BMI/OpenBMI dataset for P300. 
**Dataset Overview** > Code: Lee2019-ERP > Paradigm: p300 > DOI: 10.5524/100542 > Subjects: 54 > Sessions per subject: 2 > Events: Target=1, NonTarget=2 > Trial interval: [0.0, 1.0] s > Runs per session: 2 > File format: MAT **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 62 > Channel types: eeg=62, emg=4 > Channel names: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMG1, EMG2, EMG3, EMG4, F10, F3, F4, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FT10, FT9, FTT10h, FTT9h, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P7, P8, PO10, PO3, PO4, PO9, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, TPP10h, TPP8h, TPP9h, TTP7h > Montage: standard_1005 > Hardware: BrainAmp > Software: OpenBMI > Reference: nasion > Ground: AFz > Sensor type: Ag/AgCl > Line frequency: 60.0 Hz > Impedance threshold: 10 kOhm > Cap manufacturer: Brain Products > Auxiliary channels: EMG (4 ch) **Participants** > Number of subjects: 54 > Health status: healthy > Age: mean=29.5, min=24, max=35 > Gender distribution: female=25, male=29 > Handedness: right > BCI experience: mixed > Species: human **Experimental Protocol** > Paradigm: p300 > Task type: copy_spelling > Number of classes: 2 > Class labels: Target, NonTarget > Study design: 36-symbol ERP row-column speller with random-set presentation and face stimuli, offline training and online test phases > Feedback type: visual > Stimulus type: rc_speller > Stimulus modalities: visual > Primary modality: visual > Mode: offline > Training/test split: True > Instructions: Subjects were asked to copy-spell given sentences by gazing at target characters on screen. In training: ‘NEURAL NETWORKS AND DEEP LEARNING’ (33 characters), in test: ‘PATTERN RECOGNITION MACHINE LEARNING’ (36 characters). Participants counted number of times each target character flashed. 
**HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 > Number of targets: 36 > Number of repetitions: 5 > Inter-stimulus interval: 135.0 ms > Stimulus onset asynchrony: 215.0 ms **Data Structure** > Trials: {‘training’: 1980, ‘test’: 2160} > Trials context: Training: copy-spell ‘NEURAL NETWORKS AND DEEP LEARNING’ (33 characters). Test: copy-spell ‘PATTERN RECOGNITION MACHINE LEARNING’ (36 characters). Each character received 5 sequences of 12 flashes (60 flashes total). **Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Classifiers: LDA > Feature extraction: Mean Amplitudes **Cross-Validation** > Method: training-test split > Evaluation type: within_session, cross_session **Performance (Original Study)** > Accuracy: 96.7% > Accuracy Std: 0.05 > Illiteracy Rate: 11.1 **BCI Application** > Applications: speller, communication > Online feedback: True **Tags** > Pathology: Healthy > Modality: Visual > Type: Perception **Documentation** > Description: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy > DOI: 10.1093/gigascience/giz002 > License: GPL-3.0 > Investigators: Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee > Senior author: Seong-Whan Lee > Contact: [sw.lee@korea.ac.kr](mailto:sw.lee@korea.ac.kr); Tel: +82-2-3290-3197; Fax: +82-2-3290-3583 > Institution: Korea University > Department: Department of Brain and Cognitive Engineering > Address: 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea > Country: KR > Repository: GigaDB > Publication year: 2019 > Keywords: EEG 
datasets, brain-computer interface, event-related potential, steady-state visually evoked potential, motor-imagery, OpenBMI toolbox, BCI illiteracy **References** Lee, M. H., Kwon, O. Y., Kim, Y. J., Kim, H. K., Lee, Y. E., Williamson, J., … Lee, S. W. (2019). EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience, 8(5), 1–16. [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000323` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Lee2019-ERP | | Author (year) | `Lee2019_ERP` | | Canonical | `OpenBMI_ERP`, `OpenBMI_P300` | | Importable as | `NM000323`, `Lee2019_ERP`, `OpenBMI_ERP`, `OpenBMI_P300` | | Year | 2019 | | Authors | Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee | | License | GPL-3.0 | | Citation / DOI | [doi:10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000323) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) | [Source URL](https://nemar.org/dataexplorer/detail/nm000323) | ### Copy-paste BibTeX ```bibtex @dataset{nm000323, title = {Lee2019-ERP}, author = {Min-Ho Lee and O-Yeon Kwon and Yong-Jeong Kim and Hong-Kyung Kim and Young-Eun Lee and John Williamson and Siamac Fazli and Seong-Whan Lee}, doi = {10.1093/gigascience/giz002}, url = {https://doi.org/10.1093/gigascience/giz002}, } ``` ## Technical Details - Subjects: 54 - Recordings: 216 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 58.12466222222223 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 38.6 GB - File count: 216 - Format: BIDS - License: GPL-3.0 - DOI: doi:10.1093/gigascience/giz002 - Source: nemar - OpenNeuro: [nm000323](https://openneuro.org/datasets/nm000323) - NeMAR: [nm000323](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) ## API Reference Use the `NM000323` class to access this dataset 
programmatically. ### *class* eegdash.dataset.NM000323(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-ERP * **Study:** `nm000323` (NeMAR) * **Author (year):** `Lee2019_ERP` * **Canonical:** `OpenBMI_ERP`, `OpenBMI_P300` Also importable as: `NM000323`, `Lee2019_ERP`, `OpenBMI_ERP`, `OpenBMI_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
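Per the README above, each flash in this P300 speller is epoched over a [0.0, 1.0] s trial interval at a 1000 Hz sampling rate. The sample arithmetic behind such a window can be sketched in plain Python (`epoch_bounds` is an illustrative helper, not an EEGDash function):

```python
# Documented parameters for this dataset: 1000 Hz sampling rate,
# trial interval [0.0, 1.0] s after each stimulus flash.
SFREQ = 1000.0
TMIN, TMAX = 0.0, 1.0

def epoch_bounds(event_sample: int) -> tuple[int, int]:
    """Half-open sample range [start, stop) covering the trial window."""
    start = event_sample + int(round(TMIN * SFREQ))
    stop = event_sample + int(round(TMAX * SFREQ))
    return start, stop

# A flash at sample 5000 yields samples 5000..5999 (1000 samples = 1 s).
print(epoch_bounds(5000))  # (5000, 6000)
```

Note that with the documented 215 ms stimulus onset asynchrony, consecutive flashes are only 215 samples apart, so successive 1 s epochs overlap substantially.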
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000323](https://openneuro.org/datasets/nm000323) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000323](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000323 >>> dataset = NM000323(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000323) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000326: eeg dataset, 19 subjects *Mainsah2025-C* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-C*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 19 Recordings: 341 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000326 dataset = NM000326(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000326(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000326( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000326, title = {Mainsah2025-C}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-C** BigP3BCI Study C — 6x6 checkerboard with ERN (19 healthy subjects). **Dataset Overview** > Code: Mainsah2025-C > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-C** BigP3BCI Study C — 6x6 checkerboard with ERN (19 healthy subjects). 
**Dataset Overview** > Code: Mainsah2025-C > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 19 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 19 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000326` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-C | | Author (year) | `Mainsah2025_C` | | Canonical | — | | Importable as | `NM000326`, `Mainsah2025_C` | | Year | 2025 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000326) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) | [Source URL](https://nemar.org/dataexplorer/detail/nm000326) | ### Copy-paste BibTeX ```bibtex @dataset{nm000326, title = {Mainsah2025-C}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 19 - Recordings: 341 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): nominally 256.0 (measured ~256.00008 in 284 recordings, ~256.00012 in 57) - Duration (hours): 14.68 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.2 GB - File count: 341 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000326](https://openneuro.org/datasets/nm000326) - NeMAR: [nm000326](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) ## API Reference Use the `NM000326` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000326(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-C * **Study:** `nm000326` (NeMAR) * **Author (year):** `Mainsah2025_C` * **Canonical:** — Also importable as: `NM000326`, `Mainsah2025_C`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 341; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
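The Quickstart's advanced query above uses a MongoDB-style `$in` clause. As a rough illustration of how such a filter selects metadata records (the `matches` helper is hypothetical and not EEGDash's implementation; real queries are evaluated server-side against fields in `ALLOWED_QUERY_FIELDS`):

```python
def matches(record: dict, query: dict) -> bool:
    """Tiny MongoDB-style matcher: a plain value must compare equal;
    a {"$in": [...]} clause accepts any of the listed values."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "nm000326", "subject": "01"},
    {"dataset": "nm000326", "subject": "07"},
]
query = {"subject": {"$in": ["01", "02"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```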
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000326](https://openneuro.org/datasets/nm000326) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000326](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000326 >>> dataset = NM000326(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000326) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000329: eeg dataset, 16 subjects *Brandl2020* Access recordings and metadata through EEGDash. **Citation:** Stephanie Brandl, Benjamin Blankertz, Tobias Dahne (2020). *Brandl2020*. 
[10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) Modality: eeg Subjects: 16 Recordings: 112 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000329 dataset = NM000329(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000329(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000329( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000329, title = {Brandl2020}, author = {Stephanie Brandl and Benjamin Blankertz and Tobias Dahne}, doi = {10.3389/fnins.2020.566147}, url = {https://doi.org/10.3389/fnins.2020.566147}, } ``` ## About This Dataset **Brandl2020** Motor Imagery under distraction dataset from Brandl and Blankertz 2020. **Dataset Overview** > Code: Brandl2020 > Paradigm: imagery > DOI: 10.3389/fnins.2020.566147 ### View full README **Brandl2020** Motor Imagery under distraction dataset from Brandl and Blankertz 2020. 
**Dataset Overview** > Code: Brandl2020 > Paradigm: imagery > DOI: 10.3389/fnins.2020.566147 > Subjects: 16 > Sessions per subject: 1 > Events: left_hand=1, right_hand=2 > Trial interval: [0, 4.5] s > Runs per session: 7 > File format: MAT (HDF5 v7.3) **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 63 > Channel types: eeg=63 > Channel names: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP10, TP7, TP8, TP9 > Montage: standard_1005 > Hardware: 2x BrainAmp (Brain Products) > Software: BBCI Toolbox (MATLAB) > Reference: nose > Sensor type: Ag/AgCl wet > Line frequency: 50.0 Hz > Cap manufacturer: EasyCap > Cap model: Fast’n Easy Cap **Participants** > Number of subjects: 16 > Health status: healthy > Age: mean=26.3 > Gender distribution: female=6, male=10 > BCI experience: mostly naive (3/16 had prior BCI experience) **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 4.5 s > Tasks: calibration, clean, eyesclosed, news, numbers, flicker, stimulation > Study design: Motor imagery under distraction: 1 calibration run (no feedback, no distraction) + 6 feedback runs with different distraction conditions (clean, eyes closed, news, number search, flicker, vibro-tactile stimulation) > Feedback type: auditory > Stimulus type: auditory > Stimulus modalities: auditory > Primary modality: auditory > Synchronicity: cue-based > Mode: online > Training/test split: False > Instructions: Subjects received auditory cues (‘links’ for left, ‘rechts’ for right) and performed motor imagery of left or right hand movement **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ 
Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand > Imagery duration: 4.5 s **Data Structure** > Trials: 504 > Trials per class: left_hand=252, right_hand=252 > Blocks per session: 7 > Trials context: 7 runs per subject: 1 calibration (72 trials) + 6 feedback runs (72 trials each, 6 distraction conditions) **Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Classifiers: CSP+LDA > Feature extraction: CSP, bandpower > Frequency bands: mu=[8.0, 13.0] Hz; beta=[13.0, 30.0] Hz > Spatial filters: CSP **Cross-Validation** > Method: holdout > Evaluation type: within_subject **BCI Application** > Applications: motor_control > Environment: laboratory > Online feedback: True **Tags** > Pathology: Healthy > Modality: Motor > Type: Motor Imagery **Documentation** > DOI: 10.3389/fnins.2020.566147 > License: CC-BY-NC-ND-4.0 > Investigators: Stephanie Brandl, Benjamin Blankertz, Tobias Dahne > Senior author: Benjamin Blankertz > Institution: Technische Universitaet Berlin > Department: Department of Neurotechnology > Country: DE > Repository: DepositOnce TU Berlin > Data URL: [https://depositonce.tu-berlin.de/handle/11303/10934.2](https://depositonce.tu-berlin.de/handle/11303/10934.2) > Publication year: 2020 > Funding: BMBF/BIFOLD (01IS18025A, 01IS18037A) > Ethics approval: Approved by the ethics committee of the Charite University Medicine Berlin > How to acknowledge: Please cite: Brandl, S. and Blankertz, B. (2020). Motor Imagery Under Distraction – An Open Access BCI Dataset. Frontiers in Neuroscience, 14, 566147. 
[https://doi.org/10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) > Keywords: brain-computer interface, motor imagery, EEG, distraction, open access, BCI **Abstract** We present an open-access dataset of a motor imagery brain-computer interface (BCI) experiment conducted under six different distraction conditions. Sixteen healthy participants performed left vs. right hand motor imagery while being distracted by flickering video, number search tasks, news listening, eyes closed, vibro-tactile stimulation, or no distraction. Each participant completed one calibration run without feedback and six feedback runs under the different distraction conditions, resulting in 504 trials per subject. **Methodology** Participants completed one session with 7 runs of 72 trials each. Run 1 was calibration (no feedback, no distraction). Runs 2-7 included auditory feedback and one of six distraction conditions. Auditory cues indicated left or right hand imagery. Trial duration was 4.5 s with 2.5 s ITI. Online classification used CSP with LDA. EEG recorded at 1000 Hz with 63 channels, nose reference, using two BrainAmp amplifiers. **References** Brandl, S. and Blankertz, B. (2020). Motor Imagery Under Distraction – An Open Access BCI Dataset. Frontiers in Neuroscience, 14, 566147. [https://doi.org/10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) Notes .. versionadded:: 1.2.0 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). 
EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000329` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Brandl2020 | | Author (year) | `Brandl2020` | | Canonical | — | | Importable as | `NM000329`, `Brandl2020` | | Year | 2020 | | Authors | Stephanie Brandl, Benjamin Blankertz, Tobias Dahne | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | [doi:10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000329) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) | [Source URL](https://nemar.org/dataexplorer/detail/nm000329) | ### Copy-paste BibTeX ```bibtex @dataset{nm000329, title = {Brandl2020}, author = {Stephanie Brandl and Benjamin Blankertz and Tobias Dahne}, doi = {10.3389/fnins.2020.566147}, url = {https://doi.org/10.3389/fnins.2020.566147}, } ``` ## Technical Details - Subjects: 16 - Recordings: 112 - Tasks: 1 - Channels: 63 - Sampling rate (Hz): 1000.0 - Duration (hours): 97.11163555555557 - Pathology: Healthy - Modality: Auditory - Type: Motor - Size on disk: 61.6 GB - File count: 112 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: doi:10.3389/fnins.2020.566147 - Source: nemar - OpenNeuro: [nm000329](https://openneuro.org/datasets/nm000329) - NeMAR: [nm000329](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) ## API Reference Use the `NM000329` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000329(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brandl2020 * **Study:** `nm000329` (NeMAR) * **Author (year):** `Brandl2020` * **Canonical:** — Also importable as: `NM000329`, `Brandl2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 16; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
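The original study's pipeline extracts band power in the mu (8-13 Hz) and beta (13-30 Hz) bands before CSP+LDA classification. A self-contained sketch of band-power computation via a naive DFT on a synthetic 10 Hz oscillation (the 250 Hz rate and signal here are invented for the sketch; the dataset itself is sampled at 1000 Hz):

```python
import math

def band_power(signal, sfreq, fmin, fmax):
    """Naive DFT band power: sum |X[k]|^2 over frequency bins in [fmin, fmax]."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * sfreq / n
        if fmin <= freq <= fmax:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += re * re + im * im
    return total

sfreq = 250.0                                   # hypothetical rate for the sketch
signal = [math.sin(2 * math.pi * 10 * i / sfreq) for i in range(250)]  # 1 s at 10 Hz
mu = band_power(signal, sfreq, 8.0, 13.0)       # mu band, as in the study
beta = band_power(signal, sfreq, 13.0, 30.0)    # beta band
print(mu > beta)  # True: power concentrates in the mu band
```

In practice a Welch or multitaper spectral estimate (e.g. from SciPy or MNE-Python) would replace this naive DFT, but the band-selection logic is the same.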
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000329](https://openneuro.org/datasets/nm000329) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000329](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) DOI: [https://doi.org/10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) ### Examples ```pycon >>> from eegdash.dataset import NM000329 >>> dataset = NM000329(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000329) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000336: eeg dataset, 20 subjects *Mainsah2025-R* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-R*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 20 Recordings: 480 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000336 dataset = NM000336(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000336(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000336( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000336, title = {Mainsah2025-R}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-R** BigP3BCI Study R — 9x8 multi-face paradigms (20 ALS subjects). **Dataset Overview** > Code: Mainsah2025-R > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-R** BigP3BCI Study R — 9x8 multi-face paradigms (20 ALS subjects). 
**Dataset Overview** > Code: Mainsah2025-R > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 20 > Sessions per subject: 2 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 20 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` > NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000336` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-R | | Author (year) | `Mainsah2025_R` | | Canonical | — | | Importable as | `NM000336`, `Mainsah2025_R` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000336) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) | [Source URL](https://nemar.org/dataexplorer/detail/nm000336) | ### Copy-paste BibTeX ```bibtex @dataset{nm000336, title = {Mainsah2025-R}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 20 - Recordings: 480 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0000579103764 (249), 256.00011324306917 (132), 256.00009140820043 (39), 256.0000766323896 (27), 256.000065968772 (18), 256.0 (15) - Duration (hours): 23.526139376380517 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 2.0 GB - File count: 480 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000336](https://openneuro.org/datasets/nm000336) - NeMAR: [nm000336](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) ## API Reference Use the `NM000336` class to access this dataset programmatically. 
### *class* eegdash.dataset.NM000336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-R * **Study:** `nm000336` (NeMAR) * **Author (year):** `Mainsah2025_R` * **Canonical:** — Also importable as: `NM000336`, `Mainsah2025_R`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 20; recordings: 480; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
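The per-recording sampling rates listed under Technical Details hover just above the nominal 256 Hz. A small plain-Python sketch (illustrative only; the rate values are copied from the table above, abbreviated) of grouping recordings by nominal rate before batching:

```python
from collections import Counter

# A few of the stored per-recording rates from the Technical Details table
# (abbreviated; the full dataset has 480 recordings).
stored_rates = [256.0000579103764, 256.00011324306917, 256.00009140820043, 256.0]

# Round away the jitter to recover the nominal rate per recording.
nominal = Counter(round(r) for r in stored_rates)
print(nominal)  # Counter({256: 4})
```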
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000336](https://openneuro.org/datasets/nm000336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000336](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000336 >>> dataset = NM000336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000336) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000338: eeg dataset, 54 subjects *Lee2019-MI* Access recordings and metadata through EEGDash. **Citation:** Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee (2019). *Lee2019-MI*. 
[10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) Modality: eeg Subjects: 54 Recordings: 216 License: GPL-3.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000338 dataset = NM000338(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000338(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000338( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000338, title = {Lee2019-MI}, author = {Min-Ho Lee and O-Yeon Kwon and Yong-Jeong Kim and Hong-Kyung Kim and Young-Eun Lee and John Williamson and Siamac Fazli and Seong-Whan Lee}, doi = {10.1093/gigascience/giz002}, url = {https://doi.org/10.1093/gigascience/giz002}, } ``` ## About This Dataset **Lee2019-MI** BMI/OpenBMI dataset for MI. **Dataset Overview** > Code: Lee2019-MI > Paradigm: imagery > DOI: 10.5524/100542 ### View full README **Lee2019-MI** BMI/OpenBMI dataset for MI. 
**Dataset Overview** > Code: Lee2019-MI > Paradigm: imagery > DOI: 10.5524/100542 > Subjects: 54 > Sessions per subject: 2 > Events: left_hand=2, right_hand=1 > Trial interval: [0.0, 4.0] s > File format: MAT **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 62 > Channel types: eeg=62, emg=4 > Channel names: AF3, AF4, AF7, AF8, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, EMG1, EMG2, EMG3, EMG4, F10, F3, F4, F7, F8, F9, FC1, FC2, FC3, FC4, FC5, FC6, FT10, FT9, FTT10h, FTT9h, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P7, P8, PO10, PO3, PO4, PO9, POz, Pz, T7, T8, TP10, TP7, TP8, TP9, TPP10h, TPP8h, TPP9h, TTP7h > Montage: standard_1005 > Hardware: BrainAmp > Reference: nasion > Ground: AFz > Sensor type: Ag/AgCl > Line frequency: 60.0 Hz > Impedance threshold: 10.0 kOhm > Auxiliary channels: EMG (4 ch) **Participants** > Number of subjects: 54 > Health status: healthy > Age: min=24, max=35 > Gender distribution: female=25, male=29 > Handedness: right=50, left=2, ambidexter=2 > BCI experience: mixed **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 4.0 s > Tasks: MI > Study design: Binary-class motor imagery (left/right hand grasping). Two sessions on different days, each with offline training and online test phases of 100 trials each. > Feedback type: visual > Stimulus type: arrow > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: both > Training/test split: True > Instructions: Subjects performed the imagery task of grasping with the appropriate hand for 4 s when the right or left arrow appeared as a visual cue. First 3 s of each trial began with a black fixation cross to prepare subjects for the MI task. After each task, the screen remained blank for 6 s (± 1.5 s). 
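Given the 1000 Hz sampling rate and the [0.0, 4.0] s trial interval above, fixed-length trials can be cut from a continuous array with a few lines of NumPy. This is an illustrative sketch only (`extract_epochs` is a hypothetical helper); in practice `mne.Epochs` handles this for you.

```python
import numpy as np

SFREQ = 1000.0          # Hz, from the Acquisition section
TMIN, TMAX = 0.0, 4.0   # s, trial interval from the Dataset Overview

def extract_epochs(data, event_samples, tmin=TMIN, tmax=TMAX, sfreq=SFREQ):
    """Slice (n_channels, n_samples) data into fixed-length trials."""
    start = int(round(tmin * sfreq))
    length = int(round((tmax - tmin) * sfreq))
    return np.stack([data[:, s + start:s + start + length] for s in event_samples])

continuous = np.zeros((62, 20_000))                  # 62 EEG channels, 20 s
epochs = extract_epochs(continuous, [1000, 6000, 11_000])
print(epochs.shape)  # (3, 62, 4000)
```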
**HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Leftward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` > right_hand ```text ├─ Sensory-event │ ├─ Experimental-stimulus │ ├─ Visual-presentation │ └─ Rightward, Arrow └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand > Cue duration: 3.0 s > Imagery duration: 4.0 s **Data Structure** > Trials: 200 > Trials per class: left_hand=100, right_hand=100 > Trials context: 100 trials per session per phase (50 per class per phase). Training: 50 left + 50 right. Test: 50 left + 50 right. Total per session: 200. **Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Classifiers: CSP+LDA, CSSP, FBCSP, BSSFO > Feature extraction: CSP, CSSP, FBCSP, BSSFO, log-variance > Frequency bands: mu=[8.0, 12.0] Hz; analyzed=[8.0, 30.0] Hz > Spatial filters: CSP, CSSP, FBCSP, BSSFO **Cross-Validation** > Method: train-test split > Evaluation type: within_session, cross_session **Performance (Original Study)** > Accuracy: 71.1% > Accuracy Std: 0.15 > Illiteracy Rate: 53.7 > Session1 Accuracy: 70.0 > Session2 Accuracy: 72.2 **BCI Application** > Applications: motor_control > Environment: laboratory > Online feedback: True **Tags** > Pathology: Healthy > Modality: Motor > Type: Research **Documentation** > Description: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. Includes MI, ERP, and SSVEP paradigms with a large number of subjects over multiple sessions. 
> DOI: 10.1093/gigascience/giz002 > License: GPL-3.0 > Investigators: Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee > Senior author: Seong-Whan Lee > Contact: [sw.lee@korea.ac.kr](mailto:sw.lee@korea.ac.kr) > Institution: Korea University > Department: Department of Brain and Cognitive Engineering > Address: 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea > Country: KR > Repository: GigaDB > Publication year: 2019 > How to acknowledge: This is an Open Access article distributed under the terms of the Creative Commons Attribution License ([http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/)), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. > Keywords: EEG datasets, brain-computer interface, event-related potential, steady-state visually evoked potential, motor-imagery, OpenBMI toolbox, BCI illiteracy **Abstract** Electroencephalography (EEG)-based brain-computer interface (BCI) systems are mainly divided into three major paradigms: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). Here, we present a BCI dataset that includes the three major BCI paradigms with a large number of subjects over multiple sessions. In addition, information about the psychological and physiological conditions of BCI users was obtained using a questionnaire, and task-unrelated parameters such as resting state, artifacts, and electromyography of both arms were also recorded. We evaluated the decoding accuracies for the individual paradigms and determined performance variations across both subjects and sessions. Furthermore, we looked for more general, severe cases of BCI illiteracy than have been previously reported in the literature. 
Average decoding accuracies across all subjects and sessions were 71.1% (± 0.15), 96.7% (± 0.05), and 95.1% (± 0.09), and rates of BCI illiteracy were 53.7%, 11.1%, and 10.2% for MI, ERP, and SSVEP, respectively. Compared to the ERP and SSVEP paradigms, the MI paradigm exhibited large performance variations between both subjects and sessions. Furthermore, we found that 27.8% (15 out of 54) of users were universally BCI literate, i.e., they were able to proficiently perform all three paradigms. Interestingly, we found no universally illiterate BCI user, i.e., all participants were able to control at least one type of BCI system. **Methodology** Experimental procedure: 54 healthy subjects participated in two sessions on different days. Each session consisted of three BCI paradigms performed sequentially: ERP speller (36 symbols, row-column presentation with face stimuli), MI task (binary left/right hand imagery), and SSVEP (four target frequencies: 5.45, 6.67, 8.57, 12 Hz). Each paradigm had offline training and online test phases. EEG recorded at 1000 Hz with 62 Ag/AgCl electrodes using BrainAmp amplifier, nose-referenced, grounded to AFz. Impedance maintained below 10 kOhm. Subjects seated 60 cm from 21-inch LCD monitor. Questionnaires collected demographic, physiological, and psychological data. Artifact data (eye blinking, eye movements, teeth clenching, arm flexing) and resting state EEG also recorded. Total experiment duration: ~205 minutes per session. **References** Lee, M. H., Kwon, O. Y., Kim, Y. J., Kim, H. K., Lee, Y. E., Williamson, J., … Lee, S. W. (2019). EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience, 8(5), 1–16. 
[https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000338` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Lee2019-MI | | Author (year) | `Lee2019_MI` | | Canonical | `OpenBMI_MI` | | Importable as | `NM000338`, `Lee2019_MI`, `OpenBMI_MI` | | Year | 2019 | | Authors | Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, Seong-Whan Lee | | License | GPL-3.0 | | Citation / DOI | [doi:10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000338) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) | [Source URL](https://nemar.org/dataexplorer/detail/nm000338) | ### Copy-paste BibTeX ```bibtex @dataset{nm000338, title = {Lee2019-MI}, author = {Min-Ho Lee and O-Yeon Kwon and Yong-Jeong Kim and Hong-Kyung Kim and 
Young-Eun Lee and John Williamson and Siamac Fazli and Seong-Whan Lee}, doi = {10.1093/gigascience/giz002}, url = {https://doi.org/10.1093/gigascience/giz002}, } ``` ## Technical Details - Subjects: 54 - Recordings: 216 - Tasks: 1 - Channels: 66 - Sampling rate (Hz): 1000.0 - Duration (hours): 91.54160666666668 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 60.8 GB - File count: 216 - Format: BIDS - License: GPL-3.0 - DOI: doi:10.1093/gigascience/giz002 - Source: nemar - OpenNeuro: [nm000338](https://openneuro.org/datasets/nm000338) - NeMAR: [nm000338](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) ## API Reference Use the `NM000338` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-MI * **Study:** `nm000338` (NeMAR) * **Author (year):** `Lee2019_MI` * **Canonical:** `OpenBMI_MI` Also importable as: `NM000338`, `Lee2019_MI`, `OpenBMI_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000338](https://openneuro.org/datasets/nm000338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000338](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000338 >>> dataset = NM000338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000338) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000339: eeg dataset, 62 subjects *Stieger2021* Access recordings and metadata through EEGDash. **Citation:** James R. Stieger, Stephen A. Engel, Bin He (2021). *Stieger2021*. 
[10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) Modality: eeg Subjects: 62 Recordings: 598 License: CC-BY-NC-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000339 dataset = NM000339(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000339(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000339( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000339, title = {Stieger2021}, author = {James R. Stieger and Stephen A. Engel and Bin He}, doi = {10.1038/s41597-021-00883-1}, url = {https://doi.org/10.1038/s41597-021-00883-1}, } ``` ## About This Dataset **Stieger2021** Motor Imagery dataset from Stieger et al. 2021. **Dataset Overview** > Code: Stieger2021 > Paradigm: imagery > DOI: 10.1038/s41597-021-00883-1 ### View full README **Stieger2021** Motor Imagery dataset from Stieger et al. 2021. 
**Dataset Overview** > Code: Stieger2021 > Paradigm: imagery > DOI: 10.1038/s41597-021-00883-1 > Subjects: 62 > Sessions per subject: 11 > Events: right_hand=1, left_hand=2, both_hand=3, rest=4 > Trial interval: [0, 3] s > File format: MAT **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 62 > Channel types: eeg=62 > Channel names: AF3, AF4, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, Cz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT7, FT8, Fp1, Fp2, Fpz, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO5, PO6, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 > Montage: 10-10 > Hardware: Neuroscan SynAmps RT amplifiers > Software: Neuroscan > Sensor type: EEG > Line frequency: 60.0 Hz > Online filters: 0.1 to 200 Hz with 60 Hz notch filter > Impedance threshold: 5.0 kOhm > Cap manufacturer: Neuroscan > Cap model: Quik-Cap **Participants** > Number of subjects: 62 > Health status: healthy > Age: min=18, max=63 > Gender distribution: male=13, female=49 > Handedness: mostly right-handed > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 4 > Class labels: right_hand, left_hand, both_hand, rest > Tasks: LR, UD, 2D > Study design: longitudinal training study with intervention > Feedback type: visual > Stimulus type: target_bar > Stimulus modalities: visual > Primary modality: visual > Mode: online > Instructions: Imagine your left (right) hand opening and closing to move the cursor left (right). Imagine both hands opening and closing to move the cursor up. Finally, to move the cursor down, voluntarily rest; in other words, clear your mind. 
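The integer event codes listed in the Dataset Overview map onto the four class labels; a minimal sketch (variable names are illustrative) of inverting that mapping when turning event streams into labels:

```python
# Event codes from the Dataset Overview above.
EVENT_ID = {"right_hand": 1, "left_hand": 2, "both_hand": 3, "rest": 4}
CODE_TO_LABEL = {code: label for label, code in EVENT_ID.items()}

codes = [1, 4, 2, 3]
labels = [CODE_TO_LABEL[c] for c in codes]
print(labels)  # ['right_hand', 'rest', 'left_hand', 'both_hand']
```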
**HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > right_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` > left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` > both_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine, Move, Hand ``` > rest ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Rest ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand, both_hands, rest > Cue duration: 2.0 s > Imagery duration: 6.0 s **Data Structure** > Trials: 450 > Blocks per session: 18 > Trials context: per_session **Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Feature extraction: ERD, ERS, autoregressive model, power spectrum > Frequency bands: alpha=[10.5, 13.5] Hz; mu=[8, 14] Hz > Spatial filters: Laplacian (C3/C4 with 4 surrounding electrodes) **Cross-Validation** > Evaluation type: cross_session **Performance (Original Study)** > Accuracy: 70.0% > Pvc 1D Threshold: 70.0 > Pvc 2D Threshold: 40.0 **BCI Application** > Applications: cursor_control > Environment: laboratory > Online feedback: True **Tags** > Pathology: Healthy > Modality: Motor > Type: Active **Documentation** > Description: Continuous sensorimotor rhythm based brain computer interface learning in a large population > DOI: 10.1038/s41597-021-00883-1 > License: CC-BY-NC-4.0 > Investigators: James R. Stieger, Stephen A. 
Engel, Bin He > Senior author: Bin He > Contact: [bhe1@andrew.cmu.edu](mailto:bhe1@andrew.cmu.edu) > Institution: Carnegie Mellon University, University of Minnesota > Department: Carnegie Mellon University, Pittsburgh, PA, USA; University of Minnesota, Minneapolis, MN, USA > Address: Pittsburgh, PA, USA; Minneapolis, MN, USA > Country: US > Repository: GitHub > Data URL: [https://doi.org/10.6084/m9.figshare.13123148.v1](https://doi.org/10.6084/m9.figshare.13123148.v1) > Publication year: 2021 > Funding: NIH AT009263; NIH EB021027; NIH NS096761; NIH MH114233; NIH EB029354 > Ethics approval: University of Minnesota IRB; Carnegie Mellon University IRB > Keywords: BCI, sensorimotor rhythm, motor imagery, EEG, longitudinal, learning **Abstract** Brain computer interfaces (BCIs) are valuable tools that expand the nature of communication through bypassing traditional neuromuscular pathways. The non-invasive, intuitive, and continuous nature of sensorimotor rhythm (SMR) based BCIs enables individuals to control computers, robotic arms, wheelchairs, and even drones by decoding motor imagination from electroencephalography (EEG). Large and uniform datasets are needed to design, evaluate, and improve the BCI algorithms. In this work, we release a large and longitudinal dataset collected during a study that examined how individuals learn to control SMR-BCIs. The dataset contains over 600 hours of EEG recordings collected during online and continuous BCI control from 62 healthy adults, (mostly) right hand dominant participants, across (up to) 11 training sessions per participant. The data record consists of 598 recording sessions, and over 250,000 trials of 4 different motor-imagery-based BCI tasks. **Methodology** Participants completed 7-11 online BCI training sessions. Each session consisted of 450 trials across 3 tasks (LR, UD, 2D) with 6 runs total. Each trial: 2s inter-trial interval, 2s target presentation, up to 6s feedback control. 
Online control used spatial filtering (Laplacian around C3/C4), autoregressive model (order 16) for spectrum estimation, alpha power (12 Hz ± 1.5 Hz) for control signal. Horizontal motion controlled by lateralized alpha power (C4-C3), vertical motion by total alpha power (C4+C3). Control signals normalized to zero mean and unit variance. Cursor position updated every 40 ms. **References** Stieger, J. R., Engel, S. A., & He, B. (2021). Continuous sensorimotor rhythm based brain computer interface learning in a large population. Scientific Data, 8(1), 98. [https://doi.org/10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) Notes: added in MOABB version 1.1.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000339` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Stieger2021 | | Author (year) | `Stieger2021` | | Canonical | — | | Importable as | `NM000339`, `Stieger2021` | | Year | 2021 | | Authors | James R. Stieger, Stephen A. 
Engel, Bin He | | License | CC-BY-NC-4.0 | | Citation / DOI | [doi:10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000339) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) | [Source URL](https://nemar.org/dataexplorer/detail/nm000339) | ### Copy-paste BibTeX ```bibtex @dataset{nm000339, title = {Stieger2021}, author = {James R. Stieger and Stephen A. Engel and Bin He}, doi = {10.1038/s41597-021-00883-1}, url = {https://doi.org/10.1038/s41597-021-00883-1}, } ``` ## Technical Details - Subjects: 62 - Recordings: 598 - Tasks: 1 - Channels: 60 - Sampling rate (Hz): 1000.0 - Duration (hours): 615.3526116666666 - Pathology: Healthy - Modality: Visual - Type: Learning - Size on disk: 371.5 GB - File count: 598 - Format: BIDS - License: CC-BY-NC-4.0 - DOI: doi:10.1038/s41597-021-00883-1 - Source: nemar - OpenNeuro: [nm000339](https://openneuro.org/datasets/nm000339) - NeMAR: [nm000339](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) ## API Reference Use the `NM000339` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000339(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Stieger2021 * **Study:** `nm000339` (NeMAR) * **Author (year):** `Stieger2021` * **Canonical:** — Also importable as: `NM000339`, `Stieger2021`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 62; recordings: 598; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000339](https://openneuro.org/datasets/nm000339) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000339](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) DOI: [https://doi.org/10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) ### Examples ```pycon >>> from eegdash.dataset import NM000339 >>> dataset = NM000339(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000339) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000340: eeg dataset, 20 subjects *Mainsah2025-J* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-J*. [10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 20 Recordings: 502 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000340 dataset = NM000340(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000340(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000340( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000340, title = {Mainsah2025-J}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-J** BigP3BCI Study J — 9x8 performance-based/row-column (20 healthy subjects). 
**Dataset Overview** > Code: Mainsah2025-J > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-J** BigP3BCI Study J — 9x8 performance-based/row-column (20 healthy subjects). **Dataset Overview** > Code: Mainsah2025-J > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 20 > Sessions per subject: 1 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 20 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target ``` > NonTarget ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms.
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000340` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-J | | Author (year) | `Mainsah2025_J` | | Canonical | — | | Importable as | `NM000340`, `Mainsah2025_J` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000340) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) | [Source URL](https://nemar.org/dataexplorer/detail/nm000340) | ### Copy-paste BibTeX ```bibtex @dataset{nm000340, title = {Mainsah2025-J}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 20 - Recordings: 502 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 256.0 - Duration (hours): 9.916121961805556 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 433.6 MB - File count: 502 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: [nm000340](https://openneuro.org/datasets/nm000340) - NeMAR: [nm000340](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) ## API Reference Use the `NM000340` class to access this dataset programmatically. 
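The README above fully determines the epoching geometry: events coded Target=2 and NonTarget=1, a trial interval of [0, 1.0] s, and a 256 Hz sampling rate with 16 channels. As a minimal numpy-only sketch (synthetic random data standing in for the real recordings; event onsets are made up for illustration), each event onset in samples maps to a fixed-length window:

```python
import numpy as np

SFREQ = 256.0          # sampling rate from the dataset summary
N_CHANNELS = 16        # channel count from the dataset summary
TMIN, TMAX = 0.0, 1.0  # trial interval [0, 1.0] s

def epoch(data, event_samples, sfreq=SFREQ, tmin=TMIN, tmax=TMAX):
    """Cut one fixed-length window per event onset (onsets in samples)."""
    n = int(round((tmax - tmin) * sfreq))
    starts = event_samples + int(round(tmin * sfreq))
    return np.stack([data[:, s:s + n] for s in starts])

rng = np.random.default_rng(0)
raw = rng.standard_normal((N_CHANNELS, 10 * int(SFREQ)))  # 10 s of fake EEG
events = np.array([256, 768, 1280])  # hypothetical onsets, in samples
codes = np.array([2, 1, 2])          # Target=2, NonTarget=1
epochs = epoch(raw, events)          # (n_events, n_channels, n_samples)
targets = epochs[codes == 2]         # keep only Target trials
```

In practice you would do this on the `raw` objects returned by the dataset (e.g. with `mne.Epochs`); the sketch only shows the index arithmetic implied by the summary.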
### *class* eegdash.dataset.NM000340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-J * **Study:** `nm000340` (NeMAR) * **Author (year):** `Mainsah2025_J` * **Canonical:** — Also importable as: `NM000340`, `Mainsah2025_J`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 502; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
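The note above says a user-supplied `query` is AND-ed with the fixed dataset filter and must not contain the key `dataset`. A small pure-Python sketch of that merge, plus `$in` matching on a toy record list (illustrative only; the real class delegates query evaluation to the metadata backend):

```python
def merge_query(dataset_id, user_query=None):
    """AND a user query with the dataset filter; 'dataset' is reserved."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

def matches(record, query):
    """Evaluate a tiny subset of MongoDB semantics: equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

q = merge_query("nm000340", {"subject": {"$in": ["01", "02"]}})
records = [{"dataset": "nm000340", "subject": s} for s in ("01", "02", "03")]
kept = [r for r in records if matches(r, q)]  # subjects 01 and 02 only
```

The helper names (`merge_query`, `matches`) are hypothetical; only the merge semantics they illustrate come from the documentation above.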
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000340](https://openneuro.org/datasets/nm000340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000340](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000340 >>> dataset = NM000340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000340) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000341: eeg dataset, 12 subjects *Cattan2019-PHMD* Access recordings and metadata through EEGDash. **Citation:** G. Cattan, P. L. C. Rodrigues, M. Congedo (2019). *Cattan2019-PHMD*. 
[10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000341 dataset = NM000341(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000341(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000341( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000341, title = {Cattan2019-PHMD}, author = {G. Cattan and P. L. C. Rodrigues and M. Congedo}, doi = {10.5281/zenodo.2617084}, url = {https://doi.org/10.5281/zenodo.2617084}, } ``` ## About This Dataset **Cattan2019-PHMD** Passive Head Mounted Display with Music Listening dataset [1]. **Dataset Overview** > Code: Cattan2019-PHMD > Paradigm: rstate > DOI: 10.5281/zenodo.2617084 ### View full README **Cattan2019-PHMD** Passive Head Mounted Display with Music Listening dataset [1].
**Dataset Overview** > Code: Cattan2019-PHMD > Paradigm: rstate > DOI: 10.5281/zenodo.2617084 > Subjects: 12 > Sessions per subject: 1 > Events: off=1, on=2 > Trial interval: [0, 1] s > File format: mat and csv **Acquisition** > Sampling rate: 512.0 Hz > Number of channels: 16 > Channel types: eeg=16 > Channel names: Cz, Fc5, Fc6, Fp1, Fp2, Fz, O1, O2, Oz, P3, P4, P7, P8, Pz, T7, T8 > Montage: standard_1020 > Hardware: g.USBamp > Software: OpenViBE > Reference: right earlobe > Ground: AFz > Sensor type: wet > Line frequency: 50.0 Hz > Online filters: no digital filter > Cap manufacturer: EasyCap > Cap model: EC20 > Electrode type: wet **Participants** > Number of subjects: 12 > Health status: healthy > Age: mean=26.25, std=2.63 > Gender distribution: male=9, female=3 > Species: human **Experimental Protocol** > Paradigm: rstate > Number of classes: 2 > Class labels: off, on > Trial duration: 60.0 s > Study design: focus on the marker and to listen to the music that was diffused during the experiment (Bach Invention from one to ten on harpsichord). > Feedback type: none > Stimulus type: visual fixation marker > Stimulus modalities: visual, auditory > Primary modality: auditory > Training/test split: False > Instructions: Subjects were asked to focus on the marker and to listen to the music that was diffused during the experiment **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > off ```text ├─ Experiment-structure └─ Rest ``` > on ```text ├─ Experiment-structure └─ Rest ``` **Data Structure** > Blocks per session: 10 > Block duration: 60.0 s > Trials context: 5 blocks with smartphone switched-off and 5 blocks with smartphone switched-on, randomized sequence **Preprocessing** > Data state: raw, unfiltered > Preprocessing applied: False > Notes: Data were acquired with no digital filter. No Faraday cage used to mimic real-world usage.
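Each session above is 10 one-minute blocks, with a trigger marking the start of each block (1 = switched-off, 2 = switched-on). A numpy sketch of slicing the continuous signal into those 60 s blocks from trigger onsets (synthetic data; the 512 Hz rate and 16 channels come from the acquisition summary, the onsets here are made up):

```python
import numpy as np

SFREQ = 512      # Hz, from the acquisition summary
BLOCK_SEC = 60   # one block = 1 min of eyes-open EEG

def cut_blocks(data, onsets, codes):
    """Return one 60 s slice of `data` per trigger onset, plus its code."""
    n = BLOCK_SEC * SFREQ
    blocks = np.stack([data[:, s:s + n] for s in onsets])
    return blocks, np.asarray(codes)

rng = np.random.default_rng(1)
data = rng.standard_normal((16, 3 * BLOCK_SEC * SFREQ))  # 3 fake blocks
onsets = [0, BLOCK_SEC * SFREQ, 2 * BLOCK_SEC * SFREQ]   # trigger samples
codes = [1, 2, 1]                                        # 1=off, 2=on
blocks, codes = cut_blocks(data, onsets, codes)
off_blocks = blocks[codes == 1]  # smartphone switched-off condition
```

With the real dataset the onsets would come from the recording's annotations rather than being constructed by hand.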
**BCI Application** > Applications: vr_ar > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: EEG > Type: Resting State **Documentation** > Description: This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display > DOI: 10.5281/zenodo.2617084 > Associated paper DOI: 10.2312/vriphys.20181064 > License: CC-BY-4.0 > Investigators: G. Cattan, P. L. C. Rodrigues, M. Congedo > Senior author: M. Congedo > Institution: GIPSA-lab, CNRS, University Grenoble-Alpes, Grenoble INP > Address: GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402, France > Country: FR > Repository: Zenodo > Data URL: [https://doi.org/10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) > Publication year: 2019 > How to acknowledge: Python code for manipulating the data is available at [https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA](https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA) > Keywords: Electroencephalography (EEG), Virtual Reality (VR), Passive Head-Mounted Display (PHMD), experiment **Abstract** We describe the experimental procedures for a dataset that we have made publicly available at [https://doi.org/10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) in mat (Mathworks, Natick, USA) and csv formats. This dataset contains electroencephalographic recordings of 12 subjects listening to music with and without a passive head-mounted display, that is, a head-mounted display which does not include any electronics at the exception of a smartphone. The electroencephalographic headset consisted of 16 electrodes. Data were recorded during a pilot experiment taking place in the GIPSA-lab, Grenoble, France, in 2017. Python code for manipulating the data is available at [https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA](https://github.com/plcrodrigues/py.PHMDML.EEG.2017-GIPSA). 
The ID of this dataset is PHMDML.EEG.2017-GIPSA. **Methodology** Subjects sat in front of screen at ~50 cm distance without instrumental noise-reduction devices. EEG cap and Samsung Gear were placed on subject. Smartphones were continuously swapped between switched-on and switched-off conditions. Each block consisted of 1 minute of EEG recording with eyes opened. The sequence of 10 blocks was randomized prior to experiment using random number generator with no autocorrelation. Triggers marked beginning of each block (1=switched-off, 2=switched-on). **References** G. Cattan, P. L. Coelho Rodrigues, and M. Congedo, ‘Passive Head-Mounted Display Music-Listening EEG dataset’, Gipsa-Lab ; IHMTEK, Research Report 2, Mar. 2019. doi: 10.5281/zenodo.2617084. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000341` | |----------------|-----------| | Title | Cattan2019-PHMD | | Author (year) | `Cattan2019_PHMD` | | Canonical | — | | Importable as | `NM000341`, `Cattan2019_PHMD` | | Year | 2019 | | Authors | G. Cattan, P. L. C. Rodrigues, M. Congedo | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000341) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) | [Source URL](https://nemar.org/dataexplorer/detail/nm000341) | ### Copy-paste BibTeX ```bibtex @dataset{nm000341, title = {Cattan2019-PHMD}, author = {G. Cattan and P. L. C. Rodrigues and M. Congedo}, doi = {10.5281/zenodo.2617084}, url = {https://doi.org/10.5281/zenodo.2617084}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 16 - Sampling rate (Hz): 512.0 - Duration (hours): 2.74 - Pathology: Healthy - Modality: Auditory - Type: Resting-state - Size on disk: 231.3 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.5281/zenodo.2617084 - Source: nemar - OpenNeuro: [nm000341](https://openneuro.org/datasets/nm000341) - NeMAR: [nm000341](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) ## API Reference Use the `NM000341` class to access this dataset programmatically.
### *class* eegdash.dataset.NM000341(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cattan2019-PHMD * **Study:** `nm000341` (NeMAR) * **Author (year):** `Cattan2019_PHMD` * **Canonical:** — Also importable as: `NM000341`, `Cattan2019_PHMD`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
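For a resting-state contrast like the on/off conditions above, a common first feature is band power. A numpy-only sketch of alpha-band (8–12 Hz) power via the FFT periodogram on synthetic 512 Hz signals (illustrative only; real analyses typically use Welch's method from scipy or MNE):

```python
import numpy as np

def band_power(x, sfreq, fmin=8.0, fmax=12.0):
    """Mean periodogram power of 1-D signal x within [fmin, fmax] Hz."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

sfreq = 512                       # Hz, from the acquisition summary
t = np.arange(60 * sfreq) / sfreq  # one 60 s block
rng = np.random.default_rng(2)
alpha_rich = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
noise_only = 0.1 * rng.standard_normal(t.size)
p_alpha = band_power(alpha_rich, sfreq)  # large: 10 Hz falls in the band
p_noise = band_power(noise_only, sfreq)  # small: no alpha component
```

The `band_power` helper is hypothetical; it only illustrates the kind of per-block feature one might compute on the off/on blocks of this dataset.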
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000341](https://openneuro.org/datasets/nm000341) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000341](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) DOI: [https://doi.org/10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) ### Examples ```pycon >>> from eegdash.dataset import NM000341 >>> dataset = NM000341(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000341) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000342: eeg dataset, 12 subjects *CastillosCVEP40* Access recordings and metadata through EEGDash. **Citation:** Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). *CastillosCVEP40*. 
[10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000342 dataset = NM000342(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000342(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000342( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000342, title = {CastillosCVEP40}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## About This Dataset **CastillosCVEP40** c-VEP and Burst-VEP dataset from Castillos et al. (2023) **Dataset Overview** > Code: CastillosCVEP40 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### View full README **CastillosCVEP40** c-VEP and Burst-VEP dataset from Castillos et al. 
(2023) **Dataset Overview** > Code: CastillosCVEP40 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) > Subjects: 12 > Sessions per subject: 1 > Events: 0=100, 1=101 > Trial interval: (0, 0.25) s > File format: EEGLAB .set > Number of contributing labs: 1 **Acquisition** > Sampling rate: 500.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8 > Montage: standard_1020 > Hardware: BrainProducts LiveAmp 32 > Reference: FCz > Ground: FPz > Sensor type: EEG > Line frequency: 50.0 Hz > Online filters: {‘line_noise_filter’: ‘IIR cut-band filter 49.9-50.1 Hz, order 16’} > Impedance threshold: 25.0 kOhm > Cap manufacturer: BrainProducts > Cap model: Acticap > Electrode type: active **Participants** > Number of subjects: 12 > Health status: healthy > Age: mean=30.6, std=7.1 > Gender distribution: female=4, male=8 > Species: human **Experimental Protocol** > Paradigm: cvep > Task type: reactive BCI > Number of classes: 2 > Class labels: 0, 1 > Trial duration: 2.2 s > Tasks: visual_attention > Study design: factorial design > Study domain: brain-computer interface > Feedback type: none > Stimulus type: visual flicker > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline > Training/test split: False > Instructions: focus on targets that were cued sequentially in a random order for 0.5 s, followed by a 2.2 s stimulation phase > Stimulus presentation: cue_duration=500 ms, stimulation_duration=2200 ms, inter_trial_interval=700 ms, cue_type=red-bordered square around target stimulus, display=Dell P2419HC, 1920×1080 pixels, 265 cd/m², 60 Hz **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > 0 ```text ├─ 
Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0 ``` > 1 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_1 ``` **Paradigm-Specific Parameters** > Detected paradigm: cvep > Code type: m-sequence > Number of targets: 4 > Cue duration: 0.5 s **Data Structure** > Trials: 60 > Blocks per session: 15 > Trials context: 15 blocks x 4 trials per block = 60 trials per subject for m-sequence c-VEP at 40% amplitude **Preprocessing** > Data state: raw **Signal Processing** > Classifiers: CNN (Convolutional Neural Network) > Feature extraction: sliding windows, bitwise decoding **Cross-Validation** > Evaluation type: offline **Performance (Original Study)** > Accuracy: 95.6% > Burst 100 Accuracy 17.6S Calibration: 90.5 > Burst 100 Accuracy 52.8S Calibration: 95.6 > Burst 40 Accuracy: 94.2 > Mseq 100 Accuracy 17.6S Calibration: 71.4 > Mseq 100 Accuracy 52.8S Calibration: 85.0 > Mean Selection Time: 1.5 s **BCI Application** > Applications: reactive BCI > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: EEG > Type: reactive, code-VEP, visual **Documentation** > Description: Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience > DOI: 10.1016/j.neuroimage.2023.120446 > Associated paper DOI: 10.1016/j.neuroimage.2023.120446 > License: CC-BY-4.0 > Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais > Senior author: Frédéric Dehais > Contact: [kalou.cabrera-castillos@isae-supaero.fr](mailto:kalou.cabrera-castillos@isae-supaero.fr) > Institution: Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO) > Department: Human Factors and Neuroergonomics > Address: 10 Av.
Edouard Belin, Toulouse, 31400, France > Country: FR > Repository: Zenodo > Data URL: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Publication year: 2023 > Ethics approval: University of Toulouse CER approval number 2020-334 > Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort **External Links** > Source: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) **Abstract** The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces an innovative variant of code-VEP, referred to as ‘Burst c-VEP’, involving the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate (2-4 flashes per second). The study tested an offline 4-classes c-VEP protocol involving 12 participants with factorial design manipulating pattern (burst and m-sequences) and amplitude (100% or 40% depth modulations). Full amplitude burst c-VEP sequences exhibited higher accuracy (90.5% with 17.6s calibration to 95.6% with 52.8s calibration) compared to m-sequence (71.4% to 85.0%). Mean selection time was 1.5s. Lowering intensity to 40% decreased accuracy slightly to 94.2% while improving user experience substantially. **Methodology** Factorial experimental design with 12 participants. Four conditions: burst vs m-sequence × 100% vs 40% amplitude depth. Participants seated comfortably, presented with 15 blocks of 4 trials for each condition. Each trial: 0.5s cue (red-bordered square), 2.2s stimulation, 0.7s inter-trial interval. Four disc targets (150 pixels) on Dell monitor (60 Hz). Background: medium grey (50% max luminance, 124 lux). 100% condition: modulation to brightest white (168 lux). 40% condition: 40% of grey-to-white range (142 lux). EEG recorded with BrainProducts LiveAmp (32 channels, 500 Hz), impedance <25kΩ. 
Analysis on subset: O1, O2, Oz, Pz, P3, P4, P8, P9. Preprocessing: average re-reference, IIR notch filter (49.9-50.1 Hz, order 16), epoching (0-2.2s), baseline removal. Classification: CNN architecture with sliding windows for bitwise decoding. **References** Kalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: [https://doi.org/10.5281/zenodo.8255618](https://doi.org/10.5281/zenodo.8255618) Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446 (2023). ISSN 1053-8119. DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Notes .. versionadded:: 1.1.0 Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000342` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CastillosCVEP40 | | Author (year) | `Castillos2023_CastillosCVEP40` | | Canonical | `CastillosCVEP40` | | Importable as | `NM000342`, `Castillos2023_CastillosCVEP40`, `CastillosCVEP40` | | Year | 2023 | | Authors | Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000342) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) | [Source URL](https://nemar.org/dataexplorer/detail/nm000342) | ### Copy-paste BibTeX ```bibtex @dataset{nm000342, title = {CastillosCVEP40}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.8488822222222221 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 145.3 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1016/j.neuroimage.2023.120446 - Source: nemar - OpenNeuro: [nm000342](https://openneuro.org/datasets/nm000342) - NeMAR: [nm000342](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) ## API Reference Use the `NM000342` class to access this dataset programmatically. 
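The preprocessing described in the methodology above (average re-reference, 0–2.2 s epochs, baseline removal) reduces to a few array operations. A numpy sketch on synthetic data (500 Hz and 32 channels from the acquisition summary; the 49.9–50.1 Hz notch filter is omitted here, and the event onsets are invented):

```python
import numpy as np

SFREQ = 500       # Hz, from the acquisition summary
EPOCH_SEC = 2.2   # stimulation-phase duration

def preprocess(data, onsets):
    """Average re-reference, cut 2.2 s epochs, remove per-epoch mean."""
    ref = data - data.mean(axis=0, keepdims=True)  # common average reference
    n = int(round(EPOCH_SEC * SFREQ))              # 1100 samples per epoch
    epochs = np.stack([ref[:, s:s + n] for s in onsets])
    return epochs - epochs.mean(axis=-1, keepdims=True)  # baseline removal

rng = np.random.default_rng(3)
data = rng.standard_normal((32, 10 * SFREQ))  # 10 s of fake EEG
epochs = preprocess(data, onsets=[0, 1500, 3000])
```

After the common average reference, the channels of every epoch sum to zero at each time point, and the per-epoch mean removal zeroes each channel's time average; both properties are easy to check on the output.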
### *class* eegdash.dataset.NM000342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP40 * **Study:** `nm000342` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP40` * **Canonical:** `CastillosCVEP40` Also importable as: `NM000342`, `Castillos2023_CastillosCVEP40`, `CastillosCVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
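The preprocessing reported for this dataset (average re-reference, narrow IIR notch around the 50 Hz line, 0-2.2 s epochs, baseline removal) can be sketched on a plain signal array. This is an illustrative NumPy/SciPy sketch on synthetic data, not the authors' pipeline; the filter design (an 8th-order Butterworth band specification, giving a 16th-order band-stop, applied as zero-phase second-order sections) is an assumption chosen for numerical stability:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)
sfreq = 500.0                      # dataset sampling rate (Hz)
n_channels, n_samples = 32, 5000   # 10 s of synthetic EEG
data = rng.standard_normal((n_channels, n_samples))

# 1. Average re-reference: subtract the mean across channels.
data -= data.mean(axis=0, keepdims=True)

# 2. Narrow band-stop at 49.9-50.1 Hz (order 16 overall; the
#    exact design used by the authors is not specified here).
sos = butter(8, [49.9, 50.1], btype="bandstop", fs=sfreq, output="sos")
data = sosfiltfilt(sos, data, axis=1)

# 3. Epoch 0-2.2 s relative to (synthetic) event onsets.
events = np.array([0, 1250, 2500])     # sample indices of onsets
win = int(2.2 * sfreq)                 # 1100 samples per epoch
epochs = np.stack([data[:, s:s + win] for s in events])

# 4. Baseline removal: subtract each epoch's per-channel mean.
epochs -= epochs.mean(axis=2, keepdims=True)
print(epochs.shape)  # (3, 32, 1100)
```

In practice the same steps map onto MNE-Python (`raw.set_eeg_reference("average")`, notch filtering, and `mne.Epochs`) once a recording has been loaded through EEGDash.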
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000342](https://openneuro.org/datasets/nm000342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000342](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000342 >>> dataset = NM000342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000342) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000343: eeg dataset, 15 subjects *Hinss2021* Access recordings and metadata through EEGDash. **Citation:** Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy (2023). *Hinss2021*. 
[10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) Modality: eeg Subjects: 15 Recordings: 30 License: CC-BY-SA-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000343 dataset = NM000343(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000343(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000343( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000343, title = {Hinss2021}, author = {Marcel F. Hinss and Emilie S. Jahanpour and Bertille Somon and Lou Pluchon and Frédéric Dehais and Raphaëlle N. Roy}, doi = {10.1038/s41597-022-01898-y}, url = {https://doi.org/10.1038/s41597-022-01898-y}, } ``` ## About This Dataset **Hinss2021** Neuroergonomic 2021 dataset. **Dataset Overview** > Code: Hinss2021 > Paradigm: rstate > DOI: 10.1038/s41597-022-01898-y ### View full README **Hinss2021** Neuroergonomic 2021 dataset. 
**Dataset Overview** > Code: Hinss2021 > Paradigm: rstate > DOI: 10.1038/s41597-022-01898-y > Subjects: 15 > Sessions per subject: 2 > Events: rs=1, easy=2, medium=3, diff=4 > Trial interval: [0, 2] s > File format: set **Acquisition** > Sampling rate: 500.0 Hz > Number of channels: 62 > Channel types: eeg=62 > Channel names: AF3, AF4, AF7, AF8, AFz, C1, C2, C3, C4, C5, C6, CP1, CP2, CP3, CP4, CP5, CP6, CPz, F1, F2, F3, F4, F5, F6, F7, F8, FC1, FC2, FC3, FC4, FC5, FC6, FCz, FT10, FT7, FT8, FT9, Fp1, Fp2, Fz, O1, O2, Oz, P1, P2, P3, P4, P5, P6, P7, P8, PO3, PO4, PO7, PO8, POz, Pz, T7, T8, TP7, TP8 > Montage: standard_1020 > Hardware: ActiCHamp (Brain Products Gmbh) > Reference: Fpz > Sensor type: active Ag/AgCl > Line frequency: 50.0 Hz > Impedance threshold: 25 kOhm > Auxiliary channels: ecg **Participants** > Number of subjects: 15 > Health status: healthy > Age: mean=23.9 > Gender distribution: female=11, male=18 **Experimental Protocol** > Paradigm: rstate > Number of classes: 4 > Class labels: rs, easy, medium, diff > Study design: Passive BCI neuroergonomics dataset with resting state and 3 difficulty levels of MATB-II task (easy, medium, difficult). The MOABB loader provides resting state and MATB conditions only. 
> Feedback type: none > Stimulus type: visual display > Training/test split: False **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > rs ```text ├─ Experiment-structure └─ Rest ``` > easy ```text ├─ Experiment-structure └─ Label/easy ``` > medium ```text ├─ Experiment-structure └─ Label/medium ``` > diff ```text ├─ Experiment-structure └─ Label/difficult ``` **Paradigm-Specific Parameters** > Detected paradigm: resting_state **Data Structure** > Trials: 90 > Trials context: total **Preprocessing** > Data state: raw > Preprocessing applied: False **Signal Processing** > Classifiers: MDM, Riemannian > Feature extraction: Bandpower, Covariance/Riemannian, ICA > Frequency bands: alpha=[8.0, 13.0] Hz; theta=[4.0, 8.0] Hz **Cross-Validation** > Method: 5-fold > Folds: 5 > Evaluation type: cross_subject, cross_session, transfer_learning **Performance (Original Study)** > Accuracy: 70.67% **BCI Application** > Applications: neuroergonomics, mental_workload_estimation > Environment: laboratory **Tags** > Pathology: Healthy > Modality: Cognitive > Type: Research **Documentation** > DOI: 10.1038/s41597-022-01898-y > License: CC-BY-SA-4.0 > Investigators: Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy > Senior author: Raphaëlle N. 
Roy > Contact: [marcel.hinss@isae-supaero.fr](mailto:marcel.hinss@isae-supaero.fr) > Institution: ISAE-SUPAERO, Université de Toulouse > Department: Department of Information Processing and Systems > Address: Toulouse, France > Country: FR > Repository: Zenodo > Data URL: [https://doi.org/10.5281/zenodo.6874128](https://doi.org/10.5281/zenodo.6874128) > Publication year: 2023 > Funding: ERASMUS program; ANITI (Artificial and Natural Intelligence Toulouse Institute) > Ethics approval: Comité d’Éthique de la Recherche (CER), Université de Toulouse (CER number 2021-342) > Acknowledgements: This research was supported in part by the ERASMUS program (which funded Mr Hinss’ internship), and by ANITI (Artificial and Natural Intelligence Toulouse Institute), Toulouse, France. > How to acknowledge: Please cite: Hinss et al. (2023). Open multi-session and multi-task EEG cognitive dataset for passive brain-computer interface applications. Scientific Data, 10, 85. [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) **References** [Hinss2021] M. Hinss, B. Somon, F. Dehais & R. N. Roy (2021) Open EEG Datasets for Passive Brain-Computer Interface Applications: Lacks and Perspectives. IEEE Neural Engineering Conference. [Hinss2023] M. F. Hinss, et al. (2023) An EEG dataset for cross-session mental workload estimation: Passive BCI competition of the Neuroergonomics Conference 2021. Scientific Data, 10, 85. [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. 
R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000343` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Hinss2021 | | Author (year) | `Hinss2021` | | Canonical | `Hinss2021_v2` | | Importable as | `NM000343`, `Hinss2021`, `Hinss2021_v2` | | Year | 2023 | | Authors | Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, Raphaëlle N. Roy | | License | CC-BY-SA-4.0 | | Citation / DOI | [doi:10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000343) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) | [Source URL](https://nemar.org/dataexplorer/detail/nm000343) | ### Copy-paste BibTeX ```bibtex @dataset{nm000343, title = {Hinss2021}, author = {Marcel F. Hinss and Emilie S. Jahanpour and Bertille Somon and Lou Pluchon and Frédéric Dehais and Raphaëlle N. 
Roy}, doi = {10.1038/s41597-022-01898-y}, url = {https://doi.org/10.1038/s41597-022-01898-y}, } ``` ## Technical Details - Subjects: 15 - Recordings: 30 - Tasks: 1 - Channels: 61 - Sampling rate (Hz): 500.0 - Duration (hours): 3.974983333333333 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 1.2 GB - File count: 30 - Format: BIDS - License: CC-BY-SA-4.0 - DOI: doi:10.1038/s41597-022-01898-y - Source: nemar - OpenNeuro: [nm000343](https://openneuro.org/datasets/nm000343) - NeMAR: [nm000343](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) ## API Reference Use the `NM000343` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Hinss2021 * **Study:** `nm000343` (NeMAR) * **Author (year):** `Hinss2021` * **Canonical:** `Hinss2021_v2` Also importable as: `NM000343`, `Hinss2021`, `Hinss2021_v2`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000343](https://openneuro.org/datasets/nm000343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000343](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) DOI: [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) ### Examples ```pycon >>> from eegdash.dataset import NM000343 >>> dataset = NM000343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000343) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000344: eeg dataset, 12 subjects *CastillosBurstVEP100* Access recordings and metadata through EEGDash. **Citation:** Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). *CastillosBurstVEP100*. 
[10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000344 dataset = NM000344(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000344(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000344( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000344, title = {CastillosBurstVEP100}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## About This Dataset **CastillosBurstVEP100** c-VEP and Burst-VEP dataset from Castillos et al. (2023) **Dataset Overview** > Code: CastillosBurstVEP100 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### View full README **CastillosBurstVEP100** c-VEP and Burst-VEP dataset from Castillos et al. 
(2023) **Dataset Overview** > Code: CastillosBurstVEP100 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) > Subjects: 12 > Sessions per subject: 1 > Events: 0=100, 1=101 > Trial interval: (0, 0.25) s > File format: EEGLAB .set > Number of contributing labs: 1 **Acquisition** > Sampling rate: 500.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8 > Montage: standard_1020 > Hardware: BrainProducts LiveAmp 32 > Reference: FCz > Ground: FPz > Sensor type: eeg > Line frequency: 50.0 Hz > Online filters: {'notch': {'freq': 50.0, 'bandwidth': 0.2, 'order': 16, 'type': 'IIR cut-band'}} > Impedance threshold: 25.0 kOhm > Cap manufacturer: BrainProducts > Cap model: Acticap > Electrode type: active **Participants** > Number of subjects: 12 > Health status: healthy > Age: mean=30.6, std=7.1 > Gender distribution: female=4, male=8 > Species: human **Experimental Protocol** > Paradigm: cvep > Task type: target selection > Number of classes: 2 > Class labels: 0, 1 > Trial duration: 2.2 s > Tasks: visual attention, target selection > Study design: factorial within-subject > Study domain: BCI performance and user experience > Feedback type: none > Stimulus type: visual > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline > Training/test split: False > Instructions: Focus on cued targets sequentially in random order > Stimulus presentation: software=PsychoPy, monitor=Dell P2419HC, resolution=1920x1080, refresh_rate_hz=60 **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > 0 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0 ``` > 1 ```text ├─ Sensory-event ├─ 
Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_1 ``` **Paradigm-Specific Parameters** > Detected paradigm: cvep > Code type: burst > Number of targets: 4 > Cue duration: 0.5 s **Data Structure** > Trials: 60 > Blocks per session: 15 > Trials context: 15 blocks x 4 trials per block = 60 trials per subject for burst c-VEP at 100% amplitude **Preprocessing** > Data state: raw **Signal Processing** > Classifiers: Convolutional Neural Network (CNN), Pearson correlation > Feature extraction: CNN spatial filtering (8x1 kernel, 16 filters), CNN temporal filtering (1x32 kernel with dilation 2, 8 filters), CNN 2D convolution (5x5 kernel, 4 filters), sliding windows (250ms, 2ms stride) > Frequency bands: analyzed=[0.1, 40.0] Hz > Spatial filters: CNN 8x1 spatial convolution (16 filters) **Cross-Validation** > Method: sequential train/test split > Evaluation type: offline classification, iterative calibration (1-6 blocks) **Performance (Original Study)** > Accuracy: 95.6% > Itr: 67.49 bits/min > Selection Time S: 1.5 > Cnn Training Time S: 15.0 > Burst 40 Accuracy: 94.2 > Mseq 100 Accuracy: 85.0 **BCI Application** > Applications: reactive BCI > Environment: controlled laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: EEG > Type: reactive BCI, c-VEP, visual evoked potentials **Documentation** > Description: Burst c-VEP based BCI study comparing novel burst code sequences to traditional m-sequences at two amplitude depths (100% and 40%) to optimize classification performance, minimize calibration data, and improve user experience > DOI: 10.1016/j.neuroimage.2023.120446 > Associated paper DOI: 10.1016/j.neuroimage.2023.120446 > License: CC-BY-4.0 > Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais > Senior author: Frédéric Dehais > Contact: [kalou.cabrera-castillos@isae-supaero.fr](mailto:kalou.cabrera-castillos@isae-supaero.fr) > Institution: Institut Supérieur de l’Aéronautique et de l’Espace 
(ISAE-SUPAERO) > Department: Human Factors and Neuroergonomics > Address: 10 Av. Edouard Belin, Toulouse, 31400, France > Country: FR > Repository: Zenodo > Data URL: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Publication year: 2023 > Funding: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France > Ethics approval: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki > Acknowledgements: This work was funded by AID (Powerbrain project), France, the AXA Research Fund Chair for Neuroergonomics, France and Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France. > Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort **External Links** > Source: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Github: [https://github.com/neuroergoISAE/burst_codes](https://github.com/neuroergoISAE/burst_codes) **Abstract** The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces Burst c-VEP, an innovative variant involving short bursts of aperiodic visual flashes at 2-4 flashes per second. The proposed burst c-VEP sequences exhibited higher accuracy (90.5%-95.6%) compared to m-sequence counterparts (71.4%-85.0%) with mean selection time of 1.5s. Reducing stimulus intensity to 40% amplitude depth only slightly decreased accuracy to 94.2% while substantially improving user experience. The collected dataset and CNN architecture implementation are shared through open-access repositories. **Methodology** Twelve healthy participants completed an offline 4-class c-VEP protocol using a factorial design. 
EEG was recorded at 500 Hz using BrainProducts LiveAmp 32-channel system. Participants focused on cued targets with factorial manipulation of pattern type (burst vs m-sequence) and amplitude depth (100% vs 40%). Visual stimuli were presented on a 60 Hz Dell monitor. Burst codes consisted of brief flashes (~50ms) with minimum 200ms inter-burst interval, while m-sequences used Fibonacci-type LFSR with segmented 132-frame subsequences. A CNN architecture with spatial (8x1, 16 filters), temporal (1x32, 8 filters), and 2D convolution (5x5, 4 filters) layers decoded EEG using 250ms sliding windows with 2ms stride. Calibration data ranged from 1-6 blocks (8.8-52.8s). Classification used sequential train/test splits with Pearson correlation for target selection. VEP analysis examined amplitude, latency, and inter-trial coherence. Statistical analyses used 2×2 repeated measures ANOVA. **References** Kalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: [https://doi.org/10.5281/zenodo.8255618](https://doi.org/10.5281/zenodo.8255618) Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. Burst c-VEP Based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446 (2023). ISSN 1053-8119. DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) **Notes** Added in version 1.1.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. 
J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000344` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | CastillosBurstVEP100 | | Author (year) | `Castillos2023_CastillosBurstVEP100` | | Canonical | — | | Importable as | `NM000344`, `Castillos2023_CastillosBurstVEP100` | | Year | 2023 | | Authors | Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000344) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) | [Source URL](https://nemar.org/dataexplorer/detail/nm000344) | ### Copy-paste BibTeX ```bibtex @dataset{nm000344, title = {CastillosBurstVEP100}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.8772155555555554 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 150.0 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1016/j.neuroimage.2023.120446 - Source: nemar - OpenNeuro: [nm000344](https://openneuro.org/datasets/nm000344) - NeMAR: 
[nm000344](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) ## API Reference Use the `NM000344` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000344(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP100 * **Study:** `nm000344` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP100` * **Canonical:** — Also importable as: `NM000344`, `Castillos2023_CastillosBurstVEP100`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
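The 250 ms sliding windows with 2 ms stride that the original study's CNN decoder consumed map, at this dataset's 500 Hz sampling rate, to 125-sample windows advanced one sample at a time. A minimal NumPy sketch of that segmentation on a synthetic 2.2 s trial (the shapes are assumptions mirroring the reported protocol, not the authors' code):

```python
import numpy as np

sfreq = 500.0                        # dataset sampling rate (Hz)
win = int(0.250 * sfreq)             # 250 ms -> 125 samples
step = max(1, int(0.002 * sfreq))    # 2 ms -> 1 sample at 500 Hz
n_channels = 32
trial = np.zeros((n_channels, int(2.2 * sfreq)))  # one 2.2 s trial

# sliding_window_view yields every window; subsample by `step`.
windows = np.lib.stride_tricks.sliding_window_view(
    trial, win, axis=1
)[:, ::step, :]                       # (channels, n_windows, win)
windows = windows.transpose(1, 0, 2)  # (n_windows, channels, win)
print(windows.shape)  # (976, 32, 125)
```

Each window would then be scored independently by the classifier, and the per-window bit decisions aggregated over the trial.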
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000344](https://openneuro.org/datasets/nm000344) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000344](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000344 >>> dataset = NM000344(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000344) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000345: eeg dataset, 12 subjects *CastillosBurstVEP40* Access recordings and metadata through EEGDash. **Citation:** Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). *CastillosBurstVEP40*. 
[10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000345 dataset = NM000345(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000345(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000345( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000345, title = {CastillosBurstVEP40}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## About This Dataset **CastillosBurstVEP40** c-VEP and Burst-VEP dataset from Castillos et al. (2023) **Dataset Overview** > Code: CastillosBurstVEP40 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### View full README **CastillosBurstVEP40** c-VEP and Burst-VEP dataset from Castillos et al. 
(2023) **Dataset Overview** > Code: CastillosBurstVEP40 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) > Subjects: 12 > Sessions per subject: 1 > Events: 0=100, 1=101 > Trial interval: (0, 0.25) s > File format: EEGLAB .set > Number of contributing labs: 1 **Acquisition** > Sampling rate: 500.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8 > Montage: standard_1020 > Hardware: BrainProducts LiveAmp 32 > Reference: FCz > Ground: FPz > Sensor type: eeg > Line frequency: 50.0 Hz > Online filters: {'line_noise': 'IIR cut-band filter between 49.9 and 50.1 Hz of order 16'} > Impedance threshold: 25.0 kOhm > Cap manufacturer: BrainProducts > Cap model: Acticap > Electrode type: active **Participants** > Number of subjects: 12 > Health status: healthy > Age: mean=30.6, std=7.1 > Gender distribution: female=4, male=8 > Species: human **Experimental Protocol** > Paradigm: cvep > Task type: reactive BCI > Number of classes: 2 > Class labels: 0, 1 > Trial duration: 2.2 s > Tasks: attend to cued target > Study design: factorial design > Study domain: brain-computer interface > Feedback type: none > Stimulus type: aperiodic visual flashes > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline > Training/test split: False > Instructions: Participants were instructed to focus on c-VEP targets cued sequentially > Stimulus presentation: screen=Dell P2419HC, 1920 × 1080 pixels, 265 cd/m2, 60 Hz **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > 0 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0 ``` > 1 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ 
Visual-presentation └─ Label/intensity_1 ``` **Paradigm-Specific Parameters** > Detected paradigm: cvep > Stimulus frequencies: [2.0, 3.0, 4.0] Hz > Code type: burst > Number of targets: 4 > Cue duration: 0.5 s **Data Structure** > Trials: 60 > Blocks per session: 15 > Trials context: 15 blocks × 4 trials per block = 60 trials per subject for burst c-VEP at 40% amplitude **Preprocessing** > Data state: raw **Signal Processing** > Classifiers: Convolutional Neural Network (CNN) > Feature extraction: EEG2Code bitwise decoding **Cross-Validation** > Evaluation type: offline **Performance (Original Study)** > Accuracy: 95.6% > Burst 100%, accuracy (17.6 s calibration): 90.5% > Burst 100%, accuracy (52.8 s calibration): 95.6% > m-sequence 100%, accuracy (17.6 s calibration): 71.4% > m-sequence 100%, accuracy (52.8 s calibration): 85.0% > Burst 40%, accuracy: 94.2% > Mean selection time: 1.5 s **BCI Application** > Applications: brain-computer interface > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: EEG > Type: reactive BCI, c-VEP **Documentation** > Description: Burst c-VEP based BCI study optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. The study introduces an innovative variant of code-VEP called ‘Burst c-VEP’ involving short bursts of aperiodic visual flashes at 2-4 flashes per second. > DOI: 10.1016/j.neuroimage.2023.120446 > Associated paper DOI: 10.1016/j.neuroimage.2023.120446 > License: CC-BY-4.0 > Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais > Senior author: Frédéric Dehais > Contact: [kalou.cabrera-castillos@isae-supaero.fr](mailto:kalou.cabrera-castillos@isae-supaero.fr) > Institution: Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO) > Department: Human Factors and Neuroergonomics > Address: 10 Av.
Edouard Belin, Toulouse, 31400, France > Country: FR > Repository: Zenodo > Data URL: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Publication year: 2023 > Ethics approval: University of Toulouse ethics committee (CER approval number 2020-334); Declaration of Helsinki > Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort **External Links** > Source: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) **Abstract** The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). A major advantage of the c-VEP approach is that the training of the model is independent of the number and complexity of targets, which helps reduce calibration time. Nevertheless, the existing designs of c-VEP stimuli can be further improved in terms of visual user experience but also to achieve a higher signal-to-noise ratio, while shortening the selection time and calibration process. In this study, we introduce an innovative variant of code-VEP, referred to as ‘Burst c-VEP’. This original approach involves the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate, typically ranging from two to four flashes per second. The rationale behind this design is to leverage the sensitivity of the primary visual cortex to transient changes in low-level stimuli features to reliably elicit distinctive series of visual evoked potentials. In comparison to other types of faster-paced code sequences, burst c-VEP exhibit favorable properties to achieve high bitwise decoding performance using convolutional neural networks (CNN), which yields potential to attain faster selection time with the need for less calibration data. 
Furthermore, our investigation focuses on reducing the perceptual saliency of c-VEP through the attenuation of visual stimuli contrast and intensity to significantly improve users’ visual comfort. The proposed solutions were tested through an offline 4-classes c-VEP protocol involving 12 participants. Following a factorial design, participants were instructed to focus on c-VEP targets whose pattern (burst and maximum-length sequences) and amplitude (100% or 40% amplitude depth modulations) were manipulated across experimental conditions. Firstly, the full amplitude burst c-VEP sequences exhibited higher accuracy, ranging from 90.5% (with 17.6 s of calibration data) to 95.6% (with 52.8 s of calibration data), compared to its m-sequence counterpart (71.4% to 85.0%). The mean selection time for both types of codes (1.5 s) compared favorably to reports from previous studies. Secondly, our findings revealed that lowering the intensity of the stimuli only slightly decreased the accuracy of the burst code sequences to 94.2% while leading to substantial improvements in terms of user experience. Taken together, these results demonstrate the high potential of the proposed burst codes to advance reactive BCI both in terms of performance and usability. The collected dataset, along with the proposed CNN architecture implementation, are shared through open-access repositories. **Methodology** Factorial experimental design with 12 participants. Four conditions: burst or m-sequence codes × 100% or 40% amplitude depth. Participants attended to cued targets presented as aperiodic visual flashes. Burst codes: 50 ms flashes at 2-4 Hz with 200 ms minimum inter-burst interval. M-sequences: pseudo-random binary sequences at ~10 Hz. EEG recorded at 500 Hz using a 32-channel BrainProducts LiveAmp. Analysis on occipital/parietal electrodes. CNN-based bitwise decoding (improved EEG2Code architecture).
Each participant completed 15 blocks of 4 trials per condition (60 trials per class, 240 total trials). Trial structure: 700 ms ITI, 500 ms cue, 2200 ms stimulation. Display: Dell P2419HC 60 Hz LCD. Luminance: medium grey background (124 lux), 100% condition (168 lux), 40% condition (142 lux). Preprocessing: average re-reference, 50 Hz notch filter (IIR order 16), epoching 0–2.2 s, baseline removal. Subjective assessments of visual comfort, tiredness, and intrusiveness collected. **References** Kalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: [https://doi.org/10.5281/zenodo.8255618](https://doi.org/10.5281/zenodo.8255618) Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. (2023). Burst c-VEP based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Notes: Added in version 1.1.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000345` | |---|---| | Title | CastillosBurstVEP40 | | Author (year) | `Castillos2023_CastillosBurstVEP40` | | Canonical | — | | Importable as | `NM000345`, `Castillos2023_CastillosBurstVEP40` | | Year | 2023 | | Authors | Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000345) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) | [Source URL](https://nemar.org/dataexplorer/detail/nm000345) | ### Copy-paste BibTeX ```bibtex @dataset{nm000345, title = {CastillosBurstVEP40}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.84 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 144.2 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1016/j.neuroimage.2023.120446 - Source: nemar - OpenNeuro: [nm000345](https://openneuro.org/datasets/nm000345) - NeMAR: [nm000345](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) ## API Reference Use the `NM000345` class to access this dataset programmatically.
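The methodology above describes the epoching step as 2.2 s trials at 500 Hz with baseline removal. Below is a minimal NumPy sketch of that step; the array shapes, event onsets, and 0.2 s baseline window are invented for illustration and do not come from EEGDash or the original study code:

```python
import numpy as np

def epoch_data(data, onsets, sfreq=500.0, tmax=2.2, baseline=0.2):
    """Cut fixed-length epochs from a (channels, samples) array.

    For each event onset (sample index), take ``tmax`` seconds of signal and
    subtract the mean of the ``baseline``-second window preceding the onset.
    """
    n_epoch = int(round(tmax * sfreq))    # 1100 samples for 2.2 s at 500 Hz
    n_base = int(round(baseline * sfreq))  # 100 samples for 0.2 s
    epochs = []
    for onset in onsets:
        segment = data[:, onset:onset + n_epoch]
        base = data[:, onset - n_base:onset].mean(axis=1, keepdims=True)
        epochs.append(segment - base)
    return np.stack(epochs)  # (n_events, n_channels, n_epoch)

# Synthetic 32-channel recording with two made-up event onsets.
rng = np.random.default_rng(0)
raw_data = rng.standard_normal((32, 5000))
epochs = epoch_data(raw_data, onsets=[500, 2000])
print(epochs.shape)  # (2, 32, 1100)
```

In practice this is what `mne.Epochs` does for you once the raw object is loaded through EEGDash; the sketch only spells out the arithmetic behind the metadata fields above.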
### *class* eegdash.dataset.NM000345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP40 * **Study:** `nm000345` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP40` * **Canonical:** — Also importable as: `NM000345`, `Castillos2023_CastillosBurstVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
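The MongoDB-style filters described in the notes above can be read as per-record predicates. The following plain-Python sketch shows the `$in` semantics used by the query examples on this page; it is an illustration of the filter logic only, not EEGDash's actual matching code, and the record dicts are invented:

```python
def matches(record, query):
    """Return True if a record dict satisfies a minimal MongoDB-style query.

    Supports plain equality and the `$in` operator, which is all the
    query examples on this page use.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Toy metadata records; real records carry many more fields.
records = [{"dataset": "nm000345", "subject": s} for s in ("01", "02", "03")]
query = {"subject": {"$in": ["01", "02"]}}
selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # ['01', '02']
```

The constraint that `query` must not contain the key `dataset` exists because the class adds its own `{"dataset": "nm000345"}` clause and ANDs it with yours.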
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000345](https://openneuro.org/datasets/nm000345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000345](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000345 >>> dataset = NM000345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000345) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000346: eeg dataset, 12 subjects *CastillosCVEP100* Access recordings and metadata through EEGDash. **Citation:** Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais (2023). *CastillosCVEP100*. 
[10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Modality: eeg Subjects: 12 Recordings: 12 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000346 dataset = NM000346(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000346(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000346( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000346, title = {CastillosCVEP100}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## About This Dataset **CastillosCVEP100** c-VEP and Burst-VEP dataset from Castillos et al. (2023) **Dataset Overview** > Code: CastillosCVEP100 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### View full README **CastillosCVEP100** c-VEP and Burst-VEP dataset from Castillos et al. 
(2023) **Dataset Overview** > Code: CastillosCVEP100 > Paradigm: cvep > DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) > Subjects: 12 > Sessions per subject: 1 > Events: 0=100, 1=101 > Trial interval: (0, 0.25) s > File format: EEGLAB .set **Acquisition** > Sampling rate: 500.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Channel names: C3, C4, CP1, CP2, CP5, CP6, Cz, F10, F3, F4, F7, F8, F9, FC1, FC2, FC5, FC6, Fp1, Fp2, Fz, O1, O2, Oz, P10, P3, P4, P7, P8, P9, Pz, T7, T8 > Montage: standard_1020 > Hardware: BrainProducts LiveAmp > Reference: FCz > Ground: FPz > Sensor type: EEG > Line frequency: 50.0 Hz > Impedance threshold: 25.0 kOhm > Cap manufacturer: BrainProducts > Cap model: Acticap > Electrode type: active **Participants** > Number of subjects: 12 > Health status: healthy > Age: mean=30.6, std=7.1 > Gender distribution: female=4, male=8 > Species: human **Experimental Protocol** > Paradigm: cvep > Task type: visual attention > Number of classes: 2 > Class labels: 0, 1 > Trial duration: 2.2 s > Study design: factorial design (code type × amplitude depth) > Study domain: BCI performance and user experience > Feedback type: none > Stimulus type: visual flashing > Stimulus modalities: visual > Primary modality: visual > Synchronicity: synchronous > Mode: offline > Training/test split: False > Instructions: focus on four targets that were cued sequentially in a random order for 0.5 s, followed by a 2.2 s stimulation phase, before a 0.7 s inter-trial period > Stimulus presentation: display=Dell P2419HC LCD monitor, resolution=1920×1080 pixels, refresh_rate=60 Hz, brightness=265 cd/m², stimulus_size=150 pixels, background_luminance=124 lux (50% screen luminance), on_state_100=168 lux (100% amplitude depth), on_state_40=142 lux (40% amplitude depth), cue_duration=0.5 s, stimulation_duration=2.2 s, inter_trial_interval=0.7 s **HED Event Annotations** > Schema: HED 8.4.0 | Browse: 
[https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > 0 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_0 ``` > 1 ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Label/intensity_1 ``` **Paradigm-Specific Parameters** > Detected paradigm: cvep > Code type: m-sequence (maximum-length sequence) > Code length: 132 > Number of targets: 4 **Data Structure** > Trials: 60 > Blocks per session: 15 > Trials context: 15 blocks × 4 trials (one per target) × 4 conditions (burst/mseq × 100%/40%) **Preprocessing** > Data state: raw **Signal Processing** > Classifiers: Convolutional Neural Network (CNN) > Feature extraction: Sliding windows (250 ms, 2 ms stride), Standard deviation normalization > Spatial filters: 16 spatial filters via 1D spatial convolution (8×1 kernel) **Cross-Validation** > Method: sequential train/test split > Evaluation type: offline classification **Performance (Original Study)** > Accuracy: 85.0% > ITR: 48.7 bits/min > Selection time: 1.5 s > CNN training time (6 blocks): 40.0 s > Calibration data (6 blocks): 52.8 s **BCI Application** > Applications: reactive BCI > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: EEG > Type: reactive BCI, visual evoked potentials **Documentation** > Description: 4-class code-VEP BCI dataset comparing burst c-VEP and m-sequence stimulation at two amplitude depths (100% and 40%) to optimize performance and user experience > DOI: 10.1016/j.neuroimage.2023.120446 > Associated paper DOI: 10.1016/j.neuroimage.2023.120446 > License: CC-BY-4.0 > Investigators: Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais > Senior author: Frédéric Dehais > Contact: [kalou.cabrera-castillos@isae-supaero.fr](mailto:kalou.cabrera-castillos@isae-supaero.fr) > Institution: Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO) > Department: Human Factors and Neuroergonomics >
Address: 10 Av. Edouard Belin, Toulouse, 31400, France > Country: FR > Repository: Zenodo > Data URL: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Publication year: 2023 > Funding: AID (Powerbrain project), France; AXA Research Fund Chair for Neuroergonomics, France; Chair for Neuroadaptive Technology, Artificial and Natural Intelligence Toulouse Institute (ANITI), France > Ethics approval: Ethics committee of the University of Toulouse (CER approval number 2020-334); Declaration of Helsinki > Keywords: Code-VEP, Reactive BCI, CNN, Amplitude depth reduction, Visual comfort **External Links** > Source: [https://zenodo.org/record/8255618](https://zenodo.org/record/8255618) > Github Code: [https://github.com/neuroergoISAE/burst_codes](https://github.com/neuroergoISAE/burst_codes) > Paper: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) **Abstract** The utilization of aperiodic flickering visual stimuli under the form of code-modulated Visual Evoked Potentials (c-VEP) represents a pivotal advancement in the field of reactive Brain–Computer Interface (rBCI). This study introduces an innovative variant of code-VEP, referred to as ‘Burst c-VEP’, involving the presentation of short bursts of aperiodic visual flashes at a deliberately slow rate, typically ranging from two to four flashes per second. The proposed solutions were tested through an offline 4-classes c-VEP protocol involving 12 participants. The full amplitude burst c-VEP sequences exhibited higher accuracy, ranging from 90.5% (with 17.6 s of calibration data) to 95.6% (with 52.8 s of calibration data), compared to its m-sequence counterpart (71.4% to 85.0%). The mean selection time for both types of codes (1.5 s) compared favorably to reports from previous studies. 
Lowering the intensity of the stimuli only slightly decreased the accuracy of the burst code sequences to 94.2% while leading to substantial improvements in terms of user experience. **Methodology** Factorial experimental design with 12 healthy participants. EEG recorded with BrainProducts LiveAmp 32-channel system at 500 Hz. Four conditions tested: burst c-VEP and m-sequence c-VEP, each at 100% and 40% amplitude depth. Participants focused on cued targets (4 classes) in 15 blocks of 4 trials per condition. CNN-based decoding with 250 ms sliding windows. Subjective ratings collected for visual comfort, mental tiredness, and intrusiveness. VEP analysis included amplitude, latency, and inter-trial coherence metrics. **References** Kalou Cabrera Castillos. (2023). 4-class code-VEP EEG data [Data set]. Zenodo (dataset). DOI: [https://doi.org/10.5281/zenodo.8255618](https://doi.org/10.5281/zenodo.8255618) Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais. (2023). Burst c-VEP based BCI: Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284, 120446. ISSN 1053-8119. DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) Notes: Added in version 1.1.0. Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103.
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000346` | |---|---| | Title | CastillosCVEP100 | | Author (year) | `Castillos2023_CastillosCVEP100` | | Canonical | — | | Importable as | `NM000346`, `Castillos2023_CastillosCVEP100` | | Year | 2023 | | Authors | Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, Frédéric Dehais | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000346) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) | [Source URL](https://nemar.org/dataexplorer/detail/nm000346) | ### Copy-paste BibTeX ```bibtex @dataset{nm000346, title = {CastillosCVEP100}, author = {Kalou Cabrera Castillos and Simon Ladouce and Ludovic Darmet and Frédéric Dehais}, doi = {10.1016/j.neuroimage.2023.120446}, url = {https://doi.org/10.1016/j.neuroimage.2023.120446}, } ``` ## Technical Details - Subjects: 12 - Recordings: 12 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 500.0 - Duration (hours): 0.88 - Pathology: Healthy - Modality: Visual - Type: Attention - Size on disk: 150.6 MB - File count: 12 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1016/j.neuroimage.2023.120446 - Source: nemar - OpenNeuro: [nm000346](https://openneuro.org/datasets/nm000346) - NeMAR: [nm000346](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) ## API Reference Use the `NM000346` class to access this dataset programmatically.
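The information-transfer rate reported under Performance can be sanity-checked with the standard Wolpaw ITR formula. The sketch below plugs in the 4-target, 85.0% accuracy, and 1.5 s selection-time figures from this page; the published 48.7 bits/min also depends on exact trial timing assumptions, so this calculation only lands in the same ballpark:

```python
import math

def wolpaw_itr(n_classes, accuracy, selection_time_s):
    """Wolpaw ITR in bits/min: bits per selection × selections per minute."""
    p, n = accuracy, n_classes
    bits = (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s

# Figures taken from this page's Performance block.
itr = wolpaw_itr(n_classes=4, accuracy=0.85, selection_time_s=1.5)
print(f"{itr:.1f} bits/min")  # ≈46.1 bits/min with these inputs
```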
### *class* eegdash.dataset.NM000346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP100 * **Study:** `nm000346` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP100` * **Canonical:** — Also importable as: `NM000346`, `Castillos2023_CastillosCVEP100`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
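The Signal Processing block above mentions 250 ms sliding windows with a 2 ms stride for feature extraction. Here is a NumPy sketch of that windowing at this dataset's 500 Hz sampling rate; the synthetic trial array is a placeholder, and this is not the paper's implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

sfreq = 500.0                      # Hz, from the acquisition metadata
win = int(0.250 * sfreq)           # 250 ms -> 125 samples
step = max(1, int(0.002 * sfreq))  # 2 ms -> 1 sample at 500 Hz

# Synthetic single-trial epoch: 32 channels × 2.2 s.
trial = np.random.default_rng(0).standard_normal((32, int(2.2 * sfreq)))

# All 250 ms windows along the time axis, subsampled by the stride.
windows = sliding_window_view(trial, win, axis=1)[:, ::step, :]
print(windows.shape)  # (32, 976, 125): 1100 - 125 + 1 = 976 windows
```

Each of the 976 windows would then be normalized and fed to the CNN for bitwise decoding.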
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000346](https://openneuro.org/datasets/nm000346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000346](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000346 >>> dataset = NM000346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000346) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000347: eeg dataset, 37 subjects *HefmiIch2025* Access recordings and metadata through EEGDash. **Citation:** Jian Shi, Danyang Chen, Xingwei Zhao, Zhixian Zhao, Shengjie Li, Yeguang Xu, Tao Ding, Zheng Zhu, Peng Zhang, Qing Ye, Yingxin Tang, Ping Zhang, Bo Tao, Zhouping Tang (2025). *HefmiIch2025*. 
[10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) Modality: eeg Subjects: 37 Recordings: 98 License: CC-BY-NC-ND-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000347 dataset = NM000347(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000347(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000347( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000347, title = {HefmiIch2025}, author = {Jian Shi and Danyang Chen and Xingwei Zhao and Zhixian Zhao and Shengjie Li and Yeguang Xu and Tao Ding and Zheng Zhu and Peng Zhang and Qing Ye and Yingxin Tang and Ping Zhang and Bo Tao and Zhouping Tang}, doi = {10.1038/s41597-025-06100-7}, url = {https://doi.org/10.1038/s41597-025-06100-7}, } ``` ## About This Dataset **HefmiIch2025** Hybrid EEG-fNIRS MI dataset for ICH from Shi et al 2025. **Dataset Overview** > Code: HefmiIch2025 > Paradigm: imagery > DOI: 10.1038/s41597-025-06100-7 ### View full README **HefmiIch2025** Hybrid EEG-fNIRS MI dataset for ICH from Shi et al 2025. 
**Dataset Overview** > Code: HefmiIch2025 > Paradigm: imagery > DOI: 10.1038/s41597-025-06100-7 > Subjects: 37 > Sessions per subject: 3 > Events: left_hand=1, right_hand=2 > Trial interval: [0, 10] s > File format: MAT (pre-epoched) > Data preprocessed: True **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Channel names: FC1, AF3, AF4, CP1, CP2, CP6, Cz, C3, C4, T7, T8, FC2, FC5, FC6, Pz, CP5, PO3, PO4, Oz, Fp2, Fp1, Fz, F3, F4, F7, F8, P3, P4, P7, P8, O1, O2 > Montage: biosemi32 > Hardware: g.HIamp (g.tec medical engineering GmbH) > Line frequency: 50.0 Hz > Online filters: {} **Participants** > Number of subjects: 37 > Health status: mixed (17 healthy, 20 ICH patients) > Clinical population: intracerebral hemorrhage (ICH) > Age: min=20.0, max=65.0 > Gender distribution: female=8, male=29 > Handedness: right-handed > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 27.0 s > Study design: 2-class hand MI (left/right grasping) for ICH rehabilitation. 17 healthy + 20 ICH patients, 1-6 sessions per subject. 
> Feedback type: none > Stimulus type: directional arrow + auditory beep > Stimulus modalities: visual, auditory > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand ``` > right_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand > Cue duration: 2.0 s > Imagery duration: 10.0 s **Data Structure** > Trials: 3330 > Trials context: 37 subjects × ~3 sessions × 30 trials = ~3330 **Signal Processing** > Classifiers: CSP+SVM, FBCSP+SVM, EEGBaseNet, TF+SVM > Feature extraction: CSP, FBCSP, time-frequency features > Frequency bands: preprocessing=[0.5, 30.0] Hz > Spatial filters: CSP, FBCSP **Cross-Validation** > Method: 5-fold > Folds: 5 > Evaluation type: within_subject **BCI Application** > Applications: rehabilitation > Environment: clinical > Online feedback: False **Tags** > Pathology: Healthy, Stroke > Modality: Motor > Type: Clinical, Research **Documentation** > DOI: 10.1038/s41597-025-06100-7 > License: CC-BY-NC-ND-4.0 > Investigators: Jian Shi, Danyang Chen, Xingwei Zhao, Zhixian Zhao, Shengjie Li, Yeguang Xu, Tao Ding, Zheng Zhu, Peng Zhang, Qing Ye, Yingxin Tang, Ping Zhang, Bo Tao, Zhouping Tang > Institution: Huazhong University of Science and Technology > Country: CN > Data URL: [https://figshare.com/articles/dataset/28955456](https://figshare.com/articles/dataset/28955456) > Publication year: 2025 **References** Shi, J., Chen, D., et al. (2025). HEFMI-ICH: a hybrid EEG-fNIRS motor imagery dataset for brain-computer interface in intracerebral hemorrhage. Scientific Data.
[https://doi.org/10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000347` | |---|---| | Title | HefmiIch2025 | | Author (year) | `HefmiIch2025` | | Canonical | `HEFMI_ICH`, `HEFMIICH` | | Importable as | `NM000347`, `HefmiIch2025`, `HEFMI_ICH`, `HEFMIICH` | | Year | 2025 | | Authors | Jian Shi, Danyang Chen, Xingwei Zhao, Zhixian Zhao, Shengjie Li, Yeguang Xu, Tao Ding, Zheng Zhu, Peng Zhang, Qing Ye, Yingxin Tang, Ping Zhang, Bo Tao, Zhouping Tang | | License | CC-BY-NC-ND-4.0 | | Citation / DOI | [doi:10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000347) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) | [Source URL](https://nemar.org/dataexplorer/detail/nm000347) | ### Copy-paste BibTeX ```bibtex @dataset{nm000347, title =
{HefmiIch2025}, author = {Jian Shi and Danyang Chen and Xingwei Zhao and Zhixian Zhao and Shengjie Li and Yeguang Xu and Tao Ding and Zheng Zhu and Peng Zhang and Qing Ye and Yingxin Tang and Ping Zhang and Bo Tao and Zhouping Tang}, doi = {10.1038/s41597-025-06100-7}, url = {https://doi.org/10.1038/s41597-025-06100-7}, } ``` ## Technical Details - Subjects: 37 - Recordings: 98 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 - Duration (hours): 31.19656032986111 - Pathology: Other - Modality: Multisensory - Type: Motor - Size on disk: 2.6 GB - File count: 98 - Format: BIDS - License: CC-BY-NC-ND-4.0 - DOI: doi:10.1038/s41597-025-06100-7 - Source: nemar - OpenNeuro: [nm000347](https://openneuro.org/datasets/nm000347) - NeMAR: [nm000347](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) ## API Reference Use the `NM000347` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HefmiIch2025 * **Study:** `nm000347` (NeMAR) * **Author (year):** `HefmiIch2025` * **Canonical:** `HEFMI_ICH`, `HEFMIICH` Also importable as: `NM000347`, `HefmiIch2025`, `HEFMI_ICH`, `HEFMIICH`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 37; recordings: 98; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000347](https://openneuro.org/datasets/nm000347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000347](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) DOI: [https://doi.org/10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) ### Examples ```pycon >>> from eegdash.dataset import NM000347 >>> dataset = NM000347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000347) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000348: eeg dataset, 51 subjects *Yang2025* Access recordings and metadata through EEGDash. **Citation:** Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao (2025). *Yang2025*. 
[10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) Modality: eeg Subjects: 51 Recordings: 153 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000348 dataset = NM000348(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000348(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000348( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000348, title = {Yang2025}, author = {Banghua Yang and Fenqi Rong and Yunlong Xie and Du Li and Jiayang Zhang and Fu Li and Guangming Shi and Xiaorong Gao}, doi = {10.1038/s41597-025-04826-y}, url = {https://doi.org/10.1038/s41597-025-04826-y}, } ``` ## About This Dataset **Yang2025** Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025. **Dataset Overview** > Code: Yang2025 > Paradigm: imagery > DOI: 10.1038/s41597-025-04826-y ### View full README **Yang2025** Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025. 
**Dataset Overview** > Code: Yang2025 > Paradigm: imagery > DOI: 10.1038/s41597-025-04826-y > Subjects: 51 > Sessions per subject: 3 > Events: left_hand=1, right_hand=2 > Trial interval: [1.5, 5.5] s > File format: BDF **Acquisition** > Sampling rate: 1000.0 Hz > Number of channels: 59 > Channel types: eeg=59, ecg=1, eog=4 > Channel names: Fpz, Fp1, Fp2, AF3, AF4, AF7, AF8, Fz, F1, F2, F3, F4, F5, F6, F7, F8, FCz, FC1, FC2, FC3, FC4, FC5, FC6, FT7, FT8, Cz, C1, C2, C3, C4, C5, C6, T7, T8, CP1, CP2, CP3, CP4, CP5, CP6, TP7, TP8, Pz, P3, P4, P5, P6, P7, P8, POz, PO3, PO4, PO5, PO6, PO7, PO8, Oz, O1, O2 > Montage: standard_1005 > Hardware: Neuracle NeuSen W > Sensor type: Ag/AgCl > Line frequency: 50.0 Hz > Online filters: {} **Participants** > Number of subjects: 51 > Health status: healthy > Age: min=17.0, max=30.0 > Gender distribution: female=18, male=44 > Handedness: right-handed > BCI experience: naive > Species: human **Experimental Protocol** > Paradigm: imagery > Number of classes: 2 > Class labels: left_hand, right_hand > Trial duration: 7.5 s > Study design: Multi-day MI-BCI: 2C (left/right hand, 51 subj) and 3C (left hand, right hand, foot-hooking, 11 subj). 3 sessions per subject on different days. 
> Feedback type: none > Stimulus type: video cues > Stimulus modalities: visual, auditory > Primary modality: visual > Synchronicity: synchronous > Mode: offline **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > left_hand ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Left, Hand right_hand ``` ```text ├─ Sensory-event, Experimental-stimulus, Visual-presentation └─ Agent-action └─ Imagine ├─ Move └─ Right, Hand ``` **Paradigm-Specific Parameters** > Detected paradigm: motor_imagery > Imagery tasks: left_hand, right_hand, feet > Cue duration: 1.5 s > Imagery duration: 4.0 s **Data Structure** > Trials: 39600 > Trials context: 51 subjects x 3 sessions x 200 trials (2C) + 11 subjects x 3 sessions x 300 trials (3C) = 39600 **Signal Processing** > Classifiers: CSP+SVM, FBCSP+SVM, EEGNet, deepConvNet, FBCNet > Feature extraction: CSP, FBCSP > Frequency bands: bandpass=[0.5, 40.0] Hz > Spatial filters: CSP, FBCSP **Cross-Validation** > Method: 10-fold > Folds: 10 > Evaluation type: within_session **BCI Application** > Applications: motor_control > Environment: laboratory > Online feedback: False **Tags** > Pathology: Healthy > Modality: Motor > Type: Research **Documentation** > DOI: 10.1038/s41597-025-04826-y > License: CC-BY-4.0 > Investigators: Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao > Institution: Shanghai University > Country: CN > Data URL: [https://plus.figshare.com/articles/dataset/22671172](https://plus.figshare.com/articles/dataset/22671172) > Publication year: 2025 **References** Yang, B., Rong, F., Xie, Y., et al. (2025). A multi-day and high-quality EEG dataset for motor imagery brain-computer interface. Scientific Data, 12, 488. 
[https://doi.org/10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. [https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000348` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Yang2025 | | Author (year) | `Yang2025` | | Canonical | — | | Importable as | `NM000348`, `Yang2025` | | Year | 2025 | | Authors | Banghua Yang, Fenqi Rong, Yunlong Xie, Du Li, Jiayang Zhang, Fu Li, Guangming Shi, Xiaorong Gao | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000348) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) | [Source URL](https://nemar.org/dataexplorer/detail/nm000348) | ### Copy-paste BibTeX ```bibtex @dataset{nm000348, title = {Yang2025}, author = {Banghua Yang and Fenqi Rong and Yunlong Xie and Du Li and Jiayang Zhang and Fu Li and Guangming Shi and Xiaorong Gao}, doi = 
{10.1038/s41597-025-04826-y}, url = {https://doi.org/10.1038/s41597-025-04826-y}, } ``` ## Technical Details - Subjects: 51 - Recordings: 153 - Tasks: 1 - Channels: 64 - Sampling rate (Hz): 1000.0 - Duration (hours): 98.42606861111108 - Pathology: Healthy - Modality: Visual - Type: Motor - Size on disk: 63.4 GB - File count: 153 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.1038/s41597-025-04826-y - Source: nemar - OpenNeuro: [nm000348](https://openneuro.org/datasets/nm000348) - NeMAR: [nm000348](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) ## API Reference Use the `NM000348` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Yang2025 * **Study:** `nm000348` (NeMAR) * **Author (year):** `Yang2025` * **Canonical:** — Also importable as: `NM000348`, `Yang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
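Since recording-level metadata is exposed via `dataset.description`, it can be filtered with ordinary tabular operations. A minimal sketch, assuming (as in braindecode's concat datasets) that `description` behaves like a pandas DataFrame with one row per recording; the stand-in DataFrame and its column values below are hypothetical, not this dataset's actual contents:

```python
import pandas as pd

# Hypothetical stand-in for dataset.description: one row per recording.
# Real columns depend on the dataset's BIDS entities (subject, session, task, ...).
description = pd.DataFrame(
    {"subject": ["01", "01", "02"], "session": ["1", "2", "1"]}
)

# Count the recordings contributed by subject "01".
n_sub01 = int((description["subject"] == "01").sum())
print(n_sub01)  # 2
```

The same boolean-indexing pattern works for any metadata column present in the table.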
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000348](https://openneuro.org/datasets/nm000348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000348](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) DOI: [https://doi.org/10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) ### Examples ```pycon >>> from eegdash.dataset import NM000348 >>> dataset = NM000348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000348) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NM000351: eeg dataset, 19 subjects *Mainsah2025-P* Access recordings and metadata through EEGDash. **Citation:** Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins (2019). *Mainsah2025-P*. 
[10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) Modality: eeg Subjects: 19 Recordings: 228 License: CC-BY-4.0 Source: nemar Metadata: Complete (100%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NM000351 dataset = NM000351(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NM000351(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NM000351( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nm000351, title = {Mainsah2025-P}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## About This Dataset **Mainsah2025-P** BigP3BCI Study P — 9x8 predictive/non-predictive spelling (19 ALS subjects). **Dataset Overview** > Code: Mainsah2025-P > Paradigm: p300 > DOI: 10.13026/0byy-ry86 ### View full README **Mainsah2025-P** BigP3BCI Study P — 9x8 predictive/non-predictive spelling (19 ALS subjects). 
**Dataset Overview** > Code: Mainsah2025-P > Paradigm: p300 > DOI: 10.13026/0byy-ry86 > Subjects: 19 > Sessions per subject: 2 > Events: Target=2, NonTarget=1 > Trial interval: [0, 1.0] s **Acquisition** > Sampling rate: 256.0 Hz > Number of channels: 32 > Channel types: eeg=32 > Montage: standard_1020 > Hardware: g.USBamp (g.tec) > Line frequency: 60.0 Hz **Participants** > Number of subjects: 19 > Health status: healthy **Experimental Protocol** > Paradigm: p300 > Number of classes: 2 > Class labels: Target, NonTarget **HED Event Annotations** > Schema: HED 8.4.0 | Browse: [https://www.hedtags.org/hed-schema-browser](https://www.hedtags.org/hed-schema-browser) > Target ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Target NonTarget ``` ```text ├─ Sensory-event ├─ Experimental-stimulus ├─ Visual-presentation └─ Non-target ``` **Paradigm-Specific Parameters** > Detected paradigm: p300 **Signal Processing** > Feature extraction: P300_ERP_detection **Cross-Validation** > Method: calibration-then-test > Evaluation type: within_subject **BCI Application** > Applications: speller > Environment: laboratory > Online feedback: True **Tags** > Modality: visual > Type: perception **Documentation** > Description: BigP3BCI: the largest public P300 BCI dataset, containing EEG recordings from ~267 subjects across 20 studies using 6x6 or 9x8 character grids with various stimulus paradigms. 
> DOI: 10.13026/0byy-ry86 > License: CC-BY-4.0 > Investigators: Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins > Institution: Duke University; East Tennessee State University > Country: US > Repository: PhysioNet > Data URL: [https://physionet.org/content/bigp3bci/1.0.0/](https://physionet.org/content/bigp3bci/1.0.0/) > Publication year: 2025 **References** Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Hochenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software 4: (1896). [https://doi.org/10.21105/joss.01896](https://doi.org/10.21105/joss.01896) Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. 
[https://doi.org/10.1038/s41597-019-0104-8](https://doi.org/10.1038/s41597-019-0104-8) Generated by MOABB 1.5.0 (Mother of All BCI Benchmarks) [https://github.com/NeuroTechX/moabb](https://github.com/NeuroTechX/moabb) ## Dataset Information | Dataset ID | `NM000351` | |----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | Mainsah2025-P | | Author (year) | `Mainsah2025_P` | | Canonical | — | | Importable as | `NM000351`, `Mainsah2025_P` | | Year | 2019 | | Authors | Boyla Mainsah, Chance Fleeting, Thomas Balmat, Eric Sellers, Leslie Collins | | License | CC-BY-4.0 | | Citation / DOI | [doi:10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) | | Source links | [OpenNeuro](https://openneuro.org/datasets/nm000351) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) | [Source URL](https://nemar.org/dataexplorer/detail/nm000351) | ### Copy-paste BibTeX ```bibtex @dataset{nm000351, title = {Mainsah2025-P}, author = {Boyla Mainsah and Chance Fleeting and Thomas Balmat and Eric Sellers and Leslie Collins}, doi = {10.13026/0byy-ry86}, url = {https://doi.org/10.13026/0byy-ry86}, } ``` ## Technical Details - Subjects: 19 - Recordings: 228 - Tasks: 1 - Channels: 32 - Sampling rate (Hz): 256.0 (nominal; measured per-recording rates deviate from 256.0 by less than 0.0002 Hz) - Duration (hours): 17.685858862826244 - Pathology: Other - Modality: Visual - Type: Attention - Size on disk: 1.5 GB - File count: 228 - Format: BIDS - License: CC-BY-4.0 - DOI: doi:10.13026/0byy-ry86 - Source: nemar - OpenNeuro: 
[nm000351](https://openneuro.org/datasets/nm000351) - NeMAR: [nm000351](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) ## API Reference Use the `NM000351` class to access this dataset programmatically. ### *class* eegdash.dataset.NM000351(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-P * **Study:** `nm000351` (NeMAR) * **Author (year):** `Mainsah2025_P` * **Canonical:** — Also importable as: `NM000351`, `Mainsah2025_P`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 19; recordings: 228; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
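To make the query semantics above concrete, here is a tiny, self-contained illustration of how an implicit dataset filter is ANDed with a user-supplied MongoDB-style query. The `matches` helper is hypothetical (equality and `$in` only), written just to show the filter semantics; it is not the library's actual matching code, which runs server-side against the metadata database:

```python
def matches(record, query):
    """Minimal MongoDB-style matcher: plain equality and the $in operator only."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "nm000351", "subject": "01"},
    {"dataset": "nm000351", "subject": "03"},
]
# The dataset filter is combined (ANDed) with the user query.
merged = {"dataset": "nm000351", "subject": {"$in": ["01", "02"]}}
kept = [r["subject"] for r in records if matches(r, merged)]
print(kept)  # ['01']
```

This is also why the user query must not contain the key `dataset`: that field is reserved for the filter the class adds itself.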
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000351](https://openneuro.org/datasets/nm000351) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000351](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000351 >>> dataset = NM000351(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### \_\_init_\_(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nm000351) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) * [eegdash.dataset.DS001785](eegdash.dataset.DS001785.md) * [eegdash.dataset.DS001787](eegdash.dataset.DS001787.md) * [eegdash.dataset.DS001810](eegdash.dataset.DS001810.md) * [eegdash.dataset.DS001849](eegdash.dataset.DS001849.md) * [eegdash.dataset.DS001971](eegdash.dataset.DS001971.md) # NOD_EEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NOD_EEG dataset = NOD_EEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NOD_EEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NOD_EEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nod_eeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NOD_EEG` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NOD_EEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nod_eeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nod_eeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [nod_eeg](https://openneuro.org/datasets/nod_eeg) - NeMAR: [nod_eeg](https://nemar.org/dataexplorer/detail?dataset_id=nod_eeg) ## API Reference Use the `NOD_EEG` class to access this dataset programmatically. 
### eegdash.dataset.NOD_EEG alias of [`DS005811`](eegdash.dataset.DS005811.md#eegdash.dataset.DS005811) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nod_eeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nod_eeg) # NOD_MEG: MEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NOD_MEG dataset = NOD_MEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NOD_MEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NOD_MEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nod_meg, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `NOD_MEG` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NOD_MEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nod_meg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nod_meg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [nod_meg](https://openneuro.org/datasets/nod_meg) - NeMAR: [nod_meg](https://nemar.org/dataexplorer/detail?dataset_id=nod_meg) ## API Reference Use the `NOD_MEG` class to access this dataset programmatically. ### eegdash.dataset.NOD_MEG alias of [`DS005810`](eegdash.dataset.DS005810.md#eegdash.dataset.DS005810) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nod_meg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nod_meg) # NenckiSymfonia: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NenckiSymfonia dataset = NenckiSymfonia(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NenckiSymfonia(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NenckiSymfonia( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nenckisymfonia, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NENCKISYMFONIA` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NENCKISYMFONIA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nenckisymfonia) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nenckisymfonia) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [nenckisymfonia](https://openneuro.org/datasets/nenckisymfonia) - NeMAR: [nenckisymfonia](https://nemar.org/dataexplorer/detail?dataset_id=nenckisymfonia) ## API Reference Use the `NenckiSymfonia` class to access this 
dataset programmatically. ### eegdash.dataset.NenckiSymfonia alias of [`DS004621`](eegdash.dataset.DS004621.md#eegdash.dataset.DS004621) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nenckisymfonia) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nenckisymfonia) # Neuma: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
## Dataset Information | Dataset ID | `NEUMA` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NEUMA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/neuma) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=neuma) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [neuma](https://openneuro.org/datasets/neuma) - NeMAR: [neuma](https://nemar.org/dataexplorer/detail?dataset_id=neuma) ## API Reference Use the `Neuma` class to access this dataset programmatically. ### eegdash.dataset.Neuma alias of [`DS004588`](eegdash.dataset.DS004588.md#eegdash.dataset.DS004588) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/neuma) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=neuma) # NeuroMorph: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import NeuroMorph dataset = NeuroMorph(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = NeuroMorph(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = NeuroMorph( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{neuromorph, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NEUROMORPH` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NEUROMORPH` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/neuromorph) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [neuromorph](https://openneuro.org/datasets/neuromorph) - NeMAR: [neuromorph](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph) ## API Reference Use the `NeuroMorph` class to access this dataset programmatically. 
### eegdash.dataset.NeuroMorph alias of [`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/neuromorph) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph) # Nierula2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Nierula2019 dataset = Nierula2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Nierula2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Nierula2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{nierula2019, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `NIERULA2019` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NIERULA2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/nierula2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=nierula2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [nierula2019](https://openneuro.org/datasets/nierula2019) - NeMAR: [nierula2019](https://nemar.org/dataexplorer/detail?dataset_id=nierula2019) ## API Reference Use the `Nierula2019` class to access this dataset programmatically. ### eegdash.dataset.Nierula2019 alias of [`DS005307`](eegdash.dataset.DS005307.md#eegdash.dataset.DS005307) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/nierula2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=nierula2019) # Ning2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Ning2024 dataset = Ning2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Ning2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Ning2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ning2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `NING2024` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NING2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ning2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ning2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ning2024](https://openneuro.org/datasets/ning2024) - NeMAR: [ning2024](https://nemar.org/dataexplorer/detail?dataset_id=ning2024) ## API Reference Use the `Ning2024` class to access this dataset programmatically. 
### eegdash.dataset.Ning2024 alias of [`DS004830`](eegdash.dataset.DS004830.md#eegdash.dataset.DS004830) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ning2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ning2024) # Normannseth2026: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Normannseth2026 dataset = Normannseth2026(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Normannseth2026(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Normannseth2026( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{normannseth2026, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `NORMANNSETH2026` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `NORMANNSETH2026` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/normannseth2026) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=normannseth2026) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [normannseth2026](https://openneuro.org/datasets/normannseth2026) - NeMAR: [normannseth2026](https://nemar.org/dataexplorer/detail?dataset_id=normannseth2026) ## API Reference Use the `Normannseth2026` class to access this dataset programmatically. ### eegdash.dataset.Normannseth2026 alias of [`DS007615`](eegdash.dataset.DS007615.md#eegdash.dataset.DS007615) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/normannseth2026) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=normannseth2026) # OMEGA: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import OMEGA dataset = OMEGA(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = OMEGA(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = OMEGA( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{omega, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `OMEGA` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OMEGA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/omega) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=omega) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [omega](https://openneuro.org/datasets/omega) - NeMAR: [omega](https://nemar.org/dataexplorer/detail?dataset_id=omega) ## API Reference Use the `OMEGA` class to access this dataset programmatically. 
### eegdash.dataset.OMEGA alias of [`DS000247`](eegdash.dataset.DS000247.md#eegdash.dataset.DS000247) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/omega) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=omega) # ORHA: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ORHA dataset = ORHA(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ORHA(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ORHA( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{orha, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ORHA` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ORHA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/orha) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=orha) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [orha](https://openneuro.org/datasets/orha) - NeMAR: [orha](https://nemar.org/dataexplorer/detail?dataset_id=orha) ## API Reference Use the `ORHA` class to access this dataset programmatically. ### eegdash.dataset.ORHA alias of [`DS005363`](eegdash.dataset.DS005363.md#eegdash.dataset.DS005363) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/orha) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=orha) # OcularLDT: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import OcularLDT dataset = OcularLDT(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = OcularLDT(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = OcularLDT( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ocularldt, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `OCULARLDT` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OCULARLDT` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ocularldt) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ocularldt) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ocularldt](https://openneuro.org/datasets/ocularldt) - NeMAR: [ocularldt](https://nemar.org/dataexplorer/detail?dataset_id=ocularldt) ## API Reference Use the `OcularLDT` class to access this dataset programmatically. 
### eegdash.dataset.OcularLDT alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ocularldt) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ocularldt) # Oikonomou2016: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Oikonomou2016 dataset = Oikonomou2016(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Oikonomou2016(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Oikonomou2016( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{oikonomou2016, } ``` ## About This Dataset No README content is available for this dataset.
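Each "API Reference" section on these pages states that the named class is an alias of one canonical dataset ID (a `DS…` number for OpenNeuro, an `NM…` number for NeMAR-indexed entries). A minimal sketch of that alias pattern, using only IDs quoted on these pages (the authoritative mapping lives in `eegdash.dataset`, not in this dict, and the `NM` → NeMAR inference is an assumption drawn from the pages themselves):

```python
# Hypothetical illustration of the alias pattern described in the
# "API Reference" sections. The real name -> ID table is maintained by
# eegdash.dataset; this dict only repeats IDs quoted on nearby pages.
ALIASES = {
    "OcularLDT": "DS002312",
    "Oikonomou2016": "NM000119",
    "OMEGA": "DS000247",
}

def canonical_id(name: str) -> str:
    """Resolve a human-readable dataset name to its canonical ID."""
    return ALIASES[name]

def source_of(dataset_id: str) -> str:
    """Guess the hosting index from the ID prefix (assumption: NM = NeMAR)."""
    return "NeMAR" if dataset_id.startswith("NM") else "OpenNeuro"

print(canonical_id("Oikonomou2016"), source_of(canonical_id("Oikonomou2016")))
```

The practical upshot is that `eegdash.dataset.OcularLDT` and `eegdash.dataset.DS002312` refer to the same dataset class; the named alias is just the more readable handle.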
## Dataset Information | Dataset ID | `OIKONOMOU2016` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OIKONOMOU2016` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/oikonomou2016) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=oikonomou2016) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [oikonomou2016](https://openneuro.org/datasets/oikonomou2016) - NeMAR: [oikonomou2016](https://nemar.org/dataexplorer/detail?dataset_id=oikonomou2016) ## API Reference Use the `Oikonomou2016` class to access this dataset programmatically. ### eegdash.dataset.Oikonomou2016 alias of [`NM000119`](eegdash.dataset.NM000119.md#eegdash.dataset.NM000119) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/oikonomou2016) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=oikonomou2016) # Omelyusik2026: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Omelyusik2026 dataset = Omelyusik2026(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Omelyusik2026(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Omelyusik2026( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{omelyusik2026, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `OMELYUSIK2026` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OMELYUSIK2026` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/omelyusik2026) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=omelyusik2026) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [omelyusik2026](https://openneuro.org/datasets/omelyusik2026) - NeMAR: [omelyusik2026](https://nemar.org/dataexplorer/detail?dataset_id=omelyusik2026) ## API Reference Use the `Omelyusik2026` class to access this dataset 
programmatically. ### eegdash.dataset.Omelyusik2026 alias of [`DS006136`](eegdash.dataset.DS006136.md#eegdash.dataset.DS006136) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/omelyusik2026) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=omelyusik2026) # Onton2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Onton2024 dataset = Onton2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Onton2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Onton2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{onton2024, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ONTON2024` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ONTON2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/onton2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=onton2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [onton2024](https://openneuro.org/datasets/onton2024) - NeMAR: [onton2024](https://nemar.org/dataexplorer/detail?dataset_id=onton2024) ## API Reference Use the `Onton2024` class to access this dataset programmatically. ### eegdash.dataset.Onton2024 alias of [`DS006695`](eegdash.dataset.DS006695.md#eegdash.dataset.DS006695) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/onton2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=onton2024) # OpenBMI_ERP: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import OpenBMI_ERP dataset = OpenBMI_ERP(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = OpenBMI_ERP(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = OpenBMI_ERP( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{openbmi_erp, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `OPENBMI_ERP` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OPENBMI_ERP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/openbmi_erp) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_erp) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [openbmi_erp](https://openneuro.org/datasets/openbmi_erp) - NeMAR: [openbmi_erp](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_erp) ## API Reference Use the `OpenBMI_ERP` class to access this dataset programmatically. 
### eegdash.dataset.OpenBMI_ERP alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/openbmi_erp) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_erp) # OpenBMI_MI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import OpenBMI_MI dataset = OpenBMI_MI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = OpenBMI_MI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = OpenBMI_MI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{openbmi_mi, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `OPENBMI_MI` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OPENBMI_MI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/openbmi_mi) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_mi) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [openbmi_mi](https://openneuro.org/datasets/openbmi_mi) - NeMAR: [openbmi_mi](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_mi) ## API Reference Use the `OpenBMI_MI` class to access this dataset programmatically. ### eegdash.dataset.OpenBMI_MI alias of [`NM000338`](eegdash.dataset.NM000338.md#eegdash.dataset.NM000338) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/openbmi_mi) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_mi) # OpenBMI_P300: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import OpenBMI_P300 dataset = OpenBMI_P300(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = OpenBMI_P300(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = OpenBMI_P300( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{openbmi_p300, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `OPENBMI_P300` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `OPENBMI_P300` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/openbmi_p300) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_p300) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [openbmi_p300](https://openneuro.org/datasets/openbmi_p300) - NeMAR: [openbmi_p300](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_p300) ## API Reference Use the `OpenBMI_P300` class to access this dataset programmatically. 
### eegdash.dataset.OpenBMI_P300 alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/openbmi_p300) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=openbmi_p300) # PAL: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PAL dataset = PAL(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PAL(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PAL( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pal, } ``` ## About This Dataset No README content is available for this dataset.
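The "Cite This Dataset" blocks on these pages render an empty `@dataset{…}` skeleton when no citation metadata is available. As a small, hypothetical illustration (not an EEGDash API) of how such a skeleton could be filled once author, title, or year become known, using common `@dataset` field conventions:

```python
# Hypothetical helper: fill the empty @dataset BibTeX skeletons shown in
# the "Cite This Dataset" sections once metadata is known. Field names
# follow common @dataset conventions; this is not part of eegdash.
def bibtex_dataset(key: str, **fields: str) -> str:
    """Render a @dataset entry; with no fields, mirror the empty skeleton."""
    if not fields:
        return f"@dataset{{{key},\n}}"
    body = ",\n".join(f"  {name} = {{{value}}}" for name, value in fields.items())
    return f"@dataset{{{key},\n{body}\n}}"

print(bibtex_dataset("pal", title="PAL", publisher="OpenNeuro"))
```

Until the upstream metadata is populated, the original authors' own citation instructions on OpenNeuro remain the authoritative source.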
## Dataset Information | Dataset ID | `PAL` | |----------------|-----------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PAL` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pal) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pal) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pal](https://openneuro.org/datasets/pal) - NeMAR: [pal](https://nemar.org/dataexplorer/detail?dataset_id=pal) ## API Reference Use the `PAL` class to access this dataset programmatically. ### eegdash.dataset.PAL alias of [`DS005059`](eegdash.dataset.DS005059.md#eegdash.dataset.DS005059) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pal) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pal) # PDEEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PDEEG dataset = PDEEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PDEEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PDEEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pdeeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PDEEG` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PDEEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pdeeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pdeeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pdeeg](https://openneuro.org/datasets/pdeeg) - NeMAR: [pdeeg](https://nemar.org/dataexplorer/detail?dataset_id=pdeeg) ## API Reference Use the `PDEEG` class to access this dataset programmatically. 
### eegdash.dataset.PDEEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pdeeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pdeeg) # PD_EEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PD_EEG dataset = PD_EEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PD_EEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PD_EEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pd_eeg, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PD_EEG` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PD_EEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pd_eeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pd_eeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pd_eeg](https://openneuro.org/datasets/pd_eeg) - NeMAR: [pd_eeg](https://nemar.org/dataexplorer/detail?dataset_id=pd_eeg) ## API Reference Use the `PD_EEG` class to access this dataset programmatically. ### eegdash.dataset.PD_EEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pd_eeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pd_eeg) # PEARLNeuro: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PEARLNeuro dataset = PEARLNeuro(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PEARLNeuro(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PEARLNeuro( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pearlneuro, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PEARLNEURO` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PEARLNEURO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pearlneuro) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pearlneuro) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pearlneuro](https://openneuro.org/datasets/pearlneuro) - NeMAR: [pearlneuro](https://nemar.org/dataexplorer/detail?dataset_id=pearlneuro) ## API Reference Use the `PEARLNeuro` class to access this dataset programmatically. 
### eegdash.dataset.PEARLNeuro alias of [`DS004796`](eegdash.dataset.DS004796.md#eegdash.dataset.DS004796) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pearlneuro) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pearlneuro) # PEERS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PEERS dataset = PEERS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PEERS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PEERS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{peers, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PEERS` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PEERS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/peers) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=peers) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [peers](https://openneuro.org/datasets/peers) - NeMAR: [peers](https://nemar.org/dataexplorer/detail?dataset_id=peers) ## API Reference Use the `PEERS` class to access this dataset programmatically. ### eegdash.dataset.PEERS alias of [`DS004395`](eegdash.dataset.DS004395.md#eegdash.dataset.DS004395) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/peers) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=peers) # PRIOS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PRIOS dataset = PRIOS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PRIOS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PRIOS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{prios, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PRIOS` | |----------------|---------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PRIOS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/prios) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=prios) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [prios](https://openneuro.org/datasets/prios) - NeMAR: [prios](https://nemar.org/dataexplorer/detail?dataset_id=prios) ## API Reference Use the `PRIOS` class to access this dataset programmatically. 
### eegdash.dataset.PRIOS alias of [`DS004370`](eegdash.dataset.DS004370.md#eegdash.dataset.DS004370) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/prios) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=prios) # PROMENADE: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PROMENADE dataset = PROMENADE(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PROMENADE(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PROMENADE( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{promenade, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PROMENADE` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PROMENADE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/promenade) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=promenade) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [promenade](https://openneuro.org/datasets/promenade) - NeMAR: [promenade](https://nemar.org/dataexplorer/detail?dataset_id=promenade) ## API Reference Use the `PROMENADE` class to access this dataset programmatically. ### eegdash.dataset.PROMENADE alias of [`DS005946`](eegdash.dataset.DS005946.md#eegdash.dataset.DS005946) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/promenade) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=promenade) # PWIe: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PWIe dataset = PWIe(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PWIe(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PWIe( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pwie, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PWIE` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PWIE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pwie) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pwie) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pwie](https://openneuro.org/datasets/pwie) - NeMAR: [pwie](https://nemar.org/dataexplorer/detail?dataset_id=pwie) ## API Reference Use the `PWIe` class to access this dataset programmatically. 
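Named dataset classes in `eegdash.dataset` are thin aliases for canonical accession-numbered classes, so either name imports the same class. A hypothetical sketch of that pattern (the stand-in classes below are illustrative, not the real eegdash implementations):

```python
# Sketch of the aliasing pattern: a human-readable name is simply
# another module-level binding to the canonical accession-numbered
# class. The class and its attribute here are stand-ins for
# illustration only.

class DS005932:
    """Canonical class keyed by the dataset accession number."""
    dataset_id = "ds005932"

# A plain alias: no subclassing, both names refer to one object.
PWIe = DS005932

print(PWIe is DS005932)  # True
print(PWIe.dataset_id)   # ds005932
```

Because the two names are the same object, instances created through the alias are indistinguishable from instances of the canonical class, and `isinstance` checks work with either name.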
### eegdash.dataset.PWIe alias of [`DS005932`](eegdash.dataset.DS005932.md#eegdash.dataset.DS005932) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pwie) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pwie) # Penalver2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Penalver2024 dataset = Penalver2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Penalver2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Penalver2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{penalver2024, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PENALVER2024` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PENALVER2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/penalver2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=penalver2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [penalver2024](https://openneuro.org/datasets/penalver2024) - NeMAR: [penalver2024](https://nemar.org/dataexplorer/detail?dataset_id=penalver2024) ## API Reference Use the `Penalver2024` class to access this dataset programmatically. ### eegdash.dataset.Penalver2024 alias of [`DS004502`](eegdash.dataset.DS004502.md#eegdash.dataset.DS004502) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/penalver2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=penalver2024) # Peng2018: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Peng2018 dataset = Peng2018(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Peng2018(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Peng2018( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{peng2018, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PENG2018` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PENG2018` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/peng2018) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=peng2018) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [peng2018](https://openneuro.org/datasets/peng2018) - NeMAR: [peng2018](https://nemar.org/dataexplorer/detail?dataset_id=peng2018) ## API Reference Use the `Peng2018` class to access this dataset programmatically. 
### eegdash.dataset.Peng2018 alias of [`DS005777`](eegdash.dataset.DS005777.md#eegdash.dataset.DS005777) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/peng2018) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=peng2018) # PerceiveImagine: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PerceiveImagine dataset = PerceiveImagine(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PerceiveImagine(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PerceiveImagine( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{perceiveimagine, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PERCEIVEIMAGINE` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PERCEIVEIMAGINE` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/perceiveimagine) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=perceiveimagine) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [perceiveimagine](https://openneuro.org/datasets/perceiveimagine) - NeMAR: [perceiveimagine](https://nemar.org/dataexplorer/detail?dataset_id=perceiveimagine) ## API Reference Use the `PerceiveImagine` class to access this dataset programmatically. ### eegdash.dataset.PerceiveImagine alias of [`DS005697`](eegdash.dataset.DS005697.md#eegdash.dataset.DS005697) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/perceiveimagine) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=perceiveimagine) # PhysionetMI: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import PhysionetMI dataset = PhysionetMI(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = PhysionetMI(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = PhysionetMI( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{physionetmi, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `PHYSIONETMI` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PHYSIONETMI` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/physionetmi) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=physionetmi) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [physionetmi](https://openneuro.org/datasets/physionetmi) - NeMAR: [physionetmi](https://nemar.org/dataexplorer/detail?dataset_id=physionetmi) ## API Reference Use the `PhysionetMI` class to access this dataset programmatically. 
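Once recordings are loaded, a typical next step before training a PyTorch model is cutting the continuous signal into fixed-length windows. braindecode provides windowing utilities for this; the stand-alone sketch below only illustrates the indexing arithmetic, and the 160 Hz rate is an assumption based on the original PhysioNet motor-imagery recordings:

```python
# Fixed-length sliding windows over a continuous signal, one channel.
# Illustrative only: real pipelines would use braindecode's windowing
# utilities on the loaded dataset rather than this helper.

def sliding_windows(signal, window_size, stride):
    """Yield (start_index, window) pairs of length `window_size`."""
    n = len(signal)
    for start in range(0, n - window_size + 1, stride):
        yield start, signal[start:start + window_size]

sfreq = 160                 # assumed sampling rate (Hz)
signal = list(range(800))   # 5 s of fake single-channel samples
windows = list(sliding_windows(signal, window_size=2 * sfreq, stride=sfreq))
print(len(windows))  # 4 overlapping 2-second windows
```

With a stride equal to half the window length, consecutive windows overlap by 50%, a common trade-off between training-set size and sample independence.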
### eegdash.dataset.PhysionetMI alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/physionetmi) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=physionetmi) # Podcast: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Podcast dataset = Podcast(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Podcast(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Podcast( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{podcast, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `PODCAST` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `PODCAST` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/podcast) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=podcast) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [podcast](https://openneuro.org/datasets/podcast) - NeMAR: [podcast](https://nemar.org/dataexplorer/detail?dataset_id=podcast) ## API Reference Use the `Podcast` class to access this dataset programmatically. ### eegdash.dataset.Podcast alias of [`DS005574`](eegdash.dataset.DS005574.md#eegdash.dataset.DS005574) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/podcast) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=podcast) # Pohle2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Pohle2019 dataset = Pohle2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Pohle2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Pohle2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{pohle2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `POHLE2019` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `POHLE2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/pohle2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pohle2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [pohle2019](https://openneuro.org/datasets/pohle2019) - NeMAR: [pohle2019](https://nemar.org/dataexplorer/detail?dataset_id=pohle2019) ## API Reference Use the `Pohle2019` class to access this dataset programmatically. 
### eegdash.dataset.Pohle2019 alias of [`DS006374`](eegdash.dataset.DS006374.md#eegdash.dataset.DS006374) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/pohle2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pohle2019) # RAM_catFR: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import RAM_catFR dataset = RAM_catFR(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = RAM_catFR(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = RAM_catFR( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ram_catfr, } ``` ## About This Dataset No README content is available for this dataset. 
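The API Reference entries above state that each named class (e.g. `Pohle2019`) is an alias of a numbered dataset class (e.g. `DS006374`). A minimal pure-Python sketch of that aliasing pattern, using stand-in classes rather than the real `eegdash.dataset` objects:

```python
# Illustrative stand-ins only -- not the real eegdash classes.
# In eegdash.dataset, a readable name such as Pohle2019 is a
# module-level alias of the numbered class DS006374, so both
# names refer to the very same class object.
class DS006374:
    """Stand-in for the numbered dataset class."""

Pohle2019 = DS006374  # the alias: no subclassing, just a second name

# Both names construct and type-check identically.
print(Pohle2019 is DS006374)  # → True
```

Because the alias is the same object, `isinstance` checks and imports through either name are interchangeable.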
## Dataset Information

| Dataset ID | `RAM_CATFR` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RAM_CATFR` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ram_catfr) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ram_catfr) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [ram_catfr](https://openneuro.org/datasets/ram_catfr)
- NeMAR: [ram_catfr](https://nemar.org/dataexplorer/detail?dataset_id=ram_catfr)

## API Reference

Use the `RAM_catFR` class to access this dataset programmatically.

### eegdash.dataset.RAM_catFR

alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ram_catfr)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ram_catfr)

# RESPect_CCEP: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import RESPect_CCEP

dataset = RESPect_CCEP(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = RESPect_CCEP(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = RESPect_CCEP(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{respect_ccep,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `RESPECT_CCEP` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RESPECT_CCEP` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/respect_ccep) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=respect_ccep) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [respect_ccep](https://openneuro.org/datasets/respect_ccep)
- NeMAR: [respect_ccep](https://nemar.org/dataexplorer/detail?dataset_id=respect_ccep)

## API Reference

Use the `RESPect_CCEP` class to access this dataset programmatically.

### eegdash.dataset.RESPect_CCEP

alias of [`DS004080`](eegdash.dataset.DS004080.md#eegdash.dataset.DS004080)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/respect_ccep)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=respect_ccep)

# RESPect_intraop: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import RESPect_intraop

dataset = RESPect_intraop(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = RESPect_intraop(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = RESPect_intraop(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{respect_intraop,
}
```

## About This Dataset

No README content is available for this dataset.
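The "Advanced query" examples above pass MongoDB-style operators such as `$in` to filter recordings by metadata. A minimal sketch of that matching semantics, illustrative only (EEGDash evaluates such queries against its metadata database; this is not its implementation):

```python
# Toy metadata records standing in for dataset recordings.
records = [
    {"subject": "01", "task": "rest"},
    {"subject": "02", "task": "rest"},
    {"subject": "03", "task": "rest"},
]

# Same shape as the query= argument in the examples above.
query = {"subject": {"$in": ["01", "02"]}}

def matches(record, query):
    """Return True if a record satisfies every field condition."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            # $in: the field's value must be one of the listed values.
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # Plain equality match, as in subject="01".
            return False
    return True

selected = [r for r in records if matches(r, query)]
print([r["subject"] for r in selected])  # → ['01', '02']
```

The plain `subject="01"` keyword filter is just the equality case of the same mechanism.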
## Dataset Information

| Dataset ID | `RESPECT_INTRAOP` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RESPECT_INTRAOP` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/respect_intraop) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=respect_intraop) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [respect_intraop](https://openneuro.org/datasets/respect_intraop)
- NeMAR: [respect_intraop](https://nemar.org/dataexplorer/detail?dataset_id=respect_intraop)

## API Reference

Use the `RESPect_intraop` class to access this dataset programmatically.

### eegdash.dataset.RESPect_intraop

alias of [`DS003844`](eegdash.dataset.DS003844.md#eegdash.dataset.DS003844)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/respect_intraop)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=respect_intraop)

# RESPect_longterm: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import RESPect_longterm

dataset = RESPect_longterm(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = RESPect_longterm(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = RESPect_longterm(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{respect_longterm,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `RESPECT_LONGTERM` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RESPECT_LONGTERM` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/respect_longterm) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=respect_longterm) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [respect_longterm](https://openneuro.org/datasets/respect_longterm)
- NeMAR: [respect_longterm](https://nemar.org/dataexplorer/detail?dataset_id=respect_longterm)

## API Reference

Use the `RESPect_longterm` class to access this dataset programmatically.

### eegdash.dataset.RESPect_longterm

alias of [`DS003848`](eegdash.dataset.DS003848.md#eegdash.dataset.DS003848)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/respect_longterm)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=respect_longterm)

# Ramzaoui2024: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Ramzaoui2024

dataset = Ramzaoui2024(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Ramzaoui2024(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Ramzaoui2024(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ramzaoui2024,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `RAMZAOUI2024` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RAMZAOUI2024` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ramzaoui2024) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ramzaoui2024) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [ramzaoui2024](https://openneuro.org/datasets/ramzaoui2024)
- NeMAR: [ramzaoui2024](https://nemar.org/dataexplorer/detail?dataset_id=ramzaoui2024)

## API Reference

Use the `Ramzaoui2024` class to access this dataset programmatically.

### eegdash.dataset.Ramzaoui2024

alias of [`DS006979`](eegdash.dataset.DS006979.md#eegdash.dataset.DS006979)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ramzaoui2024)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ramzaoui2024)

# Rani2019: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Rani2019

dataset = Rani2019(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Rani2019(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Rani2019(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{rani2019,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `RANI2019` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RANI2019` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/rani2019) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=rani2019) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [rani2019](https://openneuro.org/datasets/rani2019)
- NeMAR: [rani2019](https://nemar.org/dataexplorer/detail?dataset_id=rani2019)

## API Reference

Use the `Rani2019` class to access this dataset programmatically.

### eegdash.dataset.Rani2019

alias of [`DS004012`](eegdash.dataset.DS004012.md#eegdash.dataset.DS004012)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/rani2019)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=rani2019)

# Rockhill2022: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Rockhill2022

dataset = Rockhill2022(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Rockhill2022(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Rockhill2022(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{rockhill2022,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `ROCKHILL2022` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ROCKHILL2022` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/rockhill2022) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=rockhill2022) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [rockhill2022](https://openneuro.org/datasets/rockhill2022)
- NeMAR: [rockhill2022](https://nemar.org/dataexplorer/detail?dataset_id=rockhill2022)

## API Reference

Use the `Rockhill2022` class to access this dataset programmatically.

### eegdash.dataset.Rockhill2022

alias of [`DS004473`](eegdash.dataset.DS004473.md#eegdash.dataset.DS004473)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/rockhill2022)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=rockhill2022)

# Rodrigues2017: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Rodrigues2017

dataset = Rodrigues2017(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Rodrigues2017(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Rodrigues2017(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{rodrigues2017,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `RODRIGUES2017` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RODRIGUES2017` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/rodrigues2017) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=rodrigues2017) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [rodrigues2017](https://openneuro.org/datasets/rodrigues2017)
- NeMAR: [rodrigues2017](https://nemar.org/dataexplorer/detail?dataset_id=rodrigues2017)

## API Reference

Use the `Rodrigues2017` class to access this dataset programmatically.

### eegdash.dataset.Rodrigues2017

alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/rodrigues2017)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=rodrigues2017)

# Romani2025: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Romani2025

dataset = Romani2025(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Romani2025(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Romani2025(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{romani2025,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `ROMANI2025` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ROMANI2025` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/romani2025) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=romani2025) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [romani2025](https://openneuro.org/datasets/romani2025)
- NeMAR: [romani2025](https://nemar.org/dataexplorer/detail?dataset_id=romani2025)

## API Reference

Use the `Romani2025` class to access this dataset programmatically.

### eegdash.dataset.Romani2025

alias of [`NM000147`](eegdash.dataset.NM000147.md#eegdash.dataset.NM000147)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/romani2025)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=romani2025)

# Romani2025_erp: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Romani2025_erp

dataset = Romani2025_erp(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Romani2025_erp(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Romani2025_erp(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{romani2025_erp,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `ROMANI2025_ERP` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ROMANI2025_ERP` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/romani2025_erp) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=romani2025_erp) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [romani2025_erp](https://openneuro.org/datasets/romani2025_erp)
- NeMAR: [romani2025_erp](https://nemar.org/dataexplorer/detail?dataset_id=romani2025_erp)

## API Reference

Use the `Romani2025_erp` class to access this dataset programmatically.

### eegdash.dataset.Romani2025_erp

alias of [`NM000272`](eegdash.dataset.NM000272.md#eegdash.dataset.NM000272)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/romani2025_erp)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=romani2025_erp)

# Runabout: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import Runabout

dataset = Runabout(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = Runabout(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = Runabout(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{runabout,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `RUNABOUT` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `RUNABOUT` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/runabout) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=runabout) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [runabout](https://openneuro.org/datasets/runabout)
- NeMAR: [runabout](https://nemar.org/dataexplorer/detail?dataset_id=runabout)

## API Reference

Use the `Runabout` class to access this dataset programmatically.

### eegdash.dataset.Runabout

alias of [`DS003620`](eegdash.dataset.DS003620.md#eegdash.dataset.DS003620)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/runabout)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=runabout)

# SINGSING: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import SINGSING

dataset = SINGSING(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = SINGSING(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = SINGSING(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{singsing,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `SINGSING` |
|---|---|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `SINGSING` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/singsing) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=singsing) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [singsing](https://openneuro.org/datasets/singsing)
- NeMAR: [singsing](https://nemar.org/dataexplorer/detail?dataset_id=singsing)

## API Reference

Use the `SINGSING` class to access this dataset programmatically.

### eegdash.dataset.SINGSING

alias of [`DS006629`](eegdash.dataset.DS006629.md#eegdash.dataset.DS006629)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/singsing)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=singsing)

# SSVEPMAMEM2: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — · Subjects: — · Recordings: — · License: — · Source: OpenNeuro · Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import SSVEPMAMEM2

dataset = SSVEPMAMEM2(cache_dir="./data")

# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = SSVEPMAMEM2(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = SSVEPMAMEM2(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ssvepmamem2,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information | Dataset ID | `SSVEPMAMEM2` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SSVEPMAMEM2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ssvepmamem2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ssvepmamem2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ssvepmamem2](https://openneuro.org/datasets/ssvepmamem2) - NeMAR: [ssvepmamem2](https://nemar.org/dataexplorer/detail?dataset_id=ssvepmamem2) ## API Reference Use the `SSVEPMAMEM2` class to access this dataset programmatically. ### eegdash.dataset.SSVEPMAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ssvepmamem2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ssvepmamem2) # SSVEP_MAMEM3: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import SSVEP_MAMEM3 dataset = SSVEP_MAMEM3(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = SSVEP_MAMEM3(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = SSVEP_MAMEM3( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{ssvep_mamem3, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SSVEP_MAMEM3` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SSVEP_MAMEM3` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/ssvep_mamem3) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ssvep_mamem3) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [ssvep_mamem3](https://openneuro.org/datasets/ssvep_mamem3) - NeMAR: [ssvep_mamem3](https://nemar.org/dataexplorer/detail?dataset_id=ssvep_mamem3) ## API Reference Use the `SSVEP_MAMEM3` class to access this dataset programmatically. 
### eegdash.dataset.SSVEP_MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/ssvep_mamem3) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ssvep_mamem3) # STRONG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import STRONG dataset = STRONG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = STRONG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = STRONG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{strong, } ``` ## About This Dataset No README content is available for this dataset. 
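The `query` arguments in the "Advanced query" examples use MongoDB-style operators: `{"$in": [...]}` matches a record whose field value is any member of the given list. A small pure-Python sketch of that matching rule (the record dicts here are illustrative, not EEGDash's internal metadata schema):

```python
def matches(record, query):
    """Evaluate a tiny subset of MongoDB-style queries: plain equality
    and the $in operator, as used in the dataset query examples."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            # $in: the field's value must be one of the listed values.
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # Bare values mean exact equality.
            return False
    return True


records = [{"subject": "01"}, {"subject": "02"}, {"subject": "03"}]
query = {"subject": {"$in": ["01", "02"]}}
kept = [r for r in records if matches(r, query)]
print([r["subject"] for r in kept])  # ['01', '02']
```

The keyword shorthand `subject="01"` is equivalent to the bare-equality case, while `query={...}` exposes the full operator syntax.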
## Dataset Information | Dataset ID | `STRONG` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `STRONG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/strong) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=strong) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [strong](https://openneuro.org/datasets/strong) - NeMAR: [strong](https://nemar.org/dataexplorer/detail?dataset_id=strong) ## API Reference Use the `STRONG` class to access this dataset programmatically. ### eegdash.dataset.STRONG alias of [`DS004849`](eegdash.dataset.DS004849.md#eegdash.dataset.DS004849) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/strong) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=strong) # STReEF: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import STReEF dataset = STReEF(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = STReEF(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = STReEF( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{streef, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `STREEF` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `STREEF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/streef) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=streef) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [streef](https://openneuro.org/datasets/streef) - NeMAR: [streef](https://nemar.org/dataexplorer/detail?dataset_id=streef) ## API Reference Use the `STReEF` class to access this dataset programmatically. 
### eegdash.dataset.STReEF alias of [`DS005448`](eegdash.dataset.DS005448.md#eegdash.dataset.DS005448) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/streef) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=streef) # Sakakura2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Sakakura2024 dataset = Sakakura2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Sakakura2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Sakakura2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sakakura2024, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SAKAKURA2024` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `Sakakura2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sakakura2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sakakura2024](https://openneuro.org/datasets/sakakura2024) - NeMAR: [sakakura2024](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2024) ## API Reference Use the `Sakakura2024` class to access this dataset programmatically. ### eegdash.dataset.Sakakura2024 alias of [`DS004859`](eegdash.dataset.DS004859.md#eegdash.dataset.DS004859) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sakakura2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2024) # Sakakura2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Sakakura2025 dataset = Sakakura2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Sakakura2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Sakakura2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sakakura2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SAKAKURA2025` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SAKAKURA2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sakakura2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sakakura2025](https://openneuro.org/datasets/sakakura2025) - NeMAR: [sakakura2025](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2025) ## API Reference Use the `Sakakura2025` class to access this dataset programmatically. 
### eegdash.dataset.Sakakura2025 alias of [`DS004551`](eegdash.dataset.DS004551.md#eegdash.dataset.DS004551) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sakakura2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sakakura2025) # Sato2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Sato2024 dataset = Sato2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Sato2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Sato2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sato2024, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SATO2024` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `Sato2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sato2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sato2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sato2024](https://openneuro.org/datasets/sato2024) - NeMAR: [sato2024](https://nemar.org/dataexplorer/detail?dataset_id=sato2024) ## API Reference Use the `Sato2024` class to access this dataset programmatically. ### eegdash.dataset.Sato2024 alias of [`DS007602`](eegdash.dataset.DS007602.md#eegdash.dataset.DS007602) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sato2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sato2024) # Sato2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Sato2025 dataset = Sato2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Sato2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Sato2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sato2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SATO2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SATO2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sato2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sato2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sato2025](https://openneuro.org/datasets/sato2025) - NeMAR: [sato2025](https://nemar.org/dataexplorer/detail?dataset_id=sato2025) ## API Reference Use the `Sato2025` class to access this dataset programmatically. 
### eegdash.dataset.Sato2025 alias of [`DS007591`](eegdash.dataset.DS007591.md#eegdash.dataset.DS007591) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sato2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sato2025) # SeizeIT2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import SeizeIT2 dataset = SeizeIT2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = SeizeIT2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = SeizeIT2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{seizeit2, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SEIZEIT2` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SeizeIT2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/seizeit2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=seizeit2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [seizeit2](https://openneuro.org/datasets/seizeit2) - NeMAR: [seizeit2](https://nemar.org/dataexplorer/detail?dataset_id=seizeit2) ## API Reference Use the `SeizeIT2` class to access this dataset programmatically. ### eegdash.dataset.SeizeIT2 alias of [`DS005873`](eegdash.dataset.DS005873.md#eegdash.dataset.DS005873) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/seizeit2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=seizeit2) # Shalamberidze2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Shalamberidze2025 dataset = Shalamberidze2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Shalamberidze2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Shalamberidze2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{shalamberidze2025, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SHALAMBERIDZE2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SHALAMBERIDZE2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/shalamberidze2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=shalamberidze2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [shalamberidze2025](https://openneuro.org/datasets/shalamberidze2025) - NeMAR: [shalamberidze2025](https://nemar.org/dataexplorer/detail?dataset_id=shalamberidze2025) ## API Reference Use 
the `Shalamberidze2025` class to access this dataset programmatically. ### eegdash.dataset.Shalamberidze2025 alias of [`DS007609`](eegdash.dataset.DS007609.md#eegdash.dataset.DS007609) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/shalamberidze2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=shalamberidze2025) # Shin2017A: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Shin2017A dataset = Shin2017A(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Shin2017A(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Shin2017A( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{shin2017a, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SHIN2017A` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `Shin2017A` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/shin2017a) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=shin2017a) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [shin2017a](https://openneuro.org/datasets/shin2017a) - NeMAR: [shin2017a](https://nemar.org/dataexplorer/detail?dataset_id=shin2017a) ## API Reference Use the `Shin2017A` class to access this dataset programmatically. ### eegdash.dataset.Shin2017A alias of [`NM000267`](eegdash.dataset.NM000267.md#eegdash.dataset.NM000267) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/shin2017a) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=shin2017a) # Shin2017B: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Shin2017B dataset = Shin2017B(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Shin2017B(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Shin2017B( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{shin2017b, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SHIN2017B` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SHIN2017B` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/shin2017b) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=shin2017b) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [shin2017b](https://openneuro.org/datasets/shin2017b) - NeMAR: [shin2017b](https://nemar.org/dataexplorer/detail?dataset_id=shin2017b) ## API Reference Use the `Shin2017B` class to access this dataset programmatically. 
### eegdash.dataset.Shin2017B alias of [`NM000268`](eegdash.dataset.NM000268.md#eegdash.dataset.NM000268) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/shin2017b) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=shin2017b) # SleepEDF: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import SleepEDF dataset = SleepEDF(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = SleepEDF(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = SleepEDF( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sleepedf, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SLEEPEDF` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SleepEDF` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sleepedf) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sleepedf) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sleepedf](https://openneuro.org/datasets/sleepedf) - NeMAR: [sleepedf](https://nemar.org/dataexplorer/detail?dataset_id=sleepedf) ## API Reference Use the `SleepEDF` class to access this dataset programmatically. ### eegdash.dataset.SleepEDF alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sleepedf) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sleepedf) # SleepEDFExpanded: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import SleepEDFExpanded dataset = SleepEDFExpanded(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = SleepEDFExpanded(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = SleepEDFExpanded( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{sleepedfexpanded, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SLEEPEDFEXPANDED` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SLEEPEDFEXPANDED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/sleepedfexpanded) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=sleepedfexpanded) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [sleepedfexpanded](https://openneuro.org/datasets/sleepedfexpanded) - NeMAR: [sleepedfexpanded](https://nemar.org/dataexplorer/detail?dataset_id=sleepedfexpanded) ## API Reference Use the 
`SleepEDFExpanded` class to access this dataset programmatically. ### eegdash.dataset.SleepEDFExpanded alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/sleepedfexpanded) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=sleepedfexpanded) # Somato: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Somato dataset = Somato(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Somato(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Somato( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{somato, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `SOMATO` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SOMATO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/somato) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=somato) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [somato](https://openneuro.org/datasets/somato) - NeMAR: [somato](https://nemar.org/dataexplorer/detail?dataset_id=somato) ## API Reference Use the `Somato` class to access this dataset programmatically. ### eegdash.dataset.Somato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/somato) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=somato) # Surrey_cEEGrid_sleep: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Surrey_cEEGrid_sleep dataset = Surrey_cEEGrid_sleep(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Surrey_cEEGrid_sleep(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Surrey_cEEGrid_sleep( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{surrey_ceegrid_sleep, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `SURREY_CEEGRID_SLEEP` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `SURREY_CEEGRID_SLEEP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/surrey_ceegrid_sleep) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=surrey_ceegrid_sleep) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [surrey_ceegrid_sleep](https://openneuro.org/datasets/surrey_ceegrid_sleep) - NeMAR: 
[surrey_ceegrid_sleep](https://nemar.org/dataexplorer/detail?dataset_id=surrey_ceegrid_sleep) ## API Reference Use the `Surrey_cEEGrid_sleep` class to access this dataset programmatically. ### eegdash.dataset.Surrey_cEEGrid_sleep alias of [`DS005207`](eegdash.dataset.DS005207.md#eegdash.dataset.DS005207) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/surrey_ceegrid_sleep) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=surrey_ceegrid_sleep) # THINGS: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import THINGS dataset = THINGS(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = THINGS(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = THINGS( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{things, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `THINGS` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `THINGS` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/things) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=things) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [things](https://openneuro.org/datasets/things) - NeMAR: [things](https://nemar.org/dataexplorer/detail?dataset_id=things) ## API Reference Use the `THINGS` class to access this dataset programmatically. ### eegdash.dataset.THINGS alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/things) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=things) # THINGSMEG: MEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import THINGSMEG dataset = THINGSMEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = THINGSMEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = THINGSMEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{thingsmeg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `THINGSMEG` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `THINGSMEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/thingsmeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=thingsmeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [thingsmeg](https://openneuro.org/datasets/thingsmeg) - NeMAR: [thingsmeg](https://nemar.org/dataexplorer/detail?dataset_id=thingsmeg) ## API Reference Use the `THINGSMEG` class to access this dataset programmatically. 
### eegdash.dataset.THINGSMEG alias of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/thingsmeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=thingsmeg) # THINGS_EEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import THINGS_EEG dataset = THINGS_EEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = THINGS_EEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = THINGS_EEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{things_eeg, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `THINGS_EEG` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `THINGS_EEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/things_eeg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=things_eeg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [things_eeg](https://openneuro.org/datasets/things_eeg) - NeMAR: [things_eeg](https://nemar.org/dataexplorer/detail?dataset_id=things_eeg) ## API Reference Use the `THINGS_EEG` class to access this dataset programmatically. ### eegdash.dataset.THINGS_EEG alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/things_eeg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=things_eeg) # THINGS_MEG: MEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
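Note that several class names on these pages are aliases for one canonical dataset: `THINGS` and `THINGS_EEG` both resolve to `DS003825`, while `THINGSMEG` and `THINGS_MEG` both resolve to `DS004212`. A minimal sketch of such an alias table (the dict and `canonical` helper below are illustrative; in EEGDash the aliases are module-level class aliases, so either name constructs the same dataset):

```python
# Hypothetical alias table mirroring how the friendly names on these
# pages resolve to canonical dataset IDs. In EEGDash itself, e.g.
# eegdash.dataset.THINGS_EEG is a class alias of DS003825.
ALIASES = {
    "THINGS": "DS003825",
    "THINGS_EEG": "DS003825",
    "THINGSMEG": "DS004212",
    "THINGS_MEG": "DS004212",
}

def canonical(name: str) -> str:
    """Resolve a friendly dataset name to its canonical dataset ID."""
    return ALIASES.get(name, name)

print(canonical("THINGS_EEG"))  # DS003825
print(canonical("THINGS_MEG"))  # DS004212
```

Because the aliased names point at the same class, only the import name differs; the data you download and cache is identical.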
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import THINGS_MEG dataset = THINGS_MEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = THINGS_MEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = THINGS_MEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{things_meg, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `THINGS_MEG` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `THINGS_MEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/things_meg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=things_meg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [things_meg](https://openneuro.org/datasets/things_meg) - NeMAR: [things_meg](https://nemar.org/dataexplorer/detail?dataset_id=things_meg) ## API Reference Use the `THINGS_MEG` class to access this dataset programmatically. 
### eegdash.dataset.THINGS_MEG alias of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/things_meg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=things_meg) # TMNRED: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TMNRED dataset = TMNRED(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TMNRED(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TMNRED( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tmnred, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `TMNRED` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TMNRED` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tmnred) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tmnred) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tmnred](https://openneuro.org/datasets/tmnred) - NeMAR: [tmnred](https://nemar.org/dataexplorer/detail?dataset_id=tmnred) ## API Reference Use the `TMNRED` class to access this dataset programmatically. ### eegdash.dataset.TMNRED alias of [`DS005383`](eegdash.dataset.DS005383.md#eegdash.dataset.DS005383) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tmnred) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tmnred) # TNO: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TNO dataset = TNO(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TNO(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TNO( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tno, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `TNO` | |----------------|-----------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TNO` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tno) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tno) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tno](https://openneuro.org/datasets/tno) - NeMAR: [tno](https://nemar.org/dataexplorer/detail?dataset_id=tno) ## API Reference Use the `TNO` class to access this dataset programmatically. 
### eegdash.dataset.TNO alias of [`DS004660`](eegdash.dataset.DS004660.md#eegdash.dataset.DS004660) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tno) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tno) # TX14: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TX14 dataset = TX14(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TX14(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TX14( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tx14, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `TX14` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TX14` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tx14) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tx14) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tx14](https://openneuro.org/datasets/tx14) - NeMAR: [tx14](https://nemar.org/dataexplorer/detail?dataset_id=tx14) ## API Reference Use the `TX14` class to access this dataset programmatically. ### eegdash.dataset.TX14 alias of [`DS004841`](eegdash.dataset.DS004841.md#eegdash.dataset.DS004841) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tx14) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tx14) # TX15: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
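The `Metadata: Limited (0%)` badge shown on these pages summarizes how many catalog fields carry a real value rather than a `—` or `Unknown` placeholder. A toy sketch of how such a completeness score could be computed (the field names and scoring rule here are assumptions, not the EEGDash implementation):

```python
# Toy completeness score over catalog fields, where "—", "Unknown",
# None, and "" count as missing. Field names are illustrative only.
PLACEHOLDERS = {"—", "Unknown", None, ""}

def completeness(meta: dict) -> float:
    """Fraction of metadata fields with a real (non-placeholder) value."""
    if not meta:
        return 0.0
    filled = sum(1 for v in meta.values() if v not in PLACEHOLDERS)
    return filled / len(meta)

meta = {"Title": "—", "Year": "—", "Authors": "Unknown", "License": "—"}
print(f"{completeness(meta):.0%}")  # 0%
```

For the pages in this section every tabulated field is a placeholder, which is why they all report 0%.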
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TX15 dataset = TX15(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TX15(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TX15( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tx15, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `TX15` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TX15` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tx15) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tx15) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tx15](https://openneuro.org/datasets/tx15) - NeMAR: [tx15](https://nemar.org/dataexplorer/detail?dataset_id=tx15) ## API Reference Use the `TX15` class to access this dataset programmatically. 
### eegdash.dataset.TX15 alias of [`DS004842`](eegdash.dataset.DS004842.md#eegdash.dataset.DS004842) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tx15) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tx15) # TX18: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TX18 dataset = TX18(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TX18(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TX18( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tx18, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `TX18` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TX18` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tx18) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tx18) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tx18](https://openneuro.org/datasets/tx18) - NeMAR: [tx18](https://nemar.org/dataexplorer/detail?dataset_id=tx18) ## API Reference Use the `TX18` class to access this dataset programmatically. ### eegdash.dataset.TX18 alias of [`DS004854`](eegdash.dataset.DS004854.md#eegdash.dataset.DS004854) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tx18) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tx18) # TX20: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import TX20 dataset = TX20(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = TX20(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = TX20( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tx20, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `TX20` | |----------------|-------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TX20` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tx20) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tx20) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tx20](https://openneuro.org/datasets/tx20) - NeMAR: [tx20](https://nemar.org/dataexplorer/detail?dataset_id=tx20) ## API Reference Use the `TX20` class to access this dataset programmatically. 
### eegdash.dataset.TX20 alias of [`DS004657`](eegdash.dataset.DS004657.md#eegdash.dataset.DS004657) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tx20) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tx20) # Todorovic2023: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Todorovic2023 dataset = Todorovic2023(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Todorovic2023(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Todorovic2023( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{todorovic2023, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `TODOROVIC2023` | |----------------|------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TODOROVIC2023` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/todorovic2023) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=todorovic2023) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [todorovic2023](https://openneuro.org/datasets/todorovic2023) - NeMAR: [todorovic2023](https://nemar.org/dataexplorer/detail?dataset_id=todorovic2023) ## API Reference Use the `Todorovic2023` class to access this dataset programmatically. ### eegdash.dataset.Todorovic2023 alias of [`DS005261`](eegdash.dataset.DS005261.md#eegdash.dataset.DS005261) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/todorovic2023) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=todorovic2023) # ToonFaces: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import ToonFaces dataset = ToonFaces(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = ToonFaces(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = ToonFaces( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{toonfaces, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `TOONFACES` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TOONFACES` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/toonfaces) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=toonfaces) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [toonfaces](https://openneuro.org/datasets/toonfaces) - NeMAR: [toonfaces](https://nemar.org/dataexplorer/detail?dataset_id=toonfaces) ## API Reference Use the `ToonFaces` class to access this dataset programmatically. 
### eegdash.dataset.ToonFaces alias of [`DS004324`](eegdash.dataset.DS004324.md#eegdash.dataset.DS004324) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/toonfaces) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=toonfaces) # Touryan1999: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). ``` ** ``` . Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Touryan1999 dataset = Touryan1999(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Touryan1999(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Touryan1999( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{touryan1999, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `TOURYAN1999` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TOURYAN1999` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/touryan1999) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=touryan1999) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [touryan1999](https://openneuro.org/datasets/touryan1999) - NeMAR: [touryan1999](https://nemar.org/dataexplorer/detail?dataset_id=touryan1999) ## API Reference Use the `Touryan1999` class to access this dataset programmatically. ### eegdash.dataset.Touryan1999 alias of [`DS004118`](eegdash.dataset.DS004118.md#eegdash.dataset.DS004118) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/touryan1999) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=touryan1999) # Tripathy2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Tripathy2024 dataset = Tripathy2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Tripathy2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Tripathy2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{tripathy2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `TRIPATHY2024` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `TRIPATHY2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/tripathy2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=tripathy2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [tripathy2024](https://openneuro.org/datasets/tripathy2024) - NeMAR: [tripathy2024](https://nemar.org/dataexplorer/detail?dataset_id=tripathy2024) ## API Reference Use the `Tripathy2024` class to access this dataset programmatically. 
### eegdash.dataset.Tripathy2024 alias of [`DS007473`](eegdash.dataset.DS007473.md#eegdash.dataset.DS007473) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/tripathy2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=tripathy2024) # VEPCON: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import VEPCON dataset = VEPCON(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = VEPCON(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = VEPCON( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{vepcon, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `VEPCON` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `VEPCON` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/vepcon) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=vepcon) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [vepcon](https://openneuro.org/datasets/vepcon) - NeMAR: [vepcon](https://nemar.org/dataexplorer/detail?dataset_id=vepcon) ## API Reference Use the `VEPCON` class to access this dataset programmatically. ### eegdash.dataset.VEPCON alias of [`DS003505`](eegdash.dataset.DS003505.md#eegdash.dataset.DS003505) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/vepcon) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=vepcon) # Veillette2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Veillette2019 dataset = Veillette2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Veillette2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Veillette2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{veillette2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `VEILLETTE2019` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `VEILLETTE2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/veillette2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=veillette2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [veillette2019](https://openneuro.org/datasets/veillette2019) - NeMAR: [veillette2019](https://nemar.org/dataexplorer/detail?dataset_id=veillette2019) ## API Reference Use the `Veillette2019` class to access this dataset 
programmatically. ### eegdash.dataset.Veillette2019 alias of [`DS005403`](eegdash.dataset.DS005403.md#eegdash.dataset.DS005403) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/veillette2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=veillette2019) # Vianney2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Vianney2025 dataset = Vianney2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Vianney2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Vianney2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{vianney2025, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `VIANNEY2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `VIANNEY2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/vianney2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=vianney2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [vianney2025](https://openneuro.org/datasets/vianney2025) - NeMAR: [vianney2025](https://nemar.org/dataexplorer/detail?dataset_id=vianney2025) ## API Reference Use the `Vianney2025` class to access this dataset programmatically. ### eegdash.dataset.Vianney2025 alias of [`DS007358`](eegdash.dataset.DS007358.md#eegdash.dataset.DS007358) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/vianney2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=vianney2025) # VisualContextTrajectory: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import VisualContextTrajectory dataset = VisualContextTrajectory(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = VisualContextTrajectory(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = VisualContextTrajectory( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{visualcontexttrajectory, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `VISUALCONTEXTTRAJECTORY` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `VISUALCONTEXTTRAJECTORY` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/visualcontexttrajectory) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [visualcontexttrajectory](https://openneuro.org/datasets/visualcontexttrajectory) - NeMAR: 
[visualcontexttrajectory](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory) ## API Reference Use the `VisualContextTrajectory` class to access this dataset programmatically. ### eegdash.dataset.VisualContextTrajectory alias of [`DS004603`](eegdash.dataset.DS004603.md#eegdash.dataset.DS004603) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/visualcontexttrajectory) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory) # VisualContextTrajectory_v2: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import VisualContextTrajectory_v2 dataset = VisualContextTrajectory_v2(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = VisualContextTrajectory_v2(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = VisualContextTrajectory_v2( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{visualcontexttrajectory_v2, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `VISUALCONTEXTTRAJECTORY_V2` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `VISUALCONTEXTTRAJECTORY_V2` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/visualcontexttrajectory_v2) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory_v2) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [visualcontexttrajectory_v2](https://openneuro.org/datasets/visualcontexttrajectory_v2) - NeMAR: [visualcontexttrajectory_v2](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory_v2) ## API Reference Use the `VisualContextTrajectory_v2` class to access this dataset programmatically. ### eegdash.dataset.VisualContextTrajectory_v2 alias of [`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/visualcontexttrajectory_v2) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=visualcontexttrajectory_v2) # WBCICSHU: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import WBCICSHU dataset = WBCICSHU(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = WBCICSHU(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = WBCICSHU( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wbcicshu, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `WBCICSHU` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WBCICSHU` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wbcicshu) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wbcicshu) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wbcicshu](https://openneuro.org/datasets/wbcicshu) - NeMAR: [wbcicshu](https://nemar.org/dataexplorer/detail?dataset_id=wbcicshu) ## API Reference Use the `WBCICSHU` class to access this dataset programmatically. 
### eegdash.dataset.WBCICSHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wbcicshu) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wbcicshu) # WBCIC_SHU: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import WBCIC_SHU dataset = WBCIC_SHU(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = WBCIC_SHU(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = WBCIC_SHU( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wbcic_shu, } ``` ## About This Dataset No README content is available for this dataset. 
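Every class on these pages takes the same `cache_dir` argument, which points at a local download cache: a recording is fetched from the source archive only when it is not already present on disk, so repeated constructions with the same `cache_dir` reuse earlier downloads. A hedged sketch of that cache-hit logic in plain Python (illustrative only; the real EEGDash caching layer differs):

```python
import tempfile
from pathlib import Path

def ensure_cached(cache_dir: str, relpath: str, fetch) -> Path:
    """Download a file into the cache only if it is not already there."""
    target = Path(cache_dir) / relpath
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(fetch())
    return target

calls = []
def fake_fetch() -> bytes:
    calls.append(1)  # count how often a "download" actually happens
    return b"raw eeg bytes"

with tempfile.TemporaryDirectory() as d:
    ensure_cached(d, "sub-01/eeg/recording.set", fake_fetch)
    ensure_cached(d, "sub-01/eeg/recording.set", fake_fetch)  # cache hit, no re-fetch
print(len(calls))  # → 1
```

The path `sub-01/eeg/recording.set` is a made-up BIDS-style example; only the presence check, not the layout, is the point here.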
## Dataset Information | Dataset ID | `WBCIC_SHU` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WBCIC_SHU` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wbcic_shu) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wbcic_shu) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wbcic_shu](https://openneuro.org/datasets/wbcic_shu) - NeMAR: [wbcic_shu](https://nemar.org/dataexplorer/detail?dataset_id=wbcic_shu) ## API Reference Use the `WBCIC_SHU` class to access this dataset programmatically. ### eegdash.dataset.WBCIC_SHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wbcic_shu) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wbcic_shu) # WIRED_ICM: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import WIRED_ICM dataset = WIRED_ICM(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = WIRED_ICM(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = WIRED_ICM( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wired_icm, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `WIRED_ICM` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WIRED_ICM` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wired_icm) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wired_icm) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wired_icm](https://openneuro.org/datasets/wired_icm) - NeMAR: [wired_icm](https://nemar.org/dataexplorer/detail?dataset_id=wired_icm) ## API Reference Use the `WIRED_ICM` class to access this dataset programmatically. 
### eegdash.dataset.WIRED_ICM alias of [`DS004993`](eegdash.dataset.DS004993.md#eegdash.dataset.DS004993) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wired_icm) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wired_icm) # Wakeman2015: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Wakeman2015 dataset = Wakeman2015(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Wakeman2015(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Wakeman2015( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wakeman2015, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `WAKEMAN2015` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WAKEMAN2015` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wakeman2015) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wakeman2015) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wakeman2015](https://openneuro.org/datasets/wakeman2015) - NeMAR: [wakeman2015](https://nemar.org/dataexplorer/detail?dataset_id=wakeman2015) ## API Reference Use the `Wakeman2015` class to access this dataset programmatically. ### eegdash.dataset.Wakeman2015 alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wakeman2015) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wakeman2015) # WakemanHenson: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import WakemanHenson dataset = WakemanHenson(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = WakemanHenson(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = WakemanHenson( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wakemanhenson, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `WAKEMANHENSON` | |----------------|-------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WAKEMANHENSON` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wakemanhenson) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wakemanhenson](https://openneuro.org/datasets/wakemanhenson) - NeMAR: [wakemanhenson](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson) ## API Reference Use the `WakemanHenson` class to access this dataset 
programmatically. ### eegdash.dataset.WakemanHenson alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wakemanhenson) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson) # WakemanHenson_EEG_MEG: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import WakemanHenson_EEG_MEG dataset = WakemanHenson_EEG_MEG(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = WakemanHenson_EEG_MEG(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = WakemanHenson_EEG_MEG( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wakemanhenson_eeg_meg, } ``` ## About This Dataset No README content is available for this dataset. 
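As the alias entries for `Wakeman2015` and `WakemanHenson` show, several human-readable names can resolve to the same accession-ID class (`DS000117` in that case). In Python an alias is simply a second name bound to the same class object, so either import path is interchangeable; a toy illustration of the mechanism (a stand-in class, not the real eegdash code):

```python
class DS000117:
    """Stand-in for the accession-ID dataset class."""
    dataset_id = "ds000117"

# eegdash.dataset exposes friendly names as plain aliases of the same class:
Wakeman2015 = DS000117
WakemanHenson = DS000117

assert Wakeman2015 is WakemanHenson is DS000117
print(Wakeman2015.dataset_id)  # → ds000117
```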
## Dataset Information | Dataset ID | `WAKEMANHENSON_EEG_MEG` | |----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WAKEMANHENSON_EEG_MEG` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wakemanhenson_eeg_meg) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson_eeg_meg) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wakemanhenson_eeg_meg](https://openneuro.org/datasets/wakemanhenson_eeg_meg) - NeMAR: [wakemanhenson_eeg_meg](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson_eeg_meg) ## API Reference Use the `WakemanHenson_EEG_MEG` class to access this dataset programmatically. ### eegdash.dataset.WakemanHenson_EEG_MEG alias of [`DS002718`](eegdash.dataset.DS002718.md#eegdash.dataset.DS002718) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wakemanhenson_eeg_meg) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wakemanhenson_eeg_meg) # Weibo2014: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Weibo2014 dataset = Weibo2014(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Weibo2014(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Weibo2014( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{weibo2014, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `WEIBO2014` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WEIBO2014` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/weibo2014) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=weibo2014) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [weibo2014](https://openneuro.org/datasets/weibo2014) - NeMAR: [weibo2014](https://nemar.org/dataexplorer/detail?dataset_id=weibo2014) ## API Reference Use the `Weibo2014` class to access this dataset programmatically. 
### eegdash.dataset.Weibo2014 alias of [`NM000146`](eegdash.dataset.NM000146.md#eegdash.dataset.NM000146) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/weibo2014) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=weibo2014) # Weisend2007: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Weisend2007 dataset = Weisend2007(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Weisend2007(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Weisend2007( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{weisend2007, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `WEISEND2007` | |----------------|---------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WEISEND2007` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/weisend2007) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=weisend2007) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [weisend2007](https://openneuro.org/datasets/weisend2007) - NeMAR: [weisend2007](https://nemar.org/dataexplorer/detail?dataset_id=weisend2007) ## API Reference Use the `Weisend2007` class to access this dataset programmatically. ### eegdash.dataset.Weisend2007 alias of [`DS004107`](eegdash.dataset.DS004107.md#eegdash.dataset.DS004107) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/weisend2007) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=weisend2007) # Wimmer2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Wimmer2024 dataset = Wimmer2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Wimmer2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Wimmer2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{wimmer2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `WIMMER2024` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `WIMMER2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/wimmer2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=wimmer2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [wimmer2024](https://openneuro.org/datasets/wimmer2024) - NeMAR: [wimmer2024](https://nemar.org/dataexplorer/detail?dataset_id=wimmer2024) ## API Reference Use the `Wimmer2024` class to access this dataset programmatically. 
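Recordings are fetched from remote storage and cached under `cache_dir` the first time `.raw` is accessed; later accesses reuse the local copy. A minimal sketch of that download-on-first-access pattern (hypothetical `Recording` class, not the library's implementation):

```python
class Recording:
    """Toy model of a remotely hosted recording cached on first access."""

    def __init__(self, name: str):
        self.name = name
        self._raw = None
        self.downloads = 0  # counts how often the remote fetch ran

    def _download(self) -> str:
        self.downloads += 1
        return f"data for {self.name}"  # stands in for the real file transfer

    @property
    def raw(self):
        if self._raw is None:          # first access: fetch and cache
            self._raw = self._download()
        return self._raw               # later accesses: cached copy

rec = Recording("sub-01")
first, second = rec.raw, rec.raw
print(rec.downloads)  # 1 -- the download ran only once
```

Because the cache lives on disk in the real library, pointing several scripts at the same `cache_dir` avoids re-downloading the same files.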
### eegdash.dataset.Wimmer2024 alias of [`DS004398`](eegdash.dataset.DS004398.md#eegdash.dataset.DS004398) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/wimmer2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=wimmer2024) # Yang2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Yang2025 dataset = Yang2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Yang2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Yang2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{yang2025, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `YANG2025` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `YANG2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/yang2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=yang2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [yang2025](https://openneuro.org/datasets/yang2025) - NeMAR: [yang2025](https://nemar.org/dataexplorer/detail?dataset_id=yang2025) ## API Reference Use the `Yang2025` class to access this dataset programmatically. ### eegdash.dataset.Yang2025 alias of [`NM000348`](eegdash.dataset.NM000348.md#eegdash.dataset.NM000348) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/yang2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=yang2025) # Yu2019: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Yu2019 dataset = Yu2019(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Yu2019(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Yu2019( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{yu2019, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `YU2019` | |----------------|-----------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `YU2019` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/yu2019) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=yu2019) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [yu2019](https://openneuro.org/datasets/yu2019) - NeMAR: [yu2019](https://nemar.org/dataexplorer/detail?dataset_id=yu2019) ## API Reference Use the `Yu2019` class to access this dataset programmatically. 
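When iterating recordings as in the quickstart above, it is often useful to group them per subject before splitting or epoching. A small self-contained sketch, with hypothetical metadata dicts standing in for the recording objects:

```python
from collections import defaultdict

# Hypothetical per-recording metadata; the real objects expose .subject etc.
recordings = [
    {"subject": "01", "task": "rest"},
    {"subject": "01", "task": "motor"},
    {"subject": "02", "task": "rest"},
]

by_subject = defaultdict(list)
for rec in recordings:
    by_subject[rec["subject"]].append(rec["task"])

print(dict(by_subject))  # {'01': ['rest', 'motor'], '02': ['rest']}
```

Grouping by subject first makes it straightforward to build subject-wise train/test splits without leaking a subject across folds.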
### eegdash.dataset.Yu2019 alias of [`DS006386`](eegdash.dataset.DS006386.md#eegdash.dataset.DS006386) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/yu2019) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=yu2019) # Yucel2014: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Yucel2014 dataset = Yucel2014(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Yucel2014(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Yucel2014( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{yucel2014, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `YUCEL2014` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `YUCEL2014` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/yucel2014) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=yucel2014) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [yucel2014](https://openneuro.org/datasets/yucel2014) - NeMAR: [yucel2014](https://nemar.org/dataexplorer/detail?dataset_id=yucel2014) ## API Reference Use the `Yucel2014` class to access this dataset programmatically. ### eegdash.dataset.Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/yucel2014) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=yucel2014) # Yucel2015: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Yucel2015 dataset = Yucel2015(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Yucel2015(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Yucel2015( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{yucel2015, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `YUCEL2015` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `YUCEL2015` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/yucel2015) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=yucel2015) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [yucel2015](https://openneuro.org/datasets/yucel2015) - NeMAR: [yucel2015](https://nemar.org/dataexplorer/detail?dataset_id=yucel2015) ## API Reference Use the `Yucel2015` class to access this dataset programmatically. 
### eegdash.dataset.Yucel2015 alias of [`DS005776`](eegdash.dataset.DS005776.md#eegdash.dataset.DS005776) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/yucel2015) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=yucel2015) # Zhang2025: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Zhang2025 dataset = Zhang2025(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Zhang2025(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Zhang2025( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{zhang2025, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ZHANG2025` | |----------------|-----------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ZHANG2025` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/zhang2025) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=zhang2025) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [zhang2025](https://openneuro.org/datasets/zhang2025) - NeMAR: [zhang2025](https://nemar.org/dataexplorer/detail?dataset_id=zhang2025) ## API Reference Use the `Zhang2025` class to access this dataset programmatically. ### eegdash.dataset.Zhang2025 alias of [`NM000211`](eegdash.dataset.NM000211.md#eegdash.dataset.NM000211) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/zhang2025) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=zhang2025) # Zhao2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Zhao2024 dataset = Zhao2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Zhao2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Zhao2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{zhao2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ZHAO2024` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ZHAO2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/zhao2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=zhao2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [zhao2024](https://openneuro.org/datasets/zhao2024) - NeMAR: [zhao2024](https://nemar.org/dataexplorer/detail?dataset_id=zhao2024) ## API Reference Use the `Zhao2024` class to access this dataset programmatically. 
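Bulk loading can hit corrupt or missing files; the `EEGDashRaw` API documented later in this corpus exposes an `on_error` policy (`"raise"`, `"warn"`, or `"skip"`) for exactly that. A minimal sketch of the policy pattern (hypothetical loader and stand-in exception, not the library's code):

```python
import logging

logger = logging.getLogger("loader")

class DataIntegrityError(Exception):
    """Stand-in for the integrity error raised on corrupt downloads."""

def load(name: str, on_error: str = "raise"):
    try:
        if name.startswith("bad"):
            raise DataIntegrityError(f"checksum mismatch for {name}")
        return f"raw:{name}"
    except DataIntegrityError:
        if on_error == "raise":
            raise
        if on_error == "warn":
            logger.warning("could not load %s; returning None", name)
        return None  # "warn" and "skip" both yield None

print(load("sub-01"))            # raw:sub-01
print(load("bad-file", "skip"))  # None
```

With `"warn"` or `"skip"`, downstream code should filter out the `None` results before training.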
### eegdash.dataset.Zhao2024 alias of [`DS005473`](eegdash.dataset.DS005473.md#eegdash.dataset.DS005473) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/zhao2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=zhao2024) # Zhou2016_NEMAR: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Zhou2016_NEMAR dataset = Zhou2016_NEMAR(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Zhou2016_NEMAR(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Zhou2016_NEMAR( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{zhou2016_nemar, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `ZHOU2016_NEMAR` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ZHOU2016_NEMAR` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/zhou2016_nemar) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=zhou2016_nemar) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [zhou2016_nemar](https://openneuro.org/datasets/zhou2016_nemar) - NeMAR: [zhou2016_nemar](https://nemar.org/dataexplorer/detail?dataset_id=zhou2016_nemar) ## API Reference Use the `Zhou2016_NEMAR` class to access this dataset programmatically. ### eegdash.dataset.Zhou2016_NEMAR alias of [`NM000226`](eegdash.dataset.NM000226.md#eegdash.dataset.NM000226) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/zhou2016_nemar) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=zhou2016_nemar) # Zhou2024: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import Zhou2024 dataset = Zhou2024(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = Zhou2024(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = Zhou2024( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{zhou2024, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ZHOU2024` | |----------------|---------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ZHOU2024` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/zhou2024) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=zhou2024) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [zhou2024](https://openneuro.org/datasets/zhou2024) - NeMAR: [zhou2024](https://nemar.org/dataexplorer/detail?dataset_id=zhou2024) ## API Reference Use the `Zhou2024` class to access this dataset programmatically. 
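Filtering by `subject` or `task` works because BIDS filenames encode metadata as `key-value` entities, e.g. `sub-001_task-PES_eeg.json`. A small sketch of parsing those entities (hypothetical helper; mne-bids provides the real, spec-compliant parser):

```python
from pathlib import Path

def parse_entities(filename: str) -> dict:
    """Split a BIDS basename like 'sub-001_task-PES_eeg.json' into entities."""
    stem = Path(filename).name.rsplit(".", 1)[0]   # drop the extension
    parts = stem.split("_")
    entities = {}
    for part in parts[:-1]:                        # last part is the suffix
        key, _, value = part.partition("-")
        entities[key] = value
    entities["suffix"] = parts[-1]                 # e.g. 'eeg'
    return entities

print(parse_entities("sub-001_task-PES_eeg.json"))
# {'sub': '001', 'task': 'PES', 'suffix': 'eeg'}
```

Entity parsing like this is how a query such as `subject="01"` can be resolved against files on disk without opening them.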
### eegdash.dataset.Zhou2024 alias of [`DS007471`](eegdash.dataset.DS007471.md#eegdash.dataset.DS007471) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/zhou2024) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=zhou2024) # eegdash.dataset.base module Data utilities and dataset classes for EEG data handling. This module provides core dataset classes for working with EEG data in the EEGDash ecosystem, including classes for individual recordings and collections of datasets. It integrates with braindecode for machine learning workflows and handles data loading from both local and remote sources. ### *class* eegdash.dataset.base.EEGDashRaw(record: dict[str, Any], cache_dir: str, \*\*kwargs) Bases: `RawDataset` A single EEG recording dataset. Represents a single EEG recording, typically hosted on a remote server (like AWS S3) and cached locally upon first access. This class is a subclass of `braindecode.datasets.BaseDataset` and can be used with braindecode’s preprocessing and training pipelines. * **Parameters:** * **record** (*dict*) – A v2 record containing all metadata and storage information. Must have schema_version=2 and include storage.base (no default bucket). * **cache_dir** (*str*) – The local directory where the data will be cached. * **on_error** (*str* *,* *default "raise"*) – How to handle `DataIntegrityError` when accessing `.raw`: - `"raise"` (default): propagate the exception. - `"warn"`: log the error as a warning and set `.raw` to `None`. - `"skip"`: silently set `.raw` to `None`. * **\*\*kwargs** – Additional keyword arguments passed to the `braindecode.datasets.BaseDataset` constructor. * **Raises:** **ValueError** – If the record is not a valid v2 record or is missing required fields. #### *property* raw *: BaseRaw | None* The MNE Raw object for this recording. 
Accessing this property triggers the download and caching of the data if it has not been accessed before. Returns `None` when `on_error` is `"warn"` or `"skip"` and the record could not be loaded due to a `DataIntegrityError`. * **Returns:** The loaded MNE Raw object, or `None` for skipped records. * **Return type:** mne.io.BaseRaw | None # eegdash.dataset.bids_dataset module Local BIDS dataset interface for EEGDash. This module provides the EEGBIDSDataset class for interfacing with local BIDS datasets on the filesystem, parsing metadata, and retrieving BIDS-related information. ### *class* eegdash.dataset.bids_dataset.EEGBIDSDataset(data_dir=None, dataset='', allow_symlinks=False, modalities=None) Bases: `object` An interface to a local BIDS dataset containing electrophysiology recordings. This class centralizes interactions with a BIDS dataset on the local filesystem, providing methods to parse metadata, find files, and retrieve BIDS-related information. Supports multiple modalities including EEG, MEG, iEEG, and NIRS. The class uses MNE-BIDS constants to stay synchronized with the BIDS specification and automatically supports all file formats recognized by MNE. * **Parameters:** * **data_dir** (*str* *or* *Path*) – The path to the local BIDS dataset directory. * **dataset** (*str*) – A name for the dataset (e.g., “ds002718”). * **allow_symlinks** (*bool* *,* *default False*) – If True, accept broken symlinks (e.g., git-annex) for metadata extraction. If False, require actual readable files for data loading. Set to True when doing metadata digestion without loading raw data. * **modalities** (*list* *of* *str* *or* *None* *,* *default None*) – List of modalities to search for (e.g., [“eeg”, “meg”]). If None, defaults to all electrophysiology modalities from MNE-BIDS: [‘meg’, ‘eeg’, ‘ieeg’, ‘nirs’]. #### RAW_EXTENSIONS Mapping of file extensions to their companion files, dynamically built from mne_bids.config.reader. 
* **Type:** dict #### files List of all recording file paths found in the dataset. * **Type:** list of str #### detected_modality The modality of the first file found (e.g., ‘eeg’, ‘meg’). * **Type:** str ### Examples ```pycon >>> # Load EEG-only dataset >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/ds002718", ... dataset="ds002718", ... modalities=["eeg"] ... ) ``` ```pycon >>> # Load dataset with multiple modalities >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/ds005810", ... dataset="ds005810", ... modalities=["meg", "eeg"] ... ) ``` ```pycon >>> # Metadata extraction from git-annex (symlinks) >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/dataset", ... dataset="ds000001", ... allow_symlinks=True ... ) ``` #### RAW_EXTENSIONS *= {'.CNT': ['.CNT'], '.EDF': ['.EDF'], '.EEG': ['.EEG'], '.bdf': ['.bdf'], '.bin': ['.bin'], '.cdt': ['.cdt'], '.cnt': ['.cnt'], '.con': ['.con'], '.ds': ['.ds'], '.edf': ['.edf'], '.fif': ['.fif'], '.lay': ['.lay'], '.pdf': ['.pdf'], '.set': ['.set', '.fdt'], '.snirf': ['.snirf'], '.sqd': ['.sqd'], '.vhdr': ['.vhdr', '.eeg', '.vmrk', '.dat']}* #### channel_labels(data_filepath: str) → list[str] Get a list of channel labels from channels.tsv. #### channel_types(data_filepath: str) → list[str] Get a list of channel types from channels.tsv. #### check_eeg_dataset() → bool Check if the BIDS dataset contains EEG data. * **Returns:** True if the dataset’s modality is EEG, False otherwise. * **Return type:** bool #### eeg_json(data_filepath: str) → dict[str, Any] Get the merged eeg.json metadata for a data file. * **Parameters:** **data_filepath** (*str*) – The path to the data file. * **Returns:** The merged eeg.json metadata. * **Return type:** dict #### get_all_participants_tsv() → dict[str, dict[str, Any]] Get all rows from participants.tsv as a dictionary. * **Returns:** A dictionary mapping participant_id to a dict of column values. Returns `{}` if no participants.tsv exists or it is empty. 
* **Return type:** dict #### get_bids_file_attribute(attribute: str, data_filepath: str) → Any Retrieve a specific attribute from BIDS metadata. * **Parameters:** * **attribute** (*str*) – The name of the attribute to retrieve (e.g., “sfreq”, “subject”). * **data_filepath** (*str*) – The path to the data file. * **Returns:** The value of the requested attribute, or None if not found. * **Return type:** Any #### get_bids_metadata_files(filepath: str | Path, metadata_file_extension: str) → list[Path] Retrieve all metadata files that apply to a given data file. Follows the BIDS inheritance principle to find all relevant metadata files (e.g., `channels.tsv`, `eeg.json`) for a specific recording. * **Parameters:** * **filepath** (*str* *or* *Path*) – The path to the data file. * **metadata_file_extension** (*str*) – The extension of the metadata file to search for (e.g., “channels.tsv”). * **Returns:** A list of paths to the matching metadata files. * **Return type:** list of Path #### get_files() → list[str] Get all EEG recording file paths in the BIDS dataset. * **Returns:** A list of file paths for all valid EEG recordings. * **Return type:** list of str #### get_orphan_participants() → dict[str, dict[str, Any]] Get participant rows that have no matching file in the dataset. Identifies subjects present in `participants.tsv` but with no corresponding recording file in `self.files`. * **Returns:** A dictionary mapping orphan participant_id to their TSV data. Returns `{}` if there are no orphans, no TSV, or no files. * **Return type:** dict #### get_relative_bidspath(filepath: str | Path) → str Get the dataset-relative path for a file. * **Parameters:** **filepath** (*str* *or* *Path*) – The absolute or relative path to a file in the BIDS dataset. * **Returns:** The path relative to the dataset root, prefixed with the dataset name. 
e.g., “ds004477/sub-001/eeg/sub-001_task-PES_eeg.json” * **Return type:** str #### num_times(data_filepath: str) → int Get the number of time points in the recording. Calculated from `SamplingFrequency` and `RecordingDuration` in the modality-specific JSON sidecar (e.g., `eeg.json` or `meg.json`). * **Parameters:** **data_filepath** (*str*) – The path to the data file. * **Returns:** The approximate number of time points. * **Return type:** int #### subject_participant_tsv(data_filepath: str) → dict[str, Any] Get the participants.tsv record for a subject. * **Parameters:** **data_filepath** (*str*) – The path to a data file belonging to the subject. * **Returns:** A dictionary of the subject’s information from participants.tsv. * **Return type:** dict # catFR_Categorized_Free_Recall: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import catFR_Categorized_Free_Recall dataset = catFR_Categorized_Free_Recall(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = catFR_Categorized_Free_Recall(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = catFR_Categorized_Free_Recall( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{catfr_categorized_free_recall, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CATFR_CATEGORIZED_FREE_RECALL` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CATFR_CATEGORIZED_FREE_RECALL` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/catfr_categorized_free_recall) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=catfr_categorized_free_recall) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [catfr_categorized_free_recall](https://openneuro.org/datasets/catfr_categorized_free_recall) - NeMAR: [catfr_categorized_free_recall](https://nemar.org/dataexplorer/detail?dataset_id=catfr_categorized_free_recall) ## API Reference Use the `catFR_Categorized_Free_Recall` class to access this dataset programmatically. ### eegdash.dataset.catFR_Categorized_Free_Recall alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/catfr_categorized_free_recall) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=catfr_categorized_free_recall) # catFR_closed_loop: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import catFR_closed_loop dataset = catFR_closed_loop(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = catFR_closed_loop(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = catFR_closed_loop( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{catfr_closed_loop, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CATFR_CLOSED_LOOP` | |----------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CATFR_CLOSED_LOOP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/catfr_closed_loop) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=catfr_closed_loop) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [catfr_closed_loop](https://openneuro.org/datasets/catfr_closed_loop) - NeMAR: [catfr_closed_loop](https://nemar.org/dataexplorer/detail?dataset_id=catfr_closed_loop) ## API Reference Use 
the `catFR_closed_loop` class to access this dataset programmatically. ### eegdash.dataset.catFR_closed_loop alias of [`DS005558`](eegdash.dataset.DS005558.md#eegdash.dataset.DS005558) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/catfr_closed_loop) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=catfr_closed_loop) # catFR_open_loop: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import catFR_open_loop dataset = catFR_open_loop(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = catFR_open_loop(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = catFR_open_loop( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{catfr_open_loop, } ``` ## About This Dataset No README content is available for this dataset.
## Dataset Information | Dataset ID | `CATFR_OPEN_LOOP` | |----------------|----------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CATFR_OPEN_LOOP` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/catfr_open_loop) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=catfr_open_loop) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [catfr_open_loop](https://openneuro.org/datasets/catfr_open_loop) - NeMAR: [catfr_open_loop](https://nemar.org/dataexplorer/detail?dataset_id=catfr_open_loop) ## API Reference Use the `catFR_open_loop` class to access this dataset programmatically. ### eegdash.dataset.catFR_open_loop alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/catfr_open_loop) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=catfr_open_loop) # catFR_stim: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import catFR_stim dataset = catFR_stim(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = catFR_stim(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = catFR_stim( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{catfr_stim, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `CATFR_STIM` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `CATFR_STIM` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/catfr_stim) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=catfr_stim) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [catfr_stim](https://openneuro.org/datasets/catfr_stim) - NeMAR: [catfr_stim](https://nemar.org/dataexplorer/detail?dataset_id=catfr_stim) ## API Reference Use the `catFR_stim` class to access this dataset programmatically. 
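The "alias of" entries below mean that names like `catFR_stim` are plain module-level bindings to the same dataset class, not subclasses. A minimal illustration of how such aliasing behaves in Python (stand-in class names, not the real eegdash classes):

```python
class DS005491:
    """Stand-in for the aliased dataset class."""

# An alias is just a second name bound to the same class object.
catFR_stim = DS005491

assert catFR_stim is DS005491        # both names refer to one class
obj = catFR_stim()
assert isinstance(obj, DS005491)     # instances check against either name
```

Because the two names are identical objects, queries, caching, and isinstance checks behave the same whichever name you import.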
### eegdash.dataset.catFR_stim alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/catfr_stim) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=catfr_stim) # eegdash.dataset.dataset module ### eegdash.dataset.dataset.ABSeqMEG alias of [`DS004483`](eegdash.dataset.DS004483.md#eegdash.dataset.DS004483) ### eegdash.dataset.dataset.ANDI alias of [`DS004661`](eegdash.dataset.DS004661.md#eegdash.dataset.DS004661) ### eegdash.dataset.dataset.APPLESEED alias of [`DS003710`](eegdash.dataset.DS003710.md#eegdash.dataset.DS003710) ### eegdash.dataset.dataset.AlexMI alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.dataset.AlexMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.dataset.AlexandreMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.dataset.Alljoined alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ### eegdash.dataset.dataset.Alljoined1 alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ### eegdash.dataset.dataset.Alljoined16M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.dataset.Alljoined1p6M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.dataset.Alljoined_16M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.dataset.AlphaWaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.dataset.Alphawaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.dataset.ArEEG alias of [`DS005262`](eegdash.dataset.DS005262.md#eegdash.dataset.DS005262) ### 
eegdash.dataset.dataset.Ataseven2024 alias of [`DS007431`](eegdash.dataset.DS007431.md#eegdash.dataset.DS007431) ### eegdash.dataset.dataset.BCI2000_Intracranial alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ### eegdash.dataset.dataset.BCI2000_intraop alias of [`DS004944`](eegdash.dataset.DS004944.md#eegdash.dataset.DS004944) ### eegdash.dataset.dataset.BCIAUT alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.dataset.BCIAUTP300 alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.dataset.BCIAUT_P300 alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.dataset.BCICIII_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.dataset.BCICIV1 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.dataset.BCICompIII_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.dataset.BCICompIV1 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.dataset.BCIT alias of [`DS004119`](eegdash.dataset.DS004119.md#eegdash.dataset.DS004119) ### eegdash.dataset.dataset.BCITAdvancedGuardDuty alias of [`DS004106`](eegdash.dataset.DS004106.md#eegdash.dataset.DS004106) ### eegdash.dataset.dataset.BCITBaselineDriving alias of [`DS004120`](eegdash.dataset.DS004120.md#eegdash.dataset.DS004120) ### eegdash.dataset.dataset.BCITMindWandering alias of [`DS004121`](eegdash.dataset.DS004121.md#eegdash.dataset.DS004121) ### eegdash.dataset.dataset.BCIT_Auditory_Cueing alias of [`DS004105`](eegdash.dataset.DS004105.md#eegdash.dataset.DS004105) ### eegdash.dataset.dataset.BCIT_Traffic_Complexity alias of [`DS004123`](eegdash.dataset.DS004123.md#eegdash.dataset.DS004123) ### eegdash.dataset.dataset.BETA alias of 
[`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.dataset.BETA_SSVEP alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.dataset.BI2012 alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ### eegdash.dataset.dataset.BI2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ### eegdash.dataset.dataset.BI2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ### eegdash.dataset.dataset.BI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.dataset.BI2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ### eegdash.dataset.dataset.BI2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ### eegdash.dataset.dataset.BMI_HDEEG_D1 alias of [`DS004444`](eegdash.dataset.DS004444.md#eegdash.dataset.DS004444) ### eegdash.dataset.dataset.BMI_HDEEG_D2 alias of [`DS004446`](eegdash.dataset.DS004446.md#eegdash.dataset.DS004446) ### eegdash.dataset.dataset.BMI_HDEEG_D3 alias of [`DS004447`](eegdash.dataset.DS004447.md#eegdash.dataset.DS004447) ### eegdash.dataset.dataset.BMI_HDEEG_D4 alias of [`DS004448`](eegdash.dataset.DS004448.md#eegdash.dataset.DS004448) ### eegdash.dataset.dataset.BNCI2003_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.dataset.BNCI2014001 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.dataset.BNCI2014002 alias of [`NM000171`](eegdash.dataset.NM000171.md#eegdash.dataset.NM000171) ### eegdash.dataset.dataset.BNCI2014004 alias of [`NM000135`](eegdash.dataset.NM000135.md#eegdash.dataset.NM000135) ### eegdash.dataset.dataset.BNCI2014008 alias of [`NM000169`](eegdash.dataset.NM000169.md#eegdash.dataset.NM000169) ### eegdash.dataset.dataset.BNCI2014_009_P300 alias of 
[`NM000188`](eegdash.dataset.NM000188.md#eegdash.dataset.NM000188) ### eegdash.dataset.dataset.BNCI2015 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ### eegdash.dataset.dataset.BNCI2015001 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ### eegdash.dataset.dataset.BNCI2015_003_AMUSE alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.dataset.BNCI2015_003_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.dataset.BNCI2015_006_MusicBCI alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.dataset.BNCI2015_008_CenterSpeller alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ### eegdash.dataset.dataset.BNCI2015_008_P300 alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ### eegdash.dataset.dataset.BNCI2015_BNCI_006_Music alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.dataset.BNCI2015_ERP alias of [`NM000234`](eegdash.dataset.NM000234.md#eegdash.dataset.NM000234) ### eegdash.dataset.dataset.BNCI2015_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.dataset.BNCI2016 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ### eegdash.dataset.dataset.BNCI2016002 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ### eegdash.dataset.dataset.BNCI2020 alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.dataset.BNCI2020_002_AttentionShift alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.dataset.BNCI2020_002_CovertSpatialAttention alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.dataset.BNCI2025 alias of 
[`NM000162`](eegdash.dataset.NM000162.md#eegdash.dataset.NM000162) ### eegdash.dataset.dataset.BNCI_2015_006_Music alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.dataset.BOAS alias of [`DS005555`](eegdash.dataset.DS005555.md#eegdash.dataset.DS005555) ### eegdash.dataset.dataset.Barras2021 alias of [`DS007169`](eegdash.dataset.DS007169.md#eegdash.dataset.DS007169) ### eegdash.dataset.dataset.Barras2025 alias of [`DS007262`](eegdash.dataset.DS007262.md#eegdash.dataset.DS007262) ### eegdash.dataset.dataset.BetaSSVEP alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.dataset.BigP3BCI_E alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ### eegdash.dataset.dataset.BigP3BCI_F alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ### eegdash.dataset.dataset.BigP3BCI_G alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ### eegdash.dataset.dataset.BigP3BCI_H alias of [`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ### eegdash.dataset.dataset.BigP3BCI_I alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ### eegdash.dataset.dataset.BigP3BCI_K alias of [`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ### eegdash.dataset.dataset.BigP3BCI_M alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ### eegdash.dataset.dataset.BigP3BCI_S1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ### eegdash.dataset.dataset.BigP3BCI_StudyE alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ### eegdash.dataset.dataset.BigP3BCI_StudyF alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ### eegdash.dataset.dataset.BigP3BCI_StudyG alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ### eegdash.dataset.dataset.BigP3BCI_StudyH alias of 
[`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ### eegdash.dataset.dataset.BigP3BCI_StudyI alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ### eegdash.dataset.dataset.BigP3BCI_StudyK alias of [`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ### eegdash.dataset.dataset.BigP3BCI_StudyM alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ### eegdash.dataset.dataset.BigP3BCI_StudyN alias of [`NM000187`](eegdash.dataset.NM000187.md#eegdash.dataset.NM000187) ### eegdash.dataset.dataset.BigP3BCI_StudyS1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ### eegdash.dataset.dataset.Bogacz2024 alias of [`DS002908`](eegdash.dataset.DS002908.md#eegdash.dataset.DS002908) ### eegdash.dataset.dataset.BrainInvaders alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ### eegdash.dataset.dataset.BrainInvaders2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ### eegdash.dataset.dataset.BrainInvaders2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ### eegdash.dataset.dataset.BrainInvaders2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.dataset.BrainInvaders2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ### eegdash.dataset.dataset.BrainInvaders2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ### eegdash.dataset.dataset.BrainInvadersBI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.dataset.BrainTreeBank alias of [`NM000253`](eegdash.dataset.NM000253.md#eegdash.dataset.NM000253) ### eegdash.dataset.dataset.Broitman2019 alias of [`DS005857`](eegdash.dataset.DS005857.md#eegdash.dataset.DS005857) ### eegdash.dataset.dataset.CARLA alias of [`DS004977`](eegdash.dataset.DS004977.md#eegdash.dataset.DS004977) ### 
eegdash.dataset.dataset.CHBMIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ### eegdash.dataset.dataset.CHB_MIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ### eegdash.dataset.dataset.CHISCO20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.dataset.CPSEED alias of [`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ### eegdash.dataset.dataset.CPSEED_3M alias of [`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ### eegdash.dataset.dataset.CastillosCVEP40 alias of [`NM000342`](eegdash.dataset.NM000342.md#eegdash.dataset.NM000342) ### eegdash.dataset.dataset.CatFR alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ### eegdash.dataset.dataset.Chandravadia2022 alias of [`DS005028`](eegdash.dataset.DS005028.md#eegdash.dataset.DS005028) ### eegdash.dataset.dataset.Chang2025 alias of [`NM000271`](eegdash.dataset.NM000271.md#eegdash.dataset.NM000271) ### eegdash.dataset.dataset.Chavarriaga2010 alias of [`NM000168`](eegdash.dataset.NM000168.md#eegdash.dataset.NM000168) ### eegdash.dataset.dataset.Chisco alias of [`DS005170`](eegdash.dataset.DS005170.md#eegdash.dataset.DS005170) ### eegdash.dataset.dataset.Chisco20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.dataset.Chisco2_0 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.dataset.Cote2015 alias of [`DS003082`](eegdash.dataset.DS003082.md#eegdash.dataset.DS003082) ### eegdash.dataset.dataset.Couperus2017 alias of [`DS007096`](eegdash.dataset.DS007096.md#eegdash.dataset.DS007096) ### eegdash.dataset.dataset.Couperus2021_LRP alias of [`DS007139`](eegdash.dataset.DS007139.md#eegdash.dataset.DS007139) ### eegdash.dataset.dataset.Couperus2021_MMN alias of [`DS007069`](eegdash.dataset.DS007069.md#eegdash.dataset.DS007069) ### eegdash.dataset.dataset.Couperus2021_N2pc 
alias of [`DS007137`](eegdash.dataset.DS007137.md#eegdash.dataset.DS007137) ### eegdash.dataset.dataset.Couperus2021_N400 alias of [`DS007052`](eegdash.dataset.DS007052.md#eegdash.dataset.DS007052) ### eegdash.dataset.dataset.Couperus2021_P300 alias of [`DS007056`](eegdash.dataset.DS007056.md#eegdash.dataset.DS007056) ### eegdash.dataset.dataset.DENS alias of [`DS003751`](eegdash.dataset.DS003751.md#eegdash.dataset.DS003751) ### *class* eegdash.dataset.dataset.DS000117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisubject, multimodal face processing * **Study:** `ds000117` (OpenNeuro) * **Author (year):** `Wakeman2018` * **Canonical:** `Wakeman2015`, `WakemanHenson` Also importable as: `DS000117`, `Wakeman2018`, `Wakeman2015`, `WakemanHenson`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 17; recordings: 104; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000117](https://openneuro.org/datasets/ds000117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000117](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) DOI: [https://doi.org/10.18112/openneuro.ds000117.v1.1.0](https://doi.org/10.18112/openneuro.ds000117.v1.1.0) NEMAR citation count: 77 ### Examples ```pycon >>> from eegdash.dataset import DS000117 >>> dataset = DS000117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Wakeman2015', 'WakemanHenson']* ### *class* eegdash.dataset.dataset.DS000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS Brainstorm data sample * **Study:** `ds000246` (OpenNeuro) * **Author (year):** `Bock2018` * **Canonical:** — Also importable as: `DS000246`, `Bock2018`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000246](https://openneuro.org/datasets/ds000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000246](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) DOI: [https://doi.org/10.18112/openneuro.ds000246.v1.0.1](https://doi.org/10.18112/openneuro.ds000246.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS000246 >>> dataset = DS000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS OMEGA RestingState_sample * **Study:** `ds000247` (OpenNeuro) * **Author (year):** `Niso2018` * **Canonical:** `OMEGA` Also importable as: `DS000247`, `Niso2018`, `OMEGA`. Modality: `meg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 6; recordings: 10; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000247](https://openneuro.org/datasets/ds000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000247](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) DOI: [https://doi.org/10.18112/openneuro.ds000247.v1.0.2](https://doi.org/10.18112/openneuro.ds000247.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000247 >>> dataset = DS000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OMEGA']* ### *class* eegdash.dataset.dataset.DS000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-Sample-Data * **Study:** `ds000248` (OpenNeuro) * **Author (year):** `Gramfort2018` * **Canonical:** `MNE_Sample_Data` Also importable as: `DS000248`, `Gramfort2018`, `MNE_Sample_Data`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000248](https://openneuro.org/datasets/ds000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000248](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) DOI: [https://doi.org/10.18112/openneuro.ds000248.v1.2.4](https://doi.org/10.18112/openneuro.ds000248.v1.2.4) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000248 >>> dataset = DS000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MNE_Sample_Data']* ### *class* eegdash.dataset.dataset.DS001785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Evidence accumulation relates to perceptual consciousness and monitoring * **Study:** `ds001785` (OpenNeuro) * **Author (year):** `Pereira2019_Evidence` * **Canonical:** — Also importable as: `DS001785`, `Pereira2019_Evidence`. Modality: `eeg`. Subjects: 18; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001785](https://openneuro.org/datasets/ds001785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001785](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) DOI: [https://doi.org/10.18112/openneuro.ds001785.v1.1.1](https://doi.org/10.18112/openneuro.ds001785.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001785 >>> dataset = DS001785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS001787(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG meditation study * **Study:** `ds001787` (OpenNeuro) * **Author (year):** `Delorme2019` * **Canonical:** — Also importable as: `DS001787`, `Delorme2019`. Modality: `eeg`. Subjects: 24; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001787](https://openneuro.org/datasets/ds001787) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001787](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) DOI: [https://doi.org/10.18112/openneuro.ds001787.v1.1.1](https://doi.org/10.18112/openneuro.ds001787.v1.1.1) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001787 >>> dataset = DS001787(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS001810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS) * **Study:** `ds001810` (OpenNeuro) * **Author (year):** `Reteig2019` * **Canonical:** — Also importable as: `DS001810`, `Reteig2019`. Modality: `eeg`. Subjects: 47; recordings: 263; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001810](https://openneuro.org/datasets/ds001810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001810](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) DOI: [https://doi.org/10.18112/openneuro.ds001810.v1.1.0](https://doi.org/10.18112/openneuro.ds001810.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001810 >>> dataset = DS001810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS001849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RS_TMSEEG_Data * **Study:** `ds001849` (OpenNeuro) * **Author (year):** `Freedberg2019` * **Canonical:** — Also importable as: `DS001849`, `Freedberg2019`. Modality: `eeg`. Subjects: 20; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001849](https://openneuro.org/datasets/ds001849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001849](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) DOI: [https://doi.org/10.18112/openneuro.ds001849.v1.0.2](https://doi.org/10.18112/openneuro.ds001849.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS001849 >>> dataset = DS001849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS001971(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Audiocue walking study * **Study:** `ds001971` (OpenNeuro) * **Author (year):** `Wagner2019` * **Canonical:** — Also importable as: `DS001971`, `Wagner2019`. Modality: `eeg`. Subjects: 20; recordings: 273; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001971](https://openneuro.org/datasets/ds001971) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001971](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) DOI: [https://doi.org/10.18112/openneuro.ds001971.v1.1.1](https://doi.org/10.18112/openneuro.ds001971.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001971 >>> dataset = DS001971(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002001(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rivalry_Tagging * **Study:** `ds002001` (OpenNeuro) * **Author (year):** `Mendola2019` * **Canonical:** `Mendola2020` Also importable as: `DS002001`, `Mendola2019`, `Mendola2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 69; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002001](https://openneuro.org/datasets/ds002001) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002001](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) DOI: [https://doi.org/10.18112/openneuro.ds002001.v1.0.0](https://doi.org/10.18112/openneuro.ds002001.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS002001 >>> dataset = DS002001(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mendola2020']* ### *class* eegdash.dataset.dataset.DS002034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task * **Study:** `ds002034` (OpenNeuro) * **Author (year):** `Schneider2019` * **Canonical:** — Also importable as: `DS002034`, `Schneider2019`. Modality: `eeg`. 
Subjects: 14; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002034](https://openneuro.org/datasets/ds002034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002034](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) DOI: [https://doi.org/10.18112/openneuro.ds002034.v1.0.3](https://doi.org/10.18112/openneuro.ds002034.v1.0.3) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS002034 >>> dataset = DS002034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002094(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Single-pulse open-loop TMS-EEG dataset * **Study:** `ds002094` (OpenNeuro) * **Author (year):** `DS2094_Single_pulse` * **Canonical:** — Also importable as: `DS002094`, `DS2094_Single_pulse`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 20; recordings: 43; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002094](https://openneuro.org/datasets/ds002094) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002094](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) NEMAR citation count: 30 ### Examples ```pycon >>> from eegdash.dataset import DS002094 >>> dataset = DS002094(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging * **Study:** `ds002158` (OpenNeuro) * **Author (year):** `Pereira2019_Disentangling` * **Canonical:** — Also importable as: `DS002158`, `Pereira2019_Disentangling`. Modality: `eeg`. Subjects: 20; recordings: 117; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002158](https://openneuro.org/datasets/ds002158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002158](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) DOI: [https://doi.org/10.18112/openneuro.ds002158.v1.0.2](https://doi.org/10.18112/openneuro.ds002158.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002158 >>> dataset = DS002158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CRYPTO and PROVIDE EEG Baseline Data * **Study:** `ds002181` (OpenNeuro) * **Author (year):** `Xie2019` * **Canonical:** — Also importable as: `DS002181`, `Xie2019`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Development`. Subjects: 226; recordings: 226; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002181](https://openneuro.org/datasets/ds002181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002181](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002181 >>> dataset = DS002181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory and Visual Rhythm Omission EEG * **Study:** `ds002218` (OpenNeuro) * **Author (year):** `Comstock2019` * **Canonical:** — Also importable as: `DS002218`, `Comstock2019`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002218](https://openneuro.org/datasets/ds002218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002218](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002218 >>> dataset = DS002218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002312(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OcularLDT * **Study:** `ds002312` (OpenNeuro) * **Author (year):** `Brooks2019` * **Canonical:** `OcularLDT`, `ocular_ldt` Also importable as: `DS002312`, `Brooks2019`, `OcularLDT`, `ocular_ldt`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002312](https://openneuro.org/datasets/ds002312) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002312](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) DOI: [https://doi.org/10.18112/openneuro.ds002312.v1.0.0](https://doi.org/10.18112/openneuro.ds002312.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS002312 >>> dataset = DS002312(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OcularLDT', 'ocular_ldt']* ### *class* eegdash.dataset.dataset.DS002336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1 * **Study:** `ds002336` (OpenNeuro) * **Author (year):** `Lioi2019_multi` * **Canonical:** — Also importable as: `DS002336`, `Lioi2019_multi`. Modality: `eeg`. Subjects: 10; recordings: 54; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002336](https://openneuro.org/datasets/ds002336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002336](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) DOI: [https://doi.org/10.18112/openneuro.ds002336.v2.0.2](https://doi.org/10.18112/openneuro.ds002336.v2.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002336 >>> dataset = DS002336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2 * **Study:** `ds002338` (OpenNeuro) * **Author (year):** `Lioi2019_multi_modal` * **Canonical:** — Also importable as: `DS002338`, `Lioi2019_multi_modal`. Modality: `eeg`. Subjects: 17; recordings: 85; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002338](https://openneuro.org/datasets/ds002338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002338](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) DOI: [https://doi.org/10.18112/openneuro.ds002338.v2.0.1](https://doi.org/10.18112/openneuro.ds002338.v2.0.1) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002338 >>> dataset = DS002338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002550(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Differential brain mechanisms of selection and maintenance of information during working memory (MEG data) * **Study:** `ds002550` (OpenNeuro) * **Author (year):** `Quentin2020` * **Canonical:** — Also importable as: `DS002550`, `Quentin2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 22; recordings: 377; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002550](https://openneuro.org/datasets/ds002550) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002550](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) DOI: [https://doi.org/10.18112/openneuro.ds002550.v1.0.1](https://doi.org/10.18112/openneuro.ds002550.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002550 >>> dataset = DS002550(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002578(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Oddball Task (256 channels) * **Study:** `ds002578` (OpenNeuro) * **Author (year):** `Delorme2020_Visual_Oddball_256` * **Canonical:** — Also importable as: `DS002578`, `Delorme2020_Visual_Oddball_256`. 
Modality: `eeg`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
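To make the "MongoDB-style filters" mentioned in the Notes concrete, here is a toy matcher applied to records shaped like the per-recording metadata entries described above. It is an illustration only, supporting just exact matches and the `$in` operator, and is not the library's query engine.

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: exact values plus the `$in` operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True


# Toy metadata records shaped like the per-recording entries described above.
records = [
    {"dataset": "ds002578", "subject": "001", "task": "oddball"},
    {"dataset": "ds002578", "subject": "002", "task": "oddball"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["001"]}})]
```

A real query would run server-side against the metadata database; the list comprehension above only mimics the filtering semantics.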
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002578](https://openneuro.org/datasets/ds002578) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002578](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) DOI: [https://doi.org/10.18112/openneuro.ds002578.v1.1.0](https://doi.org/10.18112/openneuro.ds002578.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002578 >>> dataset = DS002578(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002680(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Go-nogo categorization and detection task * **Study:** `ds002680` (OpenNeuro) * **Author (year):** `Delorme2020_Go_nogo_categorization` * **Canonical:** — Also importable as: `DS002680`, `Delorme2020_Go_nogo_categorization`. Modality: `eeg`. Subjects: 14; recordings: 350; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002680](https://openneuro.org/datasets/ds002680) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002680](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) DOI: [https://doi.org/10.18112/openneuro.ds002680.v1.2.0](https://doi.org/10.18112/openneuro.ds002680.v1.2.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS002680 >>> dataset = DS002680(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Internal attention study * **Study:** `ds002691` (OpenNeuro) * **Author (year):** `Delorme2020_Internal_attention` * **Canonical:** — Also importable as: `DS002691`, `Delorme2020_Internal_attention`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002691](https://openneuro.org/datasets/ds002691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002691](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) DOI: [https://doi.org/10.18112/openneuro.ds002691.v1.1.0](https://doi.org/10.18112/openneuro.ds002691.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002691 >>> dataset = DS002691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002712(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers and Letters * **Study:** `ds002712` (OpenNeuro) * **Author (year):** `Aurtenetxe2020` * **Canonical:** — Also importable as: `DS002712`, `Aurtenetxe2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002712](https://openneuro.org/datasets/ds002712) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002712](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) DOI: [https://doi.org/10.18112/openneuro.ds002712.v1.0.1](https://doi.org/10.18112/openneuro.ds002712.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002712 >>> dataset = DS002712(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Face processing EEG dataset for EEGLAB * **Study:** `ds002718` (OpenNeuro) * **Author (year):** `Wakeman2020` * **Canonical:** `WakemanHenson_EEG_MEG` Also importable as: `DS002718`, `Wakeman2020`, `WakemanHenson_EEG_MEG`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002718](https://openneuro.org/datasets/ds002718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002718](https://nemar.org/dataexplorer/detail?dataset_id=ds002718) DOI: [https://doi.org/10.18112/openneuro.ds002718.v1.1.0](https://doi.org/10.18112/openneuro.ds002718.v1.1.0) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002718 >>> dataset = DS002718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WakemanHenson_EEG_MEG']* ### *class* eegdash.dataset.dataset.DS002720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of a tempo-based brain-computer music interface * **Study:** `ds002720` (OpenNeuro) * **Author (year):** `Daly2020_recorded` * **Canonical:** — Also importable as: `DS002720`, `Daly2020_recorded`. Modality: `eeg`. Subjects: 18; recordings: 165; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002720](https://openneuro.org/datasets/ds002720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002720](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) DOI: [https://doi.org/10.18112/openneuro.ds002720.v1.0.1](https://doi.org/10.18112/openneuro.ds002720.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002720 >>> dataset = DS002720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002721(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An EEG dataset recorded during affective music listening * **Study:** `ds002721` (OpenNeuro) * **Author (year):** `Daly2020_recorded_affective` * **Canonical:** — Also importable as: `DS002721`, `Daly2020_recorded_affective`. Modality: `eeg`. Subjects: 31; recordings: 185; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002721](https://openneuro.org/datasets/ds002721) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002721](https://nemar.org/dataexplorer/detail?dataset_id=ds002721) DOI: [https://doi.org/10.18112/openneuro.ds002721.v1.0.2](https://doi.org/10.18112/openneuro.ds002721.v1.0.2) NEMAR citation count: 10 ### Examples ```pycon >>> from eegdash.dataset import DS002721 >>> dataset = DS002721(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002722(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: calibration session * **Study:** `ds002722` (OpenNeuro) * **Author (year):** `Daly2020_recorded_development` * **Canonical:** — Also importable as: `DS002722`, `Daly2020_recorded_development`. Modality: `eeg`. Subjects: 19; recordings: 94; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002722](https://openneuro.org/datasets/ds002722) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002722](https://nemar.org/dataexplorer/detail?dataset_id=ds002722) DOI: [https://doi.org/10.18112/openneuro.ds002722.v1.0.1](https://doi.org/10.18112/openneuro.ds002722.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002722 >>> dataset = DS002722(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002723(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: testing session * **Study:** `ds002723` (OpenNeuro) * **Author (year):** `Daly2020_session` * **Canonical:** — Also importable as: `DS002723`, `Daly2020_session`. Modality: `eeg`. Subjects: 8; recordings: 44; tasks: 0. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002723](https://openneuro.org/datasets/ds002723) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002723](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) DOI: [https://doi.org/10.18112/openneuro.ds002723.v1.1.0](https://doi.org/10.18112/openneuro.ds002723.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002723 >>> dataset = DS002723(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002724(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: training sessions * **Study:** `ds002724` (OpenNeuro) * **Author (year):** `Daly2020_sessions` * **Canonical:** — Also importable as: `DS002724`, `Daly2020_sessions`. Modality: `eeg`. Subjects: 10; recordings: 96; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002724](https://openneuro.org/datasets/ds002724) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002724](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) DOI: [https://doi.org/10.18112/openneuro.ds002724.v1.0.1](https://doi.org/10.18112/openneuro.ds002724.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002724 >>> dataset = DS002724(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002725(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recording joint EEG-fMRI during affective music listening * **Study:** `ds002725` (OpenNeuro) * **Author (year):** `Daly2020_joint` * **Canonical:** — Also importable as: `DS002725`, `Daly2020_joint`. Modality: `eeg`. Subjects: 21; recordings: 105; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002725](https://openneuro.org/datasets/ds002725) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002725](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) DOI: [https://doi.org/10.18112/openneuro.ds002725.v1.0.0](https://doi.org/10.18112/openneuro.ds002725.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002725 >>> dataset = DS002725(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) memoryreplay * **Study:** `ds002761` (OpenNeuro) * **Author (year):** `Wimmer2020` * **Canonical:** — Also importable as: `DS002761`, `Wimmer2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 25; recordings: 249; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002761](https://openneuro.org/datasets/ds002761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002761](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) DOI: [https://doi.org/10.18112/openneuro.ds002761.v1.1.2](https://doi.org/10.18112/openneuro.ds002761.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002761 >>> dataset = DS002761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002778(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease * **Study:** `ds002778` (OpenNeuro) * **Author (year):** `Rockhill2020` * **Canonical:** — Also importable as: `DS002778`, `Rockhill2020`. Modality: `eeg`. Subjects: 31; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002778](https://openneuro.org/datasets/ds002778) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002778](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) DOI: [https://doi.org/10.18112/openneuro.ds002778.v1.0.5](https://doi.org/10.18112/openneuro.ds002778.v1.0.5) NEMAR citation count: 42 ### Examples ```pycon >>> from eegdash.dataset import DS002778 >>> dataset = DS002778(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002791(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet1 * **Study:** `ds002791` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet1` * **Canonical:** `Mheich2020` Also importable as: `DS002791`, `Mheich2020_DataSet1`, `Mheich2020`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002791](https://openneuro.org/datasets/ds002791) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002791](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) DOI: [https://doi.org/10.18112/openneuro.ds002791.v1.0.0](https://doi.org/10.18112/openneuro.ds002791.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002791 >>> dataset = DS002791(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mheich2020']* ### *class* eegdash.dataset.dataset.DS002799(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI * **Study:** `ds002799` (OpenNeuro) * **Author (year):** `Thompson2024` * **Canonical:** — Also importable as: `DS002799`, `Thompson2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 27; recordings: 16824; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002799](https://openneuro.org/datasets/ds002799) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002799](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) DOI: [https://doi.org/10.18112/openneuro.ds002799.v1.0.4](https://doi.org/10.18112/openneuro.ds002799.v1.0.4) ### Examples ```pycon >>> from eegdash.dataset import DS002799 >>> dataset = DS002799(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002814(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans * **Study:** `ds002814` (OpenNeuro) * **Author (year):** `Ebrahiminia2020` * **Canonical:** — Also importable as: `DS002814`, `Ebrahiminia2020`. Modality: `eeg`. Subjects: 21; recordings: 168; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002814](https://openneuro.org/datasets/ds002814) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002814](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) DOI: [https://doi.org/10.18112/openneuro.ds002814.v1.3.0](https://doi.org/10.18112/openneuro.ds002814.v1.3.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002814 >>> dataset = DS002814(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002833(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet2 * **Study:** `ds002833` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet2` * **Canonical:** `Mheich2024` Also importable as: `DS002833`, `Mheich2020_DataSet2`, `Mheich2024`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002833](https://openneuro.org/datasets/ds002833) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002833](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) DOI: [https://doi.org/10.18112/openneuro.ds002833.v1.0.0](https://doi.org/10.18112/openneuro.ds002833.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002833 >>> dataset = DS002833(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mheich2024']* ### *class* eegdash.dataset.dataset.DS002885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DBS Phantom Recordings * **Study:** `ds002885` (OpenNeuro) * **Author (year):** `Kandemir2020` * **Canonical:** — Also importable as: `DS002885`, `Kandemir2020`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 2; recordings: 7; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002885](https://openneuro.org/datasets/ds002885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002885](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) DOI: [https://doi.org/10.18112/openneuro.ds002885.v1.0.1](https://doi.org/10.18112/openneuro.ds002885.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002885 >>> dataset = DS002885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002893(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory-Visual Shift Study * **Study:** `ds002893` (OpenNeuro) * **Author (year):** `Westerfield2022` * **Canonical:** — Also importable as: `DS002893`, `Westerfield2022`. Modality: `eeg`. Subjects: 49; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002893](https://openneuro.org/datasets/ds002893) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002893](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) DOI: [https://doi.org/10.18112/openneuro.ds002893.v2.0.0](https://doi.org/10.18112/openneuro.ds002893.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002893 >>> dataset = DS002893(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS002908(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human MEG recordings during sequential conflict task * **Study:** `ds002908` (OpenNeuro) * **Author (year):** `Bogacz2020` * **Canonical:** `Bogacz2024` Also importable as: `DS002908`, `Bogacz2020`, `Bogacz2024`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 13; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002908](https://openneuro.org/datasets/ds002908) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002908](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) DOI: [https://doi.org/10.18112/openneuro.ds002908.v1.0.0](https://doi.org/10.18112/openneuro.ds002908.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002908 >>> dataset = DS002908(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Bogacz2024']* ### *class* eegdash.dataset.dataset.DS003004(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Imagined Emotion Study * **Study:** `ds003004` (OpenNeuro) * **Author (year):** `Onton2020` * **Canonical:** — Also importable as: `DS003004`, `Onton2020`. Modality: `eeg`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003004](https://openneuro.org/datasets/ds003004) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003004](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) DOI: [https://doi.org/10.18112/openneuro.ds003004.v1.1.1](https://doi.org/10.18112/openneuro.ds003004.v1.1.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003004 >>> dataset = DS003004(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003029(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Multicenter-Dataset * **Study:** `ds003029` (OpenNeuro) * **Author (year):** `Li2020` * **Canonical:** — Also importable as: `DS003029`, `Li2020`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 35; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003029](https://openneuro.org/datasets/ds003029) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003029](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) DOI: [https://doi.org/10.18112/openneuro.ds003029.v1.0.5](https://doi.org/10.18112/openneuro.ds003029.v1.0.5) NEMAR citation count: 19 ### Examples ```pycon >>> from eegdash.dataset import DS003029 >>> dataset = DS003029(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003039(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) free walking study * **Study:** `ds003039` (OpenNeuro) * **Author (year):** `Jacobsen2020` * **Canonical:** — Also importable as: `DS003039`, `Jacobsen2020`. Modality: `eeg`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003039](https://openneuro.org/datasets/ds003039) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003039](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) DOI: [https://doi.org/10.18112/openneuro.ds003039.v1.0.2](https://doi.org/10.18112/openneuro.ds003039.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003039 >>> dataset = DS003039(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003061(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data from an auditory oddball task * **Study:** `ds003061` (OpenNeuro) * **Author (year):** `Delorme2020_auditory_oddball` * **Canonical:** `Delorme` Also importable as: `DS003061`, `Delorme2020_auditory_oddball`, `Delorme`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003061](https://openneuro.org/datasets/ds003061) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003061](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) DOI: [https://doi.org/10.18112/openneuro.ds003061.v1.1.0](https://doi.org/10.18112/openneuro.ds003061.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003061 >>> dataset = DS003061(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Delorme']* ### *class* eegdash.dataset.dataset.DS003078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PROBE iEEG * **Study:** `ds003078` (OpenNeuro) * **Author (year):** `DOMENECH2020` * **Canonical:** — Also importable as: `DS003078`, `DOMENECH2020`. Modality: `ieeg`; Experiment type: `Unknown`; Subject type: `Surgery`. Subjects: 6; recordings: 72; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003078](https://openneuro.org/datasets/ds003078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003078](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) DOI: [https://doi.org/10.18112/openneuro.ds003078.v1.0.0](https://doi.org/10.18112/openneuro.ds003078.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003078 >>> dataset = DS003078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003082(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Mapping Dataset * **Study:** `ds003082` (OpenNeuro) * **Author (year):** `Cote2020` * **Canonical:** `Cote2015` Also importable as: `DS003082`, `Cote2020`, `Cote2015`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003082](https://openneuro.org/datasets/ds003082) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003082](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) DOI: [https://doi.org/10.18112/openneuro.ds003082.v1.0.0](https://doi.org/10.18112/openneuro.ds003082.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003082 >>> dataset = DS003082(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Cote2015']* ### *class* eegdash.dataset.dataset.DS003104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-somato-data-bids (anonymized) * **Study:** `ds003104` (OpenNeuro) * **Author (year):** `Parkkonen2020` * **Canonical:** `MNESomato`, `Somato`, `MNESomatoData` Also importable as: `DS003104`, `Parkkonen2020`, `MNESomato`, `Somato`, `MNESomatoData`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
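The `data_dir` attribute described above follows the documented layout `cache_dir / dataset_id`. A minimal sketch of where a dataset's files land on disk (any further subfolder structure inside `data_dir` is an assumption and is not shown):

```python
from pathlib import Path

# Documented cache layout: data_dir = cache_dir / dataset_id.
cache_dir = Path("./data")
dataset_id = "ds003104"
data_dir = cache_dir / dataset_id

# The same cache_dir can hold many datasets side by side,
# each keyed by its OpenNeuro accession number.
print(data_dir.name)  # ds003104
```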
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003104](https://openneuro.org/datasets/ds003104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003104](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) DOI: [https://doi.org/10.18112/openneuro.ds003104.v1.0.1](https://doi.org/10.18112/openneuro.ds003104.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003104 >>> dataset = DS003104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MNESomato', 'Somato', 'MNESomatoData']* ### *class* eegdash.dataset.dataset.DS003190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assesment of the visual stimuli properties in P300 paradigm * **Study:** `ds003190` (OpenNeuro) * **Author (year):** `MendozaMontoya2020` * **Canonical:** — Also importable as: `DS003190`, `MendozaMontoya2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 384; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003190](https://openneuro.org/datasets/ds003190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003190](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) DOI: [https://doi.org/10.18112/openneuro.ds003190.v1.0.1](https://doi.org/10.18112/openneuro.ds003190.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003190 >>> dataset = DS003190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroepo multisession * **Study:** `ds003194` (OpenNeuro) * **Author (year):** `Vega2020_Neuroepo` * **Canonical:** — Also importable as: `DS003194`, `Vega2020_Neuroepo`. Modality: `eeg`. Subjects: 15; recordings: 29; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003194](https://openneuro.org/datasets/ds003194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003194](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) DOI: [https://doi.org/10.18112/openneuro.ds003194.v1.0.3](https://doi.org/10.18112/openneuro.ds003194.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003194 >>> dataset = DS003194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Placebo Neuroepo multisession * **Study:** `ds003195` (OpenNeuro) * **Author (year):** `Vega2020_Placebo` * **Canonical:** — Also importable as: `DS003195`, `Vega2020_Placebo`. Modality: `eeg`. Subjects: 10; recordings: 20; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003195](https://openneuro.org/datasets/ds003195) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003195](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) DOI: [https://doi.org/10.18112/openneuro.ds003195.v1.0.3](https://doi.org/10.18112/openneuro.ds003195.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003195 >>> dataset = DS003195(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG * **Study:** `ds003343` (OpenNeuro) * **Author (year):** `Schneider2020` * **Canonical:** — Also importable as: `DS003343`, `Schneider2020`. Modality: `eeg`. Subjects: 20; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003343](https://openneuro.org/datasets/ds003343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003343](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) DOI: [https://doi.org/10.18112/openneuro.ds003343.v2.0.1](https://doi.org/10.18112/openneuro.ds003343.v2.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003343 >>> dataset = DS003343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003352(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 1 - Light Pink Spiral * **Study:** `ds003352` (OpenNeuro) * **Author (year):** `Hermann2020` * **Canonical:** `Hermann2021` Also importable as: `DS003352`, `Hermann2020`, `Hermann2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 18; recordings: 138; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003352](https://openneuro.org/datasets/ds003352) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003352](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) DOI: [https://doi.org/10.18112/openneuro.ds003352.v1.0.0](https://doi.org/10.18112/openneuro.ds003352.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003352 >>> dataset = DS003352(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hermann2021']* ### *class* eegdash.dataset.dataset.DS003374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation * **Study:** `ds003374` (OpenNeuro) * **Author (year):** `Fedele2020` * **Canonical:** — Also importable as: `DS003374`, `Fedele2020`. Modality: `ieeg`; Experiment type: `Affect`; Subject type: `Epilepsy`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003374](https://openneuro.org/datasets/ds003374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003374](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) DOI: [https://doi.org/10.18112/openneuro.ds003374.v1.1.1](https://doi.org/10.18112/openneuro.ds003374.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003374 >>> dataset = DS003374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003380(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig * **Study:** `ds003380` (OpenNeuro) * **Author (year):** `Frasch2020` * **Canonical:** — Also importable as: `DS003380`, `Frasch2020`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 1; recordings: 5; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003380](https://openneuro.org/datasets/ds003380) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003380](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) DOI: [https://doi.org/10.18112/openneuro.ds003380.v1.0.0](https://doi.org/10.18112/openneuro.ds003380.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS003380 >>> dataset = DS003380(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroSpin hMT+ Localizer DATA (MEG & aMRI) * **Study:** `ds003392` (OpenNeuro) * **Author (year):** `Zilber2020` * **Canonical:** — Also importable as: `DS003392`, `Zilber2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 33; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003392](https://openneuro.org/datasets/ds003392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003392](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) DOI: [https://doi.org/10.18112/openneuro.ds003392.v1.0.4](https://doi.org/10.18112/openneuro.ds003392.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003392 >>> dataset = DS003392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 1) * **Study:** `ds003420` (OpenNeuro) * **Author (year):** `Mheich2020_HD` * **Canonical:** — Also importable as: `DS003420`, `Mheich2020_HD`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003420](https://openneuro.org/datasets/ds003420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003420](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) DOI: [https://doi.org/10.18112/openneuro.ds003420.v1.0.2](https://doi.org/10.18112/openneuro.ds003420.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003420 >>> dataset = DS003420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003421(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 2) * **Study:** `ds003421` (OpenNeuro) * **Author (year):** `Mheich2020_HD_EEGtask` * **Canonical:** — Also importable as: `DS003421`, `Mheich2020_HD_EEGtask`. Modality: `eeg`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003421](https://openneuro.org/datasets/ds003421) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003421](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) DOI: [https://doi.org/10.18112/openneuro.ds003421.v1.0.2](https://doi.org/10.18112/openneuro.ds003421.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003421 >>> dataset = DS003421(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003458(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three armed bandit gambling task * **Study:** `ds003458` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three` * **Canonical:** — Also importable as: `DS003458`, `Cavanagh2021_Three`. Modality: `eeg`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003458](https://openneuro.org/datasets/ds003458) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003458](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) DOI: [https://doi.org/10.18112/openneuro.ds003458.v1.1.0](https://doi.org/10.18112/openneuro.ds003458.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003458 >>> dataset = DS003458(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003474(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Selection and Depression * **Study:** `ds003474` (OpenNeuro) * **Author (year):** `Cavanagh2021_Probabilistic` * **Canonical:** — Also importable as: `DS003474`, `Cavanagh2021_Probabilistic`. Modality: `eeg`. Subjects: 122; recordings: 122; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003474](https://openneuro.org/datasets/ds003474) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003474](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) DOI: [https://doi.org/10.18112/openneuro.ds003474.v1.1.0](https://doi.org/10.18112/openneuro.ds003474.v1.1.0) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003474 >>> dataset = DS003474(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003478(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Depression rest * **Study:** `ds003478` (OpenNeuro) * **Author (year):** `Cavanagh2021_Depression` * **Canonical:** — Also importable as: `DS003478`, `Cavanagh2021_Depression`. Modality: `eeg`. Subjects: 122; recordings: 243; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003478](https://openneuro.org/datasets/ds003478) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003478](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) DOI: [https://doi.org/10.18112/openneuro.ds003478.v1.1.0](https://doi.org/10.18112/openneuro.ds003478.v1.1.0) NEMAR citation count: 22 ### Examples ```pycon >>> from eegdash.dataset import DS003478 >>> dataset = DS003478(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Logical reasoning study * **Study:** `ds003483` (OpenNeuro) * **Author (year):** `Cognitive2021` * **Canonical:** `Maestu2021` Also importable as: `DS003483`, `Cognitive2021`, `Maestu2021`. Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 41; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003483](https://openneuro.org/datasets/ds003483) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003483](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) DOI: [https://doi.org/10.18112/openneuro.ds003483.v1.0.2](https://doi.org/10.18112/openneuro.ds003483.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003483 >>> dataset = DS003483(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Maestu2021']* ### *class* eegdash.dataset.dataset.DS003490(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s * **Study:** `ds003490` (OpenNeuro) * **Author (year):** `Cavanagh2021_3` * **Canonical:** — Also importable as: `DS003490`, `Cavanagh2021_3`. Modality: `eeg`. Subjects: 50; recordings: 75; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003490](https://openneuro.org/datasets/ds003490) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003490](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) DOI: [https://doi.org/10.18112/openneuro.ds003490.v1.1.0](https://doi.org/10.18112/openneuro.ds003490.v1.1.0) NEMAR citation count: 13 ### Examples ```pycon >>> from eegdash.dataset import DS003490 >>> dataset = DS003490(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003498(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) interictal iEEG during slow-wave sleep with HFO markings * **Study:** `ds003498` (OpenNeuro) * **Author (year):** `Fedele2021` * **Canonical:** — Also importable as: `DS003498`, `Fedele2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 20; recordings: 385; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003498](https://openneuro.org/datasets/ds003498) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003498](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) DOI: [https://doi.org/10.18112/openneuro.ds003498.v1.0.1](https://doi.org/10.18112/openneuro.ds003498.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003498 >>> dataset = DS003498(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes * **Study:** `ds003505` (OpenNeuro) * **Author (year):** `Pascucci2021` * **Canonical:** `VEPCON` Also importable as: `DS003505`, `Pascucci2021`, `VEPCON`. Modality: `eeg`. 
Subjects: 19; recordings: 37; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
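The Notes also state that `query` only supports filters on fields listed in `ALLOWED_QUERY_FIELDS`. A sketch of that allow-list check, using an illustrative subset of field names (the real `ALLOWED_QUERY_FIELDS` is defined inside eegdash and is larger; `validate_query` is a hypothetical helper, not part of the public API):

```python
# Illustrative subset only; the real allow-list lives in eegdash.
ALLOWED_QUERY_FIELDS = {"dataset", "subject", "session", "task", "modality"}

def validate_query(query: dict) -> dict:
    """Sketch: reject MongoDB-style filters on unsupported fields."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    return query

validate_query({"subject": "01", "task": "faces"})   # accepted
# validate_query({"favorite_color": "blue"})         # would raise ValueError
```

In practice this means a filter such as `query={"task": "faces"}` narrows the recordings selected for a dataset class, while a filter on an unknown field fails fast instead of silently matching nothing.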
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003505](https://openneuro.org/datasets/ds003505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003505](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) DOI: [https://doi.org/10.18112/openneuro.ds003505.v1.1.1](https://doi.org/10.18112/openneuro.ds003505.v1.1.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003505 >>> dataset = DS003505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VEPCON']* ### *class* eegdash.dataset.dataset.DS003506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Reinforcement Learning in Parkinson’s * **Study:** `ds003506` (OpenNeuro) * **Author (year):** `Cavanagh2021_Reinforcement` * **Canonical:** — Also importable as: `DS003506`, `Cavanagh2021_Reinforcement`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003506](https://openneuro.org/datasets/ds003506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003506](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) DOI: [https://doi.org/10.18112/openneuro.ds003506.v1.1.0](https://doi.org/10.18112/openneuro.ds003506.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003506 >>> dataset = DS003506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict in Parkinson’s * **Study:** `ds003509` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon` * **Canonical:** — Also importable as: `DS003509`, `Cavanagh2021_Simon`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003509](https://openneuro.org/datasets/ds003509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003509](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) DOI: [https://doi.org/10.18112/openneuro.ds003509.v1.1.0](https://doi.org/10.18112/openneuro.ds003509.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003509 >>> dataset = DS003509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Attended Speaker Paradigm (Own Name in Ignored Stream) * **Study:** `ds003516` (OpenNeuro) * **Author (year):** `Holtze2021` * **Canonical:** — Also importable as: `DS003516`, `Holtze2021`. Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003516](https://openneuro.org/datasets/ds003516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003516](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) DOI: [https://doi.org/10.18112/openneuro.ds003516.v1.1.1](https://doi.org/10.18112/openneuro.ds003516.v1.1.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003516 >>> dataset = DS003516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Continuous gameplay of an 8-bit style video game * **Study:** `ds003517` (OpenNeuro) * **Author (year):** `Cavanagh2021_Continuous` * **Canonical:** — Also importable as: `DS003517`, `Cavanagh2021_Continuous`. Modality: `eeg`. Subjects: 17; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003517](https://openneuro.org/datasets/ds003517) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003517](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) DOI: [https://doi.org/10.18112/openneuro.ds003517.v1.1.0](https://doi.org/10.18112/openneuro.ds003517.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003517 >>> dataset = DS003517(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003518(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge * **Study:** `ds003518` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon_Conflict` * **Canonical:** — Also importable as: `DS003518`, `Cavanagh2021_Simon_Conflict`. Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003518](https://openneuro.org/datasets/ds003518) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003518](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) DOI: [https://doi.org/10.18112/openneuro.ds003518.v1.1.0](https://doi.org/10.18112/openneuro.ds003518.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003518 >>> dataset = DS003518(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory + Cabergoline Challenge * **Study:** `ds003519` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual` * **Canonical:** — Also importable as: `DS003519`, `Cavanagh2021_Visual`. Modality: `eeg`. Subjects: 27; recordings: 54; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003519](https://openneuro.org/datasets/ds003519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003519](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) DOI: [https://doi.org/10.18112/openneuro.ds003519.v1.1.0](https://doi.org/10.18112/openneuro.ds003519.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003519 >>> dataset = DS003519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI * **Study:** `ds003522` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three_Stim` * **Canonical:** — Also importable as: `DS003522`, `Cavanagh2021_Three_Stim`. Modality: `eeg`. Subjects: 96; recordings: 200; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003522](https://openneuro.org/datasets/ds003522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003522](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) DOI: [https://doi.org/10.18112/openneuro.ds003522.v1.1.0](https://doi.org/10.18112/openneuro.ds003522.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003522 >>> dataset = DS003522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory in Acute TBI * **Study:** `ds003523` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual_Working` * **Canonical:** — Also importable as: `DS003523`, `Cavanagh2021_Visual_Working`. Modality: `eeg`. Subjects: 91; recordings: 221; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003523](https://openneuro.org/datasets/ds003523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003523](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) DOI: [https://doi.org/10.18112/openneuro.ds003523.v1.1.0](https://doi.org/10.18112/openneuro.ds003523.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003523 >>> dataset = DS003523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system * **Study:** `ds003555` (OpenNeuro) * **Author (year):** `Cserpan2021` * **Canonical:** — Also importable as: `DS003555`, `Cserpan2021`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003555](https://openneuro.org/datasets/ds003555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003555](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) DOI: [https://doi.org/10.18112/openneuro.ds003555.v1.0.1](https://doi.org/10.18112/openneuro.ds003555.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003555 >>> dataset = DS003555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003568(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood induction in MDD and healthy adolescents * **Study:** `ds003568` (OpenNeuro) * **Author (year):** `Liuzzi2021` * **Canonical:** — Also importable as: `DS003568`, `Liuzzi2021`. Modality: `meg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 118; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003568](https://openneuro.org/datasets/ds003568) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003568](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) DOI: [https://doi.org/10.18112/openneuro.ds003568.v1.0.2](https://doi.org/10.18112/openneuro.ds003568.v1.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003568 >>> dataset = DS003568(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003570(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Improvisation and Musical Structures * **Study:** `ds003570` (OpenNeuro) * **Author (year):** `Goldman2021` * **Canonical:** — Also importable as: `DS003570`, `Goldman2021`. Modality: `eeg`. 
Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003570](https://openneuro.org/datasets/ds003570) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003570](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) DOI: [https://doi.org/10.18112/openneuro.ds003570.v1.0.0](https://doi.org/10.18112/openneuro.ds003570.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003570 >>> dataset = DS003570(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward biases spontaneous neural reactivation during sleep * **Study:** `ds003574` (OpenNeuro) * **Author (year):** `Sterpenich2021` * **Canonical:** — Also importable as: `DS003574`, `Sterpenich2021`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003574](https://openneuro.org/datasets/ds003574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003574](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) DOI: [https://doi.org/10.18112/openneuro.ds003574.v1.0.2](https://doi.org/10.18112/openneuro.ds003574.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003574 >>> dataset = DS003574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms * **Study:** `ds003602` (OpenNeuro) * **Author (year):** `Korucuoglu2021` * **Canonical:** — Also importable as: `DS003602`, `Korucuoglu2021`. Modality: `eeg`. Subjects: 118; recordings: 699; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003602](https://openneuro.org/datasets/ds003602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003602](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) DOI: [https://doi.org/10.18112/openneuro.ds003602.v1.0.0](https://doi.org/10.18112/openneuro.ds003602.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003602 >>> dataset = DS003602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions * **Study:** `ds003620` (OpenNeuro) * **Author (year):** `Liebherr2021` * **Canonical:** `Runabout` Also importable as: `DS003620`, `Liebherr2021`, `Runabout`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003620](https://openneuro.org/datasets/ds003620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003620](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) DOI: [https://doi.org/10.18112/openneuro.ds003620.v1.1.1](https://doi.org/10.18112/openneuro.ds003620.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003620 >>> dataset = DS003620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Runabout']* ### *class* eegdash.dataset.dataset.DS003626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Inner Speech * **Study:** `ds003626` (OpenNeuro) * **Author (year):** `Nieto2021` * **Canonical:** — Also importable as: `DS003626`, `Nieto2021`. Modality: `eeg`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003626](https://openneuro.org/datasets/ds003626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003626](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) DOI: [https://doi.org/10.18112/openneuro.ds003626.v2.0.0](https://doi.org/10.18112/openneuro.ds003626.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003626 >>> dataset = DS003626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003633(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ForrestGump-MEG * **Study:** `ds003633` (OpenNeuro) * **Author (year):** `Liu2021` * **Canonical:** `ForrestGump_MEG` Also importable as: `DS003633`, `Liu2021`, `ForrestGump_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 96; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003633](https://openneuro.org/datasets/ds003633) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003633](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) DOI: [https://doi.org/10.18112/openneuro.ds003633.v1.0.3](https://doi.org/10.18112/openneuro.ds003633.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003633 >>> dataset = DS003633(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ForrestGump_MEG']* ### *class* eegdash.dataset.dataset.DS003638(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms * **Study:** `ds003638` (OpenNeuro) * **Author (year):** `Cavanagh2021_Electrophysiological` * **Canonical:** — Also importable as: `DS003638`, `Cavanagh2021_Electrophysiological`. Modality: `eeg`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003638](https://openneuro.org/datasets/ds003638) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003638](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) DOI: [https://doi.org/10.18112/openneuro.ds003638.v1.0.0](https://doi.org/10.18112/openneuro.ds003638.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003638 >>> dataset = DS003638(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003645(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Face processing MEEG dataset with HED annotation * **Study:** `ds003645` (OpenNeuro) * **Author (year):** `Wakeman2021` * **Canonical:** — Also importable as: `DS003645`, `Wakeman2021`. Modality: `eeg, meg`. Subjects: 19; recordings: 224; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003645](https://openneuro.org/datasets/ds003645) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003645](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) DOI: [https://doi.org/10.18112/openneuro.ds003645.v2.0.2](https://doi.org/10.18112/openneuro.ds003645.v2.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003645 >>> dataset = DS003645(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003655(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VerbalWorkingMemory * **Study:** `ds003655` (OpenNeuro) * **Author (year):** `Pavlov2021_VerbalWorkingMemory` * **Canonical:** — Also importable as: `DS003655`, `Pavlov2021_VerbalWorkingMemory`. Modality: `eeg`. Subjects: 156; recordings: 156; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003655](https://openneuro.org/datasets/ds003655) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003655](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) DOI: [https://doi.org/10.18112/openneuro.ds003655.v1.0.0](https://doi.org/10.18112/openneuro.ds003655.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003655 >>> dataset = DS003655(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS * **Study:** `ds003670` (OpenNeuro) * **Author (year):** `Gebodh2021` * **Canonical:** — Also importable as: `DS003670`, `Gebodh2021`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 25; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
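The Notes also state that `query` only supports filters on fields listed in `ALLOWED_QUERY_FIELDS`. A sketch of that validation step, with an invented illustrative field set (the real `ALLOWED_QUERY_FIELDS` is defined inside eegdash and will differ):

```python
# Illustrative subset only; not the real eegdash constant.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run", "modality"}

def check_query_fields(query: dict) -> dict:
    """Reject MongoDB-style filters on fields outside the allowed set (sketch)."""
    unknown = sorted(set(query) - ALLOWED_QUERY_FIELDS)
    if unknown:
        raise ValueError(f"unsupported query fields: {unknown}")
    return query

print(check_query_fields({"task": "rest"}))
```

Validating field names up front, before the query is sent to the metadata backend, turns a silent empty result into an immediate, explicit error.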
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003670](https://openneuro.org/datasets/ds003670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003670](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) DOI: [https://doi.org/10.18112/openneuro.ds003670.v1.1.0](https://doi.org/10.18112/openneuro.ds003670.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003670 >>> dataset = DS003670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003682(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Model-based aversive learning in humans is supported by preferential task state reactivation * **Study:** `ds003682` (OpenNeuro) * **Author (year):** `Wise2021` * **Canonical:** — Also importable as: `DS003682`, `Wise2021`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 28; recordings: 336; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003682](https://openneuro.org/datasets/ds003682) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003682](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) DOI: [https://doi.org/10.18112/openneuro.ds003682.v1.0.0](https://doi.org/10.18112/openneuro.ds003682.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003682 >>> dataset = DS003682(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film * **Study:** `ds003688` (OpenNeuro) * **Author (year):** `Berezutskaya2021` * **Canonical:** — Also importable as: `DS003688`, `Berezutskaya2021`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 51; recordings: 107; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003688](https://openneuro.org/datasets/ds003688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003688](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) DOI: [https://doi.org/10.18112/openneuro.ds003688.v1.0.7](https://doi.org/10.18112/openneuro.ds003688.v1.0.7) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003688 >>> dataset = DS003688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003690(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks * **Study:** `ds003690` (OpenNeuro) * **Author (year):** `Ribeiro2021` * **Canonical:** — Also importable as: `DS003690`, `Ribeiro2021`. Modality: `eeg`. Subjects: 75; recordings: 375; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003690](https://openneuro.org/datasets/ds003690) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003690](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) DOI: [https://doi.org/10.18112/openneuro.ds003690.v1.0.0](https://doi.org/10.18112/openneuro.ds003690.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003690 >>> dataset = DS003690(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003694(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEGMEM * **Study:** `ds003694` (OpenNeuro) * **Author (year):** `Griffiths2021` * **Canonical:** `MEGMEM` Also importable as: `DS003694`, `Griffiths2021`, `MEGMEM`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 28; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003694](https://openneuro.org/datasets/ds003694) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003694](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) DOI: [https://doi.org/10.18112/openneuro.ds003694.v1.0.0](https://doi.org/10.18112/openneuro.ds003694.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003694 >>> dataset = DS003694(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MEGMEM']* ### *class* eegdash.dataset.dataset.DS003702(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Memory cuing * **Study:** `ds003702` (OpenNeuro) * **Author (year):** `Gregory2021` * **Canonical:** — Also importable as: `DS003702`, `Gregory2021`. Modality: `eeg`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003702](https://openneuro.org/datasets/ds003702) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003702](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) DOI: [https://doi.org/10.18112/openneuro.ds003702.v1.0.1](https://doi.org/10.18112/openneuro.ds003702.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003702 >>> dataset = DS003702(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Frequency Tagging of Syntactic Structure or Lexical Properties * **Study:** `ds003703` (OpenNeuro) * **Author (year):** `Kalenkovich2021` * **Canonical:** `Kalenkovich2019` Also importable as: `DS003703`, `Kalenkovich2021`, `Kalenkovich2019`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 102; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
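The query-merging behavior described in the Notes can be sketched in plain Python. This is an illustrative sketch, not the library's internal code; the field name `task` is an assumed example, and `ALLOWED_QUERY_FIELDS` remains the authoritative list of filterable fields.

```python
def merge_query(dataset_id, user_query=None):
    """Sketch of ANDing a user-supplied query with the fixed dataset filter."""
    user_query = dict(user_query or {})
    # The dataset constructors reject queries that try to override the
    # dataset selection, so the merged filter always pins `dataset`.
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds003703", {"task": "rest"})
print(merged)  # {'dataset': 'ds003703', 'task': 'rest'}
```

Passing the same keys through `query` therefore narrows, never widens, the dataset selection.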
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003703](https://openneuro.org/datasets/ds003703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003703](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) DOI: [https://doi.org/10.18112/openneuro.ds003703.v1.0.0](https://doi.org/10.18112/openneuro.ds003703.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003703 >>> dataset = DS003703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kalenkovich2019']* ### *class* eegdash.dataset.dataset.DS003708(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Basis profile curve identification to understand electrical stimulation effects in human brain networks * **Study:** `ds003708` (OpenNeuro) * **Author (year):** `Hermes2021` * **Canonical:** `Miller2021` Also importable as: `DS003708`, `Hermes2021`, `Miller2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003708](https://openneuro.org/datasets/ds003708) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003708](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) DOI: [https://doi.org/10.18112/openneuro.ds003708.v1.0.0](https://doi.org/10.18112/openneuro.ds003708.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003708 >>> dataset = DS003708(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Miller2021']* ### *class* eegdash.dataset.dataset.DS003710(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) APPLESEED Example Dataset * **Study:** `ds003710` (OpenNeuro) * **Author (year):** `Williams2021` * **Canonical:** `APPLESEED` Also importable as: `DS003710`, `Williams2021`, `APPLESEED`. Modality: `eeg`. Subjects: 13; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003710](https://openneuro.org/datasets/ds003710) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003710](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) DOI: [https://doi.org/10.18112/openneuro.ds003710.v1.0.2](https://doi.org/10.18112/openneuro.ds003710.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003710 >>> dataset = DS003710(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['APPLESEED']* ### *class* eegdash.dataset.dataset.DS003739(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Perturbed beam-walking task * **Study:** `ds003739` (OpenNeuro) * **Author (year):** `Peterson2021_Perturbed_beam_walking` * **Canonical:** — Also importable as: `DS003739`, `Peterson2021_Perturbed_beam_walking`. Modality: `eeg`. Subjects: 30; recordings: 120; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003739](https://openneuro.org/datasets/ds003739) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003739](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) DOI: [https://doi.org/10.18112/openneuro.ds003739.v1.0.2](https://doi.org/10.18112/openneuro.ds003739.v1.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003739 >>> dataset = DS003739(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003751(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset on Emotion with Naturalistic Stimuli (DENS) * **Study:** `ds003751` (OpenNeuro) * **Author (year):** `Mishra2021` * **Canonical:** `DENS` Also importable as: `DS003751`, `Mishra2021`, `DENS`. Modality: `eeg`. Subjects: 38; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003751](https://openneuro.org/datasets/ds003751) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003751](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) DOI: [https://doi.org/10.18112/openneuro.ds003751.v1.0.2](https://doi.org/10.18112/openneuro.ds003751.v1.0.2) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003751 >>> dataset = DS003751(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['DENS']* ### *class* eegdash.dataset.dataset.DS003753(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp * **Study:** `ds003753` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic` * **Canonical:** — Also importable as: `DS003753`, `Brown2021_Probabilistic`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003753](https://openneuro.org/datasets/ds003753) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003753](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) ### Examples ```pycon >>> from eegdash.dataset import DS003753 >>> dataset = DS003753(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003766(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking * **Study:** `ds003766` (OpenNeuro) * **Author (year):** `Chen2021` * **Canonical:** — Also importable as: `DS003766`, `Chen2021`. Modality: `eeg`. Subjects: 31; recordings: 124; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003766](https://openneuro.org/datasets/ds003766) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003766](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) DOI: [https://doi.org/10.18112/openneuro.ds003766.v2.0.3](https://doi.org/10.18112/openneuro.ds003766.v2.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003766 >>> dataset = DS003766(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simultaneous EEG and fMRI signals during sleep from humans * **Study:** `ds003768` (OpenNeuro) * **Author (year):** `Gu2021` * **Canonical:** — Also importable as: `DS003768`, `Gu2021`. Modality: `eeg`. Subjects: 33; recordings: 255; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003768](https://openneuro.org/datasets/ds003768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003768](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) DOI: [https://doi.org/10.18112/openneuro.ds003768.v1.0.0](https://doi.org/10.18112/openneuro.ds003768.v1.0.0) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS003768 >>> dataset = DS003768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Music Listening- Genre EEG dataset (MUSIN-G) * **Study:** `ds003774` (OpenNeuro) * **Author (year):** `Miyapuram2021` * **Canonical:** `MUSING` Also importable as: `DS003774`, `Miyapuram2021`, `MUSING`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 20; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003774](https://openneuro.org/datasets/ds003774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003774](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) DOI: [https://doi.org/10.18112/openneuro.ds003774.v1.0.0](https://doi.org/10.18112/openneuro.ds003774.v1.0.0) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003774 >>> dataset = DS003774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MUSING']* ### *class* eegdash.dataset.dataset.DS003775(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SRM Resting-state EEG * **Study:** `ds003775` (OpenNeuro) * **Author (year):** `HatlestadHall2021` * **Canonical:** — Also importable as: `DS003775`, `HatlestadHall2021`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 111; recordings: 153; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003775](https://openneuro.org/datasets/ds003775) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003775](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) DOI: [https://doi.org/10.18112/openneuro.ds003775.v1.2.1](https://doi.org/10.18112/openneuro.ds003775.v1.2.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003775 >>> dataset = DS003775(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003800(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Gamma Entrainment * **Study:** `ds003800` (OpenNeuro) * **Author (year):** `Lahijanian2021_Auditory` * **Canonical:** — Also importable as: `DS003800`, `Lahijanian2021_Auditory`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 13; recordings: 24; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
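The `data_dir` layout documented above (`cache_dir / dataset_id`) can be reproduced with `pathlib`. This sketches the documented caching convention only, not the attribute's implementation:

```python
from pathlib import Path

def local_data_dir(cache_dir, dataset_id):
    # Per the docs, each dataset is cached under cache_dir/dataset_id,
    # e.g. DS003800(cache_dir="./data") caches under ./data/ds003800.
    return Path(cache_dir) / dataset_id

print(local_data_dir("./data", "ds003800"))  # data/ds003800
```

Knowing this layout makes it easy to inspect or clean up a dataset's cached files outside the library.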
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003800](https://openneuro.org/datasets/ds003800) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003800](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) DOI: [https://doi.org/10.18112/openneuro.ds003800.v1.0.0](https://doi.org/10.18112/openneuro.ds003800.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003800 >>> dataset = DS003800(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural Tracking to go * **Study:** `ds003801` (OpenNeuro) * **Author (year):** `Straetmans2021` * **Canonical:** — Also importable as: `DS003801`, `Straetmans2021`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003801](https://openneuro.org/datasets/ds003801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003801](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) DOI: [https://doi.org/10.18112/openneuro.ds003801.v1.0.0](https://doi.org/10.18112/openneuro.ds003801.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003801 >>> dataset = DS003801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003805(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Gamma Entrainment * **Study:** `ds003805` (OpenNeuro) * **Author (year):** `Lahijanian2021_Multisensory` * **Canonical:** — Also importable as: `DS003805`, `Lahijanian2021_Multisensory`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003805](https://openneuro.org/datasets/ds003805) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003805](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) DOI: [https://doi.org/10.18112/openneuro.ds003805.v1.0.0](https://doi.org/10.18112/openneuro.ds003805.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003805 >>> dataset = DS003805(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery vs Rest - Low-Cost EEG System * **Study:** `ds003810` (OpenNeuro) * **Author (year):** `Peterson2021_Motor_Imagery_vs` * **Canonical:** — Also importable as: `DS003810`, `Peterson2021_Motor_Imagery_vs`. Modality: `eeg`. Subjects: 10; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003810](https://openneuro.org/datasets/ds003810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003810](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) DOI: [https://doi.org/10.18112/openneuro.ds003810.v2.0.2](https://doi.org/10.18112/openneuro.ds003810.v2.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003810 >>> dataset = DS003810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect * **Study:** `ds003816` (OpenNeuro) * **Author (year):** `Sun2024` * **Canonical:** — Also importable as: `DS003816`, `Sun2024`. Modality: `eeg`. Subjects: 48; recordings: 1077; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003816](https://openneuro.org/datasets/ds003816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003816](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) DOI: [https://doi.org/10.18112/openneuro.ds003816.v1.0.1](https://doi.org/10.18112/openneuro.ds003816.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS003816 >>> dataset = DS003816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003822(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp * **Study:** `ds003822` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic_Learning` * **Canonical:** — Also importable as: `DS003822`, `Brown2021_Probabilistic_Learning`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003822](https://openneuro.org/datasets/ds003822) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003822](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) ### Examples ```pycon >>> from eegdash.dataset import DS003822 >>> dataset = DS003822(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003825(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts * **Study:** `ds003825` (OpenNeuro) * **Author (year):** `Grootswagers2021` * **Canonical:** `THINGS`, `THINGS_EEG` Also importable as: `DS003825`, `Grootswagers2021`, `THINGS`, `THINGS_EEG`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003825](https://openneuro.org/datasets/ds003825) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003825](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) DOI: [https://doi.org/10.18112/openneuro.ds003825.v1.1.0](https://doi.org/10.18112/openneuro.ds003825.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003825 >>> dataset = DS003825(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['THINGS', 'THINGS_EEG']* ### *class* eegdash.dataset.dataset.DS003838(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest * **Study:** `ds003838` (OpenNeuro) * **Author (year):** `Pavlov2021_pupillometry` * **Canonical:** — Also importable as: `DS003838`, `Pavlov2021_pupillometry`. Modality: `eeg`. Subjects: 65; recordings: 130; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003838](https://openneuro.org/datasets/ds003838) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003838](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) DOI: [https://doi.org/10.18112/openneuro.ds003838.v1.0.6](https://doi.org/10.18112/openneuro.ds003838.v1.0.6) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003838 >>> dataset = DS003838(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG * **Study:** `ds003844` (OpenNeuro) * **Author (year):** `Zweiphenning2021` * **Canonical:** `RESPect_intraop` Also importable as: `DS003844`, `Zweiphenning2021`, `RESPect_intraop`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003844](https://openneuro.org/datasets/ds003844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003844](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) DOI: [https://doi.org/10.18112/openneuro.ds003844.v1.0.1](https://doi.org/10.18112/openneuro.ds003844.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003844 >>> dataset = DS003844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_intraop']* ### *class* eegdash.dataset.dataset.DS003846(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Prediction Error * **Study:** `ds003846` (OpenNeuro) * **Author (year):** `Gehrke2021` * **Canonical:** — Also importable as: `DS003846`, `Gehrke2021`. Modality: `eeg`. Subjects: 19; recordings: 50; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003846](https://openneuro.org/datasets/ds003846) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003846](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) DOI: [https://doi.org/10.18112/openneuro.ds003846.v2.0.2](https://doi.org/10.18112/openneuro.ds003846.v2.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003846 >>> dataset = DS003846(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG * **Study:** `ds003848` (OpenNeuro) * **Author (year):** `Blooijs2021` * **Canonical:** `RESPect_longterm` Also importable as: `DS003848`, `Blooijs2021`, `RESPect_longterm`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 22; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003848](https://openneuro.org/datasets/ds003848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003848](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) DOI: [https://doi.org/10.18112/openneuro.ds003848.v1.0.3](https://doi.org/10.18112/openneuro.ds003848.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003848 >>> dataset = DS003848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_longterm']* ### *class* eegdash.dataset.dataset.DS003876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Interictal-Multicenter-Dataset * **Study:** `ds003876` (OpenNeuro) * **Author (year):** `Gunnarsdottir2021` * **Canonical:** — Also importable as: `DS003876`, `Gunnarsdottir2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 39; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003876](https://openneuro.org/datasets/ds003876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003876](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) DOI: [https://doi.org/10.18112/openneuro.ds003876.v1.0.2](https://doi.org/10.18112/openneuro.ds003876.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003876 >>> dataset = DS003876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1 * **Study:** `ds003885` (OpenNeuro) * **Author (year):** `Shatek2021_E1` * **Canonical:** — Also importable as: `DS003885`, `Shatek2021_E1`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003885](https://openneuro.org/datasets/ds003885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003885](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) DOI: [https://doi.org/10.18112/openneuro.ds003885.v1.0.7](https://doi.org/10.18112/openneuro.ds003885.v1.0.7) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003885 >>> dataset = DS003885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003887(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2 * **Study:** `ds003887` (OpenNeuro) * **Author (year):** `Shatek2021_E2` * **Canonical:** — Also importable as: `DS003887`, `Shatek2021_E2`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003887](https://openneuro.org/datasets/ds003887) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003887](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) DOI: [https://doi.org/10.18112/openneuro.ds003887.v1.2.2](https://doi.org/10.18112/openneuro.ds003887.v1.2.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003887 >>> dataset = DS003887(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003922(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Correlation Detector * **Study:** `ds003922` (OpenNeuro) * **Author (year):** `Lerousseau2021` * **Canonical:** — Also importable as: `DS003922`, `Lerousseau2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 14; recordings: 164; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003922](https://openneuro.org/datasets/ds003922) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003922](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) DOI: [https://doi.org/10.18112/openneuro.ds003922.v1.0.1](https://doi.org/10.18112/openneuro.ds003922.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003922 >>> dataset = DS003922(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. Control Resting Task 1 * **Study:** `ds003944` (OpenNeuro) * **Author (year):** `Salisbury2021_First` * **Canonical:** — Also importable as: `DS003944`, `Salisbury2021_First`. Modality: `eeg`. Subjects: 82; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003944](https://openneuro.org/datasets/ds003944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003944](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) DOI: [https://doi.org/10.18112/openneuro.ds003944.v1.0.1](https://doi.org/10.18112/openneuro.ds003944.v1.0.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003944 >>> dataset = DS003944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003947(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. Control Resting Task 2 * **Study:** `ds003947` (OpenNeuro) * **Author (year):** `Salisbury2021_First_Episode` * **Canonical:** — Also importable as: `DS003947`, `Salisbury2021_First_Episode`. Modality: `eeg`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003947](https://openneuro.org/datasets/ds003947) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003947](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) DOI: [https://doi.org/10.18112/openneuro.ds003947.v1.0.1](https://doi.org/10.18112/openneuro.ds003947.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003947 >>> dataset = DS003947(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003969(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meditation vs thinking task * **Study:** `ds003969` (OpenNeuro) * **Author (year):** `Delorme2021` * **Canonical:** — Also importable as: `DS003969`, `Delorme2021`. Modality: `eeg`. Subjects: 98; recordings: 392; tasks: 4. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
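Each class documents its `data_dir` attribute as `cache_dir / dataset_id`. A minimal sketch of that layout, using a hypothetical helper rather than the library's own code:

```python
from pathlib import Path

def dataset_cache_dir(cache_dir: str, dataset_id: str) -> Path:
    """Mirror the documented `cache_dir / dataset_id` cache layout."""
    return Path(cache_dir) / dataset_id

print(dataset_cache_dir("./data", "ds003969").as_posix())
# → data/ds003969
```

So two datasets sharing one `cache_dir` occupy disjoint subdirectories, and re-instantiating a dataset class with the same `cache_dir` reuses previously downloaded files.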
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003969](https://openneuro.org/datasets/ds003969) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003969](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) DOI: [https://doi.org/10.18112/openneuro.ds003969.v1.0.0](https://doi.org/10.18112/openneuro.ds003969.v1.0.0) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003969 >>> dataset = DS003969(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS003987(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Amphetamine trials 5CCPT and Probabilistic Learning * **Study:** `ds003987` (OpenNeuro) * **Author (year):** `Cavanagh2022_Amphetamine_trials_5` * **Canonical:** — Also importable as: `DS003987`, `Cavanagh2022_Amphetamine_trials_5`. Modality: `eeg`. Subjects: 23; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003987](https://openneuro.org/datasets/ds003987) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003987](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) DOI: [https://doi.org/10.18112/openneuro.ds003987.v1.0.0](https://doi.org/10.18112/openneuro.ds003987.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003987 >>> dataset = DS003987(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004000(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fribourg Ultimatum Game in Schizophrenia Study * **Study:** `ds004000` (OpenNeuro) * **Author (year):** `Padee2022` * **Canonical:** — Also importable as: `DS004000`, `Padee2022`. Modality: `eeg`. Subjects: 43; recordings: 86; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004000](https://openneuro.org/datasets/ds004000) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004000](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) DOI: [https://doi.org/10.18112/openneuro.ds004000.v1.0.0](https://doi.org/10.18112/openneuro.ds004000.v1.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004000 >>> dataset = DS004000(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004010(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MAVIS * **Study:** `ds004010` (OpenNeuro) * **Author (year):** `Waschke2022` * **Canonical:** `MAVIS` Also importable as: `DS004010`, `Waschke2022`, `MAVIS`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004010](https://openneuro.org/datasets/ds004010) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004010](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) DOI: [https://doi.org/10.18112/openneuro.ds004010.v1.0.0](https://doi.org/10.18112/openneuro.ds004010.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004010 >>> dataset = DS004010(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAVIS']* ### *class* eegdash.dataset.dataset.DS004011(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The nature of neural object representations during dynamic occlusion * **Study:** `ds004011` (OpenNeuro) * **Author (year):** `Teichmann2022` * **Canonical:** — Also importable as: `DS004011`, `Teichmann2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 22; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004011](https://openneuro.org/datasets/ds004011) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004011](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) DOI: [https://doi.org/10.18112/openneuro.ds004011.v1.0.3](https://doi.org/10.18112/openneuro.ds004011.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004011 >>> dataset = DS004011(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BRAR_NQ * **Study:** `ds004012` (OpenNeuro) * **Author (year):** `Rani2022` * **Canonical:** `Rani2019` Also importable as: `DS004012`, `Rani2022`, `Rani2019`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 30; recordings: 294; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004012](https://openneuro.org/datasets/ds004012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004012](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) DOI: [https://doi.org/10.18112/openneuro.ds004012.v1.0.0](https://doi.org/10.18112/openneuro.ds004012.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004012 >>> dataset = DS004012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Rani2019']* ### *class* eegdash.dataset.dataset.DS004015(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Attended speaker paradigm (cEEGrid data) * **Study:** `ds004015` (OpenNeuro) * **Author (year):** `Holtze2022_Attended` * **Canonical:** — Also importable as: `DS004015`, `Holtze2022_Attended`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004015](https://openneuro.org/datasets/ds004015) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004015](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) DOI: [https://doi.org/10.18112/openneuro.ds004015.v1.0.2](https://doi.org/10.18112/openneuro.ds004015.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004015 >>> dataset = DS004015(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004017(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Embodied Learning for Literacy EEG * **Study:** `ds004017` (OpenNeuro) * **Author (year):** `Damsgaard2022` * **Canonical:** — Also importable as: `DS004017`, `Damsgaard2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 63; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004017](https://openneuro.org/datasets/ds004017) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004017](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) DOI: [https://doi.org/10.18112/openneuro.ds004017.v1.0.3](https://doi.org/10.18112/openneuro.ds004017.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004017 >>> dataset = DS004017(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz * **Study:** `ds004018` (OpenNeuro) * **Author (year):** `Grootswagers2022_RSVP` * **Canonical:** — Also importable as: `DS004018`, `Grootswagers2022_RSVP`. Modality: `eeg`. Subjects: 16; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004018](https://openneuro.org/datasets/ds004018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004018](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) DOI: [https://doi.org/10.18112/openneuro.ds004018.v2.0.0](https://doi.org/10.18112/openneuro.ds004018.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004018 >>> dataset = DS004018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004019(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study * **Study:** `ds004019` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect` * **Canonical:** — Also importable as: `DS004019`, `AlatorreCruz2022_Effect`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Obese`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004019](https://openneuro.org/datasets/ds004019) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004019](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) DOI: [https://doi.org/10.18112/openneuro.ds004019.v1.0.0](https://doi.org/10.18112/openneuro.ds004019.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004019 >>> dataset = DS004019(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004022(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment * **Study:** `ds004022` (OpenNeuro) * **Author (year):** `Lee2022` * **Canonical:** — Also importable as: `DS004022`, `Lee2022`. Modality: `eeg`. Subjects: 7; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004022](https://openneuro.org/datasets/ds004022) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004022](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) DOI: [https://doi.org/10.18112/openneuro.ds004022.v1.0.0](https://doi.org/10.18112/openneuro.ds004022.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004022 >>> dataset = DS004022(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004024(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL) * **Study:** `ds004024` (OpenNeuro) * **Author (year):** `Pavon2022` * **Canonical:** — Also importable as: `DS004024`, `Pavon2022`. Modality: `eeg`. Subjects: 13; recordings: 497; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004024](https://openneuro.org/datasets/ds004024) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004024](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) DOI: [https://doi.org/10.18112/openneuro.ds004024.v1.0.1](https://doi.org/10.18112/openneuro.ds004024.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004024 >>> dataset = DS004024(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrode walking study * **Study:** `ds004033` (OpenNeuro) * **Author (year):** `Scanlon2022` * **Canonical:** — Also importable as: `DS004033`, `Scanlon2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 36; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004033](https://openneuro.org/datasets/ds004033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004033](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) DOI: [https://doi.org/10.18112/openneuro.ds004033.v1.0.0](https://doi.org/10.18112/openneuro.ds004033.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004033 >>> dataset = DS004033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Trance channeling EEG study * **Study:** `ds004040` (OpenNeuro) * **Author (year):** `Cannard2022` * **Canonical:** — Also importable as: `DS004040`, `Cannard2022`. Modality: `eeg`. Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004040](https://openneuro.org/datasets/ds004040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004040](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) DOI: [https://doi.org/10.18112/openneuro.ds004040.v1.0.0](https://doi.org/10.18112/openneuro.ds004040.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004040 >>> dataset = DS004040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004043(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes * **Study:** `ds004043` (OpenNeuro) * **Author (year):** `Moerel2022_time` * **Canonical:** — Also importable as: `DS004043`, `Moerel2022_time`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004043](https://openneuro.org/datasets/ds004043) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004043](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) DOI: [https://doi.org/10.18112/openneuro.ds004043.v1.1.0](https://doi.org/10.18112/openneuro.ds004043.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004043 >>> dataset = DS004043(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004067(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Moral conviction and metacognitive ability shape multiple stages of information processing * **Study:** `ds004067` (OpenNeuro) * **Author (year):** `Yoder2022` * **Canonical:** — Also importable as: `DS004067`, `Yoder2022`. Modality: `eeg`. Subjects: 80; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004067](https://openneuro.org/datasets/ds004067) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004067](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) DOI: [https://doi.org/10.18112/openneuro.ds004067.v1.0.1](https://doi.org/10.18112/openneuro.ds004067.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004067 >>> dataset = DS004067(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004075(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) what_are_we_talking_about * **Study:** `ds004075` (OpenNeuro) * **Author (year):** `Boncz2022` * **Canonical:** — Also importable as: `DS004075`, `Boncz2022`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 29; recordings: 116; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004075](https://openneuro.org/datasets/ds004075) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004075](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) DOI: [https://doi.org/10.18112/openneuro.ds004075.v1.0.0](https://doi.org/10.18112/openneuro.ds004075.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004075 >>> dataset = DS004075(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A synchronized multimodal neuroimaging dataset to study brain language processing * **Study:** `ds004078` (OpenNeuro) * **Author (year):** `Wang2022_StudyBRAIN` * **Canonical:** — Also importable as: `DS004078`, `Wang2022_StudyBRAIN`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. 
Subjects: 12; recordings: 720; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
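The Notes above state that the user-supplied `query` must not contain the key `dataset` and that it is combined with the dataset filter into the merged `query` attribute. A minimal sketch of that merging logic, using an illustrative helper name (not EEGDash internals):

```python
# Hypothetical sketch of merging a user query with the fixed dataset
# filter, mirroring the documented constraints: the user query must not
# contain the key "dataset", and the merged query always pins the
# dataset id. The helper name is illustrative only.

def merge_dataset_query(dataset_id, user_query=None):
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query)  # AND semantics: all keys must match
    return merged

print(merge_dataset_query("ds004078", {"subject": "01"}))
# {'dataset': 'ds004078', 'subject': '01'}
```

In the real classes this merge happens in the constructor, so `DS004078(cache_dir="./data", query={"subject": "01"})` selects only that subject's recordings within ds004078.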
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004078](https://openneuro.org/datasets/ds004078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004078](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) DOI: [https://doi.org/10.18112/openneuro.ds004078.v1.0.4](https://doi.org/10.18112/openneuro.ds004078.v1.0.4) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004078 >>> dataset = DS004078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004080(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CCEP ECoG dataset across age 4-51 * **Study:** `ds004080` (OpenNeuro) * **Author (year):** `Blooijs2023_CCEP_ECoG` * **Canonical:** `RESPect_CCEP` Also importable as: `DS004080`, `Blooijs2023_CCEP_ECoG`, `RESPect_CCEP`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 74; recordings: 117; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004080](https://openneuro.org/datasets/ds004080) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004080](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) DOI: [https://doi.org/10.18112/openneuro.ds004080.v1.2.4](https://doi.org/10.18112/openneuro.ds004080.v1.2.4) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004080 >>> dataset = DS004080(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_CCEP']* ### *class* eegdash.dataset.dataset.DS004100(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HUP iEEG Epilepsy Dataset * **Study:** `ds004100` (OpenNeuro) * **Author (year):** `Bernabei2022` * **Canonical:** `HUPiEEG` Also importable as: `DS004100`, `Bernabei2022`, `HUPiEEG`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 57; recordings: 319; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004100](https://openneuro.org/datasets/ds004100) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004100](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) DOI: [https://doi.org/10.18112/openneuro.ds004100.v1.1.3](https://doi.org/10.18112/openneuro.ds004100.v1.1.3) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS004100 >>> dataset = DS004100(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HUPiEEG']* ### *class* eegdash.dataset.dataset.DS004105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Auditory Cueing * **Study:** `ds004105` (OpenNeuro) * **Author (year):** `Garcia2022` * **Canonical:** `BCIT_Auditory_Cueing` Also importable as: `DS004105`, `Garcia2022`, `BCIT_Auditory_Cueing`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004105](https://openneuro.org/datasets/ds004105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004105](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) DOI: [https://doi.org/10.18112/openneuro.ds004105.v1.0.0](https://doi.org/10.18112/openneuro.ds004105.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004105 >>> dataset = DS004105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT_Auditory_Cueing']* ### *class* eegdash.dataset.dataset.DS004106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Advanced Guard Duty * **Study:** `ds004106` (OpenNeuro) * **Author (year):** `Touryan2022` * **Canonical:** `BCITAdvancedGuardDuty` Also importable as: `DS004106`, `Touryan2022`, `BCITAdvancedGuardDuty`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 27; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004106](https://openneuro.org/datasets/ds004106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004106](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) DOI: [https://doi.org/10.18112/openneuro.ds004106.v1.0.0](https://doi.org/10.18112/openneuro.ds004106.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004106 >>> dataset = DS004106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITAdvancedGuardDuty']* ### *class* eegdash.dataset.dataset.DS004107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MIND DATA * **Study:** `ds004107` (OpenNeuro) * **Author (year):** `Weisend2022` * **Canonical:** `Weisend2007` Also importable as: `DS004107`, `Weisend2022`, `Weisend2007`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 9; recordings: 89; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004107](https://openneuro.org/datasets/ds004107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004107](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) DOI: [https://doi.org/10.18112/openneuro.ds004107.v1.0.0](https://doi.org/10.18112/openneuro.ds004107.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004107 >>> dataset = DS004107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Weisend2007']* ### *class* eegdash.dataset.dataset.DS004117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sternberg Working Memory * **Study:** `ds004117` (OpenNeuro) * **Author (year):** `Onton2022` * **Canonical:** — Also importable as: `DS004117`, `Onton2022`. Modality: `eeg`. Subjects: 23; recordings: 85; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004117](https://openneuro.org/datasets/ds004117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004117](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) DOI: [https://doi.org/10.18112/openneuro.ds004117.v1.0.1](https://doi.org/10.18112/openneuro.ds004117.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004117 >>> dataset = DS004117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Calibration Driving * **Study:** `ds004118` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Calibration` * **Canonical:** `Touryan1999` Also importable as: `DS004118`, `Touryan2022_BCIT_Calibration`, `Touryan1999`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 156; recordings: 247; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
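The Notes sections describe `query` as supporting MongoDB-style filters restricted to fields in `ALLOWED_QUERY_FIELDS`. A self-contained sketch of what such matching looks like; the field whitelist and operator support shown here are assumptions for illustration, not the library's actual implementation:

```python
# Illustrative MongoDB-style matching over metadata records, restricted
# to a field whitelist (analogous in spirit to ALLOWED_QUERY_FIELDS).
# Field names and the supported operator ($in) are assumptions.

ALLOWED_FIELDS = {"dataset", "subject", "task", "modality"}

def matches(record, query):
    for field, cond in query.items():
        if field not in ALLOWED_FIELDS:
            raise KeyError(f"field not allowed in query: {field}")
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds004118", "subject": "01", "task": "driving"},
    {"dataset": "ds004118", "subject": "02", "task": "driving"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["01"]}})]
print(len(hits))  # 1
```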
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004118](https://openneuro.org/datasets/ds004118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004118](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) DOI: [https://doi.org/10.18112/openneuro.ds004118.v1.0.1](https://doi.org/10.18112/openneuro.ds004118.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004118 >>> dataset = DS004118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Touryan1999']* ### *class* eegdash.dataset.dataset.DS004119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Basic Guard Duty * **Study:** `ds004119` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Basic` * **Canonical:** `BCIT` Also importable as: `DS004119`, `Touryan2022_BCIT_Basic`, `BCIT`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004119](https://openneuro.org/datasets/ds004119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004119](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) DOI: [https://doi.org/10.18112/openneuro.ds004119.v1.0.0](https://doi.org/10.18112/openneuro.ds004119.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004119 >>> dataset = DS004119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT']* ### *class* eegdash.dataset.dataset.DS004120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Baseline Driving * **Study:** `ds004120` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Baseline` * **Canonical:** `BCITBaselineDriving` Also importable as: `DS004120`, `Touryan2022_BCIT_Baseline`, `BCITBaselineDriving`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 109; recordings: 131; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004120](https://openneuro.org/datasets/ds004120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004120](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) DOI: [https://doi.org/10.18112/openneuro.ds004120.v1.0.0](https://doi.org/10.18112/openneuro.ds004120.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004120 >>> dataset = DS004120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITBaselineDriving']* ### *class* eegdash.dataset.dataset.DS004121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Mind Wandering * **Study:** `ds004121` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Mind` * **Canonical:** `BCITMindWandering` Also importable as: `DS004121`, `Touryan2022_BCIT_Mind`, `BCITMindWandering`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004121](https://openneuro.org/datasets/ds004121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004121](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) DOI: [https://doi.org/10.18112/openneuro.ds004121.v1.0.0](https://doi.org/10.18112/openneuro.ds004121.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004121 >>> dataset = DS004121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITMindWandering']* ### *class* eegdash.dataset.dataset.DS004122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Speed Control * **Study:** `ds004122` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Speed` * **Canonical:** — Also importable as: `DS004122`, `Touryan2022_BCIT_Speed`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 32; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004122](https://openneuro.org/datasets/ds004122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004122](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) DOI: [https://doi.org/10.18112/openneuro.ds004122.v1.0.0](https://doi.org/10.18112/openneuro.ds004122.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004122 >>> dataset = DS004122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Traffic Complexity * **Study:** `ds004123` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Traffic` * **Canonical:** `BCIT_Traffic_Complexity` Also importable as: `DS004123`, `Touryan2022_BCIT_Traffic`, `BCIT_Traffic_Complexity`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 29; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004123](https://openneuro.org/datasets/ds004123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004123](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) DOI: [https://doi.org/10.18112/openneuro.ds004123.v1.0.0](https://doi.org/10.18112/openneuro.ds004123.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004123 >>> dataset = DS004123(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT_Traffic_Complexity']* ### *class* eegdash.dataset.dataset.DS004127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory Cortex Rat DISC Data * **Study:** `ds004127` (OpenNeuro) * **Author (year):** `Abrego2022` * **Canonical:** — Also importable as: `DS004127`, `Abrego2022`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 8; recordings: 73; tasks: 11. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004127](https://openneuro.org/datasets/ds004127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004127](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) DOI: [https://doi.org/10.18112/openneuro.ds004127.v3.0.0](https://doi.org/10.18112/openneuro.ds004127.v3.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004127 >>> dataset = DS004127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Average Task Value * **Study:** `ds004147` (OpenNeuro) * **Author (year):** `Hassall2022_Average` * **Canonical:** — Also importable as: `DS004147`, `Hassall2022_Average`. 
Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
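Each class documents its `data_dir` attribute as `cache_dir / dataset_id`, i.e. every dataset caches under its own subdirectory of the shared cache. That layout is plain path arithmetic, sketched here with `pathlib` (this is not EEGDash code, just the documented convention):

```python
# Sketch of the documented cache layout: dataset files live under
# cache_dir / dataset_id, which is what the `data_dir` attribute exposes.

from pathlib import Path

def data_dir(cache_dir, dataset_id):
    return Path(cache_dir) / dataset_id

print(data_dir("./data", "ds004147").as_posix())
# data/ds004147
```

Because each dataset id gets its own subdirectory, a single `cache_dir` can be shared safely across many dataset classes.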
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004147](https://openneuro.org/datasets/ds004147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004147](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) DOI: [https://doi.org/10.18112/openneuro.ds004147.v1.0.2](https://doi.org/10.18112/openneuro.ds004147.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004147 >>> dataset = DS004147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A test-retest resting and cognitive state EEG dataset * **Study:** `ds004148` (OpenNeuro) * **Author (year):** `Wang2022_test_retest_resting` * **Canonical:** — Also importable as: `DS004148`, `Wang2022_test_retest_resting`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 60; recordings: 900; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004148](https://openneuro.org/datasets/ds004148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004148](https://nemar.org/dataexplorer/detail?dataset_id=ds004148) DOI: [https://doi.org/10.18112/openneuro.ds004148.v1.0.0](https://doi.org/10.18112/openneuro.ds004148.v1.0.0) NEMAR citation count: 12 ### Examples ```pycon >>> from eegdash.dataset import DS004148 >>> dataset = DS004148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study * **Study:** `ds004151` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect_obesity` * **Canonical:** — Also importable as: `DS004151`, `AlatorreCruz2022_Effect_obesity`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Obese`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004151](https://openneuro.org/datasets/ds004151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004151](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) DOI: [https://doi.org/10.18112/openneuro.ds004151.v1.0.0](https://doi.org/10.18112/openneuro.ds004151.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004151 >>> dataset = DS004151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Drum Trainer * **Study:** `ds004152` (OpenNeuro) * **Author (year):** `Hassall2022_Drum` * **Canonical:** — Also importable as: `DS004152`, `Hassall2022_Drum`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004152](https://openneuro.org/datasets/ds004152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004152](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) DOI: [https://doi.org/10.18112/openneuro.ds004152.v1.1.2](https://doi.org/10.18112/openneuro.ds004152.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004152 >>> dataset = DS004152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial * **Study:** `ds004166` (OpenNeuro) * **Author (year):** `Li2022` * **Canonical:** — Also importable as: `DS004166`, `Li2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 71; recordings: 213; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004166](https://openneuro.org/datasets/ds004166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004166](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) DOI: [https://doi.org/10.18112/openneuro.ds004166.v1.0.0](https://doi.org/10.18112/openneuro.ds004166.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004166 >>> dataset = DS004166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual ECoG dataset * **Study:** `ds004194` (OpenNeuro) * **Author (year):** `Groen2022` * **Canonical:** — Also importable as: `DS004194`, `Groen2022`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 14; recordings: 209; tasks: 7. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004194](https://openneuro.org/datasets/ds004194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004194](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) DOI: [https://doi.org/10.18112/openneuro.ds004194.v3.0.0](https://doi.org/10.18112/openneuro.ds004194.v3.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004194 >>> dataset = DS004194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bimodal dataset on Inner speech * **Study:** `ds004196` (OpenNeuro) * **Author (year):** `Liwicki2022` * **Canonical:** — Also importable as: `DS004196`, `Liwicki2022`. Modality: `eeg`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004196](https://openneuro.org/datasets/ds004196) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004196](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) DOI: [https://doi.org/10.18112/openneuro.ds004196.v2.0.2](https://doi.org/10.18112/openneuro.ds004196.v2.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004196 >>> dataset = DS004196(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Temporal Scaling * **Study:** `ds004200` (OpenNeuro) * **Author (year):** `Hassall2022_Temporal` * **Canonical:** — Also importable as: `DS004200`, `Hassall2022_Temporal`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004200](https://openneuro.org/datasets/ds004200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004200](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) DOI: [https://doi.org/10.18112/openneuro.ds004200.v1.0.1](https://doi.org/10.18112/openneuro.ds004200.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004200 >>> dataset = DS004200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-MEG * **Study:** `ds004212` (OpenNeuro) * **Author (year):** `Hebart2022` * **Canonical:** `THINGS_MEG`, `THINGSMEG` Also importable as: `DS004212`, `Hebart2022`, `THINGS_MEG`, `THINGSMEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 500; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004212](https://openneuro.org/datasets/ds004212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004212](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) DOI: [https://doi.org/10.18112/openneuro.ds004212.v3.0.0](https://doi.org/10.18112/openneuro.ds004212.v3.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004212 >>> dataset = DS004212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['THINGS_MEG', 'THINGSMEG']* ### *class* eegdash.dataset.dataset.DS004229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) amnoise * **Study:** `ds004229` (OpenNeuro) * **Author (year):** `Mittag2022` * **Canonical:** — Also importable as: `DS004229`, `Mittag2022`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Dyslexia`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
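As the `data_dir` attribute above notes, each dataset caches under `cache_dir / dataset_id`, so a single cache directory can be shared across datasets without collisions. A minimal sketch of the resulting layout (paths are illustrative):

```python
from pathlib import Path

# One shared cache root; each dataset gets its own subdirectory.
cache_dir = Path("./data")
for dataset_id in ("ds004229", "ds004252"):
    data_dir = cache_dir / dataset_id  # matches the documented data_dir attribute
    print(data_dir)
```

Reusing the same `cache_dir` across `DS...` classes therefore also reuses any already-downloaded files for a given dataset.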
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004229](https://openneuro.org/datasets/ds004229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004229](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) DOI: [https://doi.org/10.18112/openneuro.ds004229.v1.0.3](https://doi.org/10.18112/openneuro.ds004229.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004229 >>> dataset = DS004229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004252(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rotation-tolerant representations elucidate the time course of high-level object processing * **Study:** `ds004252` (OpenNeuro) * **Author (year):** `Moerel2022_Rotation` * **Canonical:** — Also importable as: `DS004252`, `Moerel2022_Rotation`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004252](https://openneuro.org/datasets/ds004252) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004252](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) DOI: [https://doi.org/10.18112/openneuro.ds004252.v1.0.2](https://doi.org/10.18112/openneuro.ds004252.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004252 >>> dataset = DS004252(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Encoding of Sound Source Elevation in Human Cortex * **Study:** `ds004256` (OpenNeuro) * **Author (year):** `Bialas2022` * **Canonical:** — Also importable as: `DS004256`, `Bialas2022`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 53; recordings: 53; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004256](https://openneuro.org/datasets/ds004256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004256](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) DOI: [https://doi.org/10.18112/openneuro.ds004256.v1.0.5](https://doi.org/10.18112/openneuro.ds004256.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004256 >>> dataset = DS004256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Continuous Feedback Processing * **Study:** `ds004262` (OpenNeuro) * **Author (year):** `Hassall2022_Continuous` * **Canonical:** — Also importable as: `DS004262`, `Hassall2022_Continuous`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004262](https://openneuro.org/datasets/ds004262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004262](https://nemar.org/dataexplorer/detail?dataset_id=ds004262) DOI: [https://doi.org/10.18112/openneuro.ds004262.v1.0.0](https://doi.org/10.18112/openneuro.ds004262.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004262 >>> dataset = DS004262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Steer the Ship * **Study:** `ds004264` (OpenNeuro) * **Author (year):** `Hassall2022_Steer` * **Canonical:** — Also importable as: `DS004264`, `Hassall2022_Steer`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004264](https://openneuro.org/datasets/ds004264) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004264](https://nemar.org/dataexplorer/detail?dataset_id=ds004264) DOI: [https://doi.org/10.18112/openneuro.ds004264.v1.1.0](https://doi.org/10.18112/openneuro.ds004264.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004264 >>> dataset = DS004264(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004276(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory single word recognition in MEG * **Study:** `ds004276` (OpenNeuro) * **Author (year):** `Gaston2022` * **Canonical:** — Also importable as: `DS004276`, `Gaston2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004276](https://openneuro.org/datasets/ds004276) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004276](https://nemar.org/dataexplorer/detail?dataset_id=ds004276) DOI: [https://doi.org/10.18112/openneuro.ds004276.v1.0.0](https://doi.org/10.18112/openneuro.ds004276.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004276 >>> dataset = DS004276(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004278(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sustained Neural Representations of Personally Familiar People and Places During Cued Recall * **Study:** `ds004278` (OpenNeuro) * **Author (year):** `Kidder2022` * **Canonical:** `Kidder2024` Also importable as: `DS004278`, `Kidder2022`, `Kidder2024`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004278](https://openneuro.org/datasets/ds004278) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004278](https://nemar.org/dataexplorer/detail?dataset_id=ds004278) DOI: [https://doi.org/10.18112/openneuro.ds004278.v1.0.1](https://doi.org/10.18112/openneuro.ds004278.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004278 >>> dataset = DS004278(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kidder2024']* ### *class* eegdash.dataset.dataset.DS004279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Large Spanish EEG * **Study:** `ds004279` (OpenNeuro) * **Author (year):** `Araya2022` * **Canonical:** — Also importable as: `DS004279`, `Araya2022`. Modality: `eeg`. Subjects: 56; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004279](https://openneuro.org/datasets/ds004279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004279](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) DOI: [https://doi.org/10.18112/openneuro.ds004279.v1.1.2](https://doi.org/10.18112/openneuro.ds004279.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004279 >>> dataset = DS004279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) eeg-neuroforecasting * **Study:** `ds004284` (OpenNeuro) * **Author (year):** `Veillette2022` * **Canonical:** — Also importable as: `DS004284`, `Veillette2022`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004284](https://openneuro.org/datasets/ds004284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004284](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) DOI: [https://doi.org/10.18112/openneuro.ds004284.v1.0.0](https://doi.org/10.18112/openneuro.ds004284.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004284 >>> dataset = DS004284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004295(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward gain and punishment avoidance reversal learning * **Study:** `ds004295` (OpenNeuro) * **Author (year):** `Stolz2022` * **Canonical:** — Also importable as: `DS004295`, `Stolz2022`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004295](https://openneuro.org/datasets/ds004295) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004295](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) DOI: [https://doi.org/10.18112/openneuro.ds004295.v1.0.0](https://doi.org/10.18112/openneuro.ds004295.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004295 >>> dataset = DS004295(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004306(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Semantic Imagination and Perception Dataset * **Study:** `ds004306` (OpenNeuro) * **Author (year):** `Wilson2022` * **Canonical:** — Also importable as: `DS004306`, `Wilson2022`. Modality: `eeg`. 
Subjects: 12; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004306](https://openneuro.org/datasets/ds004306) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004306](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) DOI: [https://doi.org/10.18112/openneuro.ds004306.v1.0.2](https://doi.org/10.18112/openneuro.ds004306.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004306 >>> dataset = DS004306(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 1 * **Study:** `ds004315` (OpenNeuro) * **Author (year):** `Cavanagh2022_E1` * **Canonical:** — Also importable as: `DS004315`, `Cavanagh2022_E1`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004315](https://openneuro.org/datasets/ds004315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004315](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) DOI: [https://doi.org/10.18112/openneuro.ds004315.v1.0.0](https://doi.org/10.18112/openneuro.ds004315.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004315 >>> dataset = DS004315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 2 * **Study:** `ds004317` (OpenNeuro) * **Author (year):** `Cavanagh2022_E2` * **Canonical:** — Also importable as: `DS004317`, `Cavanagh2022_E2`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004317](https://openneuro.org/datasets/ds004317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004317](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) DOI: [https://doi.org/10.18112/openneuro.ds004317.v1.0.3](https://doi.org/10.18112/openneuro.ds004317.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004317 >>> dataset = DS004317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004324(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ToonFaces * **Study:** `ds004324` (OpenNeuro) * **Author (year):** `Chacon2022` * **Canonical:** `ToonFaces` Also importable as: `DS004324`, `Chacon2022`, `ToonFaces`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004324](https://openneuro.org/datasets/ds004324) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004324](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) DOI: [https://doi.org/10.18112/openneuro.ds004324.v1.0.0](https://doi.org/10.18112/openneuro.ds004324.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004324 >>> dataset = DS004324(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ToonFaces']* ### *class* eegdash.dataset.dataset.DS004330(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG) * **Study:** `ds004330` (OpenNeuro) * **Author (year):** `Singer2022` * **Canonical:** — Also importable as: `DS004330`, `Singer2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 270; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004330](https://openneuro.org/datasets/ds004330) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004330](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) DOI: [https://doi.org/10.18112/openneuro.ds004330.v1.0.0](https://doi.org/10.18112/openneuro.ds004330.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004330 >>> dataset = DS004330(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FLUX: A pipeline for MEG analysis * **Study:** `ds004346` (OpenNeuro) * **Author (year):** `Ferrante2022` * **Canonical:** `FLUX` Also importable as: `DS004346`, `Ferrante2022`, `FLUX`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004346](https://openneuro.org/datasets/ds004346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004346](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) DOI: [https://doi.org/10.18112/openneuro.ds004346.v1.0.8](https://doi.org/10.18112/openneuro.ds004346.v1.0.8) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004346 >>> dataset = DS004346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FLUX']* ### *class* eegdash.dataset.dataset.DS004347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Symmetry perception and affective responses: a combined EEG/EMG study * **Study:** `ds004347` (OpenNeuro) * **Author (year):** `Makin2022` * **Canonical:** — Also importable as: `DS004347`, `Makin2022`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004347](https://openneuro.org/datasets/ds004347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004347](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) DOI: [https://doi.org/10.18112/openneuro.ds004347.v1.0.0](https://doi.org/10.18112/openneuro.ds004347.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004347 >>> dataset = DS004347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2017 (EESM17) * **Study:** `ds004348` (OpenNeuro) * **Author (year):** `Mikkelsen2022` * **Canonical:** `EESM17` Also importable as: `DS004348`, `Mikkelsen2022`, `EESM17`. Modality: `eeg`. Subjects: 9; recordings: 18; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004348](https://openneuro.org/datasets/ds004348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004348](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) DOI: [https://doi.org/10.18112/openneuro.ds004348.v1.0.5](https://doi.org/10.18112/openneuro.ds004348.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004348 >>> dataset = DS004348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM17']* ### *class* eegdash.dataset.dataset.DS004350(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Executive Functionning Study for Assessing the Effect of Neurofeedback * **Study:** `ds004350` (OpenNeuro) * **Author (year):** `Delorme2022` * **Canonical:** — Also importable as: `DS004350`, `Delorme2022`. Modality: `eeg`. Subjects: 24; recordings: 240; tasks: 5. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004350](https://openneuro.org/datasets/ds004350) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004350](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) DOI: [https://doi.org/10.18112/openneuro.ds004350.v2.0.0](https://doi.org/10.18112/openneuro.ds004350.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004350 >>> dataset = DS004350(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Subcortical responses to music and speech are alike while cortical responses diverge * **Study:** `ds004356` (OpenNeuro) * **Author (year):** `Shan2022` * **Canonical:** — Also importable as: `DS004356`, `Shan2022`. 
Modality: `eeg`. Subjects: 22; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004356](https://openneuro.org/datasets/ds004356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004356](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) DOI: [https://doi.org/10.18112/openneuro.ds004356.v2.2.1](https://doi.org/10.18112/openneuro.ds004356.v2.2.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004356 >>> dataset = DS004356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004357(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Features-EEG * **Study:** `ds004357` (OpenNeuro) * **Author (year):** `Grootswagers2022_EEG` * **Canonical:** — Also importable as: `DS004357`, `Grootswagers2022_EEG`. Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004357](https://openneuro.org/datasets/ds004357) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004357](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) DOI: [https://doi.org/10.18112/openneuro.ds004357.v1.0.1](https://doi.org/10.18112/openneuro.ds004357.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004357 >>> dataset = DS004357(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004362(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Motor Movement/Imagery Dataset * **Study:** `ds004362` (OpenNeuro) * **Author (year):** `Schalk2022` * **Canonical:** `PhysionetMI`, `EEGMotorMovementImagery` Also importable as: `DS004362`, `Schalk2022`, `PhysionetMI`, `EEGMotorMovementImagery`. Modality: `eeg`. Subjects: 109; recordings: 1526; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004362](https://openneuro.org/datasets/ds004362) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004362](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) DOI: [https://doi.org/10.18112/openneuro.ds004362.v1.0.0](https://doi.org/10.18112/openneuro.ds004362.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004362 >>> dataset = DS004362(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PhysionetMI', 'EEGMotorMovementImagery']* ### *class* eegdash.dataset.dataset.DS004367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Raw EEG data * **Study:** `ds004367` (OpenNeuro) * **Author (year):** `Rouy2022_Meta` * **Canonical:** — Also importable as: `DS004367`, `Rouy2022_Meta`. Modality: `eeg`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004367](https://openneuro.org/datasets/ds004367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004367](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) DOI: [https://doi.org/10.18112/openneuro.ds004367.v1.0.2](https://doi.org/10.18112/openneuro.ds004367.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004367 >>> dataset = DS004367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004368(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Preprocessed EEG data * **Study:** `ds004368` (OpenNeuro) * **Author (year):** `Rouy2022_Meta_rdk` * **Canonical:** — Also importable as: `DS004368`, `Rouy2022_Meta_rdk`. Modality: `eeg`. Subjects: 39; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004368](https://openneuro.org/datasets/ds004368) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004368](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) DOI: [https://doi.org/10.18112/openneuro.ds004368.v1.0.2](https://doi.org/10.18112/openneuro.ds004368.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004368 >>> dataset = DS004368(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004369(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Blink-Pause-Relation (Competing Speaker Paradigm) * **Study:** `ds004369` (OpenNeuro) * **Author (year):** `Holtze2022_Blink` * **Canonical:** — Also importable as: `DS004369`, `Holtze2022_Blink`. Modality: `eeg`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004369](https://openneuro.org/datasets/ds004369) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004369](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) DOI: [https://doi.org/10.18112/openneuro.ds004369.v1.0.1](https://doi.org/10.18112/openneuro.ds004369.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004369 >>> dataset = DS004369(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PRIOS * **Study:** `ds004370` (OpenNeuro) * **Author (year):** `Blooijs2022_PRIOS` * **Canonical:** `PRIOS` Also importable as: `DS004370`, `Blooijs2022_PRIOS`, `PRIOS`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 7; recordings: 15; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004370](https://openneuro.org/datasets/ds004370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004370](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) DOI: [https://doi.org/10.18112/openneuro.ds004370.v1.0.2](https://doi.org/10.18112/openneuro.ds004370.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004370 >>> dataset = DS004370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PRIOS']* ### *class* eegdash.dataset.dataset.DS004381(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates * **Study:** `ds004381` (OpenNeuro) * **Author (year):** `Selmin2022` * **Canonical:** — Also importable as: `DS004381`, `Selmin2022`. Modality: `eeg`. Subjects: 18; recordings: 437; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004381](https://openneuro.org/datasets/ds004381) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004381](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) DOI: [https://doi.org/10.18112/openneuro.ds004381.v1.0.2](https://doi.org/10.18112/openneuro.ds004381.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004381 >>> dataset = DS004381(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004388(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation * **Study:** `ds004388` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory` * **Canonical:** — Also importable as: `DS004388`, `Nierula2023_Somatosensory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 399; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
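The Notes above state that a user-supplied `query` is a MongoDB-style filter AND-combined with the fixed dataset filter. A minimal sketch of that merge, where the field names (`task`, `subject`) are illustrative assumptions and not guaranteed members of `ALLOWED_QUERY_FIELDS`:

```python
# Sketch of the documented behavior: a MongoDB-style user query is
# AND-merged with the dataset filter. Field names are illustrative
# assumptions, not library constants.
user_query = {"task": "median", "subject": {"$in": ["sub-01", "sub-02"]}}
dataset_filter = {"dataset": "ds004388"}

# The dataset filter always applies; user fields narrow the selection.
merged = {**dataset_filter, **user_query}
```

The merged dictionary selects only recordings of `ds004388` that also match the user's constraints, which is why the constructor forbids overriding the `dataset` key.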
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004388](https://openneuro.org/datasets/ds004388) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004388](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) DOI: [https://doi.org/10.18112/openneuro.ds004388.v1.0.0](https://doi.org/10.18112/openneuro.ds004388.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004388 >>> dataset = DS004388(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004389(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation * **Study:** `ds004389` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory_evoked` * **Canonical:** — Also importable as: `DS004389`, `Nierula2023_Somatosensory_evoked`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 26; recordings: 260; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004389](https://openneuro.org/datasets/ds004389) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004389](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) DOI: [https://doi.org/10.18112/openneuro.ds004389.v1.0.0](https://doi.org/10.18112/openneuro.ds004389.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004389 >>> dataset = DS004389(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004395(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Penn Electrophysiology of Encoding and Retrieval Study (PEERS) * **Study:** `ds004395` (OpenNeuro) * **Author (year):** `Kahana2023` * **Canonical:** `PEERS` Also importable as: `DS004395`, `Kahana2023`, `PEERS`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 364; recordings: 6483; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004395](https://openneuro.org/datasets/ds004395) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004395](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) DOI: [https://doi.org/10.18112/openneuro.ds004395.v2.0.0](https://doi.org/10.18112/openneuro.ds004395.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004395 >>> dataset = DS004395(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PEERS']* ### *class* eegdash.dataset.dataset.DS004398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) planmemreplay * **Study:** `ds004398` (OpenNeuro) * **Author (year):** `Wimmer2023` * **Canonical:** `Wimmer2024` Also importable as: `DS004398`, `Wimmer2023`, `Wimmer2024`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004398](https://openneuro.org/datasets/ds004398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004398](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) DOI: [https://doi.org/10.18112/openneuro.ds004398.v1.0.0](https://doi.org/10.18112/openneuro.ds004398.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004398 >>> dataset = DS004398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Wimmer2024']* ### *class* eegdash.dataset.dataset.DS004408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG responses to continuous naturalistic speech * **Study:** `ds004408` (OpenNeuro) * **Author (year):** `Liberto2023` * **Canonical:** — Also importable as: `DS004408`, `Liberto2023`. Modality: `eeg`. Subjects: 19; recordings: 380; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004408](https://openneuro.org/datasets/ds004408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004408](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) DOI: [https://doi.org/10.18112/openneuro.ds004408.v1.0.8](https://doi.org/10.18112/openneuro.ds004408.v1.0.8) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004408 >>> dataset = DS004408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004444(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 1 * **Study:** `ds004444` (OpenNeuro) * **Author (year):** `Iwama2023_D1` * **Canonical:** `BMI_HDEEG_D1` Also importable as: `DS004444`, `Iwama2023_D1`, `BMI_HDEEG_D1`. Modality: `eeg`. Subjects: 30; recordings: 465; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004444](https://openneuro.org/datasets/ds004444) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004444](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) DOI: [https://doi.org/10.18112/openneuro.ds004444.v1.0.1](https://doi.org/10.18112/openneuro.ds004444.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004444 >>> dataset = DS004444(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D1']* ### *class* eegdash.dataset.dataset.DS004446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 2 * **Study:** `ds004446` (OpenNeuro) * **Author (year):** `Iwama2023_D2` * **Canonical:** `BMI_HDEEG_D2` Also importable as: `DS004446`, `Iwama2023_D2`, `BMI_HDEEG_D2`. Modality: `eeg`. Subjects: 30; recordings: 237; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004446](https://openneuro.org/datasets/ds004446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004446](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) DOI: [https://doi.org/10.18112/openneuro.ds004446.v1.0.1](https://doi.org/10.18112/openneuro.ds004446.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004446 >>> dataset = DS004446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D2']* ### *class* eegdash.dataset.dataset.DS004447(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 3 * **Study:** `ds004447` (OpenNeuro) * **Author (year):** `Iwama2023_D3` * **Canonical:** `BMI_HDEEG_D3` Also importable as: `DS004447`, `Iwama2023_D3`, `BMI_HDEEG_D3`. Modality: `eeg`. Subjects: 22; recordings: 418; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004447](https://openneuro.org/datasets/ds004447) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004447](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) DOI: [https://doi.org/10.18112/openneuro.ds004447.v1.0.1](https://doi.org/10.18112/openneuro.ds004447.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004447 >>> dataset = DS004447(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D3']* ### *class* eegdash.dataset.dataset.DS004448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 4 * **Study:** `ds004448` (OpenNeuro) * **Author (year):** `Iwama2023_D4` * **Canonical:** `BMI_HDEEG_D4` Also importable as: `DS004448`, `Iwama2023_D4`, `BMI_HDEEG_D4`. Modality: `eeg`. Subjects: 56; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004448](https://openneuro.org/datasets/ds004448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004448](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) DOI: [https://doi.org/10.18112/openneuro.ds004448.v1.0.2](https://doi.org/10.18112/openneuro.ds004448.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004448 >>> dataset = DS004448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D4']* ### *class* eegdash.dataset.dataset.DS004457(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex * **Study:** `ds004457` (OpenNeuro) * **Author (year):** `Huang2023` * **Canonical:** `Huang2022` Also importable as: `DS004457`, `Huang2023`, `Huang2022`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004457](https://openneuro.org/datasets/ds004457) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004457](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) DOI: [https://doi.org/10.18112/openneuro.ds004457.v1.0.1](https://doi.org/10.18112/openneuro.ds004457.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004457 >>> dataset = DS004457(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huang2022']* ### *class* eegdash.dataset.dataset.DS004460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG and motion capture data set for a full-body/joystick rotation task * **Study:** `ds004460` (OpenNeuro) * **Author (year):** `Gramann2023` * **Canonical:** — Also importable as: `DS004460`, `Gramann2023`. Modality: `eeg`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004460](https://openneuro.org/datasets/ds004460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004460](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) DOI: [https://doi.org/10.18112/openneuro.ds004460.v1.1.0](https://doi.org/10.18112/openneuro.ds004460.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004460 >>> dataset = DS004460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Forced Two-Choice Task * **Study:** `ds004473` (OpenNeuro) * **Author (year):** `Rockhill2023` * **Canonical:** `Rockhill2022` Also importable as: `DS004473`, `Rockhill2023`, `Rockhill2022`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004473](https://openneuro.org/datasets/ds004473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004473](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) DOI: [https://doi.org/10.18112/openneuro.ds004473.v1.0.1](https://doi.org/10.18112/openneuro.ds004473.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004473 >>> dataset = DS004473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Rockhill2022']* ### *class* eegdash.dataset.dataset.DS004475(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mobile EEG split-belt walking study * **Study:** `ds004475` (OpenNeuro) * **Author (year):** `Jacobsen2023` * **Canonical:** — Also importable as: `DS004475`, `Jacobsen2023`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004475](https://openneuro.org/datasets/ds004475) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004475](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) DOI: [https://doi.org/10.18112/openneuro.ds004475.v1.0.3](https://doi.org/10.18112/openneuro.ds004475.v1.0.3) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004475 >>> dataset = DS004475(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PES - Pandemic Emergency Scenario * **Study:** `ds004477` (OpenNeuro) * **Author (year):** `Papastylianou2023` * **Canonical:** — Also importable as: `DS004477`, `Papastylianou2023`. Modality: `eeg`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004477](https://openneuro.org/datasets/ds004477)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004477](https://nemar.org/dataexplorer/detail?dataset_id=ds004477)

DOI: [https://doi.org/10.18112/openneuro.ds004477.v1.0.2](https://doi.org/10.18112/openneuro.ds004477.v1.0.2)

NEMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS004477
>>> dataset = DS004477(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

ABSeqMEG

* **Study:** `ds004483` (OpenNeuro)
* **Author (year):** `Planton2023`
* **Canonical:** `ABSeqMEG`

Also importable as: `DS004483`, `Planton2023`, `ABSeqMEG`.

Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 19; recordings: 282; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).
* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004483](https://openneuro.org/datasets/ds004483)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004483](https://nemar.org/dataexplorer/detail?dataset_id=ds004483)

DOI: [https://doi.org/10.18112/openneuro.ds004483.v1.0.0](https://doi.org/10.18112/openneuro.ds004483.v1.0.0)

NEMAR citation count: 2

### Examples

```pycon
>>> from eegdash.dataset import DS004483
>>> dataset = DS004483(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['ABSeqMEG']*

### *class* eegdash.dataset.dataset.DS004502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Anticipatory differences between Attention and Expectation

* **Study:** `ds004502` (OpenNeuro)
* **Author (year):** `Penalver2023`
* **Canonical:** `Penalver2024`

Also importable as: `DS004502`, `Penalver2023`, `Penalver2024`.

Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 48; recordings: 48; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004502](https://openneuro.org/datasets/ds004502)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004502](https://nemar.org/dataexplorer/detail?dataset_id=ds004502)

DOI: [https://doi.org/10.18112/openneuro.ds004502.v1.0.1](https://doi.org/10.18112/openneuro.ds004502.v1.0.1)

NEMAR citation count: 3

### Examples

```pycon
>>> from eegdash.dataset import DS004502
>>> dataset = DS004502(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Penalver2024']*

### *class* eegdash.dataset.dataset.DS004504(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects

* **Study:** `ds004504` (OpenNeuro)
* **Author (year):** `Miltiadous2023`
* **Canonical:** —

Also importable as: `DS004504`, `Miltiadous2023`.

Modality: `eeg`. Subjects: 88; recordings: 88; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection.
Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004504](https://openneuro.org/datasets/ds004504)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004504](https://nemar.org/dataexplorer/detail?dataset_id=ds004504)

DOI: [https://doi.org/10.18112/openneuro.ds004504.v1.0.8](https://doi.org/10.18112/openneuro.ds004504.v1.0.8)

NEMAR citation count: 55

### Examples

```pycon
>>> from eegdash.dataset import DS004504
>>> dataset = DS004504(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Real World Table Tennis

* **Study:** `ds004505` (OpenNeuro)
* **Author (year):** `Studnicki2023`
* **Canonical:** —

Also importable as: `DS004505`, `Studnicki2023`.

Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004505](https://openneuro.org/datasets/ds004505)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004505](https://nemar.org/dataexplorer/detail?dataset_id=ds004505)

DOI: [https://doi.org/10.18112/openneuro.ds004505.v1.0.4](https://doi.org/10.18112/openneuro.ds004505.v1.0.4)

NEMAR citation count: 5

### Examples

```pycon
>>> from eegdash.dataset import DS004505
>>> dataset = DS004505(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004511(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Deception_data

* **Study:** `ds004511` (OpenNeuro)
* **Author (year):** `Makowski2023_Deception`
* **Canonical:** —

Also importable as: `DS004511`, `Makowski2023_Deception`.

Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 45; recordings: 134; tasks: 3.
* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
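The Notes above describe a user-supplied `query` being ANDed with the dataset filter. A minimal sketch of that merge, assuming a plain dict-based combination (the field name `task` is illustrative, not taken from `ALLOWED_QUERY_FIELDS`; `EEGDashDataset` performs its own merge internally):

```python
# Illustrative sketch only: how an extra MongoDB-style query could be
# combined with the dataset filter. The field name "task" is an assumption.
dataset_filter = {"dataset": "ds004511"}
user_query = {"task": "deception"}

# The user query must not contain the key "dataset".
assert "dataset" not in user_query

# AND-combination: both constraints apply to every returned record.
merged = {**dataset_filter, **user_query}
print(merged)  # {'dataset': 'ds004511', 'task': 'deception'}
```

Passing such a `query` to a dataset class would then restrict the records fetched to those matching both the dataset id and the extra constraints.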
### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004511](https://openneuro.org/datasets/ds004511)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004511](https://nemar.org/dataexplorer/detail?dataset_id=ds004511)

DOI: [https://doi.org/10.18112/openneuro.ds004511.v1.0.2](https://doi.org/10.18112/openneuro.ds004511.v1.0.2)

NEMAR citation count: 2

### Examples

```pycon
>>> from eegdash.dataset import DS004511
>>> dataset = DS004511(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools

* **Study:** `ds004514` (OpenNeuro)
* **Author (year):** `Rybar2023_Simultaneous`
* **Canonical:** —

Also importable as: `DS004514`, `Rybar2023_Simultaneous`.

Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 12; recordings: 24; tasks: 2.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004514](https://openneuro.org/datasets/ds004514)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004514](https://nemar.org/dataexplorer/detail?dataset_id=ds004514)

DOI: [https://doi.org/10.18112/openneuro.ds004514.v1.1.2](https://doi.org/10.18112/openneuro.ds004514.v1.1.2)

### Examples

```pycon
>>> from eegdash.dataset import DS004514
>>> dataset = DS004514(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants

* **Study:** `ds004515` (OpenNeuro)
* **Author (year):** `Singh2023`
* **Canonical:** —

Also importable as: `DS004515`, `Singh2023`.

Modality: `eeg`. Subjects: 54; recordings: 54; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004515](https://openneuro.org/datasets/ds004515)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004515](https://nemar.org/dataexplorer/detail?dataset_id=ds004515)

DOI: [https://doi.org/10.18112/openneuro.ds004515.v1.0.0](https://doi.org/10.18112/openneuro.ds004515.v1.0.0)

NEMAR citation count: 4

### Examples

```pycon
>>> from eegdash.dataset import DS004515
>>> dataset = DS004515(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task

* **Study:** `ds004517` (OpenNeuro)
* **Author (year):** `Rybar2023_semantic`
* **Canonical:** —

Also importable as: `DS004517`, `Rybar2023_semantic`.

Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).
* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004517](https://openneuro.org/datasets/ds004517)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004517](https://nemar.org/dataexplorer/detail?dataset_id=ds004517)

DOI: [https://doi.org/10.18112/openneuro.ds004517.v1.0.2](https://doi.org/10.18112/openneuro.ds004517.v1.0.2)

### Examples

```pycon
>>> from eegdash.dataset import DS004517
>>> dataset = DS004517(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Internal selective attention is delayed by competition between endogenous and exogenous factors

* **Study:** `ds004519` (OpenNeuro)
* **Author (year):** `Ester2023_Internal`
* **Canonical:** `Ester2022`

Also importable as: `DS004519`, `Ester2023_Internal`, `Ester2022`.

Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 40; recordings: 40; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004519](https://openneuro.org/datasets/ds004519)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004519](https://nemar.org/dataexplorer/detail?dataset_id=ds004519)

DOI: [https://doi.org/10.18112/openneuro.ds004519.v1.0.1](https://doi.org/10.18112/openneuro.ds004519.v1.0.1)

NEMAR citation count: 3

### Examples

```pycon
>>> from eegdash.dataset import DS004519
>>> dataset = DS004519(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Ester2022']*

### *class* eegdash.dataset.dataset.DS004520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Changes in behavioral priority influence the accessibility of working memory content - Experiment 2

* **Study:** `ds004520` (OpenNeuro)
* **Author (year):** `Ester2023_Changes`
* **Canonical:** `Ester2024_E2`

Also importable as: `DS004520`, `Ester2023_Changes`, `Ester2024_E2`.

Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 33; recordings: 33; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004520](https://openneuro.org/datasets/ds004520)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004520](https://nemar.org/dataexplorer/detail?dataset_id=ds004520)

DOI: [https://doi.org/10.18112/openneuro.ds004520.v1.0.1](https://doi.org/10.18112/openneuro.ds004520.v1.0.1)

NEMAR citation count: 3

### Examples

```pycon
>>> from eegdash.dataset import DS004520
>>> dataset = DS004520(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Ester2024_E2']*

### *class* eegdash.dataset.dataset.DS004521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Changes in behavioral priority influence the accessibility of working memory content - Experiment 1

* **Study:** `ds004521` (OpenNeuro)
* **Author (year):** `Ester2023_Changes_behavioral`
* **Canonical:** `Ester2024_E1`

Also importable as: `DS004521`, `Ester2023_Changes_behavioral`,
`Ester2024_E1`.

Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
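MongoDB-style filters also allow operators such as `$in` for matching one of several values. A minimal local sketch of `$in` semantics, to show what such a filter expresses (the field name `subject` is an assumption; the actual matching is done by the EEGDash backend, not by this helper):

```python
# Hypothetical filter as it could be passed via `query`:
query = {"subject": {"$in": ["01", "02"]}}

def matches(record: dict, flt: dict) -> bool:
    """Minimal sketch of MongoDB-style matching, covering only "$in"
    and exact equality; illustrative, not the library's implementation."""
    for field, cond in flt.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [{"subject": "01"}, {"subject": "03"}]
print([matches(r, query) for r in records])  # [True, False]
```

A filter like this, passed as the `query` argument, would restrict the dataset to recordings whose field value is in the listed set.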
### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004521](https://openneuro.org/datasets/ds004521)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004521](https://nemar.org/dataexplorer/detail?dataset_id=ds004521)

DOI: [https://doi.org/10.18112/openneuro.ds004521.v1.0.1](https://doi.org/10.18112/openneuro.ds004521.v1.0.1)

NEMAR citation count: 3

### Examples

```pycon
>>> from eegdash.dataset import DS004521
>>> dataset = DS004521(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Ester2024_E1']*

### *class* eegdash.dataset.dataset.DS004532(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge

* **Study:** `ds004532` (OpenNeuro)
* **Author (year):** `Cavanagh2023`
* **Canonical:** —

Also importable as: `DS004532`, `Cavanagh2023`.

Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004532](https://openneuro.org/datasets/ds004532)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004532](https://nemar.org/dataexplorer/detail?dataset_id=ds004532)

DOI: [https://doi.org/10.18112/openneuro.ds004532.v1.2.0](https://doi.org/10.18112/openneuro.ds004532.v1.2.0)

NEMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS004532
>>> dataset = DS004532(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004541(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Multimodal EEG-fNIRS data from patients undergoing general anesthesia

* **Study:** `ds004541` (OpenNeuro)
* **Author (year):** `Ferron2023`
* **Canonical:** `Ferron2019`

Also importable as: `DS004541`, `Ferron2023`, `Ferron2019`.

Modality: `eeg, fnirs`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 8; recordings: 18; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.
* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004541](https://openneuro.org/datasets/ds004541)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004541](https://nemar.org/dataexplorer/detail?dataset_id=ds004541)

DOI: [https://doi.org/10.18112/openneuro.ds004541.v1.0.0](https://doi.org/10.18112/openneuro.ds004541.v1.0.0)

### Examples

```pycon
>>> from eegdash.dataset import DS004541
>>> dataset = DS004541(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Ferron2019']*

### *class* eegdash.dataset.dataset.DS004551(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

iEEG on children during slow wave sleep

* **Study:** `ds004551` (OpenNeuro)
* **Author (year):** `Sakakura2023_children_slow_wave`
* **Canonical:** `Sakakura2025`

Also importable as: `DS004551`, `Sakakura2023_children_slow_wave`, `Sakakura2025`.

Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 114; recordings: 125; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).
#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004551](https://openneuro.org/datasets/ds004551)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004551](https://nemar.org/dataexplorer/detail?dataset_id=ds004551)

DOI: [https://doi.org/10.18112/openneuro.ds004551.v1.0.6](https://doi.org/10.18112/openneuro.ds004551.v1.0.6)

NEMAR citation count: 3

### Examples

```pycon
>>> from eegdash.dataset import DS004551
>>> dataset = DS004551(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= ['Sakakura2025']*

### *class* eegdash.dataset.dataset.DS004554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Forced Picture Naming Task

* **Study:** `ds004554` (OpenNeuro)
* **Author (year):** `Volpert2023`
* **Canonical:** —

Also importable as: `DS004554`, `Volpert2023`.

Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`.

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds004554](https://openneuro.org/datasets/ds004554)

NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004554](https://nemar.org/dataexplorer/detail?dataset_id=ds004554)

DOI: [https://doi.org/10.18112/openneuro.ds004554.v1.0.4](https://doi.org/10.18112/openneuro.ds004554.v1.0.4)

NEMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS004554
>>> dataset = DS004554(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.dataset.DS004561(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Illusion of Agency over Electrically-Actuated Movements

* **Study:** `ds004561` (OpenNeuro)
* **Author (year):** `Veillette2023`
* **Canonical:** —

Also importable as: `DS004561`, `Veillette2023`.

Modality: `eeg`. Subjects: 23; recordings: 23; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004561](https://openneuro.org/datasets/ds004561) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004561](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) DOI: [https://doi.org/10.18112/openneuro.ds004561.v1.0.0](https://doi.org/10.18112/openneuro.ds004561.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004561 >>> dataset = DS004561(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Vicarious touch: overlapping neural patterns between seeing and feeling touch * **Study:** `ds004563` (OpenNeuro) * **Author (year):** `Smit2023` * **Canonical:** — Also importable as: `DS004563`, `Smit2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 40; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004563](https://openneuro.org/datasets/ds004563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004563](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) DOI: [https://doi.org/10.18112/openneuro.ds004563.v1.0.1](https://doi.org/10.18112/openneuro.ds004563.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004563 >>> dataset = DS004563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004572(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effects of sham hypnosis techniques * **Study:** `ds004572` (OpenNeuro) * **Author (year):** `Kekecs2023` * **Canonical:** `Kekecs2024` Also importable as: `DS004572`, `Kekecs2023`, `Kekecs2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 52; recordings: 516; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004572](https://openneuro.org/datasets/ds004572) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004572](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) DOI: [https://doi.org/10.18112/openneuro.ds004572.v1.3.2](https://doi.org/10.18112/openneuro.ds004572.v1.3.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004572 >>> dataset = DS004572(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kekecs2024']* ### *class* eegdash.dataset.dataset.DS004574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cross-modal Oddball Task. * **Study:** `ds004574` (OpenNeuro) * **Author (year):** `Singh2023_Cross_modal` * **Canonical:** — Also importable as: `DS004574`, `Singh2023_Cross_modal`. Modality: `eeg`. 
Subjects: 146; recordings: 146; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
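The Notes above say the user-supplied `query` is combined with the fixed dataset filter and must not contain the key `dataset`. A minimal, hypothetical sketch of that merge logic (illustrative only — the `merge_query` helper and the `allowed` field list are assumptions, not eegdash internals):

```python
def merge_query(dataset_id, user_query=None, allowed=("task", "subject", "session")):
    """Combine a MongoDB-style user filter with the fixed dataset filter.

    Hypothetical helper mirroring the documented behavior: the user query
    is ANDed with {"dataset": dataset_id}, may not set "dataset" itself,
    and may only touch an allowed set of fields.
    """
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    unknown = set(user_query) - set(allowed)
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    # Dataset selector comes first; user filters are ANDed on top.
    return {"dataset": dataset_id, **user_query}

merged = merge_query("ds004574", {"task": "oddball"})
# merged == {"dataset": "ds004574", "task": "oddball"}
```

In the real library the allowed fields come from `ALLOWED_QUERY_FIELDS`; the tuple above is only a stand-in for illustration.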
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004574](https://openneuro.org/datasets/ds004574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004574](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) DOI: [https://doi.org/10.18112/openneuro.ds004574.v1.0.0](https://doi.org/10.18112/openneuro.ds004574.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004574 >>> dataset = DS004574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004577(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset containing resting EEG for a sample of 103 normal infants in the first year of life * **Study:** `ds004577` (OpenNeuro) * **Author (year):** `Unit2023` * **Canonical:** — Also importable as: `DS004577`, `Unit2023`. Modality: `eeg`. Subjects: 103; recordings: 130; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004577](https://openneuro.org/datasets/ds004577) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004577](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) DOI: [https://doi.org/10.18112/openneuro.ds004577.v1.0.1](https://doi.org/10.18112/openneuro.ds004577.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004577 >>> dataset = DS004577(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004579(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Interval Timing Task * **Study:** `ds004579` (OpenNeuro) * **Author (year):** `Singh2023_Interval_Timing` * **Canonical:** — Also importable as: `DS004579`, `Singh2023_Interval_Timing`. Modality: `eeg`. Subjects: 139; recordings: 139; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004579](https://openneuro.org/datasets/ds004579) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004579](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) DOI: [https://doi.org/10.18112/openneuro.ds004579.v1.0.0](https://doi.org/10.18112/openneuro.ds004579.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004579 >>> dataset = DS004579(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004580(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simon-conflict Task. * **Study:** `ds004580` (OpenNeuro) * **Author (year):** `Singh2023_Simon_conflict` * **Canonical:** — Also importable as: `DS004580`, `Singh2023_Simon_conflict`. Modality: `eeg`. Subjects: 147; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004580](https://openneuro.org/datasets/ds004580) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004580](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) DOI: [https://doi.org/10.18112/openneuro.ds004580.v1.0.0](https://doi.org/10.18112/openneuro.ds004580.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004580 >>> dataset = DS004580(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004582(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FakeFaceEmo_data * **Study:** `ds004582` (OpenNeuro) * **Author (year):** `Makowski2023_FakeFaceEmo` * **Canonical:** — Also importable as: `DS004582`, `Makowski2023_FakeFaceEmo`. Modality: `eeg`. Subjects: 73; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004582](https://openneuro.org/datasets/ds004582) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004582](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) DOI: [https://doi.org/10.18112/openneuro.ds004582.v1.0.0](https://doi.org/10.18112/openneuro.ds004582.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004582 >>> dataset = DS004582(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004584(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rest eyes open * **Study:** `ds004584` (OpenNeuro) * **Author (year):** `Singh2023_Rest_eyes` * **Canonical:** — Also importable as: `DS004584`, `Singh2023_Rest_eyes`. Modality: `eeg`. Subjects: 149; recordings: 149; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004584](https://openneuro.org/datasets/ds004584) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004584](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) DOI: [https://doi.org/10.18112/openneuro.ds004584.v1.0.0](https://doi.org/10.18112/openneuro.ds004584.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004584 >>> dataset = DS004584(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004587(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IllusionGameEEG_data * **Study:** `ds004587` (OpenNeuro) * **Author (year):** `Makowski2023_IllusionGameEEG` * **Canonical:** — Also importable as: `DS004587`, `Makowski2023_IllusionGameEEG`. Modality: `eeg`. Subjects: 103; recordings: 114; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004587](https://openneuro.org/datasets/ds004587) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004587](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) DOI: [https://doi.org/10.18112/openneuro.ds004587.v1.0.0](https://doi.org/10.18112/openneuro.ds004587.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004587 >>> dataset = DS004587(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004588(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuma * **Study:** `ds004588` (OpenNeuro) * **Author (year):** `Georgiadis2023` * **Canonical:** `Neuma` Also importable as: `DS004588`, `Georgiadis2023`, `Neuma`. Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004588](https://openneuro.org/datasets/ds004588) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004588](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) DOI: [https://doi.org/10.18112/openneuro.ds004588.v1.2.0](https://doi.org/10.18112/openneuro.ds004588.v1.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004588 >>> dataset = DS004588(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Neuma']* ### *class* eegdash.dataset.dataset.DS004595(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds004595` (OpenNeuro) * **Author (year):** `Campbell2023` * **Canonical:** — Also importable as: `DS004595`, `Campbell2023`. Modality: `eeg`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004595](https://openneuro.org/datasets/ds004595) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004595](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) DOI: [https://doi.org/10.18112/openneuro.ds004595.v1.0.0](https://doi.org/10.18112/openneuro.ds004595.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004595 >>> dataset = DS004595(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004598(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LFP during linear track in 6-month old TgF344-AD rats * **Study:** `ds004598` (OpenNeuro) * **Author (year):** `Faraz2023` * **Canonical:** `Moradi2024` Also importable as: `DS004598`, `Faraz2023`, `Moradi2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Dementia`. Subjects: 9; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004598](https://openneuro.org/datasets/ds004598) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004598](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) DOI: [https://doi.org/10.18112/openneuro.ds004598.v1.0.0](https://doi.org/10.18112/openneuro.ds004598.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004598 >>> dataset = DS004598(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moradi2024']* ### *class* eegdash.dataset.dataset.DS004602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registered Replication Report of ERN/Pe Psychometrics * **Study:** `ds004602` (OpenNeuro) * **Author (year):** `Clayson2023_Registered` * **Canonical:** — Also importable as: `DS004602`, `Clayson2023_Registered`. Modality: `eeg`. Subjects: 182; recordings: 546; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
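The Notes above mention MongoDB-style filters on metadata fields. As a rough illustration of what such a filter expresses, here is a small, hypothetical matcher for equality and the `$in` operator — a sketch of the semantics only, not code from eegdash or MongoDB:

```python
def matches(record, flt):
    """Check one metadata record against a tiny MongoDB-style filter.

    Supports plain equality and the $in operator; hypothetical helper
    for illustration, not part of the eegdash API.
    """
    for field, cond in flt.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": [...]}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain equality form, e.g. {"task": "flanker"}
            return False
    return True

record = {"dataset": "ds004602", "task": "flanker", "subject": "012"}
matches(record, {"task": {"$in": ["flanker", "stroop"]}})  # True
matches(record, {"subject": "999"})                        # False
```

Real queries pass such dicts via the `query` parameter, restricted to fields in `ALLOWED_QUERY_FIELDS`.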
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004602](https://openneuro.org/datasets/ds004602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004602](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) DOI: [https://doi.org/10.18112/openneuro.ds004602.v1.0.1](https://doi.org/10.18112/openneuro.ds004602.v1.0.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS004602 >>> dataset = DS004602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004603(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm * **Study:** `ds004603` (OpenNeuro) * **Author (year):** `Lowe2023` * **Canonical:** `VisualContextTrajectory` Also importable as: `DS004603`, `Lowe2023`, `VisualContextTrajectory`. Modality: `eeg`. Subjects: 37; recordings: 37; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004603](https://openneuro.org/datasets/ds004603) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004603](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) DOI: [https://doi.org/10.18112/openneuro.ds004603.v1.1.0](https://doi.org/10.18112/openneuro.ds004603.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004603 >>> dataset = DS004603(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VisualContextTrajectory']* ### *class* eegdash.dataset.dataset.DS004621(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Nencki-Symfonia EEG/ERP dataset * **Study:** `ds004621` (OpenNeuro) * **Author (year):** `Patrycja2023_Nencki` * **Canonical:** `NenckiSymfonia` Also importable as: `DS004621`, `Patrycja2023_Nencki`, `NenckiSymfonia`. Modality: `eeg`. Subjects: 42; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004621](https://openneuro.org/datasets/ds004621) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004621](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) DOI: [https://doi.org/10.18112/openneuro.ds004621.v1.0.4](https://doi.org/10.18112/openneuro.ds004621.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004621 >>> dataset = DS004621(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NenckiSymfonia']* ### *class* eegdash.dataset.dataset.DS004624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intracranial recordings using BCI2000 and the CorTec BrainInterchange * **Study:** `ds004624` (OpenNeuro) * **Author (year):** `Mivalt2025` * **Canonical:** `Mivalt2024`, `BCI2000_Intracranial` Also importable as: `DS004624`, `Mivalt2025`, `Mivalt2024`, `BCI2000_Intracranial`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 3; recordings: 614; tasks: 28. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004624](https://openneuro.org/datasets/ds004624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004624](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) DOI: [https://doi.org/10.18112/openneuro.ds004624.v2.0.0](https://doi.org/10.18112/openneuro.ds004624.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004624 >>> dataset = DS004624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mivalt2024', 'BCI2000_Intracranial']* ### *class* eegdash.dataset.dataset.DS004625(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Young Adults Walking Over Uneven Terrain * **Study:** `ds004625` (OpenNeuro) * **Author (year):** `Liu2023` * **Canonical:** — Also importable as: `DS004625`, `Liu2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 543; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004625](https://openneuro.org/datasets/ds004625) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004625](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) DOI: [https://doi.org/10.18112/openneuro.ds004625.v1.0.2](https://doi.org/10.18112/openneuro.ds004625.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004625 >>> dataset = DS004625(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials. * **Study:** `ds004626` (OpenNeuro) * **Author (year):** `Maka2023` * **Canonical:** — Also importable as: `DS004626`, `Maka2023`. Modality: `eeg`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004626](https://openneuro.org/datasets/ds004626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004626](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) DOI: [https://doi.org/10.18112/openneuro.ds004626.v1.0.2](https://doi.org/10.18112/openneuro.ds004626.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004626 >>> dataset = DS004626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004635(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates Reliability * **Study:** `ds004635` (OpenNeuro) * **Author (year):** `Bagdasarov2023` * **Canonical:** — Also importable as: `DS004635`, `Bagdasarov2023`. Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004635](https://openneuro.org/datasets/ds004635) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004635](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) DOI: [https://doi.org/10.18112/openneuro.ds004635.v3.1.0](https://doi.org/10.18112/openneuro.ds004635.v3.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004635 >>> dataset = DS004635(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative recordings of medianus stimulation with low and high impedance ECoG * **Study:** `ds004642` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_Intraoperative` * **Canonical:** — Also importable as: `DS004642`, `Dimakopoulos2023_Intraoperative`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004642](https://openneuro.org/datasets/ds004642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004642](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) DOI: [https://doi.org/10.18112/openneuro.ds004642.v1.0.1](https://doi.org/10.18112/openneuro.ds004642.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004642 >>> dataset = DS004642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004657(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Driving with Autonomous Aids * **Study:** `ds004657` (OpenNeuro) * **Author (year):** `Metcalfe2023_Driving` * **Canonical:** `TX20` Also importable as: `DS004657`, `Metcalfe2023_Driving`, `TX20`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 24; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004657](https://openneuro.org/datasets/ds004657) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004657](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) DOI: [https://doi.org/10.18112/openneuro.ds004657.v1.0.3](https://doi.org/10.18112/openneuro.ds004657.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004657 >>> dataset = DS004657(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX20']* ### *class* eegdash.dataset.dataset.DS004660(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TNO * **Study:** `ds004660` (OpenNeuro) * **Author (year):** `Johnson2023_TNO` * **Canonical:** `TNO` Also importable as: `DS004660`, `Johnson2023_TNO`, `TNO`. Modality: `eeg`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004660](https://openneuro.org/datasets/ds004660) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004660](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) DOI: [https://doi.org/10.18112/openneuro.ds004660.v1.0.2](https://doi.org/10.18112/openneuro.ds004660.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004660 >>> dataset = DS004660(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TNO']* ### *class* eegdash.dataset.dataset.DS004661(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ANDI * **Study:** `ds004661` (OpenNeuro) * **Author (year):** `Johnson2023_ANDI` * **Canonical:** `ANDI` Also importable as: `DS004661`, `Johnson2023_ANDI`, `ANDI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004661](https://openneuro.org/datasets/ds004661) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004661](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) DOI: [https://doi.org/10.18112/openneuro.ds004661.v1.1.0](https://doi.org/10.18112/openneuro.ds004661.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004661 >>> dataset = DS004661(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ANDI']* ### *class* eegdash.dataset.dataset.DS004696(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAPwave_bids * **Study:** `ds004696` (OpenNeuro) * **Author (year):** `Valencia2023` * **Canonical:** — Also importable as: `DS004696`, `Valencia2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004696](https://openneuro.org/datasets/ds004696) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004696](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) DOI: [https://doi.org/10.18112/openneuro.ds004696.v1.0.1](https://doi.org/10.18112/openneuro.ds004696.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004696 >>> dataset = DS004696(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Passive listening to natural speech * **Study:** `ds004703` (OpenNeuro) * **Author (year):** `Mai2023` * **Canonical:** — Also importable as: `DS004703`, `Mai2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 10; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004703](https://openneuro.org/datasets/ds004703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004703](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) DOI: [https://doi.org/10.18112/openneuro.ds004703.v1.1.0](https://doi.org/10.18112/openneuro.ds004703.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004703 >>> dataset = DS004703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004706(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial memory and non-invasive closed-loop stimulus timing * **Study:** `ds004706` (OpenNeuro) * **Author (year):** `Rudoler2023` * **Canonical:** — Also importable as: `DS004706`, `Rudoler2023`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 298; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004706](https://openneuro.org/datasets/ds004706) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004706](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) DOI: [https://doi.org/10.18112/openneuro.ds004706.v1.0.0](https://doi.org/10.18112/openneuro.ds004706.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004706 >>> dataset = DS004706(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers * **Study:** `ds004718` (OpenNeuro) * **Author (year):** `Momenian2023` * **Canonical:** — Also importable as: `DS004718`, `Momenian2023`. Modality: `eeg`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004718](https://openneuro.org/datasets/ds004718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004718](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) DOI: [https://doi.org/10.18112/openneuro.ds004718.v1.1.2](https://doi.org/10.18112/openneuro.ds004718.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004718 >>> dataset = DS004718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004738(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sfb_meg_phantom (B04/C01) * **Study:** `ds004738` (OpenNeuro) * **Author (year):** `Bahners2023` * **Canonical:** — Also importable as: `DS004738`, `Bahners2023`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 4; recordings: 25; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004738](https://openneuro.org/datasets/ds004738) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004738](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) DOI: [https://doi.org/10.18112/openneuro.ds004738.v1.0.1](https://doi.org/10.18112/openneuro.ds004738.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004738 >>> dataset = DS004738(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004745(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 8-Channel SSVEP EEG Dataset with Artifact Trials * **Study:** `ds004745` (OpenNeuro) * **Author (year):** `Kumaravel2023` * **Canonical:** — Also importable as: `DS004745`, `Kumaravel2023`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 6; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004745](https://openneuro.org/datasets/ds004745) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004745](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) DOI: [https://doi.org/10.18112/openneuro.ds004745.v1.0.1](https://doi.org/10.18112/openneuro.ds004745.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004745 >>> dataset = DS004745(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task * **Study:** `ds004752` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_intracranial` * **Canonical:** — Also importable as: `DS004752`, `Dimakopoulos2023_intracranial`. Modality: `eeg, ieeg`. Subjects: 15; recordings: 136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004752](https://openneuro.org/datasets/ds004752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004752](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) DOI: [https://doi.org/10.18112/openneuro.ds004752.v1.0.1](https://doi.org/10.18112/openneuro.ds004752.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004752 >>> dataset = DS004752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004770(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during gameplay * **Study:** `ds004770` (OpenNeuro) * **Author (year):** `Ueda2023` * **Canonical:** — Also importable as: `DS004770`, `Ueda2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 10; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004770](https://openneuro.org/datasets/ds004770) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004770](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) DOI: [https://doi.org/10.18112/openneuro.ds004770.v1.0.0](https://doi.org/10.18112/openneuro.ds004770.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004770 >>> dataset = DS004770(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004771(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG/ERP data from a Python Reading Task * **Study:** `ds004771` (OpenNeuro) * **Author (year):** `Kuo2023` * **Canonical:** — Also importable as: `DS004771`, `Kuo2023`. Modality: `eeg`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004771](https://openneuro.org/datasets/ds004771) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004771](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) DOI: [https://doi.org/10.18112/openneuro.ds004771.v1.0.0](https://doi.org/10.18112/openneuro.ds004771.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004771 >>> dataset = DS004771(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Automatic Evoked Response Detection (ER-Detect) dataset * **Study:** `ds004774` (OpenNeuro) * **Author (year):** `Boom2023` * **Canonical:** `ERDetect`, `ER_Detect` Also importable as: `DS004774`, `Boom2023`, `ERDetect`, `ER_Detect`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 14; recordings: 14; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004774](https://openneuro.org/datasets/ds004774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004774](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) DOI: [https://doi.org/10.18112/openneuro.ds004774.v1.0.0](https://doi.org/10.18112/openneuro.ds004774.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004774 >>> dataset = DS004774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ERDetect', 'ER_Detect']* ### *class* eegdash.dataset.dataset.DS004784(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts * **Study:** `ds004784` (OpenNeuro) * **Author (year):** `Downey2023` * **Canonical:** — Also importable as: `DS004784`, `Downey2023`. Modality: `eeg`. Subjects: 1; recordings: 6; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004784](https://openneuro.org/datasets/ds004784) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004784](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) DOI: [https://doi.org/10.18112/openneuro.ds004784.v1.0.4](https://doi.org/10.18112/openneuro.ds004784.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004784 >>> dataset = DS004784(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance * **Study:** `ds004785` (OpenNeuro) * **Author (year):** `Boebinger2023` * **Canonical:** — Also importable as: `DS004785`, `Boebinger2023`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004785](https://openneuro.org/datasets/ds004785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004785](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) DOI: [https://doi.org/10.18112/openneuro.ds004785.v1.0.1](https://doi.org/10.18112/openneuro.ds004785.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004785 >>> dataset = DS004785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004789(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Free Recall of Word Lists * **Study:** `ds004789` (OpenNeuro) * **Author (year):** `Herrema2023_Delayed_Free_Recall` * **Canonical:** — Also importable as: `DS004789`, `Herrema2023_Delayed_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 273; recordings: 983; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
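The query-merging behavior described in the notes can be sketched in plain Python. This is a hypothetical helper, not the actual eegdash internals; the `subject` field is an assumed member of `ALLOWED_QUERY_FIELDS`:

```python
def merge_query(dataset_id, user_query=None):
    """Combine the fixed dataset filter with optional user filters.

    Hypothetical sketch of the documented semantics: the user query is
    ANDed with the dataset selection and must not contain `dataset`.
    """
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    # Implicit AND: a MongoDB filter document matches only when every
    # top-level field condition holds, so merging the dicts suffices here.
    return {"dataset": dataset_id, **user_query}


merged = merge_query("ds004789", {"subject": "01"})
# merged == {"dataset": "ds004789", "subject": "01"}
```

Passing `{"dataset": ...}` yourself raises, mirroring the "Must not contain the key `dataset`" constraint in the parameter docs.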
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004789](https://openneuro.org/datasets/ds004789) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004789](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) DOI: [https://doi.org/10.18112/openneuro.ds004789.v3.1.0](https://doi.org/10.18112/openneuro.ds004789.v3.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004789 >>> dataset = DS004789(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004796(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database * **Study:** `ds004796` (OpenNeuro) * **Author (year):** `Patrycja2023_Polish` * **Canonical:** `PEARLNeuro` Also importable as: `DS004796`, `Patrycja2023_Polish`, `PEARLNeuro`. Modality: `eeg`. Subjects: 79; recordings: 235; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004796](https://openneuro.org/datasets/ds004796) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004796](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) DOI: [https://doi.org/10.18112/openneuro.ds004796.v1.1.0](https://doi.org/10.18112/openneuro.ds004796.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004796 >>> dataset = DS004796(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PEARLNeuro']* ### *class* eegdash.dataset.dataset.DS004802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness * **Study:** `ds004802` (OpenNeuro) * **Author (year):** `Bathelt2023` * **Canonical:** — Also importable as: `DS004802`, `Bathelt2023`. Modality: `eeg`. Subjects: 39; recordings: 79; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004802](https://openneuro.org/datasets/ds004802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004802](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) DOI: [https://doi.org/10.18112/openneuro.ds004802.v1.0.0](https://doi.org/10.18112/openneuro.ds004802.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004802 >>> dataset = DS004802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004809(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories * **Study:** `ds004809` (OpenNeuro) * **Author (year):** `Herrema2023_Categorized_Free_Recall` * **Canonical:** `catFR_Categorized_Free_Recall`, `CatFR` Also importable as: `DS004809`, `Herrema2023_Categorized_Free_Recall`, `catFR_Categorized_Free_Recall`, `CatFR`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 252; recordings: 889; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004809](https://openneuro.org/datasets/ds004809) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004809](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) DOI: [https://doi.org/10.18112/openneuro.ds004809.v2.2.0](https://doi.org/10.18112/openneuro.ds004809.v2.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004809 >>> dataset = DS004809(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_Categorized_Free_Recall', 'CatFR']* ### *class* eegdash.dataset.dataset.DS004816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp1 * **Study:** `ds004816` (OpenNeuro) * **Author (year):** `Grootswagers2023_E1` * **Canonical:** — Also importable as: `DS004816`, `Grootswagers2023_E1`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004816](https://openneuro.org/datasets/ds004816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004816](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) DOI: [https://doi.org/10.18112/openneuro.ds004816.v1.0.0](https://doi.org/10.18112/openneuro.ds004816.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004816 >>> dataset = DS004816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp2 * **Study:** `ds004817` (OpenNeuro) * **Author (year):** `Grootswagers2023_E2` * **Canonical:** — Also importable as: `DS004817`, `Grootswagers2023_E2`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004817](https://openneuro.org/datasets/ds004817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004817](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) DOI: [https://doi.org/10.18112/openneuro.ds004817.v1.0.0](https://doi.org/10.18112/openneuro.ds004817.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004817 >>> dataset = DS004817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004819(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain * **Study:** `ds004819` (OpenNeuro) * **Author (year):** `Lee2023` * **Canonical:** — Also importable as: `DS004819`, `Lee2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 1; recordings: 8; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004819](https://openneuro.org/datasets/ds004819) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004819](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) DOI: [https://doi.org/10.18112/openneuro.ds004819.v1.0.0](https://doi.org/10.18112/openneuro.ds004819.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004819 >>> dataset = DS004819(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004830(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Attention Decoding using fNIRS During Complex Scene Analysis * **Study:** `ds004830` (OpenNeuro) * **Author (year):** `Ning2023` * **Canonical:** `Ning2024` Also importable as: `DS004830`, `Ning2023`, `Ning2024`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004830](https://openneuro.org/datasets/ds004830) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004830](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) DOI: [https://doi.org/10.18112/openneuro.ds004830.v2.0.0](https://doi.org/10.18112/openneuro.ds004830.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004830 >>> dataset = DS004830(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ning2024']* ### *class* eegdash.dataset.dataset.DS004837(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis * **Study:** `ds004837` (OpenNeuro) * **Author (year):** `LopezCaballero2023` * **Canonical:** — Also importable as: `DS004837`, `LopezCaballero2023`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Schizophrenia/Psychosis`. Subjects: 60; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004837](https://openneuro.org/datasets/ds004837) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004837](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) DOI: [https://doi.org/10.18112/openneuro.ds004837.v1.0.2](https://doi.org/10.18112/openneuro.ds004837.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004837 >>> dataset = DS004837(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit. * **Study:** `ds004840` (OpenNeuro) * **Author (year):** `CordobaSilva2023` * **Canonical:** — Also importable as: `DS004840`, `CordobaSilva2023`. Modality: `eeg`. Subjects: 9; recordings: 51; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004840](https://openneuro.org/datasets/ds004840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004840](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) DOI: [https://doi.org/10.18112/openneuro.ds004840.v1.0.1](https://doi.org/10.18112/openneuro.ds004840.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004840 >>> dataset = DS004840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX14 * **Study:** `ds004841` (OpenNeuro) * **Author (year):** `Larkin2023_TX14` * **Canonical:** `TX14` Also importable as: `DS004841`, `Larkin2023_TX14`, `TX14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004841](https://openneuro.org/datasets/ds004841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004841](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) DOI: [https://doi.org/10.18112/openneuro.ds004841.v1.0.1](https://doi.org/10.18112/openneuro.ds004841.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004841 >>> dataset = DS004841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX14']* ### *class* eegdash.dataset.dataset.DS004842(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX15 * **Study:** `ds004842` (OpenNeuro) * **Author (year):** `Larkin2023_TX15` * **Canonical:** `TX15` Also importable as: `DS004842`, `Larkin2023_TX15`, `TX15`. Modality: `eeg`. Subjects: 14; recordings: 102; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004842](https://openneuro.org/datasets/ds004842) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004842](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) DOI: [https://doi.org/10.18112/openneuro.ds004842.v1.0.0](https://doi.org/10.18112/openneuro.ds004842.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004842 >>> dataset = DS004842(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX15']* ### *class* eegdash.dataset.dataset.DS004843(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T16 * **Study:** `ds004843` (OpenNeuro) * **Author (year):** `Johnson2023_T16` * **Canonical:** — Also importable as: `DS004843`, `Johnson2023_T16`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 92; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004843](https://openneuro.org/datasets/ds004843) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004843](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) DOI: [https://doi.org/10.18112/openneuro.ds004843.v1.0.0](https://doi.org/10.18112/openneuro.ds004843.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004843 >>> dataset = DS004843(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T22 * **Study:** `ds004844` (OpenNeuro) * **Author (year):** `Metcalfe2023_T22` * **Canonical:** — Also importable as: `DS004844`, `Metcalfe2023_T22`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 17; recordings: 68; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004844](https://openneuro.org/datasets/ds004844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004844](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) DOI: [https://doi.org/10.18112/openneuro.ds004844.v1.0.0](https://doi.org/10.18112/openneuro.ds004844.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004844 >>> dataset = DS004844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STRONG * **Study:** `ds004849` (OpenNeuro) * **Author (year):** `Johnson2023_STRONG` * **Canonical:** `STRONG` Also importable as: `DS004849`, `Johnson2023_STRONG`, `STRONG`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004849](https://openneuro.org/datasets/ds004849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004849](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) DOI: [https://doi.org/10.18112/openneuro.ds004849.v1.0.0](https://doi.org/10.18112/openneuro.ds004849.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004849 >>> dataset = DS004849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['STRONG']* ### *class* eegdash.dataset.dataset.DS004850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ODE * **Study:** `ds004850` (OpenNeuro) * **Author (year):** `Johnson2023_ODE` * **Canonical:** `Johnson2024` Also importable as: `DS004850`, `Johnson2023_ODE`, `Johnson2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004850](https://openneuro.org/datasets/ds004850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004850](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) DOI: [https://doi.org/10.18112/openneuro.ds004850.v1.0.0](https://doi.org/10.18112/openneuro.ds004850.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004850 >>> dataset = DS004850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Johnson2024']* ### *class* eegdash.dataset.dataset.DS004851(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HID * **Study:** `ds004851` (OpenNeuro) * **Author (year):** `Johnson2023_HID` * **Canonical:** `HID` Also importable as: `DS004851`, `Johnson2023_HID`, `HID`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 66; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004851](https://openneuro.org/datasets/ds004851) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004851](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) DOI: [https://doi.org/10.18112/openneuro.ds004851.v2.1.0](https://doi.org/10.18112/openneuro.ds004851.v2.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004851 >>> dataset = DS004851(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HID']* ### *class* eegdash.dataset.dataset.DS004852(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InsurgentCivilian * **Study:** `ds004852` (OpenNeuro) * **Author (year):** `Johnson2023_InsurgentCivilian` * **Canonical:** `Johnson2025` Also importable as: `DS004852`, `Johnson2023_InsurgentCivilian`, `Johnson2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004852](https://openneuro.org/datasets/ds004852) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004852](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) DOI: [https://doi.org/10.18112/openneuro.ds004852.v1.0.0](https://doi.org/10.18112/openneuro.ds004852.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004852 >>> dataset = DS004852(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Johnson2025']* ### *class* eegdash.dataset.dataset.DS004853(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX17 * **Study:** `ds004853` (OpenNeuro) * **Author (year):** `Johnson2023_TX17` * **Canonical:** — Also importable as: `DS004853`, `Johnson2023_TX17`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004853](https://openneuro.org/datasets/ds004853) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004853](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) DOI: [https://doi.org/10.18112/openneuro.ds004853.v1.0.0](https://doi.org/10.18112/openneuro.ds004853.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004853 >>> dataset = DS004853(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004854(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX18 * **Study:** `ds004854` (OpenNeuro) * **Author (year):** `Johnson2023_TX18` * **Canonical:** `TX18` Also importable as: `DS004854`, `Johnson2023_TX18`, `TX18`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004854](https://openneuro.org/datasets/ds004854) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004854](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) DOI: [https://doi.org/10.18112/openneuro.ds004854.v1.0.0](https://doi.org/10.18112/openneuro.ds004854.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004854 >>> dataset = DS004854(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX18']* ### *class* eegdash.dataset.dataset.DS004855(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FT * **Study:** `ds004855` (OpenNeuro) * **Author (year):** `Johnson2023_FT` * **Canonical:** — Also importable as: `DS004855`, `Johnson2023_FT`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004855](https://openneuro.org/datasets/ds004855) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004855](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) DOI: [https://doi.org/10.18112/openneuro.ds004855.v1.0.0](https://doi.org/10.18112/openneuro.ds004855.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004855 >>> dataset = DS004855(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004859(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during Stroop task * **Study:** `ds004859` (OpenNeuro) * **Author (year):** `Sakakura2023_children_Stroop` * **Canonical:** `Sakakura2024` Also importable as: `DS004859`, `Sakakura2023_children_Stroop`, `Sakakura2024`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 7; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004859](https://openneuro.org/datasets/ds004859) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004859](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) DOI: [https://doi.org/10.18112/openneuro.ds004859.v1.0.0](https://doi.org/10.18112/openneuro.ds004859.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004859 >>> dataset = DS004859(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sakakura2024']* ### *class* eegdash.dataset.dataset.DS004860(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Investigating the cognitive conflict triggered by moral judgment of accidental harm: an event-related potentials study * **Study:** `ds004860` (OpenNeuro) * **Author (year):** `Schwartz2023` * **Canonical:** — Also importable as: `DS004860`, `Schwartz2023`. Modality: `eeg`.
Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
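The `data_dir` attribute documented above is derived as `cache_dir / dataset_id`. A quick `pathlib` illustration of that derivation (the cache root shown is hypothetical, and this is a sketch of the documented relationship rather than library code):

```python
from pathlib import Path

# Illustrative only: per-dataset cache directory derived from the
# user-supplied cache_dir, per the data_dir attribute documentation.
cache_dir = Path("./data")         # hypothetical cache root
dataset_id = "ds004860"
data_dir = cache_dir / dataset_id  # joins the two path components
print(data_dir)
```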
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004860](https://openneuro.org/datasets/ds004860) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004860](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) DOI: [https://doi.org/10.18112/openneuro.ds004860.v1.0.0](https://doi.org/10.18112/openneuro.ds004860.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004860 >>> dataset = DS004860(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004865(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study * **Study:** `ds004865` (OpenNeuro) * **Author (year):** `Herrema2023_pyFR_Delayed_Free` * **Canonical:** `pyFR` Also importable as: `DS004865`, `Herrema2023_pyFR_Delayed_Free`, `pyFR`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 42; recordings: 172; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004865](https://openneuro.org/datasets/ds004865) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004865](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) DOI: [https://doi.org/10.18112/openneuro.ds004865.v2.0.1](https://doi.org/10.18112/openneuro.ds004865.v2.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004865 >>> dataset = DS004865(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['pyFR']* ### *class* eegdash.dataset.dataset.DS004883(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registered Report of ERN During Three Versions of a Flanker Task * **Study:** `ds004883` (OpenNeuro) * **Author (year):** `Clayson2023_Registerd` * **Canonical:** — Also importable as: `DS004883`, `Clayson2023_Registerd`. Modality: `eeg`. Subjects: 172; recordings: 516; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched.
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004883](https://openneuro.org/datasets/ds004883) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004883](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) DOI: [https://doi.org/10.18112/openneuro.ds004883.v1.0.0](https://doi.org/10.18112/openneuro.ds004883.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004883 >>> dataset = DS004883(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Resting-state EEG Dataset for Sleep Deprivation * **Study:** `ds004902` (OpenNeuro) * **Author (year):** `Xiang2023` * **Canonical:** — Also importable as: `DS004902`, `Xiang2023`. Modality: `eeg`. Subjects: 71; recordings: 218; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004902](https://openneuro.org/datasets/ds004902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004902](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) DOI: [https://doi.org/10.18112/openneuro.ds004902.v1.0.8](https://doi.org/10.18112/openneuro.ds004902.v1.0.8) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004902 >>> dataset = DS004902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004917(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Probability Decision-making Task with ambiguity * **Study:** `ds004917` (OpenNeuro) * **Author (year):** `FigueroaVargas2024` * **Canonical:** — Also importable as: `DS004917`, `FigueroaVargas2024`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004917](https://openneuro.org/datasets/ds004917) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004917](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) DOI: [https://doi.org/10.18112/openneuro.ds004917.v1.0.1](https://doi.org/10.18112/openneuro.ds004917.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004917 >>> dataset = DS004917(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD * **Study:** `ds004929` (OpenNeuro) * **Author (year):** `Gao2024` * **Canonical:** — Also importable as: `DS004929`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004929](https://openneuro.org/datasets/ds004929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004929](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) DOI: [https://doi.org/10.18112/openneuro.ds004929.v1.0.0](https://doi.org/10.18112/openneuro.ds004929.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004929 >>> dataset = DS004929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. * **Study:** `ds004940` (OpenNeuro) * **Author (year):** `Toffolo2024` * **Canonical:** — Also importable as: `DS004940`, `Toffolo2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 22; recordings: 48; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004940](https://openneuro.org/datasets/ds004940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004940](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) DOI: [https://doi.org/10.18112/openneuro.ds004940.v1.0.1](https://doi.org/10.18112/openneuro.ds004940.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004940 >>> dataset = DS004940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004942(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpatialMemory * **Study:** `ds004942` (OpenNeuro) * **Author (year):** `Kieffaber2024` * **Canonical:** — Also importable as: `DS004942`, `Kieffaber2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004942](https://openneuro.org/datasets/ds004942) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004942](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) DOI: [https://doi.org/10.18112/openneuro.ds004942.v1.0.0](https://doi.org/10.18112/openneuro.ds004942.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004942 >>> dataset = DS004942(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding * **Study:** `ds004944` (OpenNeuro) * **Author (year):** `Costa2024` * **Canonical:** `BCI2000_intraop` Also importable as: `DS004944`, `Costa2024`, `BCI2000_intraop`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 22; recordings: 44; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004944](https://openneuro.org/datasets/ds004944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004944](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) DOI: [https://doi.org/10.18112/openneuro.ds004944.v1.1.0](https://doi.org/10.18112/openneuro.ds004944.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004944 >>> dataset = DS004944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCI2000_intraop']* ### *class* eegdash.dataset.dataset.DS004951(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Braille letters - EEG * **Study:** `ds004951` (OpenNeuro) * **Author (year):** `Haupt2024_Braille` * **Canonical:** `Haupt2025` Also importable as: `DS004951`, `Haupt2024_Braille`, `Haupt2025`. 
Modality: `eeg`; Experiment type: `Learning`; Subject type: `Other`. Subjects: 11; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004951](https://openneuro.org/datasets/ds004951) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004951](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) DOI: [https://doi.org/10.18112/openneuro.ds004951.v1.0.0](https://doi.org/10.18112/openneuro.ds004951.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004951 >>> dataset = DS004951(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Haupt2025']* ### *class* eegdash.dataset.dataset.DS004952(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding * **Study:** `ds004952` (OpenNeuro) * **Author (year):** `Mou2024` * **Canonical:** — Also importable as: `DS004952`, `Mou2024`. Modality: `eeg`. Subjects: 10; recordings: 245; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004952](https://openneuro.org/datasets/ds004952) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004952](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) DOI: [https://doi.org/10.18112/openneuro.ds004952.v1.2.2](https://doi.org/10.18112/openneuro.ds004952.v1.2.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004952 >>> dataset = DS004952(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004973(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios * **Study:** `ds004973` (OpenNeuro) * **Author (year):** `Zhang2024_driving_risk_cognition` * **Canonical:** — Also importable as: `DS004973`, `Zhang2024_driving_risk_cognition`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 222; tasks: 12. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004973](https://openneuro.org/datasets/ds004973) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004973](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) DOI: [https://doi.org/10.18112/openneuro.ds004973.v1.0.1](https://doi.org/10.18112/openneuro.ds004973.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004973 >>> dataset = DS004973(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004977(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CARLA: Adjusted common average referencing for cortico-cortical evoked potential data * **Study:** `ds004977` (OpenNeuro) * **Author (year):** `Huang2024` * **Canonical:** `CARLA` Also importable as: `DS004977`, `Huang2024`, `CARLA`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 4; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004977](https://openneuro.org/datasets/ds004977) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004977](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) DOI: [https://doi.org/10.18112/openneuro.ds004977.v1.2.0](https://doi.org/10.18112/openneuro.ds004977.v1.2.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004977 >>> dataset = DS004977(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CARLA']* ### *class* eegdash.dataset.dataset.DS004980(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data set for an architectural affordances task * **Study:** `ds004980` (OpenNeuro) * **Author (year):** `Wang2024_architectural_affordances` * **Canonical:** — Also importable as: `DS004980`, `Wang2024_architectural_affordances`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004980](https://openneuro.org/datasets/ds004980) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004980](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) DOI: [https://doi.org/10.18112/openneuro.ds004980.v1.0.0](https://doi.org/10.18112/openneuro.ds004980.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004980 >>> dataset = DS004980(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS004993(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS * **Study:** `ds004993` (OpenNeuro) * **Author (year):** `Hamilton2024` * **Canonical:** `WIRED_ICM` Also importable as: `DS004993`, `Hamilton2024`, `WIRED_ICM`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 3; recordings: 3; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004993](https://openneuro.org/datasets/ds004993) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004993](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) DOI: [https://doi.org/10.18112/openneuro.ds004993.v1.1.2](https://doi.org/10.18112/openneuro.ds004993.v1.1.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004993 >>> dataset = DS004993(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WIRED_ICM']* ### *class* eegdash.dataset.dataset.DS004995(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Time-Course of Food Representation in the Human Brain * **Study:** `ds004995` (OpenNeuro) * **Author (year):** `Moerel2024` * **Canonical:** `Moerel2023` Also importable as: `DS004995`, `Moerel2024`, `Moerel2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004995](https://openneuro.org/datasets/ds004995) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004995](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) DOI: [https://doi.org/10.18112/openneuro.ds004995.v1.0.2](https://doi.org/10.18112/openneuro.ds004995.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004995 >>> dataset = DS004995(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moerel2023']* ### *class* eegdash.dataset.dataset.DS004998(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus. 
* **Study:** `ds004998` (OpenNeuro) * **Author (year):** `Rassoulou2024` * **Canonical:** — Also importable as: `DS004998`, `Rassoulou2024`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Parkinson's`. Subjects: 20; recordings: 145; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004998](https://openneuro.org/datasets/ds004998) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004998](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) DOI: [https://doi.org/10.18112/openneuro.ds004998.v1.2.2](https://doi.org/10.18112/openneuro.ds004998.v1.2.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004998 >>> dataset = DS004998(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005007(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming task with questions that begin or end with a wh-interrogative * **Study:** `ds005007` (OpenNeuro) * **Author (year):** `Kitazawa2024` * **Canonical:** `Kitazawa2025` Also importable as: `DS005007`, `Kitazawa2024`, `Kitazawa2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 40; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005007](https://openneuro.org/datasets/ds005007) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005007](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) DOI: [https://doi.org/10.18112/openneuro.ds005007.v1.0.0](https://doi.org/10.18112/openneuro.ds005007.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005007 >>> dataset = DS005007(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kitazawa2025']* ### *class* eegdash.dataset.dataset.DS005021(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tilt Illusion by Phase * **Study:** `ds005021` (OpenNeuro) * **Author (year):** `Williams2024` * **Canonical:** — Also importable as: `DS005021`, `Williams2024`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005021](https://openneuro.org/datasets/ds005021) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005021](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) DOI: [https://doi.org/10.18112/openneuro.ds005021.v1.2.1](https://doi.org/10.18112/openneuro.ds005021.v1.2.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005021 >>> dataset = DS005021(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comparing P300 Flashing paradigms in online typing with language models * **Study:** `ds005028` (OpenNeuro) * **Author (year):** `Chandravadia2024` * **Canonical:** `Chandravadia2022` Also importable as: `DS005028`, `Chandravadia2024`, `Chandravadia2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 11; recordings: 105; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005028](https://openneuro.org/datasets/ds005028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005028](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) DOI: [https://doi.org/10.18112/openneuro.ds005028.v1.0.0](https://doi.org/10.18112/openneuro.ds005028.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005028 >>> dataset = DS005028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chandravadia2022']* ### *class* eegdash.dataset.dataset.DS005034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of theta tACS on working memory * **Study:** `ds005034` (OpenNeuro) * **Author (year):** `Pavlov2024_effect_theta_tACS` * **Canonical:** — Also importable as: `DS005034`, `Pavlov2024_effect_theta_tACS`. Modality: `eeg`. Subjects: 25; recordings: 100; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005034](https://openneuro.org/datasets/ds005034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005034](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) DOI: [https://doi.org/10.18112/openneuro.ds005034.v1.0.1](https://doi.org/10.18112/openneuro.ds005034.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005034 >>> dataset = DS005034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005048(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 40Hz Auditory Entrainment * **Study:** `ds005048` (OpenNeuro) * **Author (year):** `Lahijanian2024` * **Canonical:** — Also importable as: `DS005048`, `Lahijanian2024`. Modality: `eeg`. Subjects: 35; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005048](https://openneuro.org/datasets/ds005048) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005048](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) DOI: [https://doi.org/10.18112/openneuro.ds005048.v1.0.1](https://doi.org/10.18112/openneuro.ds005048.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005048 >>> dataset = DS005048(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005059(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Paired Associates Learning: Memory for Word Pairs in Cued Recall * **Study:** `ds005059` (OpenNeuro) * **Author (year):** `Herrema2024_Paired` * **Canonical:** `PAL` Also importable as: `DS005059`, `Herrema2024_Paired`, `PAL`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 69; recordings: 282; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005059](https://openneuro.org/datasets/ds005059) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005059](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) DOI: [https://doi.org/10.18112/openneuro.ds005059.v1.0.6](https://doi.org/10.18112/openneuro.ds005059.v1.0.6) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005059 >>> dataset = DS005059(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PAL']* ### *class* eegdash.dataset.dataset.DS005065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Heuristics in risky decision-making relate to preferential representation of information MEG data * **Study:** `ds005065` (OpenNeuro) * **Author (year):** `Russek2024` * **Canonical:** — Also importable as: `DS005065`, `Russek2024`. 
Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 275; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
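The query-merging contract described in the Notes can be illustrated with a small, self-contained sketch. This is not the library's implementation: the field names in `ALLOWED_QUERY_FIELDS` below are illustrative placeholders, and only the documented behavior is modeled, namely that the user-supplied MongoDB-style query is AND-ed with the fixed `dataset` filter and must not itself contain the key `dataset`.

```python
# Illustrative sketch (not eegdash source code) of how a user query is
# merged with the dataset selection, per the Notes above.
ALLOWED_QUERY_FIELDS = {"subject", "task", "session"}  # placeholder subset


def merge_query(dataset_id, user_query=None):
    """Combine a fixed dataset filter with an optional user query."""
    user_query = dict(user_query or {})
    if "dataset" in user_query:
        raise ValueError("query must not contain the key 'dataset'")
    unknown = set(user_query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    # The dataset filter is applied first; user filters are AND-ed on top.
    return {"dataset": dataset_id, **user_query}


merged = merge_query("ds005065", {"subject": {"$in": ["01", "02"]}})
```

Passing such a dict as the `query` argument would then restrict which records are fetched for the dataset.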
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005065](https://openneuro.org/datasets/ds005065) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005065](https://nemar.org/dataexplorer/detail?dataset_id=ds005065) DOI: [https://doi.org/10.18112/openneuro.ds005065.v1.0.0](https://doi.org/10.18112/openneuro.ds005065.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005065 >>> dataset = DS005065(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005079(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Effects of Directed Therapeutic Intent on Live and Damaged Cells * **Study:** `ds005079` (OpenNeuro) * **Author (year):** `Cohen2024` * **Canonical:** — Also importable as: `DS005079`, `Cohen2024`. Modality: `eeg`. Subjects: 1; recordings: 60; tasks: 15. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005079](https://openneuro.org/datasets/ds005079) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005079](https://nemar.org/dataexplorer/detail?dataset_id=ds005079) DOI: [https://doi.org/10.18112/openneuro.ds005079.v2.0.0](https://doi.org/10.18112/openneuro.ds005079.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005079 >>> dataset = DS005079(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005083(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy * **Study:** `ds005083` (OpenNeuro) * **Author (year):** `Yang2024` * **Canonical:** — Also importable as: `DS005083`, `Yang2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 61; recordings: 1357; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005083](https://openneuro.org/datasets/ds005083) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005083](https://nemar.org/dataexplorer/detail?dataset_id=ds005083) DOI: [https://doi.org/10.18112/openneuro.ds005083.v1.0.0](https://doi.org/10.18112/openneuro.ds005083.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005083 >>> dataset = DS005083(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005087(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) rapid-hemifield-object-eeg * **Study:** `ds005087` (OpenNeuro) * **Author (year):** `Robinson2024_rapid` * **Canonical:** — Also importable as: `DS005087`, `Robinson2024_rapid`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 60; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005087](https://openneuro.org/datasets/ds005087) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005087](https://nemar.org/dataexplorer/detail?dataset_id=ds005087) DOI: [https://doi.org/10.18112/openneuro.ds005087.v1.0.1](https://doi.org/10.18112/openneuro.ds005087.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005087 >>> dataset = DS005087(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005089(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Proactive selective attention across competition contexts * **Study:** `ds005089` (OpenNeuro) * **Author (year):** `AguadoLopez2024` * **Canonical:** — Also importable as: `DS005089`, `AguadoLopez2024`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005089](https://openneuro.org/datasets/ds005089) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005089](https://nemar.org/dataexplorer/detail?dataset_id=ds005089) DOI: [https://doi.org/10.18112/openneuro.ds005089.v1.0.1](https://doi.org/10.18112/openneuro.ds005089.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005089 >>> dataset = DS005089(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STERNBERG DIFFICULT * **Study:** `ds005095` (OpenNeuro) * **Author (year):** `Zhozhikashvili2024` * **Canonical:** — Also importable as: `DS005095`, `Zhozhikashvili2024`. Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005095](https://openneuro.org/datasets/ds005095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005095](https://nemar.org/dataexplorer/detail?dataset_id=ds005095) DOI: [https://doi.org/10.18112/openneuro.ds005095.v1.0.2](https://doi.org/10.18112/openneuro.ds005095.v1.0.2) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS005095 >>> dataset = DS005095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 200 Objects Infants EEG * **Study:** `ds005106` (OpenNeuro) * **Author (year):** `Grootswagers2024` * **Canonical:** — Also importable as: `DS005106`, `Grootswagers2024`. Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005106](https://openneuro.org/datasets/ds005106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005106](https://nemar.org/dataexplorer/detail?dataset_id=ds005106) DOI: [https://doi.org/10.18112/openneuro.ds005106.v1.5.0](https://doi.org/10.18112/openneuro.ds005106.v1.5.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005106 >>> dataset = DS005106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FACE-DEC * **Study:** `ds005107` (OpenNeuro) * **Author (year):** `Xu2024_DEC` * **Canonical:** — Also importable as: `DS005107`, `Xu2024_DEC`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 350; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005107](https://openneuro.org/datasets/ds005107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005107](https://nemar.org/dataexplorer/detail?dataset_id=ds005107) DOI: [https://doi.org/10.18112/openneuro.ds005107.v2.0.0](https://doi.org/10.18112/openneuro.ds005107.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005107 >>> dataset = DS005107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: DPX Cog Ctl Task in Acute Mild TBI * **Study:** `ds005114` (OpenNeuro) * **Author (year):** `Cavanagh2024` * **Canonical:** — Also importable as: `DS005114`, `Cavanagh2024`. Modality: `eeg`. Subjects: 91; recordings: 223; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005114](https://openneuro.org/datasets/ds005114) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005114](https://nemar.org/dataexplorer/detail?dataset_id=ds005114) DOI: [https://doi.org/10.18112/openneuro.ds005114.v1.0.0](https://doi.org/10.18112/openneuro.ds005114.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005114 >>> dataset = DS005114(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Siefert2024 * **Study:** `ds005121` (OpenNeuro) * **Author (year):** `Siefert2024` * **Canonical:** — Also importable as: `DS005121`, `Siefert2024`. Modality: `eeg`. Subjects: 34; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005121](https://openneuro.org/datasets/ds005121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005121](https://nemar.org/dataexplorer/detail?dataset_id=ds005121) DOI: [https://doi.org/10.18112/openneuro.ds005121.v1.0.2](https://doi.org/10.18112/openneuro.ds005121.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005121 >>> dataset = DS005121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Evoked responses to elevated sounds * **Study:** `ds005131` (OpenNeuro) * **Author (year):** `Bialas2024` * **Canonical:** — Also importable as: `DS005131`, `Bialas2024`. Modality: `eeg`. Subjects: 58; recordings: 63; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005131](https://openneuro.org/datasets/ds005131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005131](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) DOI: [https://doi.org/10.18112/openneuro.ds005131.v1.0.1](https://doi.org/10.18112/openneuro.ds005131.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005131 >>> dataset = DS005131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulation evoking visual effects * **Study:** `ds005169` (OpenNeuro) * **Author (year):** `Barborica2024` * **Canonical:** — Also importable as: `DS005169`, `Barborica2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. 
Subjects: 20; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005169](https://openneuro.org/datasets/ds005169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005169](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) DOI: [https://doi.org/10.18112/openneuro.ds005169.v1.0.0](https://doi.org/10.18112/openneuro.ds005169.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005169 >>> dataset = DS005169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco * **Study:** `ds005170` (OpenNeuro) * **Author (year):** `Zhang2024_Chisco` * **Canonical:** `Chisco` Also importable as: `DS005170`, `Zhang2024_Chisco`, `Chisco`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 225; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
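The `data_dir` attribute documented above is the cache directory joined with the dataset identifier. A minimal sketch of that layout, with illustrative paths only:

```python
from pathlib import Path

# Illustrative only: data_dir is documented as `cache_dir / dataset_id`.
cache_dir = Path("./data")
dataset_id = "ds005170"
data_dir = cache_dir / dataset_id  # -> data/ds005170
```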
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005170](https://openneuro.org/datasets/ds005170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005170](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) DOI: [https://doi.org/10.18112/openneuro.ds005170.v1.1.2](https://doi.org/10.18112/openneuro.ds005170.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005170 >>> dataset = DS005170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chisco']* ### *class* eegdash.dataset.dataset.DS005178(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2023 (EESM23) * **Study:** `ds005178` (OpenNeuro) * **Author (year):** `Tabar2024` * **Canonical:** `EESM23` Also importable as: `DS005178`, `Tabar2024`, `EESM23`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 10; recordings: 140; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005178](https://openneuro.org/datasets/ds005178) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005178](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) DOI: [https://doi.org/10.18112/openneuro.ds005178.v1.0.0](https://doi.org/10.18112/openneuro.ds005178.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005178 >>> dataset = DS005178(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM23']* ### *class* eegdash.dataset.dataset.DS005185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2019 (EESM19) * **Study:** `ds005185` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Ear_Sleep_Monitoring` * **Canonical:** `EESM19` Also importable as: `DS005185`, `Mikkelsen2024_Ear_Sleep_Monitoring`, `EESM19`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 20; recordings: 356; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005185](https://openneuro.org/datasets/ds005185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005185](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) DOI: [https://doi.org/10.18112/openneuro.ds005185.v1.0.2](https://doi.org/10.18112/openneuro.ds005185.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005185 >>> dataset = DS005185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM19']* ### *class* eegdash.dataset.dataset.DS005189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Search Superiority Recollection Familiarity * **Study:** `ds005189` (OpenNeuro) * **Author (year):** `Helbing2024` * **Canonical:** — Also importable as: `DS005189`, `Helbing2024`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005189](https://openneuro.org/datasets/ds005189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005189](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) DOI: [https://doi.org/10.18112/openneuro.ds005189.v1.0.1](https://doi.org/10.18112/openneuro.ds005189.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005189 >>> dataset = DS005189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Surrey cEEGrid sleep data set * **Study:** `ds005207` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Surrey_cEEGrid_sleep` * **Canonical:** `Surrey_cEEGrid_sleep` Also importable as: `DS005207`, `Mikkelsen2024_Surrey_cEEGrid_sleep`, `Surrey_cEEGrid_sleep`. Modality: `eeg`. Subjects: 20; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005207](https://openneuro.org/datasets/ds005207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005207](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) DOI: [https://doi.org/10.18112/openneuro.ds005207.v1.0.0](https://doi.org/10.18112/openneuro.ds005207.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005207 >>> dataset = DS005207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Surrey_cEEGrid_sleep']* ### *class* eegdash.dataset.dataset.DS005241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis * **Study:** `ds005241` (OpenNeuro) * **Author (year):** `Rodriguez2024` * **Canonical:** `NeuroMorph`, `neuromorph` Also importable as: `DS005241`, `Rodriguez2024`, `NeuroMorph`, `neuromorph`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 24; recordings: 117; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005241](https://openneuro.org/datasets/ds005241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005241](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) DOI: [https://doi.org/10.18112/openneuro.ds005241.v1.1.0](https://doi.org/10.18112/openneuro.ds005241.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005241 >>> dataset = DS005241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NeuroMorph', 'neuromorph']* ### *class* eegdash.dataset.dataset.DS005261(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gloups_MEG * **Study:** `ds005261` (OpenNeuro) * **Author (year):** `Todorovic2024` * **Canonical:** `Todorovic2023` Also importable as: `DS005261`, `Todorovic2024`, `Todorovic2023`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 17; recordings: 128; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005261](https://openneuro.org/datasets/ds005261) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005261](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) DOI: [https://doi.org/10.18112/openneuro.ds005261.v3.0.0](https://doi.org/10.18112/openneuro.ds005261.v3.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005261 >>> dataset = DS005261(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Todorovic2023']* ### *class* eegdash.dataset.dataset.DS005262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ArEEG: Arabic Inner Speech EEG dataset * **Study:** `ds005262` (OpenNeuro) * **Author (year):** `Metwalli2024` * **Canonical:** `ArEEG` Also importable as: `DS005262`, `Metwalli2024`, `ArEEG`. Modality: `eeg`. Subjects: 12; recordings: 186; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005262](https://openneuro.org/datasets/ds005262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005262](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) DOI: [https://doi.org/10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005262 >>> dataset = DS005262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ArEEG']* ### *class* eegdash.dataset.dataset.DS005273(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural representation of consciously seen and unseen information * **Study:** `ds005273` (OpenNeuro) * **Author (year):** `Esteban2024` * **Canonical:** — Also importable as: `DS005273`, `Esteban2024`. 
Modality: `eeg`. Subjects: 33; recordings: 33; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005273](https://openneuro.org/datasets/ds005273) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005273](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) DOI: [https://doi.org/10.18112/openneuro.ds005273.v1.0.0](https://doi.org/10.18112/openneuro.ds005273.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005273 >>> dataset = DS005273(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005274(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UV_EEG * **Study:** `ds005274` (OpenNeuro) * **Author (year):** `Ito2024` * **Canonical:** — Also importable as: `DS005274`, `Ito2024`. 
Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 22; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005274](https://openneuro.org/datasets/ds005274) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005274](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) DOI: [https://doi.org/10.18112/openneuro.ds005274.v1.0.0](https://doi.org/10.18112/openneuro.ds005274.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005274 >>> dataset = DS005274(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture-Word Interference Dataset * **Study:** `ds005279` (OpenNeuro) * **Author (year):** `Wei2024` * **Canonical:** — Also importable as: `DS005279`, `Wei2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 90; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005279](https://openneuro.org/datasets/ds005279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005279](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) DOI: [https://doi.org/10.18112/openneuro.ds005279.v1.0.3](https://doi.org/10.18112/openneuro.ds005279.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005279 >>> dataset = DS005279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005280(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 223 By BP * **Study:** `ds005280` (OpenNeuro) * **Author (year):** `Xiangyue2024_223_BP` * **Canonical:** — Also importable as: `DS005280`, `Xiangyue2024_223_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 223; recordings: 669; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005280](https://openneuro.org/datasets/ds005280) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005280](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) DOI: [https://doi.org/10.18112/openneuro.ds005280.v1.0.0](https://doi.org/10.18112/openneuro.ds005280.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005280 >>> dataset = DS005280(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 26 By Biosemi * **Study:** `ds005284` (OpenNeuro) * **Author (year):** `Xiangyue2024_26_Biosemi` * **Canonical:** — Also importable as: `DS005284`, `Xiangyue2024_26_Biosemi`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005284](https://openneuro.org/datasets/ds005284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005284](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) DOI: [https://doi.org/10.18112/openneuro.ds005284.v1.0.0](https://doi.org/10.18112/openneuro.ds005284.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005284 >>> dataset = DS005284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005285(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By ANT * **Study:** `ds005285` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_ANT` * **Canonical:** — Also importable as: `DS005285`, `Xiangyue2024_29_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005285](https://openneuro.org/datasets/ds005285) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005285](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) DOI: [https://doi.org/10.18112/openneuro.ds005285.v1.0.0](https://doi.org/10.18112/openneuro.ds005285.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005285 >>> dataset = DS005285(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005286(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 30 By ANT * **Study:** `ds005286` (OpenNeuro) * **Author (year):** `Xiangyue2024_30_ANT` * **Canonical:** — Also importable as: `DS005286`, `Xiangyue2024_30_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005286](https://openneuro.org/datasets/ds005286) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005286](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) DOI: [https://doi.org/10.18112/openneuro.ds005286.v1.0.0](https://doi.org/10.18112/openneuro.ds005286.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005286 >>> dataset = DS005286(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005289(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 39 By BP * **Study:** `ds005289` (OpenNeuro) * **Author (year):** `Xiangyue2024_39_BP` * **Canonical:** — Also importable as: `DS005289`, `Xiangyue2024_39_BP`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 39; recordings: 195; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005289](https://openneuro.org/datasets/ds005289) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005289](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) DOI: [https://doi.org/10.18112/openneuro.ds005289.v1.0.0](https://doi.org/10.18112/openneuro.ds005289.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005289 >>> dataset = DS005289(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005291(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 65 By ANT * **Study:** `ds005291` (OpenNeuro) * **Author (year):** `Xiangyue2024_65_ANT` * **Canonical:** — Also importable as: `DS005291`, `Xiangyue2024_65_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 65; recordings: 65; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005291](https://openneuro.org/datasets/ds005291) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005291](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) DOI: [https://doi.org/10.18112/openneuro.ds005291.v1.0.0](https://doi.org/10.18112/openneuro.ds005291.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005291 >>> dataset = DS005291(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005292(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 142 by Biosemi * **Study:** `ds005292` (OpenNeuro) * **Author (year):** `Xiangyue2024_142_Biosemi` * **Canonical:** — Also importable as: `DS005292`, `Xiangyue2024_142_Biosemi`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 142; recordings: 426; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005292](https://openneuro.org/datasets/ds005292) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005292](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) DOI: [https://doi.org/10.18112/openneuro.ds005292.v1.0.0](https://doi.org/10.18112/openneuro.ds005292.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005292 >>> dataset = DS005292(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005293(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 95 By BP * **Study:** `ds005293` (OpenNeuro) * **Author (year):** `Xiangyue2024_95_BP` * **Canonical:** — Also importable as: `DS005293`, `Xiangyue2024_95_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 95; recordings: 570; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005293](https://openneuro.org/datasets/ds005293) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005293](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) DOI: [https://doi.org/10.18112/openneuro.ds005293.v1.0.0](https://doi.org/10.18112/openneuro.ds005293.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005293 >>> dataset = DS005293(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005296(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study * **Study:** `ds005296` (OpenNeuro) * **Author (year):** `Emmorey2024` * **Canonical:** — Also importable as: `DS005296`, `Emmorey2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005296](https://openneuro.org/datasets/ds005296) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005296](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) DOI: [https://doi.org/10.18112/openneuro.ds005296.v1.0.1](https://doi.org/10.18112/openneuro.ds005296.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005296 >>> dataset = DS005296(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005305(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Resting-state Microstates Correlates of Executive Functions * **Study:** `ds005305` (OpenNeuro) * **Author (year):** `Quentin2024` * **Canonical:** — Also importable as: `DS005305`, `Quentin2024`. Modality: `eeg`. Subjects: 165; recordings: 165; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005305](https://openneuro.org/datasets/ds005305) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005305](https://nemar.org/dataexplorer/detail?dataset_id=ds005305) DOI: [https://doi.org/10.18112/openneuro.ds005305.v1.0.1](https://doi.org/10.18112/openneuro.ds005305.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005305 >>> dataset = DS005305(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005307(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Laser-evoked potentials in the human spinal cord and cortex * **Study:** `ds005307` (OpenNeuro) * **Author (year):** `Nierula2024` * **Canonical:** `Nierula2019` Also importable as: `DS005307`, `Nierula2024`, `Nierula2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 7; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005307](https://openneuro.org/datasets/ds005307) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005307](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) DOI: [https://doi.org/10.18112/openneuro.ds005307.v1.0.1](https://doi.org/10.18112/openneuro.ds005307.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005307 >>> dataset = DS005307(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Nierula2019']* ### *class* eegdash.dataset.dataset.DS005340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech * **Study:** `ds005340` (OpenNeuro) * **Author (year):** `Polonenko2024_Fundamental` * **Canonical:** — Also importable as: `DS005340`, `Polonenko2024_Fundamental`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005340](https://openneuro.org/datasets/ds005340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005340](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) DOI: [https://doi.org/10.18112/openneuro.ds005340.v1.0.4](https://doi.org/10.18112/openneuro.ds005340.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005340 >>> dataset = DS005340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data offline and online during motor imagery for standing and sitting * **Study:** `ds005342` (OpenNeuro) * **Author (year):** `TrianaGuzman2024` * **Canonical:** — Also importable as: `DS005342`, `TrianaGuzman2024`. Modality: `eeg`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005342](https://openneuro.org/datasets/ds005342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005342](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) DOI: [https://doi.org/10.18112/openneuro.ds005342.v1.0.3](https://doi.org/10.18112/openneuro.ds005342.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005342 >>> dataset = DS005342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates and Attention * **Study:** `ds005343` (OpenNeuro) * **Author (year):** `Bagdasarov2024` * **Canonical:** — Also importable as: `DS005343`, `Bagdasarov2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Development`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005343](https://openneuro.org/datasets/ds005343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005343](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) DOI: [https://doi.org/10.18112/openneuro.ds005343.v1.0.0](https://doi.org/10.18112/openneuro.ds005343.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005343 >>> dataset = DS005343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset * **Study:** `ds005345` (OpenNeuro) * **Author (year):** `Ma2024` * **Canonical:** `LPP` Also importable as: `DS005345`, `Ma2024`, `LPP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005345](https://openneuro.org/datasets/ds005345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005345](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) DOI: [https://doi.org/10.18112/openneuro.ds005345.v1.0.1](https://doi.org/10.18112/openneuro.ds005345.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005345 >>> dataset = DS005345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LPP']* ### *class* eegdash.dataset.dataset.DS005346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic fMRI and MEG recordings during viewing of a reality TV show * **Study:** `ds005346` (OpenNeuro) * **Author (year):** `Li2024_Naturalistic_fMRI_viewing` * **Canonical:** — Also importable as: `DS005346`, `Li2024_Naturalistic_fMRI_viewing`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 90; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005346](https://openneuro.org/datasets/ds005346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005346](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) DOI: [https://doi.org/10.18112/openneuro.ds005346.v1.0.5](https://doi.org/10.18112/openneuro.ds005346.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS005346 >>> dataset = DS005346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG: Major Depression & Probabilistic Learning Task * **Study:** `ds005356` (OpenNeuro) * **Author (year):** `DS5356_MajorDepression` * **Canonical:** — Also importable as: `DS005356`, `DS5356_MajorDepression`. 
Modality: `meg`; Experiment type: `Learning`; Subject type: `Depression`. Subjects: 85; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005356](https://openneuro.org/datasets/ds005356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005356](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) DOI: [https://doi.org/10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0) ### Examples ```pycon >>> from eegdash.dataset import DS005356 >>> dataset = DS005356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005363(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Object recognition in healthy aging (ORHA) - EEG * **Study:** `ds005363` (OpenNeuro) * **Author (year):** `Haupt2024_Object` * **Canonical:** `ORHA` Also importable as: `DS005363`, `Haupt2024_Object`, `ORHA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005363](https://openneuro.org/datasets/ds005363) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005363](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) DOI: [https://doi.org/10.18112/openneuro.ds005363.v1.0.0](https://doi.org/10.18112/openneuro.ds005363.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005363 >>> dataset = DS005363(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ORHA']* ### *class* eegdash.dataset.dataset.DS005383(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments * **Study:** `ds005383` (OpenNeuro) * **Author (year):** `Bai2024` * **Canonical:** `TMNRED` Also importable as: `DS005383`, `Bai2024`, `TMNRED`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005383](https://openneuro.org/datasets/ds005383) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005383](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) DOI: [https://doi.org/10.18112/openneuro.ds005383.v1.0.0](https://doi.org/10.18112/openneuro.ds005383.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005383 >>> dataset = DS005383(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TMNRED']* ### *class* eegdash.dataset.dataset.DS005385(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up * **Study:** `ds005385` (OpenNeuro) * **Author (year):** `Wascher2024` * **Canonical:** — Also importable as: `DS005385`, `Wascher2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 608; recordings: 3264; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005385](https://openneuro.org/datasets/ds005385) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005385](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) DOI: [https://doi.org/10.18112/openneuro.ds005385.v1.0.3](https://doi.org/10.18112/openneuro.ds005385.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005385 >>> dataset = DS005385(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005397(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Affordances of stairs * **Study:** `ds005397` (OpenNeuro) * **Author (year):** `Hilton2024` * **Canonical:** — Also importable as: `DS005397`, `Hilton2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005397](https://openneuro.org/datasets/ds005397) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005397](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) DOI: [https://doi.org/10.18112/openneuro.ds005397.v1.0.4](https://doi.org/10.18112/openneuro.ds005397.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005397 >>> dataset = DS005397(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA) * **Study:** `ds005398` (OpenNeuro) * **Author (year):** `Zhang2024_Open_Pediatric_Wayne` * **Canonical:** — Also importable as: `DS005398`, `Zhang2024_Open_Pediatric_Wayne`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 185; recordings: 185; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005398](https://openneuro.org/datasets/ds005398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005398](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) DOI: [https://doi.org/10.18112/openneuro.ds005398.v1.1.1](https://doi.org/10.18112/openneuro.ds005398.v1.1.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005398 >>> dataset = DS005398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005403(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Auditory Feedback EEG/EGG * **Study:** `ds005403` (OpenNeuro) * **Author (year):** `Veillette2024` * **Canonical:** `Veillette2019` Also importable as: `DS005403`, `Veillette2024`, `Veillette2019`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005403](https://openneuro.org/datasets/ds005403) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005403](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) DOI: [https://doi.org/10.18112/openneuro.ds005403.v1.0.1](https://doi.org/10.18112/openneuro.ds005403.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005403 >>> dataset = DS005403(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Veillette2019']* ### *class* eegdash.dataset.dataset.DS005406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG frequency tagging reveals the integration of dissimilar observed actions * **Study:** `ds005406` (OpenNeuro) * **Author (year):** `Formica2024` * **Canonical:** `Formica2025` Also importable as: `DS005406`, `Formica2024`, `Formica2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005406](https://openneuro.org/datasets/ds005406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005406](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) DOI: [https://doi.org/10.18112/openneuro.ds005406.v1.0.0](https://doi.org/10.18112/openneuro.ds005406.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005406 >>> dataset = DS005406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Formica2025']* ### *class* eegdash.dataset.dataset.DS005407(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005407` (OpenNeuro) * **Author (year):** `Polonenko2024_effect` * **Canonical:** — Also importable as: `DS005407`, `Polonenko2024_effect`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005407](https://openneuro.org/datasets/ds005407) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005407](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) DOI: [https://doi.org/10.18112/openneuro.ds005407.v1.0.1](https://doi.org/10.18112/openneuro.ds005407.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005407 >>> dataset = DS005407(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005408` (OpenNeuro) * **Author (year):** `Polonenko2024_effect_speech` * **Canonical:** — Also importable as: `DS005408`, `Polonenko2024_effect_speech`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005408](https://openneuro.org/datasets/ds005408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005408](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) DOI: [https://doi.org/10.18112/openneuro.ds005408.v1.0.0](https://doi.org/10.18112/openneuro.ds005408.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005408 >>> dataset = DS005408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005410(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Semantic_conditioning * **Study:** `ds005410` (OpenNeuro) * **Author (year):** `Pavlov2024_Semantic_conditioning` * **Canonical:** — Also importable as: `DS005410`, `Pavlov2024_Semantic_conditioning`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 81; recordings: 81; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005410](https://openneuro.org/datasets/ds005410) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005410](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) DOI: [https://doi.org/10.18112/openneuro.ds005410.v1.0.1](https://doi.org/10.18112/openneuro.ds005410.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005410 >>> dataset = DS005410(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005411(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall of Word Lists with Repeated Items * **Study:** `ds005411` (OpenNeuro) * **Author (year):** `Herrema2024_Free` * **Canonical:** — Also importable as: `DS005411`, `Herrema2024_Free`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 47; recordings: 193; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005411](https://openneuro.org/datasets/ds005411) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005411](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) DOI: [https://doi.org/10.18112/openneuro.ds005411.v1.0.0](https://doi.org/10.18112/openneuro.ds005411.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005411 >>> dataset = DS005411(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005415(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers * **Study:** `ds005415` (OpenNeuro) * **Author (year):** `Rockhill2024` * **Canonical:** — Also importable as: `DS005415`, `Rockhill2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005415](https://openneuro.org/datasets/ds005415) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005415](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) DOI: [https://doi.org/10.18112/openneuro.ds005415.v1.0.0](https://doi.org/10.18112/openneuro.ds005415.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005415 >>> dataset = DS005415(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005416(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fatigue Characterization of EEG under Mixed Reality Stereo Vision * **Study:** `ds005416` (OpenNeuro) * **Author (year):** `Wu2024` * **Canonical:** — Also importable as: `DS005416`, `Wu2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005416](https://openneuro.org/datasets/ds005416) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005416](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) DOI: [https://doi.org/10.18112/openneuro.ds005416.v1.0.1](https://doi.org/10.18112/openneuro.ds005416.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005416 >>> dataset = DS005416(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old * **Study:** `ds005420` (OpenNeuro) * **Author (year):** `Gama2024` * **Canonical:** `Gama2019` Also importable as: `DS005420`, `Gama2024`, `Gama2019`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. 
Subjects: 37; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
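The `data_dir` attribute documented above is simply the cache root joined with the dataset identifier (`cache_dir / dataset_id`). A minimal `pathlib` sketch of that convention; the helper name `local_data_dir` is an assumption for illustration, not an eegdash function:

```python
from pathlib import Path

# Sketch of the documented data_dir convention:
# data_dir = cache_dir / dataset_id.
# local_data_dir is an illustrative helper, not part of eegdash.
def local_data_dir(cache_dir: str, dataset_id: str) -> Path:
    return Path(cache_dir) / dataset_id

print(local_data_dir("./data", "ds005420"))
```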
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005420](https://openneuro.org/datasets/ds005420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005420](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) DOI: [https://doi.org/10.18112/openneuro.ds005420.v1.0.0](https://doi.org/10.18112/openneuro.ds005420.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005420 >>> dataset = DS005420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gama2019']* ### *class* eegdash.dataset.dataset.DS005429(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm) * **Study:** `ds005429` (OpenNeuro) * **Author (year):** `Rutiku2024` * **Canonical:** — Also importable as: `DS005429`, `Rutiku2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 61; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005429](https://openneuro.org/datasets/ds005429) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005429](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) DOI: [https://doi.org/10.18112/openneuro.ds005429.v1.0.0](https://doi.org/10.18112/openneuro.ds005429.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005429 >>> dataset = DS005429(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STReEF * **Study:** `ds005448` (OpenNeuro) * **Author (year):** `Jelsma2024` * **Canonical:** `STReEF` Also importable as: `DS005448`, `Jelsma2024`, `STReEF`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 13; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005448](https://openneuro.org/datasets/ds005448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005448](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) DOI: [https://doi.org/10.18112/openneuro.ds005448.v1.0.0](https://doi.org/10.18112/openneuro.ds005448.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005448 >>> dataset = DS005448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['STReEF']* ### *class* eegdash.dataset.dataset.DS005473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By BP * **Study:** `ds005473` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_BP` * **Canonical:** `Zhao2024` Also importable as: `DS005473`, `Xiangyue2024_29_BP`, `Zhao2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 29; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005473](https://openneuro.org/datasets/ds005473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005473](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) DOI: [https://doi.org/10.18112/openneuro.ds005473.v1.0.0](https://doi.org/10.18112/openneuro.ds005473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005473 >>> dataset = DS005473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhao2024']* ### *class* eegdash.dataset.dataset.DS005486(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PREDICT * **Study:** `ds005486` (OpenNeuro) * **Author (year):** `Chowdhury2024` * **Canonical:** — Also importable as: `DS005486`, `Chowdhury2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 159; recordings: 445; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005486](https://openneuro.org/datasets/ds005486) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005486](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) DOI: [https://doi.org/10.18112/openneuro.ds005486.v1.0.1](https://doi.org/10.18112/openneuro.ds005486.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005486 >>> dataset = DS005486(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005489(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005489` (OpenNeuro) * **Author (year):** `Herrema2024_Free_Recall` * **Canonical:** — Also importable as: `DS005489`, `Herrema2024_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 37; recordings: 154; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005489](https://openneuro.org/datasets/ds005489) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005489](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) DOI: [https://doi.org/10.18112/openneuro.ds005489.v1.0.3](https://doi.org/10.18112/openneuro.ds005489.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005489 >>> dataset = DS005489(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005491(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005491` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized` * **Canonical:** `catFR_open_loop`, `RAM_catFR`, `catFR_stim` Also importable as: `DS005491`, `Herrema2024_Categorized`, `catFR_open_loop`, `RAM_catFR`, `catFR_stim`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 19; recordings: 51; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005491](https://openneuro.org/datasets/ds005491) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005491](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) DOI: [https://doi.org/10.18112/openneuro.ds005491.v1.0.0](https://doi.org/10.18112/openneuro.ds005491.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005491 >>> dataset = DS005491(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_open_loop', 'RAM_catFR', 'catFR_stim']* ### *class* eegdash.dataset.dataset.DS005494(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval * **Study:** `ds005494` (OpenNeuro) * **Author (year):** `Herrema2024_Cued` * **Canonical:** `Herrema2024` Also importable as: `DS005494`, `Herrema2024_Cued`, `Herrema2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 20; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005494](https://openneuro.org/datasets/ds005494) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005494](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) DOI: [https://doi.org/10.18112/openneuro.ds005494.v1.0.1](https://doi.org/10.18112/openneuro.ds005494.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005494 >>> dataset = DS005494(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Herrema2024']* ### *class* eegdash.dataset.dataset.DS005505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 * **Study:** `ds005505` (OpenNeuro) * **Author (year):** `Shirazi2024_R1` * **Canonical:** `HBN_r1` Also importable as: `DS005505`, `Shirazi2024_R1`, `HBN_r1`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005505](https://openneuro.org/datasets/ds005505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005505](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005505 >>> dataset = DS005505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1']* ### *class* eegdash.dataset.dataset.DS005506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 * **Study:** `ds005506` (OpenNeuro) * **Author (year):** `Shirazi2024_R2` * **Canonical:** `HBN_r2` Also importable as: `DS005506`, `Shirazi2024_R2`, `HBN_r2`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005506](https://openneuro.org/datasets/ds005506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005506](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005506 >>> dataset = DS005506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2']* ### *class* eegdash.dataset.dataset.DS005507(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 * **Study:** `ds005507` (OpenNeuro) * **Author (year):** `Shirazi2024_R3` * **Canonical:** `HBN_r3` Also importable as: `DS005507`, `Shirazi2024_R3`, `HBN_r3`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005507](https://openneuro.org/datasets/ds005507) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005507](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005507 >>> dataset = DS005507(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3']* ### *class* eegdash.dataset.dataset.DS005508(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 * **Study:** `ds005508` (OpenNeuro) * **Author (year):** `Shirazi2024_R4` * **Canonical:** `HBN_r4` Also importable as: `DS005508`, `Shirazi2024_R4`, `HBN_r4`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 324; recordings: 3342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005508](https://openneuro.org/datasets/ds005508) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005508](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005508 >>> dataset = DS005508(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4']* ### *class* eegdash.dataset.dataset.DS005509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 * **Study:** `ds005509` (OpenNeuro) * **Author (year):** `Shirazi2024_R5` * **Canonical:** `HBN_r5` Also importable as: `DS005509`, `Shirazi2024_R5`, `HBN_r5`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005509](https://openneuro.org/datasets/ds005509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005509](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005509 >>> dataset = DS005509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5']* ### *class* eegdash.dataset.dataset.DS005510(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 * **Study:** `ds005510` (OpenNeuro) * **Author (year):** `Shirazi2024_R6` * **Canonical:** `HBN_r6` Also importable as: `DS005510`, `Shirazi2024_R6`, `HBN_r6`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 135; recordings: 1227; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005510](https://openneuro.org/datasets/ds005510) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005510](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005510 >>> dataset = DS005510(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6']* ### *class* eegdash.dataset.dataset.DS005512(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 * **Study:** `ds005512` (OpenNeuro) * **Author (year):** `Shirazi2024_R8` * **Canonical:** `HBN_r8` Also importable as: `DS005512`, `Shirazi2024_R8`, `HBN_r8`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005512](https://openneuro.org/datasets/ds005512) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005512](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005512 >>> dataset = DS005512(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8']* ### *class* eegdash.dataset.dataset.DS005514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 * **Study:** `ds005514` (OpenNeuro) * **Author (year):** `Shirazi2024_R9` * **Canonical:** `HBN_r9` Also importable as: `DS005514`, `Shirazi2024_R9`, `HBN_r9`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005514](https://openneuro.org/datasets/ds005514) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005514](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005514 >>> dataset = DS005514(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9']* ### *class* eegdash.dataset.dataset.DS005515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 * **Study:** `ds005515` (OpenNeuro) * **Author (year):** `Shirazi2024_R10` * **Canonical:** `HBN_r10` Also importable as: `DS005515`, `Shirazi2024_R10`, `HBN_r10`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005515](https://openneuro.org/datasets/ds005515) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005515](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) DOI: [https://doi.org/10.18112/openneuro.ds005515.v1.0.1](https://doi.org/10.18112/openneuro.ds005515.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005515 >>> dataset = DS005515(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10']* ### *class* eegdash.dataset.dataset.DS005516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 * **Study:** `ds005516` (OpenNeuro) * **Author (year):** `Shirazi2024_R11` * **Canonical:** `HBN_r11` Also importable as: `DS005516`, `Shirazi2024_R11`, `HBN_r11`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 430; recordings: 3397; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
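To make the "MongoDB-style filters" mentioned in the Notes concrete, here is a small client-side sketch of how such a filter selects metadata records. It supports only plain equality and the `$in` operator; the records and field names are illustrative, not actual EEGDash metadata, and the real matching is done server-side:

```python
# Hypothetical sketch of MongoDB-style matching on metadata records.
# Only equality and "$in" are implemented; real queries run in the database.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds005516", "subject": "sub-01", "task": "RestingState"},
    {"dataset": "ds005516", "subject": "sub-02", "task": "Video"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["sub-01"]}})]
# hits contains only the sub-01 record
```

Only fields listed in `ALLOWED_QUERY_FIELDS` are accepted by the real API; anything else is rejected before a query like this would run.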
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005516](https://openneuro.org/datasets/ds005516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005516](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) DOI: [https://doi.org/10.18112/openneuro.ds005516.v1.0.1](https://doi.org/10.18112/openneuro.ds005516.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005516 >>> dataset = DS005516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11']* ### *class* eegdash.dataset.dataset.DS005520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Research data supporting ‘EEG recording during playing MOBA game’ * **Study:** `ds005520` (OpenNeuro) * **Author (year):** `Li2024_Research_supporting_playing` * **Canonical:** — Also importable as: `DS005520`, `Li2024_Research_supporting_playing`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 69; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005520](https://openneuro.org/datasets/ds005520) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005520](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) DOI: [https://doi.org/10.18112/openneuro.ds005520.v1.0.1](https://doi.org/10.18112/openneuro.ds005520.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005520 >>> dataset = DS005520(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Navigation Memory of Object Locations * **Study:** `ds005522` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial` * **Canonical:** — Also importable as: `DS005522`, `Herrema2024_Spatial`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 55; recordings: 176; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005522](https://openneuro.org/datasets/ds005522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005522](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) DOI: [https://doi.org/10.18112/openneuro.ds005522.v1.0.0](https://doi.org/10.18112/openneuro.ds005522.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005522 >>> dataset = DS005522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding * **Study:** `ds005523` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial_Memory` * **Canonical:** — Also importable as: `DS005523`, `Herrema2024_Spatial_Memory`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 21; recordings: 102; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005523](https://openneuro.org/datasets/ds005523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005523](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) DOI: [https://doi.org/10.18112/openneuro.ds005523.v1.0.1](https://doi.org/10.18112/openneuro.ds005523.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005523 >>> dataset = DS005523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005530(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Depotentiation of emotional reactivity using TMR during REM sleep * **Study:** `ds005530` (OpenNeuro) * **Author (year):** `Greco2024` * **Canonical:** — Also importable as: `DS005530`, `Greco2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 17; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005530](https://openneuro.org/datasets/ds005530) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005530](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) DOI: [https://doi.org/10.18112/openneuro.ds005530.v1.0.9](https://doi.org/10.18112/openneuro.ds005530.v1.0.9) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005530 >>> dataset = DS005530(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005540(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding * **Study:** `ds005540` (OpenNeuro) * **Author (year):** `Xin2024` * **Canonical:** — Also importable as: `DS005540`, `Xin2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 59; recordings: 103; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005540](https://openneuro.org/datasets/ds005540) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005540](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) DOI: [https://doi.org/10.18112/openneuro.ds005540.v1.0.7](https://doi.org/10.18112/openneuro.ds005540.v1.0.7) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005540 >>> dataset = DS005540(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds005545` (OpenNeuro) * **Author (year):** `Kanno2024` * **Canonical:** `Kanno2025` Also importable as: `DS005545`, `Kanno2024`, `Kanno2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 106; recordings: 336; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005545](https://openneuro.org/datasets/ds005545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005545](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) DOI: [https://doi.org/10.18112/openneuro.ds005545.v1.0.3](https://doi.org/10.18112/openneuro.ds005545.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005545 >>> dataset = DS005545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kanno2025']* ### *class* eegdash.dataset.dataset.DS005555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Bitbrain Open Access Sleep (BOAS) dataset * **Study:** `ds005555` (OpenNeuro) * **Author (year):** `LopezLarraz2024` * **Canonical:** `BOAS` Also importable as: `DS005555`, `LopezLarraz2024`, `BOAS`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. 
Subjects: 128; recordings: 256; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005555](https://openneuro.org/datasets/ds005555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005555](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) DOI: [https://doi.org/10.18112/openneuro.ds005555.v1.1.1](https://doi.org/10.18112/openneuro.ds005555.v1.1.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005555 >>> dataset = DS005555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BOAS']* ### *class* eegdash.dataset.dataset.DS005557(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005557` (OpenNeuro) * **Author (year):** `Herrema2024_Classifier` * **Canonical:** — Also importable as: `DS005557`, `Herrema2024_Classifier`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Other`. Subjects: 16; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005557](https://openneuro.org/datasets/ds005557) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005557](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) DOI: [https://doi.org/10.18112/openneuro.ds005557.v1.0.0](https://doi.org/10.18112/openneuro.ds005557.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005557 >>> dataset = DS005557(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005558` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized_Free` * **Canonical:** `catFR_closed_loop` Also importable as: `DS005558`, `Herrema2024_Categorized_Free`, `catFR_closed_loop`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 7; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005558](https://openneuro.org/datasets/ds005558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005558](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) DOI: [https://doi.org/10.18112/openneuro.ds005558.v1.0.0](https://doi.org/10.18112/openneuro.ds005558.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005558 >>> dataset = DS005558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_closed_loop']* ### *class* eegdash.dataset.dataset.DS005565(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers * **Study:** `ds005565` (OpenNeuro) * **Author (year):** `Lee2024_StudyWITH` * **Canonical:** — Also importable as: `DS005565`, `Lee2024_StudyWITH`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005565](https://openneuro.org/datasets/ds005565) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005565](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) DOI: [https://doi.org/10.18112/openneuro.ds005565.v1.0.3](https://doi.org/10.18112/openneuro.ds005565.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005565 >>> dataset = DS005565(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005571(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation of Conflict Stimuli * **Study:** `ds005571` (OpenNeuro) * **Author (year):** `MartinezMolina2024` * **Canonical:** — Also importable as: `DS005571`, `MartinezMolina2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 45; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005571](https://openneuro.org/datasets/ds005571) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005571](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) DOI: [https://doi.org/10.18112/openneuro.ds005571.v1.0.1](https://doi.org/10.18112/openneuro.ds005571.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005571 >>> dataset = DS005571(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The “Podcast” ECoG dataset * **Study:** `ds005574` (OpenNeuro) * **Author (year):** `Zada2024` * **Canonical:** `Podcast` Also importable as: `DS005574`, `Zada2024`, `Podcast`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005574](https://openneuro.org/datasets/ds005574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005574](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) DOI: [https://doi.org/10.18112/openneuro.ds005574.v1.0.2](https://doi.org/10.18112/openneuro.ds005574.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005574 >>> dataset = DS005574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Podcast']* ### *class* eegdash.dataset.dataset.DS005586(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes * **Study:** `ds005586` (OpenNeuro) * **Author (year):** `Baykan2024` * **Canonical:** — Also importable as: `DS005586`, `Baykan2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005586](https://openneuro.org/datasets/ds005586) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005586](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) DOI: [https://doi.org/10.18112/openneuro.ds005586.v2.0.0](https://doi.org/10.18112/openneuro.ds005586.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005586 >>> dataset = DS005586(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005594(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphabetic Decision Task (Arial Light Font) * **Study:** `ds005594` (OpenNeuro) * **Author (year):** `Taylor2024` * **Canonical:** — Also importable as: `DS005594`, `Taylor2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005594](https://openneuro.org/datasets/ds005594) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005594](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) DOI: [https://doi.org/10.18112/openneuro.ds005594.v1.0.3](https://doi.org/10.18112/openneuro.ds005594.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005594 >>> dataset = DS005594(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation * **Study:** `ds005620` (OpenNeuro) * **Author (year):** `Bajwa2024` * **Canonical:** — Also importable as: `DS005620`, `Bajwa2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 21; recordings: 202; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005620](https://openneuro.org/datasets/ds005620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005620](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) DOI: [https://doi.org/10.18112/openneuro.ds005620.v1.0.0](https://doi.org/10.18112/openneuro.ds005620.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005620 >>> dataset = DS005620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Color Change Detection Task * **Study:** `ds005624` (OpenNeuro) * **Author (year):** `DS5624_ColorChangeDetection` * **Canonical:** — Also importable as: `DS005624`, `DS5624_ColorChangeDetection`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 24; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005624](https://openneuro.org/datasets/ds005624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005624](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) DOI: [https://doi.org/10.18112/openneuro.ds005624.v1.0.0](https://doi.org/10.18112/openneuro.ds005624.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005624 >>> dataset = DS005624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005628(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site * **Study:** `ds005628` (OpenNeuro) * **Author (year):** `RosadoAiza2024` * **Canonical:** — Also importable as: `DS005628`, `RosadoAiza2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 102; recordings: 306; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005628](https://openneuro.org/datasets/ds005628) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005628](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) DOI: [https://doi.org/10.18112/openneuro.ds005628.v1.0.0](https://doi.org/10.18112/openneuro.ds005628.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005628 >>> dataset = DS005628(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) illusory-face-eeg * **Study:** `ds005642` (OpenNeuro) * **Author (year):** `Robinson2024_illusory` * **Canonical:** — Also importable as: `DS005642`, `Robinson2024_illusory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005642](https://openneuro.org/datasets/ds005642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005642](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) DOI: [https://doi.org/10.18112/openneuro.ds005642.v1.0.1](https://doi.org/10.18112/openneuro.ds005642.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005642 >>> dataset = DS005642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mapping object space dimensions: new insights from temporal dynamics * **Study:** `ds005648` (OpenNeuro) * **Author (year):** `Kidder2024` * **Canonical:** — Also importable as: `DS005648`, `Kidder2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005648](https://openneuro.org/datasets/ds005648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005648](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) DOI: [https://doi.org/10.18112/openneuro.ds005648.v1.0.3](https://doi.org/10.18112/openneuro.ds005648.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS005648 >>> dataset = DS005648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005662(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A comprehensive EEG dataset for investigating visual touch perception * **Study:** `ds005662` (OpenNeuro) * **Author (year):** `Smit2024` * **Canonical:** — Also importable as: `DS005662`, `Smit2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 80; recordings: 80; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005662](https://openneuro.org/datasets/ds005662) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005662](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) DOI: [https://doi.org/10.18112/openneuro.ds005662.v2.0.1](https://doi.org/10.18112/openneuro.ds005662.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005662 >>> dataset = DS005662(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SEEG Resting State Recording * **Study:** `ds005670` (OpenNeuro) * **Author (year):** `Xu2024_SEEG_Resting_State` * **Canonical:** — Also importable as: `DS005670`, `Xu2024_SEEG_Resting_State`. 
Modality: `ieeg`; Experiment type: `Resting-state`; Subject type: `Epilepsy`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005670](https://openneuro.org/datasets/ds005670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005670](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) DOI: [https://doi.org/10.18112/openneuro.ds005670.v1.0.0](https://doi.org/10.18112/openneuro.ds005670.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005670 >>> dataset = DS005670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005672(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005672` (OpenNeuro) * **Author (year):** `Zhiyuan2024` * **Canonical:** — Also importable as: `DS005672`, `Zhiyuan2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005672](https://openneuro.org/datasets/ds005672) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005672](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) DOI: [https://doi.org/10.18112/openneuro.ds005672.v1.0.0](https://doi.org/10.18112/openneuro.ds005672.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005672 >>> dataset = DS005672(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) visStim * **Study:** `ds005688` (OpenNeuro) * **Author (year):** `Tan2024` * **Canonical:** — Also importable as: `DS005688`, `Tan2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 20; recordings: 89; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005688](https://openneuro.org/datasets/ds005688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005688](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) DOI: [https://doi.org/10.18112/openneuro.ds005688.v1.0.1](https://doi.org/10.18112/openneuro.ds005688.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005688 >>> dataset = DS005688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_Invasive * **Study:** `ds005691` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect` * **Canonical:** — Also importable as: `DS005691`, `Stenner2024_SpinalExpect`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005691](https://openneuro.org/datasets/ds005691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005691](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) DOI: [https://doi.org/10.18112/openneuro.ds005691.v1.0.0](https://doi.org/10.18112/openneuro.ds005691.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005691 >>> dataset = DS005691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005692(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_NonInvasive * **Study:** `ds005692` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect_NonInvasive` * **Canonical:** — Also importable as: `DS005692`, `Stenner2024_SpinalExpect_NonInvasive`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005692](https://openneuro.org/datasets/ds005692) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005692](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) DOI: [https://doi.org/10.18112/openneuro.ds005692.v1.0.0](https://doi.org/10.18112/openneuro.ds005692.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005692 >>> dataset = DS005692(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005697(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005697` (OpenNeuro) * **Author (year):** `Li2024_PerceiveImagine` * **Canonical:** `PerceiveImagine` Also importable as: `DS005697`, `Li2024_PerceiveImagine`, `PerceiveImagine`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005697](https://openneuro.org/datasets/ds005697) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005697](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) DOI: [https://doi.org/10.18112/openneuro.ds005697.v1.0.2](https://doi.org/10.18112/openneuro.ds005697.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS005697 >>> dataset = DS005697(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PerceiveImagine']* ### *class* eegdash.dataset.dataset.DS005752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The NIMH Healthy Research Volunteer Dataset * **Study:** `ds005752` (OpenNeuro) * **Author (year):** `Nugent2024` * **Canonical:** — Also importable as: `DS005752`, `Nugent2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 123; recordings: 1055; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005752](https://openneuro.org/datasets/ds005752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005752](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) DOI: [https://doi.org/10.18112/openneuro.ds005752.v2.1.0](https://doi.org/10.18112/openneuro.ds005752.v2.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005752 >>> dataset = DS005752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005776(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Thermal_FingerTapping_2015 * **Study:** `ds005776` (OpenNeuro) * **Author (year):** `Yucel2025_Electrical` * **Canonical:** `Yucel2015` Also importable as: `DS005776`, `Yucel2025_Electrical`, `Yucel2015`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 11; recordings: 46; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005776](https://openneuro.org/datasets/ds005776) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005776](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) DOI: [https://doi.org/10.18112/openneuro.ds005776.v1.0.1](https://doi.org/10.18112/openneuro.ds005776.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005776 >>> dataset = DS005776(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yucel2015']* ### *class* eegdash.dataset.dataset.DS005777(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Morphine_Placebo_2018 * **Study:** `ds005777` (OpenNeuro) * **Author (year):** `Peng2025` * **Canonical:** `Peng2018` Also importable as: `DS005777`, `Peng2025`, `Peng2018`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 14; recordings: 113; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005777](https://openneuro.org/datasets/ds005777) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005777](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) DOI: [https://doi.org/10.18112/openneuro.ds005777.v1.0.1](https://doi.org/10.18112/openneuro.ds005777.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005777 >>> dataset = DS005777(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Peng2018']* ### *class* eegdash.dataset.dataset.DS005779(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time personalized brain state-dependent TMS in healthy adults * **Study:** `ds005779` (OpenNeuro) * **Author (year):** `Khatri2025` * **Canonical:** — Also importable as: `DS005779`, `Khatri2025`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 19; recordings: 250; tasks: 16. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005779](https://openneuro.org/datasets/ds005779) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005779](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) DOI: [https://doi.org/10.18112/openneuro.ds005779.v1.0.1](https://doi.org/10.18112/openneuro.ds005779.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005779 >>> dataset = DS005779(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005795(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data) * **Study:** `ds005795` (OpenNeuro) * **Author (year):** `Stadler2025` * **Canonical:** — Also importable as: `DS005795`, `Stadler2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 34; recordings: 39; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005795](https://openneuro.org/datasets/ds005795) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005795](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) DOI: [https://doi.org/10.18112/openneuro.ds005795.v1.0.0](https://doi.org/10.18112/openneuro.ds005795.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005795 >>> dataset = DS005795(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-MEG * **Study:** `ds005810` (OpenNeuro) * **Author (year):** `Zhang2025_MEG` * **Canonical:** `NOD_MEG` Also importable as: `DS005810`, `Zhang2025_MEG`, `NOD_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 305; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005810](https://openneuro.org/datasets/ds005810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005810](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) DOI: [https://doi.org/10.18112/openneuro.ds005810.v2.0.0](https://doi.org/10.18112/openneuro.ds005810.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005810 >>> dataset = DS005810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NOD_MEG']* ### *class* eegdash.dataset.dataset.DS005811(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-EEG * **Study:** `ds005811` (OpenNeuro) * **Author (year):** `Zhang2025_EEG` * **Canonical:** `NOD_EEG` Also importable as: `DS005811`, `Zhang2025_EEG`, `NOD_EEG`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 448; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005811](https://openneuro.org/datasets/ds005811) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005811](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) DOI: [https://doi.org/10.18112/openneuro.ds005811.v1.0.9](https://doi.org/10.18112/openneuro.ds005811.v1.0.9) ### Examples ```pycon >>> from eegdash.dataset import DS005811 >>> dataset = DS005811(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NOD_EEG']* ### *class* eegdash.dataset.dataset.DS005815(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Human EEG Dataset for Multisensory Perception and Mental Imagery * **Study:** `ds005815` (OpenNeuro) * **Author (year):** `Chang2025` * **Canonical:** — Also importable as: `DS005815`, `Chang2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 103; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005815](https://openneuro.org/datasets/ds005815) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005815](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) DOI: [https://doi.org/10.18112/openneuro.ds005815.v2.0.1](https://doi.org/10.18112/openneuro.ds005815.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005815 >>> dataset = DS005815(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Experiment measuring ERPs in VR * **Study:** `ds005841` (OpenNeuro) * **Author (year):** `Karakashevska2025` * **Canonical:** — Also importable as: `DS005841`, `Karakashevska2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 48; recordings: 288; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005841](https://openneuro.org/datasets/ds005841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005841](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) DOI: [https://doi.org/10.18112/openneuro.ds005841.v1.0.0](https://doi.org/10.18112/openneuro.ds005841.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005841 >>> dataset = DS005841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005857(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ltpDelayRepFRReadOnly * **Study:** `ds005857` (OpenNeuro) * **Author (year):** `Broitman2025` * **Canonical:** `Broitman2019` Also importable as: `DS005857`, `Broitman2025`, `Broitman2019`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 29; recordings: 110; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005857](https://openneuro.org/datasets/ds005857) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005857](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) DOI: [https://doi.org/10.18112/openneuro.ds005857.v1.0.0](https://doi.org/10.18112/openneuro.ds005857.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005857 >>> dataset = DS005857(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Broitman2019']* ### *class* eegdash.dataset.dataset.DS005863(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood * **Study:** `ds005863` (OpenNeuro) * **Author (year):** `Isbell2025_Cognitive` * **Canonical:** — Also importable as: `DS005863`, `Isbell2025_Cognitive`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005863](https://openneuro.org/datasets/ds005863) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005863](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) DOI: [https://doi.org/10.18112/openneuro.ds005863.v2.0.0](https://doi.org/10.18112/openneuro.ds005863.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005863 >>> dataset = DS005863(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-NEAR * **Study:** `ds005866` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_NEAR` * **Canonical:** `Flankers_NEAR` Also importable as: `DS005866`, `TerhuneCotter2025_NEAR`, `Flankers_NEAR`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 60; recordings: 60; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005866](https://openneuro.org/datasets/ds005866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005866](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) DOI: [https://doi.org/10.18112/openneuro.ds005866.v1.0.1](https://doi.org/10.18112/openneuro.ds005866.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005866 >>> dataset = DS005866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Flankers_NEAR']* ### *class* eegdash.dataset.dataset.DS005868(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-FAR * **Study:** `ds005868` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_FAR` * **Canonical:** `Flankers_FAR` Also importable as: `DS005868`, `TerhuneCotter2025_FAR`, `Flankers_FAR`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
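The Notes above say `query` accepts extra MongoDB-style filters that are ANDed with the dataset selection and must not contain the key `dataset`. A minimal sketch of building such a filter, assuming `subject` is among the fields listed in `ALLOWED_QUERY_FIELDS` (check that constant for the fields your EEGDash version accepts):

```python
# Extra filter to narrow the dataset to a single subject.
# The field name "subject" is an assumption -- consult ALLOWED_QUERY_FIELDS.
extra_query = {"subject": "01"}

# The constructor rejects queries that try to override the dataset filter,
# so the extra query must never contain the "dataset" key:
assert "dataset" not in extra_query

# Constructing the dataset hits the metadata service, so it is left as a
# commented usage outline:
# from eegdash.dataset import DS005868
# dataset = DS005868(cache_dir="./data", query=extra_query)
```

The merged filter is then available on the instance as `dataset.query`, per the attribute documentation above.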
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005868](https://openneuro.org/datasets/ds005868) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005868](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) DOI: [https://doi.org/10.18112/openneuro.ds005868.v1.0.1](https://doi.org/10.18112/openneuro.ds005868.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005868 >>> dataset = DS005868(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Flankers_FAR']* ### *class* eegdash.dataset.dataset.DS005872(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds005872` (OpenNeuro) * **Author (year):** `Plomecka2025` * **Canonical:** `EEGEyeNet` Also importable as: `DS005872`, `Plomecka2025`, `EEGEyeNet`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005872](https://openneuro.org/datasets/ds005872) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005872](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) DOI: [https://doi.org/10.18112/openneuro.ds005872.v1.0.0](https://doi.org/10.18112/openneuro.ds005872.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005872 >>> dataset = DS005872(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGEyeNet']* ### *class* eegdash.dataset.dataset.DS005873(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SeizeIT2 * **Study:** `ds005873` (OpenNeuro) * **Author (year):** `Bhagubai2025` * **Canonical:** `SeizeIT2` Also importable as: `DS005873`, `Bhagubai2025`, `SeizeIT2`. Modality: `eeg, emg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 125; recordings: 5654; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005873](https://openneuro.org/datasets/ds005873) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005873](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) DOI: [https://doi.org/10.18112/openneuro.ds005873.v1.1.0](https://doi.org/10.18112/openneuro.ds005873.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005873 >>> dataset = DS005873(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SeizeIT2']* ### *class* eegdash.dataset.dataset.DS005876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Song Familiarity * **Study:** `ds005876` (OpenNeuro) * **Author (year):** `Girard2025` * **Canonical:** — Also importable as: `DS005876`, `Girard2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005876](https://openneuro.org/datasets/ds005876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005876](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) DOI: [https://doi.org/10.18112/openneuro.ds005876.v1.0.1](https://doi.org/10.18112/openneuro.ds005876.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005876 >>> dataset = DS005876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005907(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds005907` (OpenNeuro) * **Author (year):** `Campbell2025` * **Canonical:** — Also importable as: `DS005907`, `Campbell2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Alcohol`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005907](https://openneuro.org/datasets/ds005907) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005907](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) DOI: [https://doi.org/10.18112/openneuro.ds005907.v1.0.0](https://doi.org/10.18112/openneuro.ds005907.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005907 >>> dataset = DS005907(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motion-Yucel2014 * **Study:** `ds005929` (OpenNeuro) * **Author (year):** `MotionYucel2014` * **Canonical:** `Yucel2014`, `Motion_Yucel2014` Also importable as: `DS005929`, `MotionYucel2014`, `Yucel2014`, `Motion_Yucel2014`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005929](https://openneuro.org/datasets/ds005929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005929](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) DOI: [https://doi.org/10.18112/openneuro.ds005929.v1.0.1](https://doi.org/10.18112/openneuro.ds005929.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005929 >>> dataset = DS005929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yucel2014', 'Motion_Yucel2014']* ### *class* eegdash.dataset.dataset.DS005930(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD_Gao2023 * **Study:** `ds005930` (OpenNeuro) * **Author (year):** `Gao2023` * **Canonical:** — Also importable as: `DS005930`, `Gao2023`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005930](https://openneuro.org/datasets/ds005930) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005930](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) DOI: [https://doi.org/10.18112/openneuro.ds005930.v1.0.1](https://doi.org/10.18112/openneuro.ds005930.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005930 >>> dataset = DS005930(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005931(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visuomotor_task * **Study:** `ds005931` (OpenNeuro) * **Author (year):** `Ueda2025` * **Canonical:** — Also importable as: `DS005931`, `Ueda2025`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. Subjects: 8; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005931](https://openneuro.org/datasets/ds005931) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005931](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) DOI: [https://doi.org/10.18112/openneuro.ds005931.v1.0.0](https://doi.org/10.18112/openneuro.ds005931.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005931 >>> dataset = DS005931(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005932(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PWIe * **Study:** `ds005932` (OpenNeuro) * **Author (year):** `Holcomb2025` * **Canonical:** `PWIe` Also importable as: `DS005932`, `Holcomb2025`, `PWIe`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005932](https://openneuro.org/datasets/ds005932) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005932](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) DOI: [https://doi.org/10.18112/openneuro.ds005932.v1.0.0](https://doi.org/10.18112/openneuro.ds005932.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005932 >>> dataset = DS005932(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PWIe']* ### *class* eegdash.dataset.dataset.DS005935(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mirror Neuron Study * **Study:** `ds005935` (OpenNeuro) * **Author (year):** `Li2025` * **Canonical:** — Also importable as: `DS005935`, `Li2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 21; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005935](https://openneuro.org/datasets/ds005935) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005935](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) DOI: [https://doi.org/10.18112/openneuro.ds005935.v1.0.0](https://doi.org/10.18112/openneuro.ds005935.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005935 >>> dataset = DS005935(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005946(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery) * **Study:** `ds005946` (OpenNeuro) * **Author (year):** `Frau2025` * **Canonical:** `PROMENADE` Also importable as: `DS005946`, `Frau2025`, `PROMENADE`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 39; recordings: 39; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005946](https://openneuro.org/datasets/ds005946) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005946](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) DOI: [https://doi.org/10.18112/openneuro.ds005946.v1.0.1](https://doi.org/10.18112/openneuro.ds005946.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005946 >>> dataset = DS005946(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PROMENADE']* ### *class* eegdash.dataset.dataset.DS005953(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_visual * **Study:** `ds005953` (OpenNeuro) * **Author (year):** `Winawer2025` * **Canonical:** — Also importable as: `DS005953`, `Winawer2025`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Surgery`. 
Subjects: 2; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005953](https://openneuro.org/datasets/ds005953) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005953](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) DOI: [https://doi.org/10.18112/openneuro.ds005953.v1.0.0](https://doi.org/10.18112/openneuro.ds005953.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005953 >>> dataset = DS005953(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005960(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) General Info: inst-comp-eeg * **Study:** `ds005960` (OpenNeuro) * **Author (year):** `Pena2025` * **Canonical:** — Also importable as: `DS005960`, `Pena2025`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
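Because the extra `query` is MongoDB-style, operator documents such as `$in` can be used alongside plain equality filters. A sketch selecting several subjects at once, again assuming `subject` is an allowed field in `ALLOWED_QUERY_FIELDS`:

```python
# MongoDB-style "$in" operator: match any of several subject labels.
# The field name "subject" is an assumption -- consult ALLOWED_QUERY_FIELDS.
multi_subject_query = {"subject": {"$in": ["01", "02", "03"]}}

# Network-dependent construction left as a commented usage outline:
# from eegdash.dataset import DS005960
# dataset = DS005960(cache_dir="./data", query=multi_subject_query)
```

As with a plain filter, this is combined with the class's own dataset filter, so only recordings from `ds005960` matching the `$in` clause are returned.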
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005960](https://openneuro.org/datasets/ds005960) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005960](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) DOI: [https://doi.org/10.18112/openneuro.ds005960.v1.0.0](https://doi.org/10.18112/openneuro.ds005960.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005960 >>> dataset = DS005960(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS005963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Motor Dataset * **Study:** `ds005963` (OpenNeuro) * **Author (year):** `Mesquita2025` * **Canonical:** `Mesquita2019` Also importable as: `DS005963`, `Mesquita2025`, `Mesquita2019`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 10; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005963](https://openneuro.org/datasets/ds005963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005963](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) DOI: [https://doi.org/10.18112/openneuro.ds005963.v1.0.0](https://doi.org/10.18112/openneuro.ds005963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005963 >>> dataset = DS005963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mesquita2019']* ### *class* eegdash.dataset.dataset.DS005964(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Audio Dataset * **Study:** `ds005964` (OpenNeuro) * **Author (year):** `Luke2025` * **Canonical:** `Luke2019` Also importable as: `DS005964`, `Luke2025`, `Luke2019`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005964](https://openneuro.org/datasets/ds005964) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005964](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) DOI: [https://doi.org/10.18112/openneuro.ds005964.v1.0.0](https://doi.org/10.18112/openneuro.ds005964.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005964 >>> dataset = DS005964(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Luke2019']* ### *class* eegdash.dataset.dataset.DS006012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A geometric shape regularity effect in the human brain: MEG dataset * **Study:** `ds006012` (OpenNeuro) * **Author (year):** `SableMeyer2025` * **Canonical:** — Also importable as: `DS006012`, `SableMeyer2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 193; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006012](https://openneuro.org/datasets/ds006012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006012](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) DOI: [https://doi.org/10.18112/openneuro.ds006012.v1.0.1](https://doi.org/10.18112/openneuro.ds006012.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006012 >>> dataset = DS006012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset * **Study:** `ds006018` (OpenNeuro) * **Author (year):** `Isbell2025_Adulthood` * **Canonical:** — Also importable as: `DS006018`, `Isbell2025_Adulthood`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006018](https://openneuro.org/datasets/ds006018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006018](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) DOI: [https://doi.org/10.18112/openneuro.ds006018.v1.2.2](https://doi.org/10.18112/openneuro.ds006018.v1.2.2) ### Examples ```pycon >>> from eegdash.dataset import DS006018 >>> dataset = DS006018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Synchronous EEG and fMRI dataset on inner speech * **Study:** `ds006033` (OpenNeuro) * **Author (year):** `Liwicki2025` * **Canonical:** — Also importable as: `DS006033`, `Liwicki2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 3; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006033](https://openneuro.org/datasets/ds006033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006033](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) DOI: [https://doi.org/10.18112/openneuro.ds006033.v1.0.1](https://doi.org/10.18112/openneuro.ds006033.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006033 >>> dataset = DS006033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006035(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) somatomotor * **Study:** `ds006035` (OpenNeuro) * **Author (year):** `Lin2025` * **Canonical:** `Lin2019` Also importable as: `DS006035`, `Lin2025`, `Lin2019`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006035](https://openneuro.org/datasets/ds006035) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006035](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) DOI: [https://doi.org/10.18112/openneuro.ds006035.v1.0.0](https://doi.org/10.18112/openneuro.ds006035.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006035 >>> dataset = DS006035(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Lin2019']* ### *class* eegdash.dataset.dataset.DS006036(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects * **Study:** `ds006036` (OpenNeuro) * **Author (year):** `Ntetska2025` * **Canonical:** — Also importable as: `DS006036`, `Ntetska2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 88; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006036](https://openneuro.org/datasets/ds006036) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006036](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) DOI: [https://doi.org/10.18112/openneuro.ds006036.v1.0.6](https://doi.org/10.18112/openneuro.ds006036.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS006036 >>> dataset = DS006036(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI * **Study:** `ds006040` (OpenNeuro) * **Author (year):** `Cha2025` * **Canonical:** — Also importable as: `DS006040`, `Cha2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 28; recordings: 392; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006040](https://openneuro.org/datasets/ds006040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006040](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) DOI: [https://doi.org/10.18112/openneuro.ds006040.v1.0.2](https://doi.org/10.18112/openneuro.ds006040.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006040 >>> dataset = DS006040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TSS_iEEG * **Study:** `ds006065` (OpenNeuro) * **Author (year):** `Kragel2025` * **Canonical:** — Also importable as: `DS006065`, `Kragel2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. 
Subjects: 7; recordings: 45; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006065](https://openneuro.org/datasets/ds006065) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006065](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) DOI: [https://doi.org/10.18112/openneuro.ds006065.v1.0.0](https://doi.org/10.18112/openneuro.ds006065.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006065 >>> dataset = DS006065(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Older Adults Walking Over Uneven Terrain * **Study:** `ds006095` (OpenNeuro) * **Author (year):** `Liu2025_Mind_Motion_Older` * **Canonical:** — Also importable as: `DS006095`, `Liu2025_Mind_Motion_Older`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 71; recordings: 1182; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006095](https://openneuro.org/datasets/ds006095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006095](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) DOI: [https://doi.org/10.18112/openneuro.ds006095.v1.0.0](https://doi.org/10.18112/openneuro.ds006095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006095 >>> dataset = DS006095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset for speech decoding * **Study:** `ds006104` (OpenNeuro) * **Author (year):** `Moreira2025` * **Canonical:** — Also importable as: `DS006104`, `Moreira2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006104](https://openneuro.org/datasets/ds006104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006104](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) DOI: [https://doi.org/10.18112/openneuro.ds006104.v1.0.1](https://doi.org/10.18112/openneuro.ds006104.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006104 >>> dataset = DS006104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_Neural_spatial_volatility * **Study:** `ds006107` (OpenNeuro) * **Author (year):** `Kuroda2025` * **Canonical:** `Kuroda2024` Also importable as: `DS006107`, `Kuroda2025`, `Kuroda2024`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 166; recordings: 167; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006107](https://openneuro.org/datasets/ds006107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006107](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) DOI: [https://doi.org/10.18112/openneuro.ds006107.v1.0.0](https://doi.org/10.18112/openneuro.ds006107.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006107 >>> dataset = DS006107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kuroda2024']* ### *class* eegdash.dataset.dataset.DS006126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TDCS Modulation of Visual Cortex in Motor Imagery * **Study:** `ds006126` (OpenNeuro) * **Author (year):** `Mensah2025` * **Canonical:** — Also importable as: `DS006126`, `Mensah2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 90; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006126](https://openneuro.org/datasets/ds006126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006126](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) DOI: [https://doi.org/10.18112/openneuro.ds006126.v1.0.0](https://doi.org/10.18112/openneuro.ds006126.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006126 >>> dataset = DS006126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OWM-Dataset * **Study:** `ds006136` (OpenNeuro) * **Author (year):** `Omelyusik2025` * **Canonical:** `Omelyusik2026` Also importable as: `DS006136`, `Omelyusik2025`, `Omelyusik2026`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 13; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006136](https://openneuro.org/datasets/ds006136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006136](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) DOI: [https://doi.org/10.18112/openneuro.ds006136.v1.0.1](https://doi.org/10.18112/openneuro.ds006136.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006136 >>> dataset = DS006136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Omelyusik2026']* ### *class* eegdash.dataset.dataset.DS006142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Essex EEG Movie Memory dataset * **Study:** `ds006142` (OpenNeuro) * **Author (year):** `MatranFernandez2025` * **Canonical:** — Also importable as: `DS006142`, `MatranFernandez2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006142](https://openneuro.org/datasets/ds006142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006142](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) DOI: [https://doi.org/10.18112/openneuro.ds006142.v1.0.2](https://doi.org/10.18112/openneuro.ds006142.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006142 >>> dataset = DS006142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Implicit Learning EEG (BioSemi) * **Study:** `ds006159` (OpenNeuro) * **Author (year):** `LeganesFonteneau2025` * **Canonical:** `LeganesFonteneau2024` Also importable as: `DS006159`, `LeganesFonteneau2025`, `LeganesFonteneau2024`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006159](https://openneuro.org/datasets/ds006159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006159](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) DOI: [https://doi.org/10.18112/openneuro.ds006159.v1.0.0](https://doi.org/10.18112/openneuro.ds006159.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006159 >>> dataset = DS006159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LeganesFonteneau2024']* ### *class* eegdash.dataset.dataset.DS006171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity) * **Study:** `ds006171` (OpenNeuro) * **Author (year):** `Melcon2025` * **Canonical:** `Melcon2024` Also importable as: `DS006171`, `Melcon2025`, `Melcon2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 36; recordings: 104; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
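The `data_dir` attribute is documented as `cache_dir / dataset_id`, so each dataset caches into its own subdirectory. A minimal `pathlib` sketch of that layout (the helper name is hypothetical):

```python
from pathlib import Path


def local_data_dir(cache_dir: str, dataset_id: str) -> Path:
    # data_dir is documented as cache_dir / dataset_id
    return Path(cache_dir) / dataset_id


print(local_data_dir("./data", "ds006171"))
```

Using one shared `cache_dir` for several dataset classes therefore keeps their downloads separated without any extra configuration.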
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006171](https://openneuro.org/datasets/ds006171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006171](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) DOI: [https://doi.org/10.18112/openneuro.ds006171.v1.0.0](https://doi.org/10.18112/openneuro.ds006171.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006171 >>> dataset = DS006171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Melcon2024']* ### *class* eegdash.dataset.dataset.DS006222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData * **Study:** `ds006222` (OpenNeuro) * **Author (year):** `Attokaren2025` * **Canonical:** — Also importable as: `DS006222`, `Attokaren2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 69; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006222](https://openneuro.org/datasets/ds006222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006222](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) DOI: [https://doi.org/10.18112/openneuro.ds006222.v1.0.1](https://doi.org/10.18112/openneuro.ds006222.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006222 >>> dataset = DS006222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006233(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture naming * **Study:** `ds006233` (OpenNeuro) * **Author (year):** `Kochi2025_Picture_naming` * **Canonical:** — Also importable as: `DS006233`, `Kochi2025_Picture_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 108; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006233](https://openneuro.org/datasets/ds006233) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006233](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) DOI: [https://doi.org/10.18112/openneuro.ds006233.v1.0.0](https://doi.org/10.18112/openneuro.ds006233.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006233 >>> dataset = DS006233(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds006234` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_naming` * **Canonical:** — Also importable as: `DS006234`, `Kochi2025_Auditory_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 119; recordings: 378; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006234](https://openneuro.org/datasets/ds006234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006234](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) DOI: [https://doi.org/10.18112/openneuro.ds006234.v1.0.0](https://doi.org/10.18112/openneuro.ds006234.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006234 >>> dataset = DS006234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MetaRDK * **Study:** `ds006253` (OpenNeuro) * **Author (year):** `Goueytes2024` * **Canonical:** `MetaRDK` Also importable as: `DS006253`, `Goueytes2024`, `MetaRDK`. Modality: `ieeg`; Experiment type: `Decision-making`; Subject type: `Epilepsy`. Subjects: 23; recordings: 201; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006253](https://openneuro.org/datasets/ds006253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006253](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) DOI: [https://doi.org/10.18112/openneuro.ds006253.v1.0.3](https://doi.org/10.18112/openneuro.ds006253.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006253 >>> dataset = DS006253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MetaRDK']* ### *class* eegdash.dataset.dataset.DS006260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology * **Study:** `ds006260` (OpenNeuro) * **Author (year):** `CoronaGonzalez2025` * **Canonical:** — Also importable as: `DS006260`, `CoronaGonzalez2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 76; recordings: 366; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006260](https://openneuro.org/datasets/ds006260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006260](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) DOI: [https://doi.org/10.18112/openneuro.ds006260.v1.0.1](https://doi.org/10.18112/openneuro.ds006260.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006260 >>> dataset = DS006260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006269(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tethered EEG Recordings in Syngap1 rats * **Study:** `ds006269` (OpenNeuro) * **Author (year):** `Pritchard2025` * **Canonical:** — Also importable as: `DS006269`, `Pritchard2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Other`. Subjects: 24; recordings: 40; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006269](https://openneuro.org/datasets/ds006269) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006269](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) DOI: [https://doi.org/10.18112/openneuro.ds006269.v1.0.0](https://doi.org/10.18112/openneuro.ds006269.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006269 >>> dataset = DS006269(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco-2.0 * **Study:** `ds006317` (OpenNeuro) * **Author (year):** `Zhang2025_Chisco_2_0` * **Canonical:** `Chisco2_0`, `Chisco20`, `CHISCO20` Also importable as: `DS006317`, `Zhang2025_Chisco_2_0`, `Chisco2_0`, `Chisco20`, `CHISCO20`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 64; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006317](https://openneuro.org/datasets/ds006317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006317](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) DOI: [https://doi.org/10.18112/openneuro.ds006317.v1.1.0](https://doi.org/10.18112/openneuro.ds006317.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006317 >>> dataset = DS006317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chisco2_0', 'Chisco20', 'CHISCO20']* ### *class* eegdash.dataset.dataset.DS006334(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories * **Study:** `ds006334` (OpenNeuro) * **Author (year):** `Biau2025` * **Canonical:** — Also importable as: `DS006334`, `Biau2025`. 
Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006334](https://openneuro.org/datasets/ds006334) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006334](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) DOI: [https://doi.org/10.18112/openneuro.ds006334.v1.0.0](https://doi.org/10.18112/openneuro.ds006334.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006334 >>> dataset = DS006334(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006366(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mouse Sleep Staging Validation dataset (MSSV) * **Study:** `ds006366` (OpenNeuro) * **Author (year):** `Rose2025` * **Canonical:** `MSSV` Also importable as: `DS006366`, `Rose2025`, `MSSV`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 92; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006366](https://openneuro.org/datasets/ds006366) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006366](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) DOI: [https://doi.org/10.18112/openneuro.ds006366.v1.0.1](https://doi.org/10.18112/openneuro.ds006366.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006366 >>> dataset = DS006366(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MSSV']* ### *class* eegdash.dataset.dataset.DS006367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference * **Study:** `ds006367` (OpenNeuro) * **Author (year):** `DS6367_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006367`, `DS6367_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006367](https://openneuro.org/datasets/ds006367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006367](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) DOI: [https://doi.org/10.18112/openneuro.ds006367.v1.0.1](https://doi.org/10.18112/openneuro.ds006367.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006367 >>> dataset = DS006367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset * **Study:** `ds006370` (OpenNeuro) * **Author (year):** `DS6370_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006370`, `DS6370_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 56; recordings: 56; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006370](https://openneuro.org/datasets/ds006370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006370](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) DOI: [https://doi.org/10.18112/openneuro.ds006370.v1.0.1](https://doi.org/10.18112/openneuro.ds006370.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006370 >>> dataset = DS006370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation effects on repetition suppression in nociception * **Study:** `ds006374` (OpenNeuro) * **Author (year):** `Pohle2025` * **Canonical:** `Pohle2019` Also importable as: `DS006374`, `Pohle2025`, `Pohle2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 36; recordings: 358; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006374](https://openneuro.org/datasets/ds006374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006374](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) DOI: [https://doi.org/10.18112/openneuro.ds006374.v1.0.0](https://doi.org/10.18112/openneuro.ds006374.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006374 >>> dataset = DS006374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Pohle2019']* ### *class* eegdash.dataset.dataset.DS006377(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InclusionStudy * **Study:** `ds006377` (OpenNeuro) * **Author (year):** `Yucel2025_InclusionStudy` * **Canonical:** — Also importable as: `DS006377`, `Yucel2025_InclusionStudy`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 115; recordings: 690; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006377](https://openneuro.org/datasets/ds006377) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006377](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) DOI: [https://doi.org/10.18112/openneuro.ds006377.v1.0.2](https://doi.org/10.18112/openneuro.ds006377.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006377 >>> dataset = DS006377(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006386(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioMotion_Artifact * **Study:** `ds006386` (OpenNeuro) * **Author (year):** `Yu2025` * **Canonical:** `Yu2019` Also importable as: `DS006386`, `Yu2025`, `Yu2019`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006386](https://openneuro.org/datasets/ds006386) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006386](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) DOI: [https://doi.org/10.18112/openneuro.ds006386.v1.0.1](https://doi.org/10.18112/openneuro.ds006386.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006386 >>> dataset = DS006386(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yu2019']* ### *class* eegdash.dataset.dataset.DS006392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HED schema library for SCORE annotations example * **Study:** `ds006392` (OpenNeuro) * **Author (year):** `Attia2025` * **Canonical:** `Hermes2024` Also importable as: `DS006392`, `Attia2025`, `Hermes2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006392](https://openneuro.org/datasets/ds006392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006392](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) DOI: [https://doi.org/10.18112/openneuro.ds006392.v1.0.1](https://doi.org/10.18112/openneuro.ds006392.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006392 >>> dataset = DS006392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hermes2024']* ### *class* eegdash.dataset.dataset.DS006394(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrophysiological markers of surprise-induced failures of visual and auditory awareness * **Study:** `ds006394` (OpenNeuro) * **Author (year):** `Leong2025` * **Canonical:** — Also importable as: `DS006394`, `Leong2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 33; recordings: 60; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006394](https://openneuro.org/datasets/ds006394) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006394](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) DOI: [https://doi.org/10.18112/openneuro.ds006394.v1.0.3](https://doi.org/10.18112/openneuro.ds006394.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006394 >>> dataset = DS006394(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006434(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The auditory brainstem response to natural speech is not affected by selective attention * **Study:** `ds006434` (OpenNeuro) * **Author (year):** `Stoll2025` * **Canonical:** — Also importable as: `DS006434`, `Stoll2025`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 118; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006434](https://openneuro.org/datasets/ds006434) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006434](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) DOI: [https://doi.org/10.18112/openneuro.ds006434.v1.2.0](https://doi.org/10.18112/openneuro.ds006434.v1.2.0) ### Examples ```pycon >>> from eegdash.dataset import DS006434 >>> dataset = DS006434(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006437(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LIGHT Hypnotherapy * **Study:** `ds006437` (OpenNeuro) * **Author (year):** `DS6437_LIGHT_Hypnotherapy` * **Canonical:** — Also importable as: `DS006437`, `DS6437_LIGHT_Hypnotherapy`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 9; recordings: 63; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006437](https://openneuro.org/datasets/ds006437) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006437](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) DOI: [https://doi.org/10.18112/openneuro.ds006437.v1.1.0](https://doi.org/10.18112/openneuro.ds006437.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006437 >>> dataset = DS006437(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cueing the future to reduce temporal discounting * **Study:** `ds006446` (OpenNeuro) * **Author (year):** `Kinley2025` * **Canonical:** `Kinley2019` Also importable as: `DS006446`, `Kinley2025`, `Kinley2019`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006446](https://openneuro.org/datasets/ds006446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006446](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) DOI: [https://doi.org/10.18112/openneuro.ds006446.v1.0.0](https://doi.org/10.18112/openneuro.ds006446.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006446 >>> dataset = DS006446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kinley2019']* ### *class* eegdash.dataset.dataset.DS006459(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025 * **Study:** `ds006459` (OpenNeuro) * **Author (year):** `Anderson2025_Sparse` * **Canonical:** — Also importable as: `DS006459`, `Anderson2025_Sparse`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006459](https://openneuro.org/datasets/ds006459) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006459](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) DOI: [https://doi.org/10.18112/openneuro.ds006459.v1.0.0](https://doi.org/10.18112/openneuro.ds006459.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006459 >>> dataset = DS006459(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025 * **Study:** `ds006460` (OpenNeuro) * **Author (year):** `Anderson2025_HD` * **Canonical:** — Also importable as: `DS006460`, `Anderson2025_HD`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006460](https://openneuro.org/datasets/ds006460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006460](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) DOI: [https://doi.org/10.18112/openneuro.ds006460.v1.0.0](https://doi.org/10.18112/openneuro.ds006460.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006460 >>> dataset = DS006460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006465(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 3M-CPSEED: An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech * **Study:** `ds006465` (OpenNeuro) * **Author (year):** `Ma2025` * **Canonical:** `CPSEED_3M`, `CPSEED` Also importable as: `DS006465`, `Ma2025`, `CPSEED_3M`, `CPSEED`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006465](https://openneuro.org/datasets/ds006465) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006465](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) DOI: [https://doi.org/10.18112/openneuro.ds006465.v2.0.0](https://doi.org/10.18112/openneuro.ds006465.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006465 >>> dataset = DS006465(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CPSEED_3M', 'CPSEED']* ### *class* eegdash.dataset.dataset.DS006466(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006466` (OpenNeuro) * **Author (year):** `Kim2025_HeartBEAM_Older_Adult` * **Canonical:** `HeartBEAM` Also importable as: `DS006466`, `Kim2025_HeartBEAM_Older_Adult`, `HeartBEAM`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 1257; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006466](https://openneuro.org/datasets/ds006466) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006466](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) DOI: [https://doi.org/10.18112/openneuro.ds006466.v1.0.1](https://doi.org/10.18112/openneuro.ds006466.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006466 >>> dataset = DS006466(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HeartBEAM']* ### *class* eegdash.dataset.dataset.DS006468(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences. * **Study:** `ds006468` (OpenNeuro) * **Author (year):** `Habersetzer2025` * **Canonical:** `MEG_SCANS` Also importable as: `DS006468`, `Habersetzer2025`, `MEG_SCANS`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 189; tasks: 4. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
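The `data_dir` attribute documented above is simply the cache directory joined with the dataset id. In plain `pathlib` terms (a sketch of the documented layout, not the library's own code):

```python
from pathlib import Path

cache_dir = Path("./data")
dataset_id = "ds006468"

# Matches the documented `cache_dir / dataset_id` layout.
data_dir = cache_dir / dataset_id

print(data_dir.as_posix())
# data/ds006468
```

Passing the same `cache_dir` to several dataset classes therefore keeps each dataset in its own subdirectory without collisions.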
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006468](https://openneuro.org/datasets/ds006468) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006468](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) DOI: [https://doi.org/10.18112/openneuro.ds006468.v1.1.2](https://doi.org/10.18112/openneuro.ds006468.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS006468 >>> dataset = DS006468(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MEG_SCANS']* ### *class* eegdash.dataset.dataset.DS006480(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Young Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006480` (OpenNeuro) * **Author (year):** `Kim2025_Young_Adult_Resting` * **Canonical:** — Also importable as: `DS006480`, `Kim2025_Young_Adult_Resting`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 68; recordings: 68; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006480](https://openneuro.org/datasets/ds006480) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006480](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) DOI: [https://doi.org/10.18112/openneuro.ds006480.v1.0.1](https://doi.org/10.18112/openneuro.ds006480.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006480 >>> dataset = DS006480(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Skill learning and consolidation in healthy humans * **Study:** `ds006502` (OpenNeuro) * **Author (year):** `Bonstrup2025` * **Canonical:** — Also importable as: `DS006502`, `Bonstrup2025`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 31; recordings: 380; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006502](https://openneuro.org/datasets/ds006502) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006502](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) DOI: [https://doi.org/10.18112/openneuro.ds006502.v1.0.0](https://doi.org/10.18112/openneuro.ds006502.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006502 >>> dataset = DS006502(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulations evoking negative motor responses * **Study:** `ds006519` (OpenNeuro) * **Author (year):** `Barborica2025` * **Canonical:** — Also importable as: `DS006519`, `Barborica2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 21; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006519](https://openneuro.org/datasets/ds006519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006519](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) DOI: [https://doi.org/10.18112/openneuro.ds006519.v1.0.0](https://doi.org/10.18112/openneuro.ds006519.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006519 >>> dataset = DS006519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006525(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting EEG * **Study:** `ds006525` (OpenNeuro) * **Author (year):** `Neuroimaging2025` * **Canonical:** — Also importable as: `DS006525`, `Neuroimaging2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006525](https://openneuro.org/datasets/ds006525) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006525](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) DOI: [https://doi.org/10.18112/openneuro.ds006525.v1.0.0](https://doi.org/10.18112/openneuro.ds006525.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006525 >>> dataset = DS006525(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reliability-Dubois2024 * **Study:** `ds006545` (OpenNeuro) * **Author (year):** `ReliabilityDubois2024` * **Canonical:** `Dubois2024` Also importable as: `DS006545`, `ReliabilityDubois2024`, `Dubois2024`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 49; recordings: 98; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006545](https://openneuro.org/datasets/ds006545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006545](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) DOI: [https://doi.org/10.18112/openneuro.ds006545.v1.0.0](https://doi.org/10.18112/openneuro.ds006545.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006545 >>> dataset = DS006545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Dubois2024']* ### *class* eegdash.dataset.dataset.DS006547(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual EEG Study (BrainVision → BIDS) * **Study:** `ds006547` (OpenNeuro) * **Author (year):** `Ghaffari2025` * **Canonical:** `Ghaffari2024` Also importable as: `DS006547`, `Ghaffari2025`, `Ghaffari2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006547](https://openneuro.org/datasets/ds006547) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006547](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) DOI: [https://doi.org/10.18112/openneuro.ds006547.v1.0.0](https://doi.org/10.18112/openneuro.ds006547.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006547 >>> dataset = DS006547(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ghaffari2024']* ### *class* eegdash.dataset.dataset.DS006554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Observation EEG raw data * **Study:** `ds006554` (OpenNeuro) * **Author (year):** `Su2025` * **Canonical:** — Also importable as: `DS006554`, `Su2025`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006554](https://openneuro.org/datasets/ds006554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006554](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) DOI: [https://doi.org/10.18112/openneuro.ds006554.v1.0.0](https://doi.org/10.18112/openneuro.ds006554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006554 >>> dataset = DS006554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dimension-based attention modulates early visual processing * **Study:** `ds006563` (OpenNeuro) * **Author (year):** `Gramann2025` * **Canonical:** — Also importable as: `DS006563`, `Gramann2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006563](https://openneuro.org/datasets/ds006563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006563](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) DOI: [https://doi.org/10.18112/openneuro.ds006563.v1.0.0](https://doi.org/10.18112/openneuro.ds006563.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006563 >>> dataset = DS006563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006576(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The role of REM sleep in neural differentiation of memories in the hippocampus * **Study:** `ds006576` (OpenNeuro) * **Author (year):** `McDevitt2025` * **Canonical:** — Also importable as: `DS006576`, `McDevitt2025`. 
Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
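"MongoDB-style filters" in the Notes means operator dictionaries such as `$in` alongside plain equality conditions. The `matches` evaluator below is a hypothetical sketch — actual matching is performed by the metadata database, not by client code like this — but it shows what such a filter expresses:

```python
def matches(record: dict, flt: dict) -> bool:
    """Minimal illustration of MongoDB-style matching: equality plus ``$in``."""
    for field, cond in flt.items():
        if isinstance(cond, dict) and "$in" in cond:
            # Operator condition: field value must be one of the listed values.
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # Plain condition: exact equality.
            return False
    return True


# Select subjects 01 or 02, restricted to the resting task.
query = {"subject": {"$in": ["01", "02"]}, "task": "rest"}
print(matches({"subject": "01", "task": "rest"}, query))  # True
print(matches({"subject": "05", "task": "rest"}, query))  # False
```

A dict of this shape can be passed as the `query` argument, provided every field it touches is listed in `ALLOWED_QUERY_FIELDS` and the key `dataset` is left out.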
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006576](https://openneuro.org/datasets/ds006576) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006576](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) DOI: [https://doi.org/10.18112/openneuro.ds006576.v1.0.3](https://doi.org/10.18112/openneuro.ds006576.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006576 >>> dataset = DS006576(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006593(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) cBCI Matrix Multimodal Dataset * **Study:** `ds006593` (OpenNeuro) * **Author (year):** `Celik2025` * **Canonical:** — Also importable as: `DS006593`, `Celik2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006593](https://openneuro.org/datasets/ds006593) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006593](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) DOI: [https://doi.org/10.18112/openneuro.ds006593.v1.0.0](https://doi.org/10.18112/openneuro.ds006593.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006593 >>> dataset = DS006593(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006629(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SINGSING * **Study:** `ds006629` (OpenNeuro) * **Author (year):** `Chanoine2025` * **Canonical:** `SINGSING` Also importable as: `DS006629`, `Chanoine2025`, `SINGSING`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 38; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006629](https://openneuro.org/datasets/ds006629) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006629](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) DOI: [https://doi.org/10.18112/openneuro.ds006629.v1.0.1](https://doi.org/10.18112/openneuro.ds006629.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006629 >>> dataset = DS006629(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SINGSING']* ### *class* eegdash.dataset.dataset.DS006647(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 2 * **Study:** `ds006647` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D2` * **Canonical:** — Also importable as: `DS006647`, `Chaudhuri2025_D2`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006647](https://openneuro.org/datasets/ds006647) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006647](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) DOI: [https://doi.org/10.18112/openneuro.ds006647.v1.0.1](https://doi.org/10.18112/openneuro.ds006647.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006647 >>> dataset = DS006647(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 1 * **Study:** `ds006648` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D1` * **Canonical:** — Also importable as: `DS006648`, `Chaudhuri2025_D1`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006648](https://openneuro.org/datasets/ds006648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006648](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) DOI: [https://doi.org/10.18112/openneuro.ds006648.v1.0.0](https://doi.org/10.18112/openneuro.ds006648.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006648 >>> dataset = DS006648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006673(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_Carlton_2025 * **Study:** `ds006673` (OpenNeuro) * **Author (year):** `Carlton2025` * **Canonical:** — Also importable as: `DS006673`, `Carlton2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006673](https://openneuro.org/datasets/ds006673) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006673](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) DOI: [https://doi.org/10.18112/openneuro.ds006673.v1.0.2](https://doi.org/10.18112/openneuro.ds006673.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006673 >>> dataset = DS006673(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006695(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Validation of Sleep Staging with Forehead EEG Patch * **Study:** `ds006695` (OpenNeuro) * **Author (year):** `Onton2025` * **Canonical:** `Onton2024` Also importable as: `DS006695`, `Onton2025`, `Onton2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006695](https://openneuro.org/datasets/ds006695) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006695](https://nemar.org/dataexplorer/detail?dataset_id=ds006695) DOI: [https://doi.org/10.18112/openneuro.ds006695.v1.0.2](https://doi.org/10.18112/openneuro.ds006695.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006695 >>> dataset = DS006695(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Onton2024']* ### *class* eegdash.dataset.dataset.DS006720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alpha power indexes working memory load for durations * **Study:** `ds006720` (OpenNeuro) * **Author (year):** `Herbst2025` * **Canonical:** — Also importable as: `DS006720`, `Herbst2025`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 246; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006720](https://openneuro.org/datasets/ds006720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006720](https://nemar.org/dataexplorer/detail?dataset_id=ds006720) DOI: [https://doi.org/10.18112/openneuro.ds006720.v1.0.0](https://doi.org/10.18112/openneuro.ds006720.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006720 >>> dataset = DS006720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006735(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding * **Study:** `ds006735` (OpenNeuro) * **Author (year):** `Shan2025` * **Canonical:** — Also importable as: `DS006735`, `Shan2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006735](https://openneuro.org/datasets/ds006735) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006735](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) DOI: [https://doi.org/10.18112/openneuro.ds006735.v2.0.0](https://doi.org/10.18112/openneuro.ds006735.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006735 >>> dataset = DS006735(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural decoding of competitive decision-making in Rock-Paper-Scissors * **Study:** `ds006761` (OpenNeuro) * **Author (year):** `Moerel2025_Neural` * **Canonical:** — Also importable as: `DS006761`, `Moerel2025_Neural`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. 
Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006761](https://openneuro.org/datasets/ds006761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006761](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) DOI: [https://doi.org/10.18112/openneuro.ds006761.v1.0.0](https://doi.org/10.18112/openneuro.ds006761.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006761 >>> dataset = DS006761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multiple Object Monitoring (EEG) * **Study:** `ds006768` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** — Also importable as: `DS006768`, `Lowe2025`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 210; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006768](https://openneuro.org/datasets/ds006768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006768](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) DOI: [https://doi.org/10.18112/openneuro.ds006768.v1.1.0](https://doi.org/10.18112/openneuro.ds006768.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006768 >>> dataset = DS006768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG before and after different study methods * **Study:** `ds006801` (OpenNeuro) * **Author (year):** `Alves2025` * **Canonical:** — Also importable as: `DS006801`, `Alves2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006801](https://openneuro.org/datasets/ds006801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006801](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) DOI: [https://doi.org/10.18112/openneuro.ds006801.v1.0.0](https://doi.org/10.18112/openneuro.ds006801.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006801 >>> dataset = DS006801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Collaborative rule learning promotes interbrain information alignment * **Study:** `ds006802` (OpenNeuro) * **Author (year):** `Moerel2025_Collaborative` * **Canonical:** — Also importable as: `DS006802`, `Moerel2025_Collaborative`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006802](https://openneuro.org/datasets/ds006802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006802](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) DOI: [https://doi.org/10.18112/openneuro.ds006802.v1.0.0](https://doi.org/10.18112/openneuro.ds006802.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006802 >>> dataset = DS006802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006803(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroTechs Dataset for Stem Skills * **Study:** `ds006803` (OpenNeuro) * **Author (year):** `PechCanul2025` * **Canonical:** — Also importable as: `DS006803`, `PechCanul2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006803](https://openneuro.org/datasets/ds006803) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006803](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) DOI: [https://doi.org/10.18112/openneuro.ds006803.v1.1.1](https://doi.org/10.18112/openneuro.ds006803.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006803 >>> dataset = DS006803(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm 2.0 * **Study:** `ds006817` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** `VisualContextTrajectory_v2` Also importable as: `DS006817`, `Lowe2025`, `VisualContextTrajectory_v2`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006817](https://openneuro.org/datasets/ds006817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006817](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) DOI: [https://doi.org/10.18112/openneuro.ds006817.v1.0.0](https://doi.org/10.18112/openneuro.ds006817.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006817 >>> dataset = DS006817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VisualContextTrajectory_v2', 'Lowe2025']* ### *class* eegdash.dataset.dataset.DS006839(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings during sham neurofeedback in virtual reality * **Study:** `ds006839` (OpenNeuro) * **Author (year):** `Gonzales2025` * **Canonical:** — Also importable as: `DS006839`, `Gonzales2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 36; recordings: 144; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006839](https://openneuro.org/datasets/ds006839) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006839](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) DOI: [https://doi.org/10.18112/openneuro.ds006839.v1.0.0](https://doi.org/10.18112/openneuro.ds006839.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006839 >>> dataset = DS006839(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset * **Study:** `ds006840` (OpenNeuro) * **Author (year):** `Cai2025` * **Canonical:** `IACKD` Also importable as: `DS006840`, `Cai2025`, `IACKD`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006840](https://openneuro.org/datasets/ds006840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006840](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) DOI: [https://doi.org/10.18112/openneuro.ds006840.v1.0.0](https://doi.org/10.18112/openneuro.ds006840.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006840 >>> dataset = DS006840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['IACKD']* ### *class* eegdash.dataset.dataset.DS006848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits * **Study:** `ds006848` (OpenNeuro) * **Author (year):** `Kosachenko2025` * **Canonical:** — Also importable as: `DS006848`, `Kosachenko2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 52; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006848](https://openneuro.org/datasets/ds006848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006848](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) DOI: [https://doi.org/10.18112/openneuro.ds006848.v1.0.0](https://doi.org/10.18112/openneuro.ds006848.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006848 >>> dataset = DS006848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Urban Appraisal: Physiological Recording during Rating of Different Urban Environments * **Study:** `ds006850` (OpenNeuro) * **Author (year):** `Zaehme2025` * **Canonical:** — Also importable as: `DS006850`, `Zaehme2025`. 
Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006850](https://openneuro.org/datasets/ds006850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006850](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) DOI: [https://doi.org/10.18112/openneuro.ds006850.v1.0.0](https://doi.org/10.18112/openneuro.ds006850.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006850 >>> dataset = DS006850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006861(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals * **Study:** `ds006861` (OpenNeuro) * **Author (year):** `Maka2025_Targeted` * **Canonical:** — Also importable as: `DS006861`, `Maka2025_Targeted`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 120; recordings: 239; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006861](https://openneuro.org/datasets/ds006861) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006861](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) DOI: [https://doi.org/10.18112/openneuro.ds006861.v1.0.2](https://doi.org/10.18112/openneuro.ds006861.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006861 >>> dataset = DS006861(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals * **Study:** `ds006866` (OpenNeuro) * **Author (year):** `Maka2025_Discrepancy` * **Canonical:** — Also importable as: `DS006866`, `Maka2025_Discrepancy`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 148; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006866](https://openneuro.org/datasets/ds006866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006866](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) DOI: [https://doi.org/10.18112/openneuro.ds006866.v1.0.0](https://doi.org/10.18112/openneuro.ds006866.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006866 >>> dataset = DS006866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006890(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata * **Study:** `ds006890` (OpenNeuro) * **Author (year):** `Yang2025_Longitudinal` * **Canonical:** — Also importable as: `DS006890`, `Yang2025_Longitudinal`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 870; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006890](https://openneuro.org/datasets/ds006890) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006890](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) DOI: [https://doi.org/10.18112/openneuro.ds006890.v1.0.0](https://doi.org/10.18112/openneuro.ds006890.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006890 >>> dataset = DS006890(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy * **Study:** `ds006902` (OpenNeuro) * **Author (year):** `Geisler2025` * **Canonical:** — Also importable as: `DS006902`, `Geisler2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006902](https://openneuro.org/datasets/ds006902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006902](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) DOI: [https://doi.org/10.18112/openneuro.ds006902.v1.1.1](https://doi.org/10.18112/openneuro.ds006902.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006902 >>> dataset = DS006902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006903(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_2025 * **Study:** `ds006903` (OpenNeuro) * **Author (year):** `here2025` * **Canonical:** — Also importable as: `DS006903`, `here2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006903](https://openneuro.org/datasets/ds006903) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006903](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) DOI: [https://doi.org/10.18112/openneuro.ds006903.v1.0.0](https://doi.org/10.18112/openneuro.ds006903.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006903 >>> dataset = DS006903(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006910(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Naming EC * **Study:** `ds006910` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_Naming_EC` * **Canonical:** — Also importable as: `DS006910`, `Kochi2025_Auditory_Naming_EC`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 121; recordings: 384; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006910](https://openneuro.org/datasets/ds006910) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006910](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) DOI: [https://doi.org/10.18112/openneuro.ds006910.v1.0.1](https://doi.org/10.18112/openneuro.ds006910.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006910 >>> dataset = DS006910(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006914(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Naming EC * **Study:** `ds006914` (OpenNeuro) * **Author (year):** `Kochi2025_Visual_Naming_EC` * **Canonical:** — Also importable as: `DS006914`, `Kochi2025_Visual_Naming_EC`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 110; recordings: 353; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006914](https://openneuro.org/datasets/ds006914) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006914](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) DOI: [https://doi.org/10.18112/openneuro.ds006914.v1.0.3](https://doi.org/10.18112/openneuro.ds006914.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006914 >>> dataset = DS006914(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006921(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High Density Resting State EEG of Phantom Limb Pain and Controls * **Study:** `ds006921` (OpenNeuro) * **Author (year):** `Ramne2025` * **Canonical:** — Also importable as: `DS006921`, `Ramne2025`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 38; recordings: 152; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
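To make "MongoDB-style filters" concrete, here is a hedged, self-contained matcher for a tiny subset of that syntax (exact match and the `$in` operator). The record field names below are illustrative assumptions; the real filtering is applied to the fields in `ALLOWED_QUERY_FIELDS` by the metadata backend, not by client-side code like this:

```python
def matches(record: dict, query: dict) -> bool:
    """Toy matcher for a subset of MongoDB-style filters.

    Supports exact equality and the {"$in": [...]} operator only;
    shown purely to illustrate the query semantics described above.
    """
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

# Hypothetical metadata records, shaped loosely like BIDS entities.
records = [
    {"subject": "01", "task": "rest"},
    {"subject": "02", "task": "rest"},
    {"subject": "02", "task": "memory"},
]
query = {"subject": {"$in": ["01", "02"]}, "task": "rest"}
print([r["subject"] for r in records if matches(r, query)])
# ['01', '02']
```

All top-level fields in a query are combined with AND, which is why adding more fields can only narrow the selection.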
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006921](https://openneuro.org/datasets/ds006921) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006921](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) DOI: [https://doi.org/10.18112/openneuro.ds006921.v1.1.1](https://doi.org/10.18112/openneuro.ds006921.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006921 >>> dataset = DS006921(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006923(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Electroencephalograms of Juvenile Offenders * **Study:** `ds006923` (OpenNeuro) * **Author (year):** `Polo2025` * **Canonical:** — Also importable as: `DS006923`, `Polo2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 140; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006923](https://openneuro.org/datasets/ds006923) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006923](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) DOI: [https://doi.org/10.18112/openneuro.ds006923.v1.0.0](https://doi.org/10.18112/openneuro.ds006923.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006923 >>> dataset = DS006923(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals * **Study:** `ds006940` (OpenNeuro) * **Author (year):** `Sarkar2025_StudyOF` * **Canonical:** — Also importable as: `DS006940`, `Sarkar2025_StudyOF`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 935; tasks: 15. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006940](https://openneuro.org/datasets/ds006940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006940](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) DOI: [https://doi.org/10.18112/openneuro.ds006940.v1.0.0](https://doi.org/10.18112/openneuro.ds006940.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006940 >>> dataset = DS006940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006945(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles) * **Study:** `ds006945` (OpenNeuro) * **Author (year):** `Sarkar2025_T1_Weighted_Structural` * **Canonical:** — Also importable as: `DS006945`, `Sarkar2025_T1_Weighted_Structural`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 14; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006945](https://openneuro.org/datasets/ds006945) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006945](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) DOI: [https://doi.org/10.18112/openneuro.ds006945.v1.2.1](https://doi.org/10.18112/openneuro.ds006945.v1.2.1) ### Examples ```pycon >>> from eegdash.dataset import DS006945 >>> dataset = DS006945(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Control Processes Moderate Visual Working Memory Gating Dataset * **Study:** `ds006963` (OpenNeuro) * **Author (year):** `Ozdemir2025` * **Canonical:** — Also importable as: `DS006963`, `Ozdemir2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006963](https://openneuro.org/datasets/ds006963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006963](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) DOI: [https://doi.org/10.18112/openneuro.ds006963.v1.0.0](https://doi.org/10.18112/openneuro.ds006963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006963 >>> dataset = DS006963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS006979(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study * **Study:** `ds006979` (OpenNeuro) * **Author (year):** `Ramzaoui2025` * **Canonical:** `Ramzaoui2024` Also importable as: `DS006979`, `Ramzaoui2025`, `Ramzaoui2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 53; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006979](https://openneuro.org/datasets/ds006979) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006979](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) DOI: [https://doi.org/10.18112/openneuro.ds006979.v1.0.1](https://doi.org/10.18112/openneuro.ds006979.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006979 >>> dataset = DS006979(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ramzaoui2024']* ### *class* eegdash.dataset.dataset.DS007006(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VR-Compassion Cultivation Training * **Study:** `ds007006` (OpenNeuro) * **Author (year):** `Wu2025` * **Canonical:** — Also importable as: `DS007006`, `Wu2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 10; recordings: 50; tasks: 5. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007006](https://openneuro.org/datasets/ds007006) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007006](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) DOI: [https://doi.org/10.18112/openneuro.ds007006.v1.0.0](https://doi.org/10.18112/openneuro.ds007006.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007006 >>> dataset = DS007006(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007020(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Mortality Dataset in Parkinson’s Disease * **Study:** `ds007020` (OpenNeuro) * **Author (year):** `Jamshidi2025` * **Canonical:** — Also importable as: `DS007020`, `Jamshidi2025`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 94; recordings: 94; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007020](https://openneuro.org/datasets/ds007020) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007020](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) DOI: [https://doi.org/10.18112/openneuro.ds007020.v1.0.0](https://doi.org/10.18112/openneuro.ds007020.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007020 >>> dataset = DS007020(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Macaque Monkey DISC Data * **Study:** `ds007028` (OpenNeuro) * **Author (year):** `Kajikawa2025` * **Canonical:** `Kajikawa2000` Also importable as: `DS007028`, `Kajikawa2025`, `Kajikawa2000`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007028](https://openneuro.org/datasets/ds007028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007028](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) DOI: [https://doi.org/10.18112/openneuro.ds007028.v1.0.0](https://doi.org/10.18112/openneuro.ds007028.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007028 >>> dataset = DS007028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kajikawa2000']* ### *class* eegdash.dataset.dataset.DS007052(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N400 Word Processing * **Study:** `ds007052` (OpenNeuro) * **Author (year):** `Couperus2025_N400` * **Canonical:** `Couperus2021_N400` Also importable as: `DS007052`, `Couperus2025_N400`, `Couperus2021_N400`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 288; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007052](https://openneuro.org/datasets/ds007052) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007052](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) DOI: [https://doi.org/10.18112/openneuro.ds007052.v1.1.2](https://doi.org/10.18112/openneuro.ds007052.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS007052 >>> dataset = DS007052(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_N400']* ### *class* eegdash.dataset.dataset.DS007056(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE P300 Visual Oddball * **Study:** `ds007056` (OpenNeuro) * **Author (year):** `Couperus2025_P300` * **Canonical:** `Couperus2021_P300` Also importable as: `DS007056`, `Couperus2025_P300`, `Couperus2021_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 286; recordings: 286; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007056](https://openneuro.org/datasets/ds007056) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007056](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) DOI: [https://doi.org/10.18112/openneuro.ds007056.v1.1.1](https://doi.org/10.18112/openneuro.ds007056.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007056 >>> dataset = DS007056(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_P300']* ### *class* eegdash.dataset.dataset.DS007069(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE MMN Auditory Oddball * **Study:** `ds007069` (OpenNeuro) * **Author (year):** `Couperus2025_MMN` * **Canonical:** `Couperus2021_MMN` Also importable as: `DS007069`, `Couperus2025_MMN`, `Couperus2021_MMN`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 281; recordings: 281; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007069](https://openneuro.org/datasets/ds007069) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007069](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) DOI: [https://doi.org/10.18112/openneuro.ds007069.v1.0.0](https://doi.org/10.18112/openneuro.ds007069.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007069 >>> dataset = DS007069(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_MMN']* ### *class* eegdash.dataset.dataset.DS007081(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load * **Study:** `ds007081` (OpenNeuro) * **Author (year):** `Ylmaz2025` * **Canonical:** — Also importable as: `DS007081`, `Ylmaz2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007081](https://openneuro.org/datasets/ds007081) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007081](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) DOI: [https://doi.org/10.18112/openneuro.ds007081.v1.0.0](https://doi.org/10.18112/openneuro.ds007081.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007081 >>> dataset = DS007081(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RNS_Epilepsy-iBIDS * **Study:** `ds007095` (OpenNeuro) * **Author (year):** `Feng2025` * **Canonical:** — Also importable as: `DS007095`, `Feng2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 6019; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007095](https://openneuro.org/datasets/ds007095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007095](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) DOI: [https://doi.org/10.18112/openneuro.ds007095.v1.0.0](https://doi.org/10.18112/openneuro.ds007095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007095 >>> dataset = DS007095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007096(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N170 Face Perception * **Study:** `ds007096` (OpenNeuro) * **Author (year):** `Couperus2025_PURSUE_N170_Face` * **Canonical:** `Couperus2017` Also importable as: `DS007096`, `Couperus2025_PURSUE_N170_Face`, `Couperus2017`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
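The Notes above say that `query` only supports filters on fields listed in `ALLOWED_QUERY_FIELDS`. A minimal sketch of that kind of whitelist check follows; the concrete field set here is an assumption chosen for the example and need not match the library's real whitelist:

```python
# Hypothetical whitelist, stand-in for the library's ALLOWED_QUERY_FIELDS.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run", "modality"}


def validate_query(query: dict) -> dict:
    """Reject any filter key that is not in the whitelist (sketch)."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise KeyError(f"unsupported query fields: {sorted(unknown)}")
    return query


validate_query({"task": "faces", "subject": "001"})   # accepted
# validate_query({"nchans": 64}) would raise KeyError under this whitelist.
```

A check like this fails fast on typos in filter keys instead of silently matching no records.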
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007096](https://openneuro.org/datasets/ds007096) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007096](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) DOI: [https://doi.org/10.18112/openneuro.ds007096.v1.0.0](https://doi.org/10.18112/openneuro.ds007096.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007096 >>> dataset = DS007096(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2017']* ### *class* eegdash.dataset.dataset.DS007118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part1 * **Study:** `ds007118` (OpenNeuro) * **Author (year):** `Hatano2025_part1` * **Canonical:** `Hatano` Also importable as: `DS007118`, `Hatano2025_part1`, `Hatano`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 65; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007118](https://openneuro.org/datasets/ds007118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007118](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) DOI: [https://doi.org/10.18112/openneuro.ds007118.v1.0.0](https://doi.org/10.18112/openneuro.ds007118.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007118 >>> dataset = DS007118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hatano']* ### *class* eegdash.dataset.dataset.DS007119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part3 * **Study:** `ds007119` (OpenNeuro) * **Author (year):** `Hatano2025_part3` * **Canonical:** — Also importable as: `DS007119`, `Hatano2025_part3`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 103; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007119](https://openneuro.org/datasets/ds007119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007119](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) DOI: [https://doi.org/10.18112/openneuro.ds007119.v1.0.0](https://doi.org/10.18112/openneuro.ds007119.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007119 >>> dataset = DS007119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part2 * **Study:** `ds007120` (OpenNeuro) * **Author (year):** `Hatano2025_part2` * **Canonical:** — Also importable as: `DS007120`, `Hatano2025_part2`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 65; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007120](https://openneuro.org/datasets/ds007120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007120](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) DOI: [https://doi.org/10.18112/openneuro.ds007120.v1.0.0](https://doi.org/10.18112/openneuro.ds007120.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007120 >>> dataset = DS007120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N2pc Visual Search * **Study:** `ds007137` (OpenNeuro) * **Author (year):** `Couperus2025_N2PC` * **Canonical:** `Couperus2021_N2pc` Also importable as: `DS007137`, `Couperus2025_N2PC`, `Couperus2021_N2pc`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 294; recordings: 294; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007137](https://openneuro.org/datasets/ds007137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007137](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) DOI: [https://doi.org/10.18112/openneuro.ds007137.v1.0.0](https://doi.org/10.18112/openneuro.ds007137.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007137 >>> dataset = DS007137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_N2pc']* ### *class* eegdash.dataset.dataset.DS007139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE LRP/ERN Flanker * **Study:** `ds007139` (OpenNeuro) * **Author (year):** `Couperus2025_LRP` * **Canonical:** `Couperus2021_LRP` Also importable as: `DS007139`, `Couperus2025_LRP`, `Couperus2021_LRP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007139](https://openneuro.org/datasets/ds007139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007139](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) DOI: [https://doi.org/10.18112/openneuro.ds007139.v1.0.0](https://doi.org/10.18112/openneuro.ds007139.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007139 >>> dataset = DS007139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_LRP']* ### *class* eegdash.dataset.dataset.DS007162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG) * **Study:** `ds007162` (OpenNeuro) * **Author (year):** `DS7162_VisualRecognition` * **Canonical:** — Also importable as: `DS007162`, `DS7162_VisualRecognition`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007162](https://openneuro.org/datasets/ds007162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007162](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) DOI: [https://doi.org/10.18112/openneuro.ds007162.v1.0.0](https://doi.org/10.18112/openneuro.ds007162.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007162 >>> dataset = DS007162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal Cognitive Workload n-back Task, 4 Difficulties * **Study:** `ds007169` (OpenNeuro) * **Author (year):** `Barras2026_Multimodal` * **Canonical:** `Barras2021` Also importable as: `DS007169`, `Barras2026_Multimodal`, `Barras2021`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. 
Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007169](https://openneuro.org/datasets/ds007169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007169](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) DOI: [https://doi.org/10.18112/openneuro.ds007169.v1.0.5](https://doi.org/10.18112/openneuro.ds007169.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS007169 >>> dataset = DS007169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Barras2021']* ### *class* eegdash.dataset.dataset.DS007172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Asymmetries Dataset * **Study:** `ds007172` (OpenNeuro) * **Author (year):** `Reinke2026` * **Canonical:** `EEGAsymmetries` Also importable as: `DS007172`, `Reinke2026`, `EEGAsymmetries`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 100; recordings: 501; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007172](https://openneuro.org/datasets/ds007172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007172](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) DOI: [https://doi.org/10.18112/openneuro.ds007172.v1.0.0](https://doi.org/10.18112/openneuro.ds007172.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007172 >>> dataset = DS007172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGAsymmetries']* ### *class* eegdash.dataset.dataset.DS007175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FFR-active-listening * **Study:** `ds007175` (OpenNeuro) * **Author (year):** `DS7175_FFR_ActiveListening` * **Canonical:** — Also importable as: `DS007175`, `DS7175_FFR_ActiveListening`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007175](https://openneuro.org/datasets/ds007175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007175](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) DOI: [https://doi.org/10.18112/openneuro.ds007175.v1.0.1](https://doi.org/10.18112/openneuro.ds007175.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007175 >>> dataset = DS007175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal EEG Test-Retest Reliability in Healthy Individuals * **Study:** `ds007176` (OpenNeuro) * **Author (year):** `Isaza2026_Longitudinal` * **Canonical:** — Also importable as: `DS007176`, `Isaza2026_Longitudinal`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 45; recordings: 300; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007176](https://openneuro.org/datasets/ds007176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007176](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) DOI: [https://doi.org/10.18112/openneuro.ds007176.v1.0.1](https://doi.org/10.18112/openneuro.ds007176.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007176 >>> dataset = DS007176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exo-EEG Experiment * **Study:** `ds007180` (OpenNeuro) * **Author (year):** `FuentesGuerra2026` * **Canonical:** `FuentesGuerra2024` Also importable as: `DS007180`, `FuentesGuerra2026`, `FuentesGuerra2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007180](https://openneuro.org/datasets/ds007180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007180](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) DOI: [https://doi.org/10.18112/openneuro.ds007180.v1.0.0](https://doi.org/10.18112/openneuro.ds007180.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007180 >>> dataset = DS007180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FuentesGuerra2024']* ### *class* eegdash.dataset.dataset.DS007181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia * **Study:** `ds007181` (OpenNeuro) * **Author (year):** `Li2026` * **Canonical:** — Also importable as: `DS007181`, `Li2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 59; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007181](https://openneuro.org/datasets/ds007181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007181](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) DOI: [https://doi.org/10.18112/openneuro.ds007181.v1.0.1](https://doi.org/10.18112/openneuro.ds007181.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007181 >>> dataset = DS007181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-session simultaneous EEG-fMRI dataset with online experience sampling * **Study:** `ds007216` (OpenNeuro) * **Author (year):** `Kucyi2026` * **Canonical:** `Kucyi2024` Also importable as: `DS007216`, `Kucyi2026`, `Kucyi2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 24; recordings: 187; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
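The filter mechanics described in the Notes can be sketched as follows. This is a minimal, hypothetical example: the field name `task` is an assumption for illustration — consult `ALLOWED_QUERY_FIELDS` for the fields the `query` argument actually accepts.

```python
# A minimal sketch of AND-ing an extra MongoDB-style filter with the
# dataset selection. The field name "task" is an assumption; check
# ALLOWED_QUERY_FIELDS for the fields `query` actually accepts.
query = {"task": {"$ne": "rest"}}  # hypothetical: exclude resting recordings

# The key "dataset" must not appear here -- the class reserves it:
assert "dataset" not in query

# from eegdash.dataset import DS007216
# dataset = DS007216(cache_dir="./data", query=query)  # fetches metadata
```

The constructor merges this filter with its own dataset selection, which is why supplying your own `dataset` key is rejected.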
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007216](https://openneuro.org/datasets/ds007216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007216](https://nemar.org/dataexplorer/detail?dataset_id=ds007216) DOI: [https://doi.org/10.18112/openneuro.ds007216.v1.0.0](https://doi.org/10.18112/openneuro.ds007216.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007216 >>> dataset = DS007216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kucyi2024']* ### *class* eegdash.dataset.dataset.DS007221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset * **Study:** `ds007221` (OpenNeuro) * **Author (year):** `Xinwei2026` * **Canonical:** — Also importable as: `DS007221`, `Xinwei2026`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 84; recordings: 1265; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007221](https://openneuro.org/datasets/ds007221) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007221](https://nemar.org/dataexplorer/detail?dataset_id=ds007221) DOI: [https://doi.org/10.18112/openneuro.ds007221.v1.0.1](https://doi.org/10.18112/openneuro.ds007221.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007221 >>> dataset = DS007221(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Workload 8-level arithmetic * **Study:** `ds007262` (OpenNeuro) * **Author (year):** `Barras2026_Cognitive` * **Canonical:** `Barras2025` Also importable as: `DS007262`, `Barras2026_Cognitive`, `Barras2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007262](https://openneuro.org/datasets/ds007262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007262](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) DOI: [https://doi.org/10.18112/openneuro.ds007262.v1.0.6](https://doi.org/10.18112/openneuro.ds007262.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS007262 >>> dataset = DS007262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Barras2025']* ### *class* eegdash.dataset.dataset.DS007314(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007314` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS` * **Canonical:** `Martzoukou2024_Post` Also importable as: `DS007314`, `Martzoukou2026_tACS`, `Martzoukou2024_Post`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007314](https://openneuro.org/datasets/ds007314) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007314](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) DOI: [https://doi.org/10.18112/openneuro.ds007314.v1.0.0](https://doi.org/10.18112/openneuro.ds007314.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007314 >>> dataset = DS007314(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Martzoukou2024_Post']* ### *class* eegdash.dataset.dataset.DS007315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007315` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS_Patients` * **Canonical:** `Martzoukou2024_Post_A` Also importable as: `DS007315`, `Martzoukou2026_tACS_Patients`, `Martzoukou2024_Post_A`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007315](https://openneuro.org/datasets/ds007315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007315](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) DOI: [https://doi.org/10.18112/openneuro.ds007315.v1.0.1](https://doi.org/10.18112/openneuro.ds007315.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007315 >>> dataset = DS007315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Martzoukou2024_Post_A']* ### *class* eegdash.dataset.dataset.DS007322(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Personalized smartphone notifications bias auditory salience across processing stages * **Study:** `ds007322` (OpenNeuro) * **Author (year):** `Mishra2026` * **Canonical:** `Mishra2024` Also importable as: `DS007322`, `Mishra2026`, `Mishra2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007322](https://openneuro.org/datasets/ds007322) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007322](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) DOI: [https://doi.org/10.18112/openneuro.ds007322.v1.0.1](https://doi.org/10.18112/openneuro.ds007322.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007322 >>> dataset = DS007322(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mishra2024']* ### *class* eegdash.dataset.dataset.DS007338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds007338` (OpenNeuro) * **Author (year):** `Plomecka2026` * **Canonical:** `EEGEyeNet_v2`, `EEGEYENET` Also importable as: `DS007338`, `Plomecka2026`, `EEGEyeNet_v2`, `EEGEYENET`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
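The `data_dir` attribute above is documented as `cache_dir / dataset_id`. A short `pathlib` sketch of that layout, with `"./data"` as a placeholder cache directory:

```python
from pathlib import Path

# data_dir is documented as cache_dir / dataset_id.
# "./data" is a placeholder; use whatever cache_dir you pass to the class.
cache_dir = Path("./data")
data_dir = cache_dir / "ds007338"

print(data_dir)  # e.g. data/ds007338 on POSIX systems
```

Downloads for a given dataset therefore stay isolated under their own `dataset_id` subdirectory of the shared cache.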
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007338](https://openneuro.org/datasets/ds007338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007338](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) DOI: [https://doi.org/10.18112/openneuro.ds007338.v1.0.0](https://doi.org/10.18112/openneuro.ds007338.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007338 >>> dataset = DS007338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGEyeNet_v2', 'EEGEYENET']* ### *class* eegdash.dataset.dataset.DS007347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Stereotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain * **Study:** `ds007347` (OpenNeuro) * **Author (year):** `Elias2026` * **Canonical:** — Also importable as: `DS007347`, `Elias2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Cancer`. Subjects: 5; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 

`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007347](https://openneuro.org/datasets/ds007347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007347](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) DOI: [https://doi.org/10.18112/openneuro.ds007347.v1.0.0](https://doi.org/10.18112/openneuro.ds007347.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007347 >>> dataset = DS007347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007353(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAD-MEEG * **Study:** `ds007353` (OpenNeuro) * **Author (year):** `Zhang2026` * **Canonical:** `HAD_MEEG`, `HADMEEG` Also importable as: `DS007353`, `Zhang2026`, `HAD_MEEG`, `HADMEEG`. Modality: `eeg, meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 32; recordings: 473; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007353](https://openneuro.org/datasets/ds007353) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007353](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) DOI: [https://doi.org/10.18112/openneuro.ds007353.v1.0.0](https://doi.org/10.18112/openneuro.ds007353.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007353 >>> dataset = DS007353(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HAD_MEEG', 'HADMEEG']* ### *class* eegdash.dataset.dataset.DS007358(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A subset of large-scale EEG dataset (India + Tanzania) * **Study:** `ds007358` (OpenNeuro) * **Author (year):** `Vianney2026` * **Canonical:** `Vianney2025` Also importable as: `DS007358`, `Vianney2026`, `Vianney2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 2000; recordings: 6000; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007358](https://openneuro.org/datasets/ds007358) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007358](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) DOI: [https://doi.org/10.18112/openneuro.ds007358.v1.0.0](https://doi.org/10.18112/openneuro.ds007358.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007358 >>> dataset = DS007358(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Vianney2025']* ### *class* eegdash.dataset.dataset.DS007406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset on consumer responses to extreme versus traditional marketing videos * **Study:** `ds007406` (OpenNeuro) * **Author (year):** `Edit2026` * **Canonical:** `Edit2024` Also importable as: `DS007406`, `Edit2026`, `Edit2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007406](https://openneuro.org/datasets/ds007406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007406](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) DOI: [https://doi.org/10.18112/openneuro.ds007406.v1.0.0](https://doi.org/10.18112/openneuro.ds007406.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007406 >>> dataset = DS007406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Edit2024']* ### *class* eegdash.dataset.dataset.DS007420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task * **Study:** `ds007420` (OpenNeuro) * **Author (year):** `Gao2026_Light_Weight_Multi` * **Canonical:** `Gao2024` Also importable as: `DS007420`, `Gao2026_Light_Weight_Multi`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 60; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007420](https://openneuro.org/datasets/ds007420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007420](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) DOI: [https://doi.org/10.18112/openneuro.ds007420.v1.0.2](https://doi.org/10.18112/openneuro.ds007420.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007420 >>> dataset = DS007420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gao2024']* ### *class* eegdash.dataset.dataset.DS007427(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification * **Study:** `ds007427` (OpenNeuro) * **Author (year):** `Isaza2026_Comprehensive` * **Canonical:** `HenaoIsaza2026` Also importable as: `DS007427`, `Isaza2026_Comprehensive`, `HenaoIsaza2026`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 44; recordings: 44; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
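The Notes above can be illustrated with a concrete filter. A minimal sketch of building a MongoDB-style `query` dict, assuming `subject` is among the fields in `ALLOWED_QUERY_FIELDS` (check that constant for the authoritative list):

```python
# Build a MongoDB-style filter to AND with the dataset selection.
# The class merges this dict with its own {"dataset": ...} filter,
# so the dict must not contain the key "dataset" itself.
query = {"subject": {"$in": ["sub-01", "sub-02"]}}
assert "dataset" not in query  # the constructor rejects this key

# Hypothetical usage (fetches metadata on first use):
# from eegdash.dataset import DS007427
# dataset = DS007427(cache_dir="./data", query=query)
```

Operators such as `$in` follow standard MongoDB query syntax; a plain value (e.g. `{"subject": "sub-01"}`) matches exactly.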
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007427](https://openneuro.org/datasets/ds007427) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007427](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) DOI: [https://doi.org/10.18112/openneuro.ds007427.v1.0.1](https://doi.org/10.18112/openneuro.ds007427.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007427 >>> dataset = DS007427(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HenaoIsaza2026']* ### *class* eegdash.dataset.dataset.DS007431(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Diffuse predictions stabilize and reshape neural code during memory encoding * **Study:** `ds007431` (OpenNeuro) * **Author (year):** `Ataseven2026` * **Canonical:** `Ataseven2024` Also importable as: `DS007431`, `Ataseven2026`, `Ataseven2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007431](https://openneuro.org/datasets/ds007431) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007431](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) DOI: [https://doi.org/10.18112/openneuro.ds007431.v1.0.0](https://doi.org/10.18112/openneuro.ds007431.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007431 >>> dataset = DS007431(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ataseven2024']* ### *class* eegdash.dataset.dataset.DS007445(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Thalamocortical ictal iEEG dataset * **Study:** `ds007445` (OpenNeuro) * **Author (year):** `Panchavati2026` * **Canonical:** — Also importable as: `DS007445`, `Panchavati2026`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 19; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007445](https://openneuro.org/datasets/ds007445) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007445](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) DOI: [https://doi.org/10.18112/openneuro.ds007445.v1.0.2](https://doi.org/10.18112/openneuro.ds007445.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007445 >>> dataset = DS007445(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007454(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A common neural mechanism underlies experiences of passage of time * **Study:** `ds007454` (OpenNeuro) * **Author (year):** `DS7454_TimePerception` * **Canonical:** — Also importable as: `DS007454`, `DS7454_TimePerception`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007454](https://openneuro.org/datasets/ds007454) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007454](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) DOI: [https://doi.org/10.18112/openneuro.ds007454.v1.0.1](https://doi.org/10.18112/openneuro.ds007454.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007454 >>> dataset = DS007454(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007463(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Very-High-Density Diffuse Optical Tomography System Validation Dataset * **Study:** `ds007463` (OpenNeuro) * **Author (year):** `Fogarty2026_Very` * **Canonical:** `Fogarty2025` Also importable as: `DS007463`, `Fogarty2026_Very`, `Fogarty2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 14. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007463](https://openneuro.org/datasets/ds007463) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007463](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) DOI: [https://doi.org/10.18112/openneuro.ds007463.v1.1.1](https://doi.org/10.18112/openneuro.ds007463.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007463 >>> dataset = DS007463(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Fogarty2025']* ### *class* eegdash.dataset.dataset.DS007471(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Joint agency EEG dataset * **Study:** `ds007471` (OpenNeuro) * **Author (year):** `Zhou2026` * **Canonical:** `Zhou2024` Also importable as: `DS007471`, `Zhou2026`, `Zhou2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007471](https://openneuro.org/datasets/ds007471) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007471](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) DOI: [https://doi.org/10.18112/openneuro.ds007471.v1.0.0](https://doi.org/10.18112/openneuro.ds007471.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007471 >>> dataset = DS007471(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhou2024']* ### *class* eegdash.dataset.dataset.DS007473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset * **Study:** `ds007473` (OpenNeuro) * **Author (year):** `Fogarty2026_High` * **Canonical:** `Tripathy2024` Also importable as: `DS007473`, `Fogarty2026_High`, `Tripathy2024`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 189; tasks: 19. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007473](https://openneuro.org/datasets/ds007473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007473](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) DOI: [https://doi.org/10.18112/openneuro.ds007473.v1.0.0](https://doi.org/10.18112/openneuro.ds007473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007473 >>> dataset = DS007473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Tripathy2024']* ### *class* eegdash.dataset.dataset.DS007477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TimeSeries BIDS converted * **Study:** `ds007477` (OpenNeuro) * **Author (year):** `Niu2026` * **Canonical:** — Also importable as: `DS007477`, `Niu2026`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 18; recordings: 36; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
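The recording-level metadata mentioned in the Notes can be filtered with ordinary pandas operations. A minimal offline sketch, assuming (as in braindecode, which `EEGDashDataset` builds on) that `description` is a pandas `DataFrame` with one row per recording; the stand-in frame below substitutes for the real one, which requires a download:

```python
import pandas as pd

# Stand-in for dataset.description: one row per recording.
description = pd.DataFrame(
    {"subject": ["sub-01", "sub-01", "sub-02"], "task": ["rest", "walk", "rest"]}
)

# Boolean indexing selects the recordings of interest.
rest_only = description[description["task"] == "rest"]
assert len(rest_only) == 2

# With a real dataset the same pattern applies:
# from eegdash.dataset import DS007477
# dataset = DS007477(cache_dir="./data")
# rest_only = dataset.description[dataset.description["task"] == "rest"]
```

The row positions of the filtered frame index back into the dataset, so the matching recordings can then be loaded individually.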
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007477](https://openneuro.org/datasets/ds007477) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007477](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) DOI: [https://doi.org/10.18112/openneuro.ds007477.v1.0.1](https://doi.org/10.18112/openneuro.ds007477.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007477 >>> dataset = DS007477(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of hunger and state preferences on the neural processing of food images * **Study:** `ds007521` (OpenNeuro) * **Author (year):** `Moerel2026` * **Canonical:** `Moerel2025` Also importable as: `DS007521`, `Moerel2026`, `Moerel2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007521](https://openneuro.org/datasets/ds007521) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007521](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) DOI: [https://doi.org/10.18112/openneuro.ds007521.v1.0.1](https://doi.org/10.18112/openneuro.ds007521.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007521 >>> dataset = DS007521(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moerel2025']* ### *class* eegdash.dataset.dataset.DS007523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LPP MEG Listen * **Study:** `ds007523` (OpenNeuro) * **Author (year):** `Bel2026` * **Canonical:** `Dascoli2025` Also importable as: `DS007523`, `Bel2026`, `Dascoli2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 58; recordings: 579; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007523](https://openneuro.org/datasets/ds007523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007523](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) DOI: [https://doi.org/10.18112/openneuro.ds007523.v1.0.0](https://doi.org/10.18112/openneuro.ds007523.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007523 >>> dataset = DS007523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Dascoli2025']* ### *class* eegdash.dataset.dataset.DS007524(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LittlePrince_MEG_French_Read_Pallier2025 * **Study:** `ds007524` (OpenNeuro) * **Author (year):** `Pallier2025` * **Canonical:** `LittlePrince` Also importable as: `DS007524`, `Pallier2025`, `LittlePrince`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 50; recordings: 500; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007524](https://openneuro.org/datasets/ds007524) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007524](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) DOI: [https://doi.org/10.18112/openneuro.ds007524.v1.0.1](https://doi.org/10.18112/openneuro.ds007524.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007524 >>> dataset = DS007524(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LittlePrince']* ### *class* eegdash.dataset.dataset.DS007526(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease * **Study:** `ds007526` (OpenNeuro) * **Author (year):** `Katzir2026` * **Canonical:** `PD_EEG`, `PDEEG` Also importable as: `DS007526`, `Katzir2026`, `PD_EEG`, `PDEEG`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 144; recordings: 277; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007526](https://openneuro.org/datasets/ds007526) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007526](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) DOI: [https://doi.org/10.18112/openneuro.ds007526.v1.0.0](https://doi.org/10.18112/openneuro.ds007526.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007526 >>> dataset = DS007526(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PD_EEG', 'PDEEG']* ### *class* eegdash.dataset.dataset.DS007554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal dataset from the CMx7-MM Experiment * **Study:** `ds007554` (OpenNeuro) * **Author (year):** `Ajra2026` * **Canonical:** — Also importable as: `DS007554`, `Ajra2026`. Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 1034; tasks: 7. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007554](https://openneuro.org/datasets/ds007554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007554](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) DOI: [https://doi.org/10.18112/openneuro.ds007554.v1.0.0](https://doi.org/10.18112/openneuro.ds007554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007554 >>> dataset = DS007554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Pre/Post Intervention Dataset * **Study:** `ds007558` (OpenNeuro) * **Author (year):** `Qi2026` * **Canonical:** — Also importable as: `DS007558`, `Qi2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 67; recordings: 121; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007558](https://openneuro.org/datasets/ds007558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007558](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) DOI: [https://doi.org/10.18112/openneuro.ds007558.v1.0.0](https://doi.org/10.18112/openneuro.ds007558.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007558 >>> dataset = DS007558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.DS007591(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delineating neural contributions to EEG-based speech decoding * **Study:** `ds007591` (OpenNeuro) * **Author (year):** `Sato2026_Delineating` * **Canonical:** `Sato2025` Also importable as: `DS007591`, `Sato2026_Delineating`, `Sato2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 21; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007591](https://openneuro.org/datasets/ds007591) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007591](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) DOI: [https://doi.org/10.18112/openneuro.ds007591.v1.0.1](https://doi.org/10.18112/openneuro.ds007591.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007591 >>> dataset = DS007591(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sato2025']* ### *class* eegdash.dataset.dataset.DS007602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Speech Brain Decoding Dataset * **Study:** `ds007602` (OpenNeuro) * **Author (year):** `Sato2026_Speech` * **Canonical:** `Sato2024` Also importable as: `DS007602`, `Sato2026_Speech`, `Sato2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 113; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007602](https://openneuro.org/datasets/ds007602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007602](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) DOI: [https://doi.org/10.18112/openneuro.ds007602.v1.0.1](https://doi.org/10.18112/openneuro.ds007602.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007602 >>> dataset = DS007602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sato2024']* ### *class* eegdash.dataset.dataset.DS007609(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-State EEG and Trait Anxiety * **Study:** `ds007609` (OpenNeuro) * **Author (year):** `Shalamberidze2026` * **Canonical:** `Shalamberidze2025` Also importable as: `DS007609`, `Shalamberidze2026`, `Shalamberidze2025`. 
Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007609](https://openneuro.org/datasets/ds007609) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007609](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) DOI: [https://doi.org/10.18112/openneuro.ds007609.v1.0.0](https://doi.org/10.18112/openneuro.ds007609.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007609 >>> dataset = DS007609(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shalamberidze2025']* ### *class* eegdash.dataset.dataset.DS007615(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LDAEP and resting-state EEG in healthy women * **Study:** `ds007615` (OpenNeuro) * **Author (year):** `Normannseth2026` * **Canonical:** — Also importable as: `DS007615`, `Normannseth2026`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 69; recordings: 192; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007615](https://openneuro.org/datasets/ds007615) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007615](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) DOI: [https://doi.org/10.18112/openneuro.ds007615.v1.0.0](https://doi.org/10.18112/openneuro.ds007615.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007615 >>> dataset = DS007615(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Normannseth2026']* ### eegdash.dataset.dataset.Dascoli2025 alias of [`DS007523`](eegdash.dataset.DS007523.md#eegdash.dataset.DS007523) ### eegdash.dataset.dataset.Delorme alias of [`DS003061`](eegdash.dataset.DS003061.md#eegdash.dataset.DS003061) ### eegdash.dataset.dataset.Dubois2024 alias of [`DS006545`](eegdash.dataset.DS006545.md#eegdash.dataset.DS006545) ### *class* eegdash.dataset.dataset.EEG2025R1(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf` * **Canonical:** `HBN_r1_bdf` Also importable as: `EEG2025R1`, `Shirazi2024_R1_bdf`, `HBN_r1_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1](https://openneuro.org/datasets/EEG2025r1) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1 >>> dataset = EEG2025R1(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R10(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf` * **Canonical:** `HBN_r10_bdf` Also importable as: `EEG2025R10`, `Shirazi2025_R10_bdf`, `HBN_r10_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10](https://openneuro.org/datasets/EEG2025r10) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10 >>> dataset = EEG2025R10(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R10MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10mini` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf_mini` * **Canonical:** `HBN_r10_bdf_mini` Also importable as: `EEG2025R10MINI`, `Shirazi2025_R10_bdf_mini`, `HBN_r10_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10mini](https://openneuro.org/datasets/EEG2025r10mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10MINI >>> dataset = EEG2025R10MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R11(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf` * **Canonical:** `HBN_r11_bdf` Also importable as: `EEG2025R11`, `Shirazi2025_R11_bdf`, `HBN_r11_bdf`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 430; recordings: 3397; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11](https://openneuro.org/datasets/EEG2025r11) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11 >>> dataset = EEG2025R11(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R11MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11mini` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf_mini` * **Canonical:** `HBN_r11_bdf_mini` Also importable as: `EEG2025R11MINI`, `Shirazi2025_R11_bdf_mini`, `HBN_r11_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11mini](https://openneuro.org/datasets/EEG2025r11mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11MINI >>> dataset = EEG2025R11MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R1MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1mini` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf_mini` * **Canonical:** `HBN_r1_bdf_mini` Also importable as: `EEG2025R1MINI`, `Shirazi2024_R1_bdf_mini`, `HBN_r1_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1mini](https://openneuro.org/datasets/EEG2025r1mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1MINI >>> dataset = EEG2025R1MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R2(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf` * **Canonical:** `HBN_r2_bdf` Also importable as: `EEG2025R2`, `Shirazi2024_R2_bdf`, `HBN_r2_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2](https://openneuro.org/datasets/EEG2025r2) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2 >>> dataset = EEG2025R2(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R2MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2mini` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf_mini` * **Canonical:** `HBN_r2_bdf_mini` Also importable as: `EEG2025R2MINI`, `Shirazi2024_R2_bdf_mini`, `HBN_r2_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2mini](https://openneuro.org/datasets/EEG2025r2mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2MINI >>> dataset = EEG2025R2MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R3(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf` * **Canonical:** `HBN_r3_bdf` Also importable as: `EEG2025R3`, `Shirazi2024_R3_bdf`, `HBN_r3_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
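The `data_dir` attribute above is documented as `cache_dir / dataset_id`. A small `pathlib` sketch of the resulting on-disk layout (the helper and the paths are illustrative, not eegdash internals):

```python
from pathlib import Path

def local_data_dir(cache_dir: str, dataset_id: str) -> Path:
    # Mirrors the documented attribute: data_dir = cache_dir / dataset_id.
    return Path(cache_dir) / dataset_id

print(local_data_dir("./data", "EEG2025r3").as_posix())  # data/EEG2025r3
```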
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3](https://openneuro.org/datasets/EEG2025r3) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3 >>> dataset = EEG2025R3(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R3MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3mini` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf_mini` * **Canonical:** `HBN_r3_bdf_mini` Also importable as: `EEG2025R3MINI`, `Shirazi2024_R3_bdf_mini`, `HBN_r3_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3mini](https://openneuro.org/datasets/EEG2025r3mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3MINI >>> dataset = EEG2025R3MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R4(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf` * **Canonical:** `HBN_r4_bdf` Also importable as: `EEG2025R4`, `Shirazi2024_R4_bdf`, `HBN_r4_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 324; recordings: 3342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4](https://openneuro.org/datasets/EEG2025r4) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4 >>> dataset = EEG2025R4(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R4MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4mini` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf_mini` * **Canonical:** `HBN_r4_bdf_mini` Also importable as: `EEG2025R4MINI`, `Shirazi2024_R4_bdf_mini`, `HBN_r4_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4mini](https://openneuro.org/datasets/EEG2025r4mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4MINI >>> dataset = EEG2025R4MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R5(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf` * **Canonical:** `HBN_r5_bdf` Also importable as: `EEG2025R5`, `Shirazi2024_R5_bdf`, `HBN_r5_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5](https://openneuro.org/datasets/EEG2025r5) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5 >>> dataset = EEG2025R5(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R5MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5mini` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf_mini` * **Canonical:** `HBN_r5_bdf_mini` Also importable as: `EEG2025R5MINI`, `Shirazi2024_R5_bdf_mini`, `HBN_r5_bdf_mini`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5mini](https://openneuro.org/datasets/EEG2025r5mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5MINI >>> dataset = EEG2025R5MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R6(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf` * **Canonical:** `HBN_r6_bdf` Also importable as: `EEG2025R6`, `Shirazi2024_R6_bdf`, `HBN_r6_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 135; recordings: 1227; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6](https://openneuro.org/datasets/EEG2025r6) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6 >>> dataset = EEG2025R6(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R6MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6mini` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf_mini` * **Canonical:** `HBN_r6_bdf_mini` Also importable as: `EEG2025R6MINI`, `Shirazi2024_R6_bdf_mini`, `HBN_r6_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6mini](https://openneuro.org/datasets/EEG2025r6mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6MINI >>> dataset = EEG2025R6MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R7(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf` * **Canonical:** `HBN_r7_bdf` Also importable as: `EEG2025R7`, `Shirazi2024_R7_bdf`, `HBN_r7_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 381; recordings: 3100; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7](https://openneuro.org/datasets/EEG2025r7) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7 >>> dataset = EEG2025R7(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r7_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R7MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7mini` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf_mini` * **Canonical:** `HBN_r7_bdf_mini` Also importable as: `EEG2025R7MINI`, `Shirazi2024_R7_bdf_mini`, `HBN_r7_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7mini](https://openneuro.org/datasets/EEG2025r7mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7MINI >>> dataset = EEG2025R7MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r7_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R8(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf` * **Canonical:** `HBN_r8_bdf` Also importable as: `EEG2025R8`, `Shirazi2024_R8_bdf`, `HBN_r8_bdf`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8](https://openneuro.org/datasets/EEG2025r8) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8 >>> dataset = EEG2025R8(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R8MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8mini` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf_mini` * **Canonical:** `HBN_r8_bdf_mini` Also importable as: `EEG2025R8MINI`, `Shirazi2024_R8_bdf_mini`, `HBN_r8_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 238; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8mini](https://openneuro.org/datasets/EEG2025r8mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8MINI >>> dataset = EEG2025R8MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8_bdf_mini']* ### *class* eegdash.dataset.dataset.EEG2025R9(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf` * **Canonical:** `HBN_r9_bdf` Also importable as: `EEG2025R9`, `Shirazi2024_R9_bdf`, `HBN_r9_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9](https://openneuro.org/datasets/EEG2025r9) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9 >>> dataset = EEG2025R9(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9_bdf']* ### *class* eegdash.dataset.dataset.EEG2025R9MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9mini` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf_mini` * **Canonical:** `HBN_r9_bdf_mini` Also importable as: `EEG2025R9MINI`, `Shirazi2024_R9_bdf_mini`, `HBN_r9_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9mini](https://openneuro.org/datasets/EEG2025r9mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9MINI >>> dataset = EEG2025R9MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9_bdf_mini']* ### eegdash.dataset.dataset.EEGAsymmetries alias of [`DS007172`](eegdash.dataset.DS007172.md#eegdash.dataset.DS007172) ### *class* eegdash.dataset.dataset.EEGChallengeDataset(release: str, cache_dir: str, mini: bool = True, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset helper for the EEG 2025 Challenge. This class simplifies access to the EEG 2025 Challenge datasets. It is a specialized version of `EEGDashDataset` that is pre-configured for the challenge’s data releases. 
It automatically maps a release name (e.g., “R1”) to the corresponding OpenNeuro dataset and handles the selection of subject subsets (e.g., “mini” release). * **Parameters:** * **release** (*str*) – The name of the challenge release to load. Must be one of the keys in `RELEASE_TO_OPENNEURO_DATASET_MAP` (e.g., “R1”, “R2”, …, “R11”). * **cache_dir** (*str*) – The local directory where the dataset will be downloaded and cached. * **mini** (*bool* *,* *default True*) – If True, the dataset is restricted to the official “mini” subset of subjects for the specified release. If False, all subjects for the release are included. * **query** (*dict* *,* *optional*) – An additional MongoDB-style query to apply as a filter. This query is combined with the release and subject filters using a logical AND. The query must not contain the `dataset` key, as this is determined by the `release` parameter. * **s3_bucket** (*str* *,* *optional*) – The base S3 bucket URI where the challenge data is stored. Defaults to the official challenge bucket. * **\*\*kwargs** – Additional keyword arguments that are passed directly to the `EEGDashDataset` constructor. * **Raises:** **ValueError** – If the specified `release` is unknown, or if the `query` argument contains a `dataset` key. Also raised if `mini` is True and a requested subject is not part of the official mini-release subset. #### SEE ALSO [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) : The base class for creating datasets from queries. 
### *class* eegdash.dataset.dataset.EEGDashDataset(cache_dir: str | Path, query: dict[str, Any] = None, description_fields: list[str] | None = None, s3_bucket: str | None = None, records: list[dict] | None = None, download: bool = True, n_jobs: int = -1, eeg_dash_instance: Any = None, database: str | None = None, auth_token: str | None = None, on_error: str = 'raise', \*\*kwargs) Bases: `BaseConcatDataset` Create a new EEGDashDataset from a given query or from a local BIDS dataset directory and dataset name. An EEGDashDataset is a pooled collection of EEGDashBaseDataset instances (individual recordings) and is a subclass of braindecode’s BaseConcatDataset. ### Examples Basic usage with dataset and subject filtering: ```pycon >>> from eegdash import EEGDashDataset >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject="012" ... ) >>> print(f"Number of recordings: {len(dataset)}") ``` Filter by multiple subjects and specific task: ```pycon >>> subjects = ["012", "013", "014"] >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject=subjects, ... task="RestingState" ... ) ``` Load and inspect EEG data from recordings: ```pycon >>> if len(dataset) > 0: ... recording = dataset[0] ... raw = recording.load() ... print(f"Sampling rate: {raw.info['sfreq']} Hz") ... print(f"Number of channels: {len(raw.ch_names)}") ... print(f"Duration: {raw.times[-1]:.1f} seconds") ``` Advanced filtering with raw MongoDB queries: ```pycon >>> from eegdash import EEGDashDataset >>> query = { ... "dataset": "ds002718", ... "subject": {"$in": ["012", "013"]}, ... "task": "RestingState" ... } >>> dataset = EEGDashDataset(cache_dir="./data", query=query) ``` Working with dataset collections and braindecode integration: ```pycon >>> # EEGDashDataset is a braindecode BaseConcatDataset >>> for i, recording in enumerate(dataset): ... if i >= 2: # limit output ... break ... print(f"Recording {i}: {recording.description}") ... 
raw = recording.load() ... print(f" Channels: {len(raw.ch_names)}, Duration: {raw.times[-1]:.1f}s") ``` * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Raw MongoDB query to filter records. If provided, it is merged with keyword filtering arguments (see `**kwargs`) using logical AND. You must provide at least a `dataset` (either in `query` or as a keyword argument). Only fields in `ALLOWED_QUERY_FIELDS` are considered for filtering. * **dataset** (*str*) – Dataset identifier (e.g., `"ds002718"`). Required if `query` does not already specify a dataset. * **task** (*str* *|* *list* *[**str* *]*) – Task name(s) to filter by (e.g., `"RestingState"`). * **subject** (*str* *|* *list* *[**str* *]*) – Subject identifier(s) to filter by (e.g., `"NDARCA153NKE"`). * **session** (*str* *|* *list* *[**str* *]*) – Session identifier(s) to filter by (e.g., `"1"`). * **run** (*str* *|* *list* *[**str* *]*) – Run identifier(s) to filter by (e.g., `"1"`). * **description_fields** (*list* *[**str* *]*) – Fields to extract from each record and include in dataset descriptions (e.g., “subject”, “session”, “run”, “task”). * **s3_bucket** (*str* *|* *None*) – Optional S3 bucket URI (e.g., “s3://mybucket”) to use instead of the default OpenNeuro bucket when downloading data files. * **records** (*list* *[**dict* *]* *|* *None*) – Pre-fetched metadata records. If provided, the dataset is constructed directly from these records and no MongoDB query is performed. * **download** (*bool* *,* *default True*) – If False, load from local BIDS files only. Local data are expected under `cache_dir / dataset`; no DB or S3 access is attempted. * **n_jobs** (*int*) – Number of parallel jobs to use where applicable (-1 uses all cores). * **eeg_dash_instance** (*EEGDash* *|* *None*) – Optional existing EEGDash client to reuse for DB queries. If None, a new client is created on demand; not used when `download` is False. 
* **database** (*str* *|* *None*) – Database name to use (e.g., “eegdash”, “eegdash_staging”). If None, uses the default database. * **auth_token** (*str* *|* *None*) – Authentication token for accessing protected databases. Required for staging or admin operations. * **on_error** (*str* *,* *default "raise"*) – How to handle `DataIntegrityError` when accessing `.raw` on individual recordings: - `"raise"` (default): propagate the exception. - `"warn"`: log the error as a warning and set `.raw` to `None`. - `"skip"`: silently set `.raw` to `None`. Use `drop_bad()` after iteration to remove skipped recordings. * **\*\*kwargs** (*dict*) – Additional keyword arguments serving two purposes: - Filtering: any keys present in `ALLOWED_QUERY_FIELDS` are treated as query filters (e.g., `dataset`, `subject`, `task`, …). - Dataset options: remaining keys are forwarded to `EEGDashRaw`. #### *property* cumulative_sizes *: list[int]* Recompute cumulative sizes from current dataset lengths. Overrides the cached version from BaseConcatDataset because individual dataset lengths can change after lazy raw loading (estimated ntimes from JSON metadata may differ from actual n_times in the raw file). #### download_all(n_jobs: int | None = None) → None Download missing remote files in parallel. * **Parameters:** **n_jobs** (*int* *|* *None*) – Number of parallel workers to use. If None, defaults to `self.n_jobs`. #### drop_bad() → list[dict] Remove skipped datasets and return their records. Call after accessing `.raw` on all datasets (e.g. after iteration or preprocessing) to clean up the dataset list. * **Returns:** Records that were removed because loading failed. * **Return type:** list of dict #### drop_short(min_samples: int) → list[dict] Remove recordings shorter than *min_samples* and return their records. This is useful when downstream processing (e.g., fixed-length windowing) requires a minimum number of samples per recording. 
Recordings whose `.raw` is `None` (failed to load) are also dropped. * **Parameters:** **min_samples** (*int*) – Minimum number of time-domain samples a recording must have to be kept. * **Returns:** Records that were removed. * **Return type:** list of dict #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ### eegdash.dataset.dataset.EEGEYENET alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338) ### eegdash.dataset.dataset.EEGEyeNet alias of [`DS005872`](eegdash.dataset.DS005872.md#eegdash.dataset.DS005872) ### eegdash.dataset.dataset.EEGEyeNet_v2 alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338) ### eegdash.dataset.dataset.EEGMotorMovementImagery alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362) ### eegdash.dataset.dataset.EESM17 alias of [`DS004348`](eegdash.dataset.DS004348.md#eegdash.dataset.DS004348) ### eegdash.dataset.dataset.EESM19 alias of [`DS005185`](eegdash.dataset.DS005185.md#eegdash.dataset.DS005185) ### eegdash.dataset.dataset.EESM23 alias of [`DS005178`](eegdash.dataset.DS005178.md#eegdash.dataset.DS005178) ### eegdash.dataset.dataset.EPFLP300 alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.dataset.EPFLP300Dataset alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.dataset.EPFL_P300 alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.dataset.ERDetect alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774) ### eegdash.dataset.dataset.ERPCORE alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132) ### eegdash.dataset.dataset.ERP_CORE alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132) ### 
eegdash.dataset.dataset.ER_Detect alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774) ### eegdash.dataset.dataset.Edit2024 alias of [`DS007406`](eegdash.dataset.DS007406.md#eegdash.dataset.DS007406) ### eegdash.dataset.dataset.EldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.dataset.Ester2022 alias of [`DS004519`](eegdash.dataset.DS004519.md#eegdash.dataset.DS004519) ### eegdash.dataset.dataset.Ester2024_E1 alias of [`DS004521`](eegdash.dataset.DS004521.md#eegdash.dataset.DS004521) ### eegdash.dataset.dataset.Ester2024_E2 alias of [`DS004520`](eegdash.dataset.DS004520.md#eegdash.dataset.DS004520) ### eegdash.dataset.dataset.FACED alias of [`NM000112`](eegdash.dataset.NM000112.md#eegdash.dataset.NM000112) ### eegdash.dataset.dataset.FLUX alias of [`DS004346`](eegdash.dataset.DS004346.md#eegdash.dataset.DS004346) ### eegdash.dataset.dataset.FRL_DiscreteGestures alias of [`NM000105`](eegdash.dataset.NM000105.md#eegdash.dataset.NM000105) ### eegdash.dataset.dataset.FRL_Handwriting alias of [`NM000106`](eegdash.dataset.NM000106.md#eegdash.dataset.NM000106) ### eegdash.dataset.dataset.FRL_WristControl alias of [`NM000107`](eegdash.dataset.NM000107.md#eegdash.dataset.NM000107) ### eegdash.dataset.dataset.FernandezRodriguez2023 alias of [`NM000240`](eegdash.dataset.NM000240.md#eegdash.dataset.NM000240) ### eegdash.dataset.dataset.Ferron2019 alias of [`DS004541`](eegdash.dataset.DS004541.md#eegdash.dataset.DS004541) ### eegdash.dataset.dataset.Flankers_FAR alias of [`DS005868`](eegdash.dataset.DS005868.md#eegdash.dataset.DS005868) ### eegdash.dataset.dataset.Flankers_NEAR alias of [`DS005866`](eegdash.dataset.DS005866.md#eegdash.dataset.DS005866) ### eegdash.dataset.dataset.Fogarty2025 alias of [`DS007463`](eegdash.dataset.DS007463.md#eegdash.dataset.DS007463) ### eegdash.dataset.dataset.Formica2025 alias of [`DS005406`](eegdash.dataset.DS005406.md#eegdash.dataset.DS005406) ### 
eegdash.dataset.dataset.ForrestGump_MEG alias of [`DS003633`](eegdash.dataset.DS003633.md#eegdash.dataset.DS003633) ### eegdash.dataset.dataset.FuentesGuerra2024 alias of [`DS007180`](eegdash.dataset.DS007180.md#eegdash.dataset.DS007180) ### eegdash.dataset.dataset.Gama2019 alias of [`DS005420`](eegdash.dataset.DS005420.md#eegdash.dataset.DS005420) ### eegdash.dataset.dataset.Gao2024 alias of [`DS007420`](eegdash.dataset.DS007420.md#eegdash.dataset.DS007420) ### eegdash.dataset.dataset.Gao2026 alias of [`NM000242`](eegdash.dataset.NM000242.md#eegdash.dataset.NM000242) ### eegdash.dataset.dataset.Ghaffari2024 alias of [`DS006547`](eegdash.dataset.DS006547.md#eegdash.dataset.DS006547) ### eegdash.dataset.dataset.GuttmannFlury2025_ME alias of [`NM000227`](eegdash.dataset.NM000227.md#eegdash.dataset.NM000227) ### eegdash.dataset.dataset.GuttmannFlury2025_MIME alias of [`NM000235`](eegdash.dataset.NM000235.md#eegdash.dataset.NM000235) ### eegdash.dataset.dataset.HADMEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ### eegdash.dataset.dataset.HAD_MEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ### eegdash.dataset.dataset.HBN_EEG_NC alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.dataset.HBN_NoCommercial alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.dataset.HBN_r1 alias of [`DS005505`](eegdash.dataset.DS005505.md#eegdash.dataset.DS005505) ### eegdash.dataset.dataset.HBN_r10 alias of [`DS005515`](eegdash.dataset.DS005515.md#eegdash.dataset.DS005515) ### eegdash.dataset.dataset.HBN_r10_bdf alias of [`EEG2025R10`](eegdash.dataset.EEG2025R10.md#eegdash.dataset.EEG2025R10) ### eegdash.dataset.dataset.HBN_r10_bdf_mini alias of [`EEG2025R10MINI`](eegdash.dataset.EEG2025R10MINI.md#eegdash.dataset.EEG2025R10MINI) ### eegdash.dataset.dataset.HBN_r11 alias of 
[`DS005516`](eegdash.dataset.DS005516.md#eegdash.dataset.DS005516) ### eegdash.dataset.dataset.HBN_r11_bdf alias of [`EEG2025R11`](eegdash.dataset.EEG2025R11.md#eegdash.dataset.EEG2025R11) ### eegdash.dataset.dataset.HBN_r11_bdf_mini alias of [`EEG2025R11MINI`](eegdash.dataset.EEG2025R11MINI.md#eegdash.dataset.EEG2025R11MINI) ### eegdash.dataset.dataset.HBN_r1_bdf alias of [`EEG2025R1`](eegdash.dataset.EEG2025R1.md#eegdash.dataset.EEG2025R1) ### eegdash.dataset.dataset.HBN_r1_bdf_mini alias of [`EEG2025R1MINI`](eegdash.dataset.EEG2025R1MINI.md#eegdash.dataset.EEG2025R1MINI) ### eegdash.dataset.dataset.HBN_r2 alias of [`DS005506`](eegdash.dataset.DS005506.md#eegdash.dataset.DS005506) ### eegdash.dataset.dataset.HBN_r2_bdf alias of [`EEG2025R2`](eegdash.dataset.EEG2025R2.md#eegdash.dataset.EEG2025R2) ### eegdash.dataset.dataset.HBN_r2_bdf_mini alias of [`EEG2025R2MINI`](eegdash.dataset.EEG2025R2MINI.md#eegdash.dataset.EEG2025R2MINI) ### eegdash.dataset.dataset.HBN_r3 alias of [`DS005507`](eegdash.dataset.DS005507.md#eegdash.dataset.DS005507) ### eegdash.dataset.dataset.HBN_r3_bdf alias of [`EEG2025R3`](eegdash.dataset.EEG2025R3.md#eegdash.dataset.EEG2025R3) ### eegdash.dataset.dataset.HBN_r3_bdf_mini alias of [`EEG2025R3MINI`](eegdash.dataset.EEG2025R3MINI.md#eegdash.dataset.EEG2025R3MINI) ### eegdash.dataset.dataset.HBN_r4 alias of [`DS005508`](eegdash.dataset.DS005508.md#eegdash.dataset.DS005508) ### eegdash.dataset.dataset.HBN_r4_bdf alias of [`EEG2025R4`](eegdash.dataset.EEG2025R4.md#eegdash.dataset.EEG2025R4) ### eegdash.dataset.dataset.HBN_r4_bdf_mini alias of [`EEG2025R4MINI`](eegdash.dataset.EEG2025R4MINI.md#eegdash.dataset.EEG2025R4MINI) ### eegdash.dataset.dataset.HBN_r5 alias of [`DS005509`](eegdash.dataset.DS005509.md#eegdash.dataset.DS005509) ### eegdash.dataset.dataset.HBN_r5_bdf alias of [`EEG2025R5`](eegdash.dataset.EEG2025R5.md#eegdash.dataset.EEG2025R5) ### eegdash.dataset.dataset.HBN_r5_bdf_mini alias of 
[`EEG2025R5MINI`](eegdash.dataset.EEG2025R5MINI.md#eegdash.dataset.EEG2025R5MINI) ### eegdash.dataset.dataset.HBN_r6 alias of [`DS005510`](eegdash.dataset.DS005510.md#eegdash.dataset.DS005510) ### eegdash.dataset.dataset.HBN_r6_bdf alias of [`EEG2025R6`](eegdash.dataset.EEG2025R6.md#eegdash.dataset.EEG2025R6) ### eegdash.dataset.dataset.HBN_r6_bdf_mini alias of [`EEG2025R6MINI`](eegdash.dataset.EEG2025R6MINI.md#eegdash.dataset.EEG2025R6MINI) ### eegdash.dataset.dataset.HBN_r7_bdf alias of [`EEG2025R7`](eegdash.dataset.EEG2025R7.md#eegdash.dataset.EEG2025R7) ### eegdash.dataset.dataset.HBN_r7_bdf_mini alias of [`EEG2025R7MINI`](eegdash.dataset.EEG2025R7MINI.md#eegdash.dataset.EEG2025R7MINI) ### eegdash.dataset.dataset.HBN_r8 alias of [`DS005512`](eegdash.dataset.DS005512.md#eegdash.dataset.DS005512) ### eegdash.dataset.dataset.HBN_r8_bdf alias of [`EEG2025R8`](eegdash.dataset.EEG2025R8.md#eegdash.dataset.EEG2025R8) ### eegdash.dataset.dataset.HBN_r8_bdf_mini alias of [`EEG2025R8MINI`](eegdash.dataset.EEG2025R8MINI.md#eegdash.dataset.EEG2025R8MINI) ### eegdash.dataset.dataset.HBN_r9 alias of [`DS005514`](eegdash.dataset.DS005514.md#eegdash.dataset.DS005514) ### eegdash.dataset.dataset.HBN_r9_bdf alias of [`EEG2025R9`](eegdash.dataset.EEG2025R9.md#eegdash.dataset.EEG2025R9) ### eegdash.dataset.dataset.HBN_r9_bdf_mini alias of [`EEG2025R9MINI`](eegdash.dataset.EEG2025R9MINI.md#eegdash.dataset.EEG2025R9MINI) ### eegdash.dataset.dataset.HEFMIICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ### eegdash.dataset.dataset.HEFMI_ICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ### eegdash.dataset.dataset.HID alias of [`DS004851`](eegdash.dataset.DS004851.md#eegdash.dataset.DS004851) ### eegdash.dataset.dataset.HUPiEEG alias of [`DS004100`](eegdash.dataset.DS004100.md#eegdash.dataset.DS004100) ### eegdash.dataset.dataset.Hatano alias of [`DS007118`](eegdash.dataset.DS007118.md#eegdash.dataset.DS007118) ### 
eegdash.dataset.dataset.Haupt2025 alias of [`DS004951`](eegdash.dataset.DS004951.md#eegdash.dataset.DS004951) ### eegdash.dataset.dataset.HealthyBrainNetwork alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.dataset.HeartBEAM alias of [`DS006466`](eegdash.dataset.DS006466.md#eegdash.dataset.DS006466) ### eegdash.dataset.dataset.HenaoIsaza2026 alias of [`DS007427`](eegdash.dataset.DS007427.md#eegdash.dataset.DS007427) ### eegdash.dataset.dataset.Hermann2021 alias of [`DS003352`](eegdash.dataset.DS003352.md#eegdash.dataset.DS003352) ### eegdash.dataset.dataset.Hermes2024 alias of [`DS006392`](eegdash.dataset.DS006392.md#eegdash.dataset.DS006392) ### eegdash.dataset.dataset.Herrema2024 alias of [`DS005494`](eegdash.dataset.DS005494.md#eegdash.dataset.DS005494) ### eegdash.dataset.dataset.Hinss2021 alias of [`NM000206`](eegdash.dataset.NM000206.md#eegdash.dataset.NM000206) ### eegdash.dataset.dataset.Hinss2021_v2 alias of [`NM000343`](eegdash.dataset.NM000343.md#eegdash.dataset.NM000343) ### eegdash.dataset.dataset.Huang2022 alias of [`DS004457`](eegdash.dataset.DS004457.md#eegdash.dataset.DS004457) ### eegdash.dataset.dataset.Huebner2017 alias of [`NM000199`](eegdash.dataset.NM000199.md#eegdash.dataset.NM000199) ### eegdash.dataset.dataset.Huebner2018 alias of [`NM000195`](eegdash.dataset.NM000195.md#eegdash.dataset.NM000195) ### eegdash.dataset.dataset.HySER alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ### eegdash.dataset.dataset.Hyser alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ### eegdash.dataset.dataset.IACKD alias of [`DS006840`](eegdash.dataset.DS006840.md#eegdash.dataset.DS006840) ### eegdash.dataset.dataset.Jao2020 alias of [`NM000249`](eegdash.dataset.NM000249.md#eegdash.dataset.NM000249) ### eegdash.dataset.dataset.Johnson2024 alias of [`DS004850`](eegdash.dataset.DS004850.md#eegdash.dataset.DS004850) ### eegdash.dataset.dataset.Johnson2025 alias of 
[`DS004852`](eegdash.dataset.DS004852.md#eegdash.dataset.DS004852) ### eegdash.dataset.dataset.Kajikawa2000 alias of [`DS007028`](eegdash.dataset.DS007028.md#eegdash.dataset.DS007028) ### eegdash.dataset.dataset.Kalenkovich2019 alias of [`DS003703`](eegdash.dataset.DS003703.md#eegdash.dataset.DS003703) ### eegdash.dataset.dataset.Kanno2025 alias of [`DS005545`](eegdash.dataset.DS005545.md#eegdash.dataset.DS005545) ### eegdash.dataset.dataset.Kekecs2024 alias of [`DS004572`](eegdash.dataset.DS004572.md#eegdash.dataset.DS004572) ### eegdash.dataset.dataset.Kidder2024 alias of [`DS004278`](eegdash.dataset.DS004278.md#eegdash.dataset.DS004278) ### eegdash.dataset.dataset.Kim2025 alias of [`NM000127`](eegdash.dataset.NM000127.md#eegdash.dataset.NM000127) ### eegdash.dataset.dataset.Kinley2019 alias of [`DS006446`](eegdash.dataset.DS006446.md#eegdash.dataset.DS006446) ### eegdash.dataset.dataset.Kitazawa2025 alias of [`DS005007`](eegdash.dataset.DS005007.md#eegdash.dataset.DS005007) ### eegdash.dataset.dataset.Kucyi2024 alias of [`DS007216`](eegdash.dataset.DS007216.md#eegdash.dataset.DS007216) ### eegdash.dataset.dataset.Kuroda2024 alias of [`DS006107`](eegdash.dataset.DS006107.md#eegdash.dataset.DS006107) ### eegdash.dataset.dataset.LEMON alias of [`NM000179`](eegdash.dataset.NM000179.md#eegdash.dataset.NM000179) ### eegdash.dataset.dataset.LPP alias of [`DS005345`](eegdash.dataset.DS005345.md#eegdash.dataset.DS005345) ### eegdash.dataset.dataset.LeganesFonteneau2024 alias of [`DS006159`](eegdash.dataset.DS006159.md#eegdash.dataset.DS006159) ### eegdash.dataset.dataset.Lin2019 alias of [`DS006035`](eegdash.dataset.DS006035.md#eegdash.dataset.DS006035) ### eegdash.dataset.dataset.LittlePrince alias of [`DS007524`](eegdash.dataset.DS007524.md#eegdash.dataset.DS007524) ### eegdash.dataset.dataset.Liu2022EldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.dataset.Lowe2025 alias of 
[`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ### eegdash.dataset.dataset.Luke2019 alias of [`DS005964`](eegdash.dataset.DS005964.md#eegdash.dataset.DS005964) ### eegdash.dataset.dataset.MAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.dataset.MAMEM2_SSVEP alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.dataset.MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ### eegdash.dataset.dataset.MASC_MEG alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ### eegdash.dataset.dataset.MAVIS alias of [`DS004010`](eegdash.dataset.DS004010.md#eegdash.dataset.DS004010) ### eegdash.dataset.dataset.MEGMEM alias of [`DS003694`](eegdash.dataset.DS003694.md#eegdash.dataset.DS003694) ### eegdash.dataset.dataset.MEG_MASC alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ### eegdash.dataset.dataset.MEG_SCANS alias of [`DS006468`](eegdash.dataset.DS006468.md#eegdash.dataset.DS006468) ### eegdash.dataset.dataset.MNESomato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.dataset.MNESomatoData alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.dataset.MNE_Sample_Data alias of [`DS000248`](eegdash.dataset.DS000248.md#eegdash.dataset.DS000248) ### eegdash.dataset.dataset.MSSV alias of [`DS006366`](eegdash.dataset.DS006366.md#eegdash.dataset.DS006366) ### eegdash.dataset.dataset.MUSING alias of [`DS003774`](eegdash.dataset.DS003774.md#eegdash.dataset.DS003774) ### eegdash.dataset.dataset.Maestu2021 alias of [`DS003483`](eegdash.dataset.DS003483.md#eegdash.dataset.DS003483) ### eegdash.dataset.dataset.Martzoukou2024_Post alias of [`DS007314`](eegdash.dataset.DS007314.md#eegdash.dataset.DS007314) ### eegdash.dataset.dataset.Martzoukou2024_Post_A alias of 
[`DS007315`](eegdash.dataset.DS007315.md#eegdash.dataset.DS007315) ### eegdash.dataset.dataset.Melcon2024 alias of [`DS006171`](eegdash.dataset.DS006171.md#eegdash.dataset.DS006171) ### eegdash.dataset.dataset.Mendola2020 alias of [`DS002001`](eegdash.dataset.DS002001.md#eegdash.dataset.DS002001) ### eegdash.dataset.dataset.Mesquita2019 alias of [`DS005963`](eegdash.dataset.DS005963.md#eegdash.dataset.DS005963) ### eegdash.dataset.dataset.MetaRDK alias of [`DS006253`](eegdash.dataset.DS006253.md#eegdash.dataset.DS006253) ### eegdash.dataset.dataset.Mheich2020 alias of [`DS002791`](eegdash.dataset.DS002791.md#eegdash.dataset.DS002791) ### eegdash.dataset.dataset.Mheich2024 alias of [`DS002833`](eegdash.dataset.DS002833.md#eegdash.dataset.DS002833) ### eegdash.dataset.dataset.Miller2021 alias of [`DS003708`](eegdash.dataset.DS003708.md#eegdash.dataset.DS003708) ### eegdash.dataset.dataset.Mishra2024 alias of [`DS007322`](eegdash.dataset.DS007322.md#eegdash.dataset.DS007322) ### eegdash.dataset.dataset.Mivalt2024 alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ### eegdash.dataset.dataset.Moerel2023 alias of [`DS004995`](eegdash.dataset.DS004995.md#eegdash.dataset.DS004995) ### eegdash.dataset.dataset.Moerel2025 alias of [`DS007521`](eegdash.dataset.DS007521.md#eegdash.dataset.DS007521) ### eegdash.dataset.dataset.Moradi2024 alias of [`DS004598`](eegdash.dataset.DS004598.md#eegdash.dataset.DS004598) ### eegdash.dataset.dataset.Motion_Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ### *class* eegdash.dataset.dataset.NM000103(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network EEG - Not for Commercial Use * **Study:** `nm000103` (NeMAR) * **Author (year):** `Shirazi2017` * **Canonical:** `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial` Also importable as: `NM000103`, 
`Shirazi2017`, `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial`. Modality: `eeg`. Subjects: 447; recordings: 3522; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000103](https://openneuro.org/datasets/nm000103) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000103](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) DOI: [https://doi.org/10.82901/nemar.nm000103](https://doi.org/10.82901/nemar.nm000103) ### Examples ```pycon >>> from eegdash.dataset import NM000103 >>> dataset = NM000103(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HealthyBrainNetwork', 'HBN_EEG_NC', 'HBN_NoCommercial']* ### *class* eegdash.dataset.dataset.NM000104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography * **Study:** `nm000104` (NeMAR) * **Author (year):** `Sivakumar2024` * **Canonical:** `emg2qwerty` Also importable as: `NM000104`, `Sivakumar2024`, `emg2qwerty`. Modality: `emg`. Subjects: 108; recordings: 1136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000104](https://openneuro.org/datasets/nm000104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000104](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) DOI: [https://doi.org/10.82901/nemar.nm000104](https://doi.org/10.82901/nemar.nm000104) ### Examples ```pycon >>> from eegdash.dataset import NM000104 >>> dataset = NM000104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['emg2qwerty']* ### *class* eegdash.dataset.dataset.NM000105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography * **Study:** `nm000105` (NeMAR) * **Author (year):** `Kaifosh2025` * **Canonical:** `FRL_DiscreteGestures` Also importable as: `NM000105`, `Kaifosh2025`, `FRL_DiscreteGestures`. Modality: `emg`. Subjects: 100; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000105](https://openneuro.org/datasets/nm000105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000105](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) DOI: [https://doi.org/10.82901/nemar.nm000105](https://doi.org/10.82901/nemar.nm000105) ### Examples ```pycon >>> from eegdash.dataset import NM000105 >>> dataset = NM000105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_DiscreteGestures']* ### *class* eegdash.dataset.dataset.NM000106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Handwriting: Handwriting Decoding from Surface Electromyography * **Study:** `nm000106` (NeMAR) * **Author (year):** `Kaifosh2025_106` * **Canonical:** `FRL_Handwriting` Also importable as: `NM000106`, `Kaifosh2025_106`, `FRL_Handwriting`. Modality: `emg`. Subjects: 100; recordings: 807; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000106](https://openneuro.org/datasets/nm000106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000106](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) DOI: [https://doi.org/10.82901/nemar.nm000106](https://doi.org/10.82901/nemar.nm000106) ### Examples ```pycon >>> from eegdash.dataset import NM000106 >>> dataset = NM000106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_Handwriting']* ### *class* eegdash.dataset.dataset.NM000107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography * **Study:** `nm000107` (NeMAR) * **Author (year):** `Kaifosh2025_107` * **Canonical:** `FRL_WristControl` Also importable as: `NM000107`, `Kaifosh2025_107`, `FRL_WristControl`. Modality: `emg`. Subjects: 100; recordings: 182; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000107](https://openneuro.org/datasets/nm000107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000107](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) DOI: [https://doi.org/10.82901/nemar.nm000107](https://doi.org/10.82901/nemar.nm000107) ### Examples ```pycon >>> from eegdash.dataset import NM000107 >>> dataset = NM000107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_WristControl']* ### *class* eegdash.dataset.dataset.NM000108(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HySER: High-Density Surface Electromyogram Recordings * **Study:** `nm000108` (NeMAR) * **Author (year):** `Jiang2021` * **Canonical:** `HySER`, `Hyser` Also importable as: `NM000108`, `Jiang2021`, `HySER`, `Hyser`. Modality: `emg`. Subjects: 20; recordings: 1514; tasks: 38. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000108](https://openneuro.org/datasets/nm000108) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000108](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) DOI: [https://doi.org/10.82901/nemar.nm000108](https://doi.org/10.82901/nemar.nm000108) ### Examples ```pycon >>> from eegdash.dataset import NM000108 >>> dataset = NM000108(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HySER', 'Hyser']* ### *class* eegdash.dataset.dataset.NM000109(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG During Mental Arithmetic Tasks * **Study:** `nm000109` (NeMAR) * **Author (year):** `Zyma2019` * **Canonical:** — Also importable as: `NM000109`, `Zyma2019`. Modality: `eeg`. Subjects: 36; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000109](https://openneuro.org/datasets/nm000109) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000109](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) DOI: [https://doi.org/10.82901/nemar.nm000109](https://doi.org/10.82901/nemar.nm000109) ### Examples ```pycon >>> from eegdash.dataset import NM000109 >>> dataset = NM000109(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000110(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CHB-MIT * **Study:** `nm000110` (NeMAR) * **Author (year):** `Connolly2010` * **Canonical:** `CHBMIT`, `CHB_MIT` Also importable as: `NM000110`, `Connolly2010`, `CHBMIT`, `CHB_MIT`. Modality: `eeg`. Subjects: 24; recordings: 686; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000110](https://openneuro.org/datasets/nm000110) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000110](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) DOI: [https://doi.org/10.82901/nemar.nm000110](https://doi.org/10.82901/nemar.nm000110) ### Examples ```pycon >>> from eegdash.dataset import NM000110 >>> dataset = NM000110(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CHBMIT', 'CHB_MIT']* ### *class* eegdash.dataset.dataset.NM000112(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FACED - Finer-grained Affective Computing EEG Dataset * **Study:** `nm000112` (NeMAR) * **Author (year):** `Liu2024_112` * **Canonical:** `FACED` Also importable as: `NM000112`, `Liu2024_112`, `FACED`. Modality: `eeg`. Subjects: 123; recordings: 123; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000112](https://openneuro.org/datasets/nm000112) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000112](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) DOI: [https://doi.org/10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112) ### Examples ```pycon >>> from eegdash.dataset import NM000112 >>> dataset = NM000112(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FACED']* ### *class* eegdash.dataset.dataset.NM000113(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 2020 BCI competition, track 3 * **Study:** `nm000113` (NeMAR) * **Author (year):** `Lee2020` * **Canonical:** — Also importable as: `NM000113`, `Lee2020`. Modality: `eeg`. Subjects: 15; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000113](https://openneuro.org/datasets/nm000113) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000113](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) DOI: [https://doi.org/10.82901/nemar.nm000113](https://doi.org/10.82901/nemar.nm000113) ### Examples ```pycon >>> from eegdash.dataset import NM000113 >>> dataset = NM000113(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MDD Patients and Healthy Controls EEG Data * **Study:** `nm000114` (NeMAR) * **Author (year):** `Mumtaz2017` * **Canonical:** — Also importable as: `NM000114`, `Mumtaz2017`. Modality: `eeg`. Subjects: 64; recordings: 181; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000114](https://openneuro.org/datasets/nm000114) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000114](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) DOI: [https://doi.org/10.82901/nemar.nm000114](https://doi.org/10.82901/nemar.nm000114) ### Examples ```pycon >>> from eegdash.dataset import NM000114 >>> dataset = NM000114(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000115(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000115` (NeMAR) * **Author (year):** `Zhou2016` * **Canonical:** — Also importable as: `NM000115`, `Zhou2016`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000115](https://openneuro.org/datasets/nm000115) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000115](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000115 >>> dataset = NM000115(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nakanishi2015 – SSVEP Nakanishi 2015 dataset * **Study:** `nm000118` (NeMAR) * **Author (year):** `Nakanishi2015` * **Canonical:** — Also importable as: `NM000118`, `Nakanishi2015`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000118](https://openneuro.org/datasets/nm000118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000118](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) ### Examples ```pycon >>> from eegdash.dataset import NM000118 >>> dataset = NM000118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 1 dataset * **Study:** `nm000119` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM1` * **Canonical:** `Oikonomou2016` Also importable as: `NM000119`, `Oikonomou2016_MAMEM1`, `Oikonomou2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000119](https://openneuro.org/datasets/nm000119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000119](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) ### Examples ```pycon >>> from eegdash.dataset import NM000119 >>> dataset = NM000119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Oikonomou2016']* ### *class* eegdash.dataset.dataset.NM000120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 2 dataset * **Study:** `nm000120` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM2` * **Canonical:** `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP` Also importable as: `NM000120`, `Oikonomou2016_MAMEM2`, `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 55; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000120](https://openneuro.org/datasets/nm000120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000120](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) ### Examples ```pycon >>> from eegdash.dataset import NM000120 >>> dataset = NM000120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAMEM2', 'SSVEPMAMEM2', 'MAMEM2_SSVEP']* ### *class* eegdash.dataset.dataset.NM000121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 3 dataset * **Study:** `nm000121` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM3` * **Canonical:** `MAMEM3`, `SSVEP_MAMEM3` Also importable as: `NM000121`, `Oikonomou2016_MAMEM3`, `MAMEM3`, `SSVEP_MAMEM3`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 110; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000121](https://openneuro.org/datasets/nm000121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000121](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) ### Examples ```pycon >>> from eegdash.dataset import NM000121 >>> dataset = NM000121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAMEM3', 'SSVEP_MAMEM3']* ### *class* eegdash.dataset.dataset.NM000122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chen2017 – Single-flicker online SSVEP BCI dataset * **Study:** `nm000122` (NeMAR) * **Author (year):** `Chen2017` * **Canonical:** — Also importable as: `NM000122`, `Chen2017`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000122](https://openneuro.org/datasets/nm000122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000122](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) ### Examples ```pycon >>> from eegdash.dataset import NM000122 >>> dataset = NM000122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kalunga2016 – SSVEP Exo dataset * **Study:** `nm000123` (NeMAR) * **Author (year):** `Kalunga2016` * **Canonical:** — Also importable as: `NM000123`, `Kalunga2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 30; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000123](https://openneuro.org/datasets/nm000123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000123](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) ### Examples ```pycon >>> from eegdash.dataset import NM000123 >>> dataset = NM000123(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000124(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Han2024 – SSVEP fatigue dataset with two frequency paradigms * **Study:** `nm000124` (NeMAR) * **Author (year):** `Han2024` * **Canonical:** — Also importable as: `NM000124`, `Han2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 48; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000124](https://openneuro.org/datasets/nm000124) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000124](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) ### Examples ```pycon >>> from eegdash.dataset import NM000124 >>> dataset = NM000124(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000125(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2021 – SSVEP paradigm of the Mobile BCI dataset * **Study:** `nm000125` (NeMAR) * **Author (year):** `Lee2021_SSVEP` * **Canonical:** — Also importable as: `NM000125`, `Lee2021_SSVEP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 23; recordings: 85; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000125](https://openneuro.org/datasets/nm000125) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000125](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) ### Examples ```pycon >>> from eegdash.dataset import NM000125 >>> dataset = NM000125(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2016 – SSVEP Wang 2016 dataset * **Study:** `nm000126` (NeMAR) * **Author (year):** `Wang2016` * **Canonical:** — Also importable as: `NM000126`, `Wang2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000126](https://openneuro.org/datasets/nm000126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000126](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) ### Examples ```pycon >>> from eegdash.dataset import NM000126 >>> dataset = NM000126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kim2025 – 40-class beta-range SSVEP speller dataset * **Study:** `nm000127` (NeMAR) * **Author (year):** `Kim2025_SSVEP` * **Canonical:** `Kim2025` Also importable as: `NM000127`, `Kim2025_SSVEP`, `Kim2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 240; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000127](https://openneuro.org/datasets/nm000127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000127](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) ### Examples ```pycon >>> from eegdash.dataset import NM000127 >>> dataset = NM000127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kim2025']* ### *class* eegdash.dataset.dataset.NM000128(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dong2023 – 59-subject 40-class SSVEP dataset * **Study:** `nm000128` (NeMAR) * **Author (year):** `Dong2023` * **Canonical:** — Also importable as: `NM000128`, `Dong2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 59; recordings: 59; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000128](https://openneuro.org/datasets/nm000128) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000128](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) ### Examples ```pycon >>> from eegdash.dataset import NM000128 >>> dataset = NM000128(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000129(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2020 – BETA SSVEP benchmark dataset * **Study:** `nm000129` (NeMAR) * **Author (year):** `Liu2020` * **Canonical:** `BetaSSVEP`, `BETA_SSVEP`, `BETA` Also importable as: `NM000129`, `Liu2020`, `BetaSSVEP`, `BETA_SSVEP`, `BETA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 70; recordings: 70; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000129](https://openneuro.org/datasets/nm000129) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000129](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) ### Examples ```pycon >>> from eegdash.dataset import NM000129 >>> dataset = NM000129(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BetaSSVEP', 'BETA_SSVEP', 'BETA']* ### *class* eegdash.dataset.dataset.NM000130(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2022 – eldBETA SSVEP benchmark dataset for elderly population * **Study:** `nm000130` (NeMAR) * **Author (year):** `Liu2022` * **Canonical:** `EldBETA`, `eldBETA`, `Liu2022EldBETA` Also importable as: `NM000130`, `Liu2022`, `EldBETA`, `eldBETA`, `Liu2022EldBETA`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 100; recordings: 700; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000130](https://openneuro.org/datasets/nm000130) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000130](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) ### Examples ```pycon >>> from eegdash.dataset import NM000130 >>> dataset = NM000130(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EldBETA', 'eldBETA', 'Liu2022EldBETA']* ### *class* eegdash.dataset.dataset.NM000131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs * **Study:** `nm000131` (NeMAR) * **Author (year):** `Wang2021` * **Canonical:** — Also importable as: `NM000131`, `Wang2021`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000131](https://openneuro.org/datasets/nm000131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000131](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) ### Examples ```pycon >>> from eegdash.dataset import NM000131 >>> dataset = NM000131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000132(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP CORE * **Study:** `nm000132` (NeMAR) * **Author (year):** `Kappenman2021` * **Canonical:** `ERPCORE`, `ERP_CORE` Also importable as: `NM000132`, `Kappenman2021`, `ERPCORE`, `ERP_CORE`. Modality: `eeg`. Subjects: 40; recordings: 240; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000132](https://openneuro.org/datasets/nm000132) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000132](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) DOI: [https://doi.org/10.82901/nemar.nm000132](https://doi.org/10.82901/nemar.nm000132) ### Examples ```pycon >>> from eegdash.dataset import NM000132 >>> dataset = NM000132(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ERPCORE', 'ERP_CORE']* ### *class* eegdash.dataset.dataset.NM000133(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined1 * **Study:** `nm000133` (NeMAR) * **Author (year):** `Xu2024` * **Canonical:** `Alljoined1`, `Alljoined` Also importable as: `NM000133`, `Xu2024`, `Alljoined1`, `Alljoined`. Modality: `eeg`. Subjects: 8; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000133](https://openneuro.org/datasets/nm000133) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000133](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) DOI: [https://doi.org/10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133) ### Examples ```pycon >>> from eegdash.dataset import NM000133 >>> dataset = NM000133(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alljoined1', 'Alljoined']* ### *class* eegdash.dataset.dataset.NM000134(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined-1.6M * **Study:** `nm000134` (NeMAR) * **Author (year):** `Xu2025` * **Canonical:** `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M` Also importable as: `NM000134`, `Xu2025`, `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M`. Modality: `eeg`. Subjects: 20; recordings: 1525; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
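The "Also importable as" names above are alternate class names for the same study. Conceptually this is a many-to-one mapping from alias to NeMAR study ID, sketched below with the aliases listed in the surrounding entries; in `eegdash` these are exposed as importable classes, not as a runtime dictionary.

```python
# Illustrative alias table built from the documented entries; the library
# itself resolves these at import time as class names, not via a dict.
ALIASES = {
    "NM000133": "nm000133", "Xu2024": "nm000133",
    "Alljoined1": "nm000133", "Alljoined": "nm000133",
    "NM000134": "nm000134", "Xu2025": "nm000134",
    "Alljoined16M": "nm000134", "Alljoined_16M": "nm000134",
    "Alljoined1p6M": "nm000134",
}

def study_id(name: str) -> str:
    # Fall back to the input when it is not a known alias.
    return ALIASES.get(name, name)
```

So `from eegdash.dataset import Alljoined1p6M` and `from eegdash.dataset import NM000134` reach the same study.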
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000134](https://openneuro.org/datasets/nm000134) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000134](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) DOI: [https://doi.org/10.82901/nemar.nm000134](https://doi.org/10.82901/nemar.nm000134) ### Examples ```pycon >>> from eegdash.dataset import NM000134 >>> dataset = NM000134(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alljoined16M', 'Alljoined_16M', 'Alljoined1p6M']* ### *class* eegdash.dataset.dataset.NM000135(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-004 Motor Imagery dataset * **Study:** `nm000135` (NeMAR) * **Author (year):** `Leeb2014` * **Canonical:** `BNCI2014004` Also importable as: `NM000135`, `Leeb2014`, `BNCI2014004`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 1; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000135](https://openneuro.org/datasets/nm000135) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000135](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) ### Examples ```pycon >>> from eegdash.dataset import NM000135 >>> dataset = NM000135(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014004']* ### *class* eegdash.dataset.dataset.NM000136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-P300 * **Study:** `nm000136` (NeMAR) * **Author (year):** `GuttmannFlury2025` * **Canonical:** — Also importable as: `NM000136`, `GuttmannFlury2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000136](https://openneuro.org/datasets/nm000136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000136](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000136 >>> dataset = NM000136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Classical motor imagery dataset with left hand, right hand, and rest * **Study:** `nm000137` (NeMAR) * **Author (year):** `Kaya2018` * **Canonical:** — Also importable as: `NM000137`, `Kaya2018`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000137](https://openneuro.org/datasets/nm000137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000137](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) ### Examples ```pycon >>> from eegdash.dataset import NM000137 >>> dataset = NM000137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000138(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alex Motor Imagery dataset * **Study:** `nm000138` (NeMAR) * **Author (year):** `Barachant2012` * **Canonical:** `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery` Also importable as: `NM000138`, `Barachant2012`, `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000138](https://openneuro.org/datasets/nm000138) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000138](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) ### Examples ```pycon >>> from eegdash.dataset import NM000138 >>> dataset = NM000138(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['AlexMI', 'AlexMotorImagery', 'AlexandreMotorImagery']* ### *class* eegdash.dataset.dataset.NM000139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-001 Motor Imagery dataset * **Study:** `nm000139` (NeMAR) * **Author (year):** `Tangermann2014` * **Canonical:** `BNCI2014001`, `BCICIV1`, `BCICompIV1` Also importable as: `NM000139`, `Tangermann2014`, `BNCI2014001`, `BCICIV1`, `BCICompIV1`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 9; recordings: 108; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000139](https://openneuro.org/datasets/nm000139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000139](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) ### Examples ```pycon >>> from eegdash.dataset import NM000139 >>> dataset = NM000139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014001', 'BCICIV1', 'BCICompIV1']* ### *class* eegdash.dataset.dataset.NM000140(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-001 Motor Imagery dataset * **Study:** `nm000140` (NeMAR) * **Author (year):** `Faller2015` * **Canonical:** `BNCI2015`, `BNCI2015001` Also importable as: `NM000140`, `Faller2015`, `BNCI2015`, `BNCI2015001`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000140](https://openneuro.org/datasets/nm000140) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000140](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) ### Examples ```pycon >>> from eegdash.dataset import NM000140 >>> dataset = NM000140(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015', 'BNCI2015001']* ### *class* eegdash.dataset.dataset.NM000141(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor execution dataset from Wairagkar et al 2018 * **Study:** `nm000141` (NeMAR) * **Author (year):** `Wairagkar2018` * **Canonical:** — Also importable as: `NM000141`, `Wairagkar2018`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000141](https://openneuro.org/datasets/nm000141) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000141](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) ### Examples ```pycon >>> from eegdash.dataset import NM000141 >>> dataset = NM000141(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG motor execution dataset from Wu et al 2020 * **Study:** `nm000142` (NeMAR) * **Author (year):** `Wu2020` * **Canonical:** — Also importable as: `NM000142`, `Wu2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 6; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000142](https://openneuro.org/datasets/nm000142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000142](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) ### Examples ```pycon >>> from eegdash.dataset import NM000142 >>> dataset = NM000142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000143(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI2003_IVa Motor Imagery dataset * **Study:** `nm000143` (NeMAR) * **Author (year):** `BNCI2003` * **Canonical:** `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa` Also importable as: `NM000143`, `BNCI2003`, `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000143](https://openneuro.org/datasets/nm000143) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000143](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) ### Examples ```pycon >>> from eegdash.dataset import NM000143 >>> dataset = NM000143(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCICIII_IVa', 'BCICompIII_IVa', 'BNCI2003_IVa']* ### *class* eegdash.dataset.dataset.NM000144(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-004 Mental tasks dataset * **Study:** `nm000144` (NeMAR) * **Author (year):** `Scherer2015` * **Canonical:** — Also importable as: `NM000144`, `Scherer2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000144](https://openneuro.org/datasets/nm000144) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000144](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) ### Examples ```pycon >>> from eegdash.dataset import NM000144 >>> dataset = NM000144(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000145(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Munich Motor Imagery dataset * **Study:** `nm000145` (NeMAR) * **Author (year):** `GrosseWentrup2009` * **Canonical:** — Also importable as: `NM000145`, `GrosseWentrup2009`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000145](https://openneuro.org/datasets/nm000145) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000145](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) ### Examples ```pycon >>> from eegdash.dataset import NM000145 >>> dataset = NM000145(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000146(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Weibo et al 2014 * **Study:** `nm000146` (NeMAR) * **Author (year):** `Yi2014` * **Canonical:** `Weibo2014` Also importable as: `NM000146`, `Yi2014`, `Weibo2014`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000146](https://openneuro.org/datasets/nm000146) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000146](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) ### Examples ```pycon >>> from eegdash.dataset import NM000146 >>> dataset = NM000146(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Weibo2014']* ### *class* eegdash.dataset.dataset.NM000147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RomaniBF2025ERP * **Study:** `nm000147` (NeMAR) * **Author (year):** `RomaniBF2025` * **Canonical:** `Romani2025` Also importable as: `NM000147`, `RomaniBF2025`, `Romani2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 22; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000147](https://openneuro.org/datasets/nm000147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000147](https://nemar.org/dataexplorer/detail?dataset_id=nm000147) DOI: [https://doi.org/10.48550/arXiv.2510.10169](https://doi.org/10.48550/arXiv.2510.10169) ### Examples ```pycon >>> from eegdash.dataset import NM000147 >>> dataset = NM000147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Romani2025']* ### *class* eegdash.dataset.dataset.NM000148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery BCI dataset with pupillometry augmentation * **Study:** `nm000148` (NeMAR) * **Author (year):** `Rozado2015` * **Canonical:** — Also importable as: `NM000148`, `Rozado2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 30; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000148](https://openneuro.org/datasets/nm000148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000148](https://nemar.org/dataexplorer/detail?dataset_id=nm000148) ### Examples ```pycon >>> from eegdash.dataset import NM000148 >>> dataset = NM000148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000149(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients * **Study:** `nm000149` (NeMAR) * **Author (year):** `Ofner2019` * **Canonical:** — Also importable as: `NM000149`, `Ofner2019`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000149](https://openneuro.org/datasets/nm000149) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000149](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) ### Examples ```pycon >>> from eegdash.dataset import NM000149 >>> dataset = NM000149(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000150(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2025 - NEMAR Dataset * **Study:** `nm000150` (NeMAR) * **Author (year):** `Liu2025_NEMAR` * **Canonical:** — Also importable as: `NM000150`, `Liu2025_NEMAR`. Modality: `eeg`. Subjects: 0; recordings: 0; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000150](https://openneuro.org/datasets/nm000150) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000150](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) ### Examples ```pycon >>> from eegdash.dataset import NM000150 >>> dataset = NM000150(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset for three imaginary states of the same upper extremity * **Study:** `nm000151` (NeMAR) * **Author (year):** `Tavakolan2017` * **Canonical:** — Also importable as: `NM000151`, `Tavakolan2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000151](https://openneuro.org/datasets/nm000151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000151](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) ### Examples ```pycon >>> from eegdash.dataset import NM000151 >>> dataset = NM000151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Upper-limb elbow-centered motor imagery dataset (10 classes) * **Study:** `nm000152` (NeMAR) * **Author (year):** `Zhang2017` * **Canonical:** — Also importable as: `NM000152`, `Zhang2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000152](https://openneuro.org/datasets/nm000152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000152](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) ### Examples ```pycon >>> from eegdash.dataset import NM000152 >>> dataset = NM000152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000155(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Caillet et al 2023 * **Study:** `nm000155` (NeMAR) * **Author (year):** `Caillet2023` * **Canonical:** — Also importable as: `NM000155`, `Caillet2023`. Modality: `emg`. Subjects: 6; recordings: 11; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000155](https://openneuro.org/datasets/nm000155) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000155](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) DOI: [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) ### Examples ```pycon >>> from eegdash.dataset import NM000155 >>> dataset = NM000155(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000157(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-B * **Study:** `nm000157` (NeMAR) * **Author (year):** `Mainsah2025` * **Canonical:** — Also importable as: `NM000157`, `Mainsah2025`. Modality: `eeg`. Subjects: 19; recordings: 544; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000157](https://openneuro.org/datasets/nm000157) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000157](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000157 >>> dataset = NM000157(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset from the study on motor imagery * **Study:** `nm000158` (NeMAR) * **Author (year):** `Liu2024` * **Canonical:** — Also importable as: `NM000158`, `Liu2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000158](https://openneuro.org/datasets/nm000158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000158](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) ### Examples ```pycon >>> from eegdash.dataset import NM000158 >>> dataset = NM000158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Avrillon et al 2024 * **Study:** `nm000159` (NeMAR) * **Author (year):** `Avrillon2024` * **Canonical:** — Also importable as: `NM000159`, `Avrillon2024`. Modality: `emg`. Subjects: 16; recordings: 124; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000159](https://openneuro.org/datasets/nm000159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000159](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) DOI: [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) ### Examples ```pycon >>> from eegdash.dataset import NM000159 >>> dataset = NM000159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000160(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-joint upper-limb MI dataset from Yi et al. 2025 * **Study:** `nm000160` (NeMAR) * **Author (year):** `Yi2025` * **Canonical:** — Also importable as: `NM000160`, `Yi2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 18; recordings: 141; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000160](https://openneuro.org/datasets/nm000160) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000160](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) ### Examples ```pycon >>> from eegdash.dataset import NM000160 >>> dataset = NM000160(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000161(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2024-001 Handwritten Character Classification dataset * **Study:** `nm000161` (NeMAR) * **Author (year):** `Crell2024` * **Canonical:** — Also importable as: `NM000161`, `Crell2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000161](https://openneuro.org/datasets/nm000161) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000161](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) ### Examples ```pycon >>> from eegdash.dataset import NM000161 >>> dataset = NM000161(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-001 Motor Kinematics Reaching dataset * **Study:** `nm000162` (NeMAR) * **Author (year):** `Srisrisawang2025` * **Canonical:** `BNCI2025` Also importable as: `NM000162`, `Srisrisawang2025`, `BNCI2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000162](https://openneuro.org/datasets/nm000162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000162](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) ### Examples ```pycon >>> from eegdash.dataset import NM000162 >>> dataset = NM000162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2025']* ### *class* eegdash.dataset.dataset.NM000163(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP and Burst-VEP dataset from Castillos et al. (2023) * **Study:** `nm000163` (NeMAR) * **Author (year):** `Castillos2023_VEP` * **Canonical:** — Also importable as: `NM000163`, `Castillos2023_VEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000163](https://openneuro.org/datasets/nm000163) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000163](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) ### Examples ```pycon >>> from eegdash.dataset import NM000163 >>> dataset = NM000163(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000165(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Grison et al 2025 * **Study:** `nm000165` (NeMAR) * **Author (year):** `Grison2025` * **Canonical:** — Also importable as: `NM000165`, `Grison2025`. Modality: `emg`. Subjects: 1; recordings: 10; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000165](https://openneuro.org/datasets/nm000165) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000165](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) DOI: [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) ### Examples ```pycon >>> from eegdash.dataset import NM000165 >>> dataset = NM000165(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) M3CV: Multi-subject, Multi-session, Multi-task EEG Database * **Study:** `nm000166` (NeMAR) * **Author (year):** `Huang2018` * **Canonical:** — Also importable as: `NM000166`, `Huang2018`. Modality: `eeg`. Subjects: 95; recordings: 2469; tasks: 13. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000166](https://openneuro.org/datasets/nm000166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000166](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) DOI: [https://doi.org/10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) ### Examples ```pycon >>> from eegdash.dataset import NM000166 >>> dataset = NM000166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000167(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset from Ma et al. 2020 * **Study:** `nm000167` (NeMAR) * **Author (year):** `Ma2020` * **Canonical:** — Also importable as: `NM000167`, `Ma2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 375; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000167](https://openneuro.org/datasets/nm000167) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000167](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) ### Examples ```pycon >>> from eegdash.dataset import NM000167 >>> dataset = NM000167(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000168(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-013 Error-Related Potentials dataset * **Study:** `nm000168` (NeMAR) * **Author (year):** `Chavarriaga2015` * **Canonical:** `Chavarriaga2010` Also importable as: `NM000168`, `Chavarriaga2015`, `Chavarriaga2010`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 6; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000168](https://openneuro.org/datasets/nm000168) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000168](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) ### Examples ```pycon >>> from eegdash.dataset import NM000168 >>> dataset = NM000168(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chavarriaga2010']* ### *class* eegdash.dataset.dataset.NM000169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-008 P300 dataset (ALS patients) * **Study:** `nm000169` (NeMAR) * **Author (year):** `Riccio2014` * **Canonical:** `BNCI2014008` Also importable as: `NM000169`, `Riccio2014`, `BNCI2014008`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000169](https://openneuro.org/datasets/nm000169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000169](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) ### Examples ```pycon >>> from eegdash.dataset import NM000169 >>> dataset = NM000169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014008']* ### *class* eegdash.dataset.dataset.NM000170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-002 Continuous 2D Trajectory Decoding dataset * **Study:** `nm000170` (NeMAR) * **Author (year):** `Pulferer2025` * **Canonical:** — Also importable as: `NM000170`, `Pulferer2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000170](https://openneuro.org/datasets/nm000170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000170](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) ### Examples ```pycon >>> from eegdash.dataset import NM000170 >>> dataset = NM000170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-002 Motor Imagery dataset * **Study:** `nm000171` (NeMAR) * **Author (year):** `Steyrl2014` * **Canonical:** `BNCI2014002` Also importable as: `NM000171`, `Steyrl2014`, `BNCI2014002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000171](https://openneuro.org/datasets/nm000171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000171](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) ### Examples ```pycon >>> from eegdash.dataset import NM000171 >>> dataset = NM000171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014002']* ### *class* eegdash.dataset.dataset.NM000172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-gamma dataset described in Schirrmeister et al. 2017 * **Study:** `nm000172` (NeMAR) * **Author (year):** `Schirrmeister2017` * **Canonical:** — Also importable as: `NM000172`, `Schirrmeister2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000172](https://openneuro.org/datasets/nm000172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000172](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) ### Examples ```pycon >>> from eegdash.dataset import NM000172 >>> dataset = NM000172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000173(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Ofner et al. 2017 * **Study:** `nm000173` (NeMAR) * **Author (year):** `Ofner2017` * **Canonical:** — Also importable as: `NM000173`, `Ofner2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 300; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000173](https://openneuro.org/datasets/nm000173) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000173](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) ### Examples ```pycon >>> from eegdash.dataset import NM000173 >>> dataset = NM000173(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) fNIRS Finger Tapping * **Study:** `nm000175` (NeMAR) * **Author (year):** `Luke2024` * **Canonical:** — Also importable as: `NM000175`, `Luke2024`. Modality: `fnirs`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000175](https://openneuro.org/datasets/nm000175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000175](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) ### Examples ```pycon >>> from eegdash.dataset import NM000175 >>> dataset = NM000175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects) * **Study:** `nm000176` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI` * **Canonical:** `BigP3BCI_StudyK`, `BigP3BCI_K` Also importable as: `NM000176`, `Mainsah2025_BigP3BCI`, `BigP3BCI_StudyK`, `BigP3BCI_K`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000176](https://openneuro.org/datasets/nm000176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000176](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) ### Examples ```pycon >>> from eegdash.dataset import NM000176 >>> dataset = NM000176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyK', 'BigP3BCI_K']* ### *class* eegdash.dataset.dataset.NM000179(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State) * **Study:** `nm000179` (NeMAR) * **Author (year):** `Babayan2018` * **Canonical:** `LEMON` Also importable as: `NM000179`, `Babayan2018`, `LEMON`. Modality: `eeg`. Subjects: 215; recordings: 215; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000179](https://openneuro.org/datasets/nm000179) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000179](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) DOI: [https://doi.org/10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) ### Examples ```pycon >>> from eegdash.dataset import NM000179 >>> dataset = NM000179(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LEMON']* ### *class* eegdash.dataset.dataset.NM000180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brennan2019: EEG during Alice in Wonderland Listening * **Study:** `nm000180` (NeMAR) * **Author (year):** `Brennan2019` * **Canonical:** — Also importable as: `NM000180`, `Brennan2019`. Modality: `eeg`. Subjects: 45; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000180](https://openneuro.org/datasets/nm000180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000180](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) DOI: [https://doi.org/10.1371/journal.pone.0207741](https://doi.org/10.1371/journal.pone.0207741) ### Examples ```pycon >>> from eegdash.dataset import NM000180 >>> dataset = NM000180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NMT: Neurodiagnostic Montage Template Scalp EEG * **Study:** `nm000181` (NeMAR) * **Author (year):** `Khan2019` * **Canonical:** — Also importable as: `NM000181`, `Khan2019`. Modality: `eeg`. Subjects: 2417; recordings: 2417; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000181](https://openneuro.org/datasets/nm000181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000181](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) DOI: [https://doi.org/10.5281/zenodo.10909103](https://doi.org/10.5281/zenodo.10909103) ### Examples ```pycon >>> from eegdash.dataset import NM000181 >>> dataset = NM000181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sleep-EDF Expanded: Whole-Night PSG Recordings * **Study:** `nm000185` (NeMAR) * **Author (year):** `Kemp2000` * **Canonical:** `SleepEDF`, `SleepEDFExpanded` Also importable as: `NM000185`, `Kemp2000`, `SleepEDF`, `SleepEDFExpanded`. Modality: `eeg`. Subjects: 100; recordings: 197; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000185](https://openneuro.org/datasets/nm000185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000185](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) DOI: [https://doi.org/10.13026/C2X676](https://doi.org/10.13026/C2X676) ### Examples ```pycon >>> from eegdash.dataset import NM000185 >>> dataset = NM000185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SleepEDF', 'SleepEDFExpanded']* ### *class* eegdash.dataset.dataset.NM000186(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects) * **Study:** `nm000186` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_E` * **Canonical:** `BigP3BCI_StudyE`, `BigP3BCI_E` Also importable as: `NM000186`, `Mainsah2025_BigP3BCI_E`, `BigP3BCI_StudyE`, `BigP3BCI_E`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000186](https://openneuro.org/datasets/nm000186) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000186](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) ### Examples ```pycon >>> from eegdash.dataset import NM000186 >>> dataset = NM000186(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyE', 'BigP3BCI_E']* ### *class* eegdash.dataset.dataset.NM000187(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects) * **Study:** `nm000187` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_N` * **Canonical:** `BigP3BCI_StudyN` Also importable as: `NM000187`, `Mainsah2025_BigP3BCI_N`, `BigP3BCI_StudyN`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 160; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000187](https://openneuro.org/datasets/nm000187) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000187](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) ### Examples ```pycon >>> from eegdash.dataset import NM000187 >>> dataset = NM000187(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyN']* ### *class* eegdash.dataset.dataset.NM000188(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-009 P300 dataset * **Study:** `nm000188` (NeMAR) * **Author (year):** `Arico2014` * **Canonical:** `BNCI2014_009_P300` Also importable as: `NM000188`, `Arico2014`, `BNCI2014_009_P300`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000188](https://openneuro.org/datasets/nm000188) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000188](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) ### Examples ```pycon >>> from eegdash.dataset import NM000188 >>> dataset = NM000188(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014_009_P300']* ### *class* eegdash.dataset.dataset.NM000189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-003 P300 dataset * **Study:** `nm000189` (NeMAR) * **Author (year):** `Schreuder2015_P300` * **Canonical:** `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE` Also importable as: `NM000189`, `Schreuder2015_P300`, `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000189](https://openneuro.org/datasets/nm000189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000189](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) ### Examples ```pycon >>> from eegdash.dataset import NM000189 >>> dataset = NM000189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_P300', 'BNCI2015_003_P300', 'BNCI2015_003_AMUSE']* ### *class* eegdash.dataset.dataset.NM000190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-012 PASS2D P300 dataset * **Study:** `nm000190` (NeMAR) * **Author (year):** `Hohne2015` * **Canonical:** — Also importable as: `NM000190`, `Hohne2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000190](https://openneuro.org/datasets/nm000190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000190](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) ### Examples ```pycon >>> from eegdash.dataset import NM000190 >>> dataset = NM000190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000191(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects) * **Study:** `nm000191` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_F` * **Canonical:** `BigP3BCI_StudyF`, `BigP3BCI_F` Also importable as: `NM000191`, `Mainsah2025_BigP3BCI_F`, `BigP3BCI_StudyF`, `BigP3BCI_F`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 10; recordings: 270; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000191](https://openneuro.org/datasets/nm000191) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000191](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) ### Examples ```pycon >>> from eegdash.dataset import NM000191 >>> dataset = NM000191(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyF', 'BigP3BCI_F']* ### *class* eegdash.dataset.dataset.NM000192(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-006 Music BCI dataset * **Study:** `nm000192` (NeMAR) * **Author (year):** `Treder2015_BNCI_006_Music` * **Canonical:** `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI` Also importable as: `NM000192`, `Treder2015_BNCI_006_Music`, `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000192](https://openneuro.org/datasets/nm000192) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000192](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) ### Examples ```pycon >>> from eegdash.dataset import NM000192 >>> dataset = NM000192(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_BNCI_006_Music', 'BNCI_2015_006_Music', 'BNCI2015_006_MusicBCI']* ### *class* eegdash.dataset.dataset.NM000193(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kojima2024A P300 dataset * **Study:** `nm000193` (NeMAR) * **Author (year):** `Kojima2024A_P300` * **Canonical:** — Also importable as: `NM000193`, `Kojima2024A_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000193](https://openneuro.org/datasets/nm000193) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000193](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) ### Examples ```pycon >>> from eegdash.dataset import NM000193 >>> dataset = NM000193(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-010 RSVP P300 dataset * **Study:** `nm000194` (NeMAR) * **Author (year):** `Acqualagna2015` * **Canonical:** — Also importable as: `NM000194`, `Acqualagna2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000194](https://openneuro.org/datasets/nm000194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000194](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) ### Examples ```pycon >>> from eegdash.dataset import NM000194 >>> dataset = NM000194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mixture of LLP and EM for a visual matrix speller (ERP) dataset from Hübner et al. (2018) * **Study:** `nm000195` (NeMAR) * **Author (year):** `Hubner2018` * **Canonical:** `Huebner2018` Also importable as: `NM000195`, `Hubner2018`, `Huebner2018`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000195](https://openneuro.org/datasets/nm000195) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000195](https://nemar.org/dataexplorer/detail?dataset_id=nm000195) ### Examples ```pycon >>> from eegdash.dataset import NM000195 >>> dataset = NM000195(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huebner2018']* ### *class* eegdash.dataset.dataset.NM000196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2015) * **Study:** `nm000196` (NeMAR) * **Author (year):** `Thielen2015` * **Canonical:** — Also importable as: `NM000196`, `Thielen2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
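Every dataset class accepts the same MongoDB-style `query` dict. The stand-in matcher below illustrates how operators such as `$in` select metadata records; it is not EEGDash's actual query engine, and the `subject`/`task` field names are assumptions for the example, not guaranteed members of `ALLOWED_QUERY_FIELDS`:

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher supporting equality and $in."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator clause: only $in is handled in this sketch.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Bare values are exact-equality filters.
            return False
    return True


records = [
    {"subject": "001", "task": "rsvp"},
    {"subject": "002", "task": "rsvp"},
    {"subject": "003", "task": "rest"},
]
query = {"subject": {"$in": ["001", "003"]}, "task": "rsvp"}
selected = [r for r in records if matches(r, query)]
print(selected)  # [{'subject': '001', 'task': 'rsvp'}]
```

All clauses in the dict must hold for a record to match, which is the AND semantics the parameter documentation describes.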
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000196](https://openneuro.org/datasets/nm000196) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000196](https://nemar.org/dataexplorer/detail?dataset_id=nm000196) ### Examples ```pycon >>> from eegdash.dataset import NM000196 >>> dataset = NM000196(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000197(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects) * **Study:** `nm000197` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_M` * **Canonical:** `BigP3BCI_StudyM`, `BigP3BCI_M` Also importable as: `NM000197`, `Mainsah2025_BigP3BCI_M`, `BigP3BCI_StudyM`, `BigP3BCI_M`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 21; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000197](https://openneuro.org/datasets/nm000197) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000197](https://nemar.org/dataexplorer/detail?dataset_id=nm000197) ### Examples ```pycon >>> from eegdash.dataset import NM000197 >>> dataset = NM000197(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyM', 'BigP3BCI_M']* ### *class* eegdash.dataset.dataset.NM000198(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-008 Center Speller P300 dataset * **Study:** `nm000198` (NeMAR) * **Author (year):** `Treder2015_P300` * **Canonical:** `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller` Also importable as: `NM000198`, `Treder2015_P300`, `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000198](https://openneuro.org/datasets/nm000198) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000198](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) ### Examples ```pycon >>> from eegdash.dataset import NM000198 >>> dataset = NM000198(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_008_P300', 'BNCI2015_008_CenterSpeller']* ### *class* eegdash.dataset.dataset.NM000199(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Learning from label proportions for a visual matrix speller (ERP) * **Study:** `nm000199` (NeMAR) * **Author (year):** `Hubner2017` * **Canonical:** `Huebner2017` Also importable as: `NM000199`, `Hubner2017`, `Huebner2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 342; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000199](https://openneuro.org/datasets/nm000199) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000199](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) ### Examples ```pycon >>> from eegdash.dataset import NM000199 >>> dataset = NM000199(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huebner2017']* ### *class* eegdash.dataset.dataset.NM000200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects) * **Study:** `nm000200` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_I` * **Canonical:** `BigP3BCI_StudyI`, `BigP3BCI_I` Also importable as: `NM000200`, `Mainsah2025_BigP3BCI_I`, `BigP3BCI_StudyI`, `BigP3BCI_I`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 265; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000200](https://openneuro.org/datasets/nm000200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000200](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) ### Examples ```pycon >>> from eegdash.dataset import NM000200 >>> dataset = NM000200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyI', 'BigP3BCI_I']* ### *class* eegdash.dataset.dataset.NM000201(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP paradigm of the Mobile BCI dataset * **Study:** `nm000201` (NeMAR) * **Author (year):** `Lee2021_ERP` * **Canonical:** — Also importable as: `NM000201`, `Lee2021_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 113; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000201](https://openneuro.org/datasets/nm000201) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000201](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) ### Examples ```pycon >>> from eegdash.dataset import NM000201 >>> dataset = NM000201(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000204(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch) * **Study:** `nm000204` (NeMAR) * **Author (year):** `Lee2024_Bluetooth_speaker_14` * **Canonical:** — Also importable as: `NM000204`, `Lee2024_Bluetooth_speaker_14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000204](https://openneuro.org/datasets/nm000204) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000204](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) ### Examples ```pycon >>> from eegdash.dataset import NM000204 >>> dataset = NM000204(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000205(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP collaborative BCI dataset from Zheng et al. (2020) * **Study:** `nm000205` (NeMAR) * **Author (year):** `Zheng2020` * **Canonical:** — Also importable as: `NM000205`, `Zheng2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000205](https://openneuro.org/datasets/nm000205) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000205](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) ### Examples ```pycon >>> from eegdash.dataset import NM000205 >>> dataset = NM000205(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000206(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroergonomic 2021 dataset * **Study:** `nm000206` (NeMAR) * **Author (year):** `Hinss2021_Neuroergonomic` * **Canonical:** `Hinss2021` Also importable as: `NM000206`, `Hinss2021_Neuroergonomic`, `Hinss2021`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000206](https://openneuro.org/datasets/nm000206) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000206](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) ### Examples ```pycon >>> from eegdash.dataset import NM000206 >>> dataset = NM000206(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hinss2021']* ### *class* eegdash.dataset.dataset.NM000207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kojima2024B P300 dataset * **Study:** `nm000207` (NeMAR) * **Author (year):** `Kojima2024B_P300` * **Canonical:** — Also importable as: `NM000207`, `Kojima2024B_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000207](https://openneuro.org/datasets/nm000207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000207](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) ### Examples ```pycon >>> from eegdash.dataset import NM000207 >>> dataset = NM000207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000208(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Door lock control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000208` (NeMAR) * **Author (year):** `Lee2024_Door_lock_control` * **Canonical:** — Also importable as: `NM000208`, `Lee2024_Door_lock_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 434; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000208](https://openneuro.org/datasets/nm000208) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000208](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) ### Examples ```pycon >>> from eegdash.dataset import NM000208 >>> dataset = NM000208(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000209(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery + spatial attention dataset from Forenzo & He (2023) * **Study:** `nm000209` (NeMAR) * **Author (year):** `Forenzo2023` * **Canonical:** — Also importable as: `NM000209`, `Forenzo2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 150; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000209](https://openneuro.org/datasets/nm000209) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000209](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) ### Examples ```pycon >>> from eegdash.dataset import NM000209 >>> dataset = NM000209(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000210(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIAUT-P300 dataset for autism from Simoes et al. (2020) * **Study:** `nm000210` (NeMAR) * **Author (year):** `Simoes2020` * **Canonical:** `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT` Also importable as: `NM000210`, `Simoes2020`, `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 15; recordings: 210; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000210](https://openneuro.org/datasets/nm000210) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000210](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) ### Examples ```pycon >>> from eegdash.dataset import NM000210 >>> dataset = NM000210(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIAUTP300', 'BCIAUT_P300', 'BCIAUT']* ### *class* eegdash.dataset.dataset.NM000211(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP ERP dataset for authentication from Zhang et al. (2025) * **Study:** `nm000211` (NeMAR) * **Author (year):** `Zhang2025_RSVP` * **Canonical:** `Zhang2025` Also importable as: `NM000211`, `Zhang2025_RSVP`, `Zhang2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 15; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000211](https://openneuro.org/datasets/nm000211) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000211](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) ### Examples ```pycon >>> from eegdash.dataset import NM000211 >>> dataset = NM000211(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhang2025']* ### *class* eegdash.dataset.dataset.NM000212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-007 Motion VEP (mVEP) Speller dataset * **Study:** `nm000212` (NeMAR) * **Author (year):** `Schaeff2015` * **Canonical:** — Also importable as: `NM000212`, `Schaeff2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 32; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000212](https://openneuro.org/datasets/nm000212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000212](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) ### Examples ```pycon >>> from eegdash.dataset import NM000212 >>> dataset = NM000212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000213(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Television control experiment (30 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000213` (NeMAR) * **Author (year):** `Lee2024_Television_control_30` * **Canonical:** — Also importable as: `NM000213`, `Lee2024_Television_control_30`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 2300; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000213](https://openneuro.org/datasets/nm000213) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000213](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) ### Examples ```pycon >>> from eegdash.dataset import NM000213 >>> dataset = NM000213(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000214(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2021) * **Study:** `nm000214` (NeMAR) * **Author (year):** `Thielen2021` * **Canonical:** — Also importable as: `NM000214`, `Thielen2021`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 150; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000214](https://openneuro.org/datasets/nm000214) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000214](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) ### Examples ```pycon >>> from eegdash.dataset import NM000214 >>> dataset = NM000214(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000215(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014b from a “Brain Invaders” experiment * **Study:** `nm000215` (NeMAR) * **Author (year):** `Korczowski2014_P300` * **Canonical:** `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b` Also importable as: `NM000215`, `Korczowski2014_P300`, `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 38; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
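The Notes above say the user-supplied `query` must not contain the key `dataset` and is ANDed with the dataset filter. A minimal sketch of that merge semantics (illustrative only, not the library's actual code; the `ALLOWED_QUERY_FIELDS` values here are assumptions):

```python
# Illustrative AND-merge of the pinned dataset filter with a user query.
# This mirrors the documented behavior; it is NOT eegdash's implementation.
ALLOWED_QUERY_FIELDS = {"subject", "task", "session"}  # assumed field names

def merge_query(dataset_id, user_query):
    user_query = user_query or {}
    if "dataset" in user_query:
        # The class already pins `dataset`, so user queries may not override it.
        raise ValueError("query must not contain the key 'dataset'")
    unknown = set(user_query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    # Plain dict merge: dataset filter first, user constraints ANDed alongside.
    return {"dataset": dataset_id, **user_query}

print(merge_query("nm000215", {"subject": "01"}))
```

Passing `query={"subject": "01"}` to the constructor would then select only that subject's recordings, assuming `subject` is among the allowed fields.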
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000215](https://openneuro.org/datasets/nm000215) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000215](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) ### Examples ```pycon >>> from eegdash.dataset import NM000215 >>> dataset = NM000215(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2014b', 'BI2014b', 'BrainInvadersBI2014b']* ### *class* eegdash.dataset.dataset.NM000216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015a from a “Brain Invaders” experiment * **Study:** `nm000216` (NeMAR) * **Author (year):** `Korczowski2015_P300` * **Canonical:** `BrainInvaders2015a`, `BI2015a` Also importable as: `NM000216`, `Korczowski2015_P300`, `BrainInvaders2015a`, `BI2015a`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 129; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000216](https://openneuro.org/datasets/nm000216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000216](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) ### Examples ```pycon >>> from eegdash.dataset import NM000216 >>> dataset = NM000216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2015a', 'BI2015a']* ### *class* eegdash.dataset.dataset.NM000217(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015b from a “Brain Invaders” experiment * **Study:** `nm000217` (NeMAR) * **Author (year):** `Korczowski2015_P300_BI2015b` * **Canonical:** `BrainInvaders2015b`, `BI2015b` Also importable as: `NM000217`, `Korczowski2015_P300_BI2015b`, `BrainInvaders2015b`, `BI2015b`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 176; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000217](https://openneuro.org/datasets/nm000217) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000217](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) ### Examples ```pycon >>> from eegdash.dataset import NM000217 >>> dataset = NM000217(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2015b', 'BI2015b']* ### *class* eegdash.dataset.dataset.NM000218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects) * **Study:** `nm000218` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_H` * **Canonical:** `BigP3BCI_StudyH`, `BigP3BCI_H` Also importable as: `NM000218`, `Mainsah2025_BigP3BCI_H`, `BigP3BCI_StudyH`, `BigP3BCI_H`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 372; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000218](https://openneuro.org/datasets/nm000218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000218](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) ### Examples ```pycon >>> from eegdash.dataset import NM000218 >>> dataset = NM000218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyH', 'BigP3BCI_H']* ### *class* eegdash.dataset.dataset.NM000219(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset * **Study:** `nm000219` (NeMAR) * **Author (year):** `Reichert2020` * **Canonical:** `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention` Also importable as: `NM000219`, `Reichert2020`, `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000219](https://openneuro.org/datasets/nm000219) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000219](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) ### Examples ```pycon >>> from eegdash.dataset import NM000219 >>> dataset = NM000219(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2020', 'BNCI2020_002_AttentionShift', 'BNCI2020_002_CovertSpatialAttention']* ### *class* eegdash.dataset.dataset.NM000221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphawaves dataset * **Study:** `nm000221` (NeMAR) * **Author (year):** `Cattan2017` * **Canonical:** `Alphawaves`, `Rodrigues2017`, `AlphaWaves` Also importable as: `NM000221`, `Cattan2017`, `Alphawaves`, `Rodrigues2017`, `AlphaWaves`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000221](https://openneuro.org/datasets/nm000221) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000221](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) ### Examples ```pycon >>> from eegdash.dataset import NM000221 >>> dataset = NM000221(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alphawaves', 'Rodrigues2017', 'AlphaWaves']* ### *class* eegdash.dataset.dataset.NM000222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch) * **Study:** `nm000222` (NeMAR) * **Author (year):** `Lee2024_Air_conditioner_control` * **Canonical:** — Also importable as: `NM000222`, `Lee2024_Air_conditioner_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 305; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000222](https://openneuro.org/datasets/nm000222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000222](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) ### Examples ```pycon >>> from eegdash.dataset import NM000222 >>> dataset = NM000222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000223(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electric light control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000223` (NeMAR) * **Author (year):** `Lee2024_Electric_light_control` * **Canonical:** — Also importable as: `NM000223`, `Lee2024_Electric_light_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 465; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000223](https://openneuro.org/datasets/nm000223) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000223](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) ### Examples ```pycon >>> from eegdash.dataset import NM000223 >>> dataset = NM000223(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000225(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training) * **Study:** `nm000225` (NeMAR) * **Author (year):** `Ghassemi2018` * **Canonical:** — Also importable as: `NM000225`, `Ghassemi2018`. Modality: `eeg`. Subjects: 1983; recordings: 1983; tasks: 1. 
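With nearly 2,000 recordings, this sleep-arousal corpus is usually split at the subject level before training. A hedged sketch of a reproducible subject-wise split (the 80/10/10 policy and the synthetic subject IDs are assumptions for illustration; the dataset class itself does not provide a split):

```python
import random

# Deterministic subject-level train/val/test split for a ~2000-subject corpus.
# Split ratios and ID format are illustrative, not part of the dataset.
subject_ids = [f"{i:04d}" for i in range(1983)]
rng = random.Random(0)  # fixed seed so the split is reproducible
rng.shuffle(subject_ids)

n_train = int(0.8 * len(subject_ids))
n_val = int(0.1 * len(subject_ids))
train = subject_ids[:n_train]
val = subject_ids[n_train:n_train + n_val]
test = subject_ids[n_train + n_val:]
print(len(train), len(val), len(test))
```

Splitting by subject rather than by recording avoids leaking a subject's physiology across the train/test boundary, which matters for PSG data where each subject contributes one long recording.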
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000225](https://openneuro.org/datasets/nm000225) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000225](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) DOI: [https://doi.org/10.13026/6phb-r450](https://doi.org/10.13026/6phb-r450) ### Examples ```pycon >>> from eegdash.dataset import NM000225 >>> dataset = NM000225(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000226(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000226` (NeMAR) * **Author (year):** `Zhou2016_226` * **Canonical:** `Zhou2016_NEMAR` Also importable as: `NM000226`, `Zhou2016_226`, `Zhou2016_NEMAR`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
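The "MongoDB-style filters" mentioned in the Notes allow operator forms such as `{"$in": [...]}` in addition to plain equality. A small evaluator sketching those semantics (the actual matching happens in the metadata backend; the operators and record fields shown are illustrative assumptions):

```python
# Illustrative evaluator for MongoDB-style query semantics ($in, $gte, $lte).
# This is a teaching sketch, not eegdash's matching code.
def matches(record, query):
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):  # operator form, e.g. {"$in": [...]}
            for op, arg in cond.items():
                ok = {
                    "$in": lambda v, a: v in a,
                    "$gte": lambda v, a: v is not None and v >= a,
                    "$lte": lambda v, a: v is not None and v <= a,
                }[op](value, arg)
                if not ok:
                    return False
        elif value != cond:  # plain equality
            return False
    return True

record = {"dataset": "nm000226", "subject": "02", "task": "MotorImagery"}
print(matches(record, {"subject": {"$in": ["01", "02"]}}))
```

A query like `{"subject": {"$in": ["01", "02"]}}` passed to the constructor would thus keep only the matching recordings, provided the field is in `ALLOWED_QUERY_FIELDS`.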
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000226](https://openneuro.org/datasets/nm000226) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000226](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000226 >>> dataset = NM000226(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhou2016_NEMAR']* ### *class* eegdash.dataset.dataset.NM000227(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025 * **Study:** `nm000227` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye` * **Canonical:** `GuttmannFlury2025_ME` Also importable as: `NM000227`, `GuttmannFlury2025_Eye`, `GuttmannFlury2025_ME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000227](https://openneuro.org/datasets/nm000227) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000227](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) ### Examples ```pycon >>> from eegdash.dataset import NM000227 >>> dataset = NM000227(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['GuttmannFlury2025_ME']* ### *class* eegdash.dataset.dataset.NM000228(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nieuwland et al. 2018: Multi-site N400 Replication Study * **Study:** `nm000228` (NeMAR) * **Author (year):** `Nieuwland2018` * **Canonical:** — Also importable as: `NM000228`, `Nieuwland2018`. Modality: `eeg`. Subjects: 356; recordings: 397; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000228](https://openneuro.org/datasets/nm000228) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000228](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) DOI: [https://doi.org/10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) ### Examples ```pycon >>> from eegdash.dataset import NM000228 >>> dataset = NM000228(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing * **Study:** `nm000229` (NeMAR) * **Author (year):** `Gwilliams2023` * **Canonical:** `MASC_MEG`, `MEG_MASC` Also importable as: `NM000229`, `Gwilliams2023`, `MASC_MEG`, `MEG_MASC`. Modality: `eeg`. Subjects: 29; recordings: 1360; tasks: 79. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000229](https://openneuro.org/datasets/nm000229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000229](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) DOI: [https://doi.org/10.1038/s41597-023-02752-5](https://doi.org/10.1038/s41597-023-02752-5) ### Examples ```pycon >>> from eegdash.dataset import NM000229 >>> dataset = NM000229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MASC_MEG', 'MEG_MASC']* ### *class* eegdash.dataset.dataset.NM000230(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lower-limb MI dataset for knee pain patients from Zuo et al. 2025 * **Study:** `nm000230` (NeMAR) * **Author (year):** `Zuo2025` * **Canonical:** — Also importable as: `NM000230`, `Zuo2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 30; recordings: 118; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000230](https://openneuro.org/datasets/nm000230) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000230](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) ### Examples ```pycon >>> from eegdash.dataset import NM000230 >>> dataset = NM000230(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000231(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset from Hoffmann et al 2008 * **Study:** `nm000231` (NeMAR) * **Author (year):** `Hoffmann2008` * **Canonical:** `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset` Also importable as: `NM000231`, `Hoffmann2008`, `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 192; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000231](https://openneuro.org/datasets/nm000231) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000231](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) ### Examples ```pycon >>> from eegdash.dataset import NM000231 >>> dataset = NM000231(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EPFLP300', 'EPFL_P300', 'EPFLP300Dataset']* ### *class* eegdash.dataset.dataset.NM000232(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition * **Study:** `nm000232` (NeMAR) * **Author (year):** `Gifford2019` * **Canonical:** — Also importable as: `NM000232`, `Gifford2019`. Modality: `eeg`. Subjects: 10; recordings: 638; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000232](https://openneuro.org/datasets/nm000232) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000232](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) DOI: [https://doi.org/10.17605/OSF.IO/3JK45](https://doi.org/10.17605/OSF.IO/3JK45) ### Examples ```pycon >>> from eegdash.dataset import NM000232 >>> dataset = NM000232(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset * **Study:** `nm000234` (NeMAR) * **Author (year):** `Schreuder2015_ERP` * **Canonical:** `BNCI2015_ERP` Also importable as: `NM000234`, `Schreuder2015_ERP`, `BNCI2015_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000234](https://openneuro.org/datasets/nm000234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000234](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) ### Examples ```pycon >>> from eegdash.dataset import NM000234 >>> dataset = NM000234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_ERP']* ### *class* eegdash.dataset.dataset.NM000235(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025 * **Study:** `nm000235` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye_BCI` * **Canonical:** `GuttmannFlury2025_MIME` Also importable as: `NM000235`, `GuttmannFlury2025_Eye_BCI`, `GuttmannFlury2025_MIME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000235](https://openneuro.org/datasets/nm000235) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000235](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) ### Examples ```pycon >>> from eegdash.dataset import NM000235 >>> dataset = NM000235(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['GuttmannFlury2025_MIME']* ### *class* eegdash.dataset.dataset.NM000236(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of an EEG-based BCI experiment in Virtual Reality using P300 * **Study:** `nm000236` (NeMAR) * **Author (year):** `Cattan2019_P300` * **Canonical:** — Also importable as: `NM000236`, `Cattan2019_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 2520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000236](https://openneuro.org/datasets/nm000236) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000236](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) ### Examples ```pycon >>> from eegdash.dataset import NM000236 >>> dataset = NM000236(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000237(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 7-day motor imagery BCI EEG dataset from Zhou et al 2021 * **Study:** `nm000237` (NeMAR) * **Author (year):** `Zhou2021` * **Canonical:** — Also importable as: `NM000237`, `Zhou2021`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 833; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000237](https://openneuro.org/datasets/nm000237) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000237](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) ### Examples ```pycon >>> from eegdash.dataset import NM000237 >>> dataset = NM000237(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000238(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants * **Study:** `nm000238` (NeMAR) * **Author (year):** `Accou2024` * **Canonical:** — Also importable as: `NM000238`, `Accou2024`. Modality: `eeg`. Subjects: 87; recordings: 4088; tasks: 366. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000238](https://openneuro.org/datasets/nm000238) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000238](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) DOI: [https://doi.org/10.48804/K3VSND](https://doi.org/10.48804/K3VSND) ### Examples ```pycon >>> from eegdash.dataset import NM000238 >>> dataset = NM000238(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000239(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023) * **Study:** `nm000239` (NeMAR) * **Author (year):** `MartinezCagigal2023` * **Canonical:** — Also importable as: `NM000239`, `MartinezCagigal2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 640; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000239](https://openneuro.org/datasets/nm000239) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000239](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) ### Examples ```pycon >>> from eegdash.dataset import NM000239 >>> dataset = NM000239(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000240(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Checkerboard m-sequence-based c-VEP dataset from Fernández-Rodríguez et al. 2025 * **Study:** `nm000240` (NeMAR) * **Author (year):** `FernandezRodriguez2025` * **Canonical:** `FernandezRodriguez2023` Also importable as: `NM000240`, `FernandezRodriguez2025`, `FernandezRodriguez2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 383; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000240](https://openneuro.org/datasets/nm000240) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000240](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) ### Examples ```pycon >>> from eegdash.dataset import NM000240 >>> dataset = NM000240(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FernandezRodriguez2023']* ### *class* eegdash.dataset.dataset.NM000241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CerebroVoice: Bilingual sEEG Speech Dataset * **Study:** `nm000241` (NeMAR) * **Author (year):** `Zhang2019` * **Canonical:** — Also importable as: `NM000241`, `Zhang2019`. Modality: `ieeg`. Subjects: 2; recordings: 18; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000241](https://openneuro.org/datasets/nm000241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000241](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) DOI: [https://doi.org/10.5281/zenodo.13332808](https://doi.org/10.5281/zenodo.13332808) ### Examples ```pycon >>> from eegdash.dataset import NM000241 >>> dataset = NM000241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000242(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual imagery EEG dataset from Gao et al 2026 * **Study:** `nm000242` (NeMAR) * **Author (year):** `Gao2026_Visual_imagery_et` * **Canonical:** `Gao2026` Also importable as: `NM000242`, `Gao2026_Visual_imagery_et`, `Gao2026`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 22; recordings: 125; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000242](https://openneuro.org/datasets/nm000242) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000242](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) ### Examples ```pycon >>> from eegdash.dataset import NM000242 >>> dataset = NM000242(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gao2026']* ### *class* eegdash.dataset.dataset.NM000243(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2016-002 Emergency Braking during Simulated Driving dataset * **Study:** `nm000243` (NeMAR) * **Author (year):** `Haufe2016` * **Canonical:** `BNCI2016`, `BNCI2016002` Also importable as: `NM000243`, `Haufe2016`, `BNCI2016`, `BNCI2016002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. 
Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000243](https://openneuro.org/datasets/nm000243) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000243](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) ### Examples ```pycon >>> from eegdash.dataset import NM000243 >>> dataset = NM000243(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2016', 'BNCI2016002']* ### *class* eegdash.dataset.dataset.NM000244(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014a from a “Brain Invaders” experiment * **Study:** `nm000244` (NeMAR) * **Author (year):** `Korczowski2014_P300_BI2014a` * **Canonical:** `BrainInvaders2014a`, `BI2014a` Also importable as: `NM000244`, `Korczowski2014_P300_BI2014a`, `BrainInvaders2014a`, `BI2014a`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 64; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000244](https://openneuro.org/datasets/nm000244) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000244](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) ### Examples ```pycon >>> from eegdash.dataset import NM000244 >>> dataset = NM000244(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2014a', 'BI2014a']* ### *class* eegdash.dataset.dataset.NM000245(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Cho et al 2017 * **Study:** `nm000245` (NeMAR) * **Author (year):** `Cho2017` * **Canonical:** — Also importable as: `NM000245`, `Cho2017`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000245](https://openneuro.org/datasets/nm000245) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000245](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) ### Examples ```pycon >>> from eegdash.dataset import NM000245 >>> dataset = NM000245(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025 * **Study:** `nm000246` (NeMAR) * **Author (year):** `Yang2025_Multi` * **Canonical:** `WBCIC_SHU`, `WBCICSHU` Also importable as: `NM000246`, `Yang2025_Multi`, `WBCIC_SHU`, `WBCICSHU`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000246](https://openneuro.org/datasets/nm000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000246](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) ### Examples ```pycon >>> from eegdash.dataset import NM000246 >>> dataset = NM000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WBCIC_SHU', 'WBCICSHU']* ### *class* eegdash.dataset.dataset.NM000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects) * **Study:** `nm000247` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_S1` * **Canonical:** `BigP3BCI_StudyS1`, `BigP3BCI_S1` Also importable as: `NM000247`, `Mainsah2025_BigP3BCI_S1`, `BigP3BCI_StudyS1`, `BigP3BCI_S1`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000247](https://openneuro.org/datasets/nm000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000247](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) ### Examples ```pycon >>> from eegdash.dataset import NM000247 >>> dataset = NM000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyS1', 'BigP3BCI_S1']* ### *class* eegdash.dataset.dataset.NM000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects) * **Study:** `nm000248` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_L` * **Canonical:** — Also importable as: `NM000248`, `Mainsah2025_BigP3BCI_L`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 11; recordings: 330; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000248](https://openneuro.org/datasets/nm000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000248](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) ### Examples ```pycon >>> from eegdash.dataset import NM000248 >>> dataset = NM000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000249(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2022-001 EEG Correlates of Difficulty Level dataset * **Study:** `nm000249` (NeMAR) * **Author (year):** `Jao2022` * **Canonical:** `Jao2020` Also importable as: `NM000249`, `Jao2022`, `Jao2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000249](https://openneuro.org/datasets/nm000249) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000249](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) ### Examples ```pycon >>> from eegdash.dataset import NM000249 >>> dataset = NM000249(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Jao2020']* ### *class* eegdash.dataset.dataset.NM000250(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dreyer2023 motor imagery (MI) dataset * **Study:** `nm000250` (NeMAR) * **Author (year):** `Dreyer2023` * **Canonical:** — Also importable as: `NM000250`, `Dreyer2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 87; recordings: 520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000250](https://openneuro.org/datasets/nm000250) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000250](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) ### Examples ```pycon >>> from eegdash.dataset import NM000250 >>> dataset = NM000250(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000251(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) He et al. 2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language * **Study:** `nm000251` (NeMAR) * **Author (year):** `He2025` * **Canonical:** — Also importable as: `NM000251`, `He2025`. Modality: `ieeg`. Subjects: 1; recordings: 6; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000251](https://openneuro.org/datasets/nm000251) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000251](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) DOI: [https://doi.org/10.1038/s41597-025-04741-2](https://doi.org/10.1038/s41597-025-04741-2) ### Examples ```pycon >>> from eegdash.dataset import NM000251 >>> dataset = NM000251(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli * **Study:** `nm000253` (NeMAR) * **Author (year):** `Wang2024_et_al_Brain` * **Canonical:** `BrainTreeBank` Also importable as: `NM000253`, `Wang2024_et_al_Brain`, `BrainTreeBank`. Modality: `ieeg`. Subjects: 10; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000253](https://openneuro.org/datasets/nm000253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000253](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) DOI: [https://doi.org/10.48550/arXiv.2411.08343](https://doi.org/10.48550/arXiv.2411.08343) ### Examples ```pycon >>> from eegdash.dataset import NM000253 >>> dataset = NM000253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainTreeBank']* ### *class* eegdash.dataset.dataset.NM000254(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI * **Study:** `nm000254` (NeMAR) * **Author (year):** `Telesford2024` * **Canonical:** — Also importable as: `NM000254`, `Telesford2024`. Modality: `eeg`. Subjects: 22; recordings: 942; tasks: 12. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000254](https://openneuro.org/datasets/nm000254) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000254](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) ### Examples ```pycon >>> from eegdash.dataset import NM000254 >>> dataset = NM000254(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000255(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2 * **Study:** `nm000255` (NeMAR) * **Author (year):** `Madsen2024_E2` * **Canonical:** — Also importable as: `NM000255`, `Madsen2024_E2`. Modality: `eeg`. Subjects: 30; recordings: 291; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000255](https://openneuro.org/datasets/nm000255) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000255](https://nemar.org/dataexplorer/detail?dataset_id=nm000255) ### Examples ```pycon >>> from eegdash.dataset import NM000255 >>> dataset = NM000255(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3 * **Study:** `nm000256` (NeMAR) * **Author (year):** `Madsen2024_E3` * **Canonical:** — Also importable as: `NM000256`, `Madsen2024_E3`. Modality: `eeg`. Subjects: 29; recordings: 332; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000256](https://openneuro.org/datasets/nm000256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000256](https://nemar.org/dataexplorer/detail?dataset_id=nm000256) ### Examples ```pycon >>> from eegdash.dataset import NM000256 >>> dataset = NM000256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000259(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Speier2017 * **Study:** `nm000259` (NeMAR) * **Author (year):** `Speier2017` * **Canonical:** — Also importable as: `NM000259`, `Speier2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000259](https://openneuro.org/datasets/nm000259) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000259](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) DOI: [https://doi.org/10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382) ### Examples ```pycon >>> from eegdash.dataset import NM000259 >>> dataset = NM000259(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2012 * **Study:** `nm000260` (NeMAR) * **Author (year):** `BrainInvaders2012` * **Canonical:** `BI2012`, `BrainInvaders` Also importable as: `NM000260`, `BrainInvaders2012`, `BI2012`, `BrainInvaders`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000260](https://openneuro.org/datasets/nm000260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000260](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) DOI: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) ### Examples ```pycon >>> from eegdash.dataset import NM000260 >>> dataset = NM000260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BI2012', 'BrainInvaders']* ### *class* eegdash.dataset.dataset.NM000264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2013a * **Study:** `nm000264` (NeMAR) * **Author (year):** `BrainInvaders2013` * **Canonical:** `BrainInvaders2013a`, `BI2013a` Also importable as: `NM000264`, `BrainInvaders2013`, `BrainInvaders2013a`, `BI2013a`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
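Across all of these classes, `data_dir` is documented as `cache_dir / dataset_id`. A minimal sketch of that path construction (`local_dataset_dir` is a hypothetical helper, not part of the eegdash API):

```python
from pathlib import Path


def local_dataset_dir(cache_dir, dataset_id):
    # Mirrors the documented layout: each dataset is cached under
    # a subdirectory of cache_dir named after its dataset ID.
    return Path(cache_dir) / dataset_id


path = local_dataset_dir("./data", "nm000264")
```

So passing `cache_dir="./data"` to `NM000264` would place the cached recordings under `data/nm000264/`.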
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000264](https://openneuro.org/datasets/nm000264) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000264](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) DOI: [https://doi.org/10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) ### Examples ```pycon >>> from eegdash.dataset import NM000264 >>> dataset = NM000264(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2013a', 'BI2013a']* ### *class* eegdash.dataset.dataset.NM000265(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-MI * **Study:** `nm000265` (NeMAR) * **Author (year):** `GuttmannFlury2025_MI` * **Canonical:** — Also importable as: `NM000265`, `GuttmannFlury2025_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000265](https://openneuro.org/datasets/nm000265) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000265](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000265 >>> dataset = NM000265(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000266(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sosulski2019 * **Study:** `nm000266` (NeMAR) * **Author (year):** `Sosulski2019` * **Canonical:** — Also importable as: `NM000266`, `Sosulski2019`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 1060; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000266](https://openneuro.org/datasets/nm000266) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000266](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) DOI: [https://doi.org/10.48550/arXiv.2109.06011](https://doi.org/10.48550/arXiv.2109.06011) ### Examples ```pycon >>> from eegdash.dataset import NM000266 >>> dataset = NM000266(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000267(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017A * **Study:** `nm000267` (NeMAR) * **Author (year):** `Shin2017_Shin2017A` * **Canonical:** `Shin2017A` Also importable as: `NM000267`, `Shin2017_Shin2017A`, `Shin2017A`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000267](https://openneuro.org/datasets/nm000267) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000267](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000267 >>> dataset = NM000267(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shin2017A']* ### *class* eegdash.dataset.dataset.NM000268(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017B * **Study:** `nm000268` (NeMAR) * **Author (year):** `Shin2017_Shin2017B` * **Canonical:** `Shin2017B` Also importable as: `NM000268`, `Shin2017_Shin2017B`, `Shin2017B`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000268](https://openneuro.org/datasets/nm000268) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000268](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000268 >>> dataset = NM000268(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shin2017B']* ### *class* eegdash.dataset.dataset.NM000270(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) liu2025 - NEMAR Dataset * **Study:** `nm000270` (NeMAR) * **Author (year):** `Liu2025` * **Canonical:** — Also importable as: `NM000270`, `Liu2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 27; recordings: 797; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000270](https://openneuro.org/datasets/nm000270) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000270](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) ### Examples ```pycon >>> from eegdash.dataset import NM000270 >>> dataset = NM000270(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000271(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) chang2025 - NEMAR Dataset * **Study:** `nm000271` (NeMAR) * **Author (year):** `Chang2025_2` * **Canonical:** `Chang2025` Also importable as: `NM000271`, `Chang2025_2`, `Chang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 28; recordings: 1245; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000271](https://openneuro.org/datasets/nm000271) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000271](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) ### Examples ```pycon >>> from eegdash.dataset import NM000271 >>> dataset = NM000271(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chang2025']* ### *class* eegdash.dataset.dataset.NM000272(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) romani-bf2025-erp - NEMAR Dataset * **Study:** `nm000272` (NeMAR) * **Author (year):** `Romani2025_BF_ERP` * **Canonical:** `Romani2025_erp` Also importable as: `NM000272`, `Romani2025_BF_ERP`, `Romani2025_erp`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 22; recordings: 1022; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000272](https://openneuro.org/datasets/nm000272) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000272](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) ### Examples ```pycon >>> from eegdash.dataset import NM000272 >>> dataset = NM000272(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Romani2025_erp']* ### *class* eegdash.dataset.dataset.NM000277(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-G * **Study:** `nm000277` (NeMAR) * **Author (year):** `Mainsah2025_G` * **Canonical:** `BigP3BCI_G`, `BigP3BCI_StudyG` Also importable as: `NM000277`, `Mainsah2025_G`, `BigP3BCI_G`, `BigP3BCI_StudyG`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 320; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000277](https://openneuro.org/datasets/nm000277) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000277](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000277 >>> dataset = NM000277(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_G', 'BigP3BCI_StudyG']* ### *class* eegdash.dataset.dataset.NM000301(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-D * **Study:** `nm000301` (NeMAR) * **Author (year):** `Mainsah2025_D` * **Canonical:** — Also importable as: `NM000301`, `Mainsah2025_D`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 17; recordings: 307; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000301](https://openneuro.org/datasets/nm000301) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000301](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000301 >>> dataset = NM000301(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000303(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-O * **Study:** `nm000303` (NeMAR) * **Author (year):** `Mainsah2025_O` * **Canonical:** — Also importable as: `NM000303`, `Mainsah2025_O`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. 
Subjects: 18; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000303](https://openneuro.org/datasets/nm000303) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000303](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000303 >>> dataset = NM000303(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000310(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-SSVEP * **Study:** `nm000310` (NeMAR) * **Author (year):** `GuttmannFlury2025_SSVEP` * **Canonical:** — Also importable as: `NM000310`, `GuttmannFlury2025_SSVEP`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000310](https://openneuro.org/datasets/nm000310) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000310](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000310 >>> dataset = NM000310(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000311(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal upper-limb MI/ME EEG (Jeong et al. 
2020) * **Study:** `nm000311` (NeMAR) * **Author (year):** `Jeong2020` * **Canonical:** — Also importable as: `NM000311`, `Jeong2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 213; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
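The Notes above describe how `query` composes with each dataset class's own filter: the dict is ANDed with the dataset selection, may only use fields listed in `ALLOWED_QUERY_FIELDS`, and must not contain the key `dataset`. A minimal sketch of building such a filter (the field names `subject` and `task` are assumptions for illustration — check `ALLOWED_QUERY_FIELDS` for the fields actually accepted):

```python
# Hypothetical MongoDB-style filter restricting a dataset class to two
# subjects and one task. Field names here are illustrative assumptions;
# only fields present in ALLOWED_QUERY_FIELDS are accepted.
query = {
    "subject": {"$in": ["sub-01", "sub-02"]},  # $in matches any listed value
    "task": "motorimagery",                    # exact-match condition
}

# The key "dataset" is reserved: the class injects its own dataset filter
# and ANDs it with this dict, so user queries must not set it.
assert "dataset" not in query

# Passed at construction time (requires eegdash and network access):
# from eegdash.dataset import NM000311
# dataset = NM000311(cache_dir="./data", query=query)
```

The same pattern applies to every dataset class on this page, since they all forward `query` to `EEGDashDataset` in the same way.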
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000311](https://openneuro.org/datasets/nm000311) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000311](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) DOI: [https://doi.org/10.82901/nemar.nm000311](https://doi.org/10.82901/nemar.nm000311) ### Examples ```pycon >>> from eegdash.dataset import NM000311 >>> dataset = NM000311(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000313(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-S2 * **Study:** `nm000313` (NeMAR) * **Author (year):** `Mainsah2025_S2` * **Canonical:** — Also importable as: `NM000313`, `Mainsah2025_S2`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000313](https://openneuro.org/datasets/nm000313) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000313](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000313 >>> dataset = NM000313(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000321(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-Q * **Study:** `nm000321` (NeMAR) * **Author (year):** `Mainsah2025_Q` * **Canonical:** — Also importable as: `NM000321`, `Mainsah2025_Q`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 36; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000321](https://openneuro.org/datasets/nm000321) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000321](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000321 >>> dataset = NM000321(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000323(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-ERP * **Study:** `nm000323` (NeMAR) * **Author (year):** `Lee2019_ERP` * **Canonical:** `OpenBMI_ERP`, `OpenBMI_P300` Also importable as: `NM000323`, `Lee2019_ERP`, `OpenBMI_ERP`, `OpenBMI_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000323](https://openneuro.org/datasets/nm000323) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000323](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000323 >>> dataset = NM000323(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OpenBMI_ERP', 'OpenBMI_P300']* ### *class* eegdash.dataset.dataset.NM000326(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-C * **Study:** `nm000326` (NeMAR) * **Author (year):** `Mainsah2025_C` * **Canonical:** — Also importable as: `NM000326`, `Mainsah2025_C`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 341; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000326](https://openneuro.org/datasets/nm000326) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000326](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000326 >>> dataset = NM000326(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000329(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brandl2020 * **Study:** `nm000329` (NeMAR) * **Author (year):** `Brandl2020` * **Canonical:** — Also importable as: `NM000329`, `Brandl2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 16; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000329](https://openneuro.org/datasets/nm000329) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000329](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) DOI: [https://doi.org/10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) ### Examples ```pycon >>> from eegdash.dataset import NM000329 >>> dataset = NM000329(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-R * **Study:** `nm000336` (NeMAR) * **Author (year):** `Mainsah2025_R` * **Canonical:** — Also importable as: `NM000336`, `Mainsah2025_R`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 20; recordings: 480; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000336](https://openneuro.org/datasets/nm000336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000336](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000336 >>> dataset = NM000336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-MI * **Study:** `nm000338` (NeMAR) * **Author (year):** `Lee2019_MI` * **Canonical:** `OpenBMI_MI` Also importable as: `NM000338`, `Lee2019_MI`, `OpenBMI_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000338](https://openneuro.org/datasets/nm000338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000338](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000338 >>> dataset = NM000338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OpenBMI_MI']* ### *class* eegdash.dataset.dataset.NM000339(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Stieger2021 * **Study:** `nm000339` (NeMAR) * **Author (year):** `Stieger2021` * **Canonical:** — Also importable as: `NM000339`, `Stieger2021`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 62; recordings: 598; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000339](https://openneuro.org/datasets/nm000339) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000339](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) DOI: [https://doi.org/10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) ### Examples ```pycon >>> from eegdash.dataset import NM000339 >>> dataset = NM000339(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-J * **Study:** `nm000340` (NeMAR) * **Author (year):** `Mainsah2025_J` * **Canonical:** — Also importable as: `NM000340`, `Mainsah2025_J`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 502; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000340](https://openneuro.org/datasets/nm000340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000340](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000340 >>> dataset = NM000340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000341(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cattan2019-PHMD * **Study:** `nm000341` (NeMAR) * **Author (year):** `Cattan2019_PHMD` * **Canonical:** — Also importable as: `NM000341`, `Cattan2019_PHMD`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000341](https://openneuro.org/datasets/nm000341) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000341](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) DOI: [https://doi.org/10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) ### Examples ```pycon >>> from eegdash.dataset import NM000341 >>> dataset = NM000341(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP40 * **Study:** `nm000342` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP40` * **Canonical:** `CastillosCVEP40` Also importable as: `NM000342`, `Castillos2023_CastillosCVEP40`, `CastillosCVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000342](https://openneuro.org/datasets/nm000342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000342](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000342 >>> dataset = NM000342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CastillosCVEP40']* ### *class* eegdash.dataset.dataset.NM000343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Hinss2021 * **Study:** `nm000343` (NeMAR) * **Author (year):** `Hinss2021` * **Canonical:** `Hinss2021_v2` Also importable as: `NM000343`, `Hinss2021`, `Hinss2021_v2`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000343](https://openneuro.org/datasets/nm000343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000343](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) DOI: [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) ### Examples ```pycon >>> from eegdash.dataset import NM000343 >>> dataset = NM000343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hinss2021_v2']* ### *class* eegdash.dataset.dataset.NM000344(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP100 * **Study:** `nm000344` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP100` * **Canonical:** — Also importable as: `NM000344`, `Castillos2023_CastillosBurstVEP100`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000344](https://openneuro.org/datasets/nm000344) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000344](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000344 >>> dataset = NM000344(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP40 * **Study:** `nm000345` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP40` * **Canonical:** — Also importable as: `NM000345`, `Castillos2023_CastillosBurstVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000345](https://openneuro.org/datasets/nm000345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000345](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000345 >>> dataset = NM000345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP100 * **Study:** `nm000346` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP100` * **Canonical:** — Also importable as: `NM000346`, `Castillos2023_CastillosCVEP100`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000346](https://openneuro.org/datasets/nm000346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000346](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000346 >>> dataset = NM000346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.dataset.NM000347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HefmiIch2025 * **Study:** `nm000347` (NeMAR) * **Author (year):** `HefmiIch2025` * **Canonical:** `HEFMI_ICH`, `HEFMIICH` Also importable as: `NM000347`, `HefmiIch2025`, `HEFMI_ICH`, `HEFMIICH`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 37; recordings: 98; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000347](https://openneuro.org/datasets/nm000347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000347](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) DOI: [https://doi.org/10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) ### Examples ```pycon >>> from eegdash.dataset import NM000347 >>> dataset = NM000347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HEFMI_ICH', 'HEFMIICH']* ### *class* eegdash.dataset.dataset.NM000348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Yang2025 * **Study:** `nm000348` (NeMAR) * **Author (year):** `Yang2025` * **Canonical:** — Also importable as: `NM000348`, `Yang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000348](https://openneuro.org/datasets/nm000348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000348](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) DOI: [https://doi.org/10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) ### Examples ```pycon >>> from eegdash.dataset import NM000348 >>> dataset = NM000348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yang2025']* ### *class* eegdash.dataset.dataset.NM000351(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-P * **Study:** `nm000351` (NeMAR) * **Author (year):** `Mainsah2025_P` * **Canonical:** — Also importable as: `NM000351`, `Mainsah2025_P`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 19; recordings: 228; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000351](https://openneuro.org/datasets/nm000351) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000351](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000351 >>> dataset = NM000351(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### eegdash.dataset.dataset.NOD_EEG alias of [`DS005811`](eegdash.dataset.DS005811.md#eegdash.dataset.DS005811) ### eegdash.dataset.dataset.NOD_MEG alias of [`DS005810`](eegdash.dataset.DS005810.md#eegdash.dataset.DS005810) ### eegdash.dataset.dataset.NenckiSymfonia alias of [`DS004621`](eegdash.dataset.DS004621.md#eegdash.dataset.DS004621) ### eegdash.dataset.dataset.Neuma alias of [`DS004588`](eegdash.dataset.DS004588.md#eegdash.dataset.DS004588) ### eegdash.dataset.dataset.NeuroMorph alias of [`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241) ### eegdash.dataset.dataset.Nierula2019 alias of [`DS005307`](eegdash.dataset.DS005307.md#eegdash.dataset.DS005307) ### eegdash.dataset.dataset.Ning2024 alias of [`DS004830`](eegdash.dataset.DS004830.md#eegdash.dataset.DS004830) ### eegdash.dataset.dataset.Normannseth2026 alias of [`DS007615`](eegdash.dataset.DS007615.md#eegdash.dataset.DS007615) ### eegdash.dataset.dataset.OMEGA alias of 
[`DS000247`](eegdash.dataset.DS000247.md#eegdash.dataset.DS000247) ### eegdash.dataset.dataset.ORHA alias of [`DS005363`](eegdash.dataset.DS005363.md#eegdash.dataset.DS005363) ### eegdash.dataset.dataset.OcularLDT alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312) ### eegdash.dataset.dataset.Oikonomou2016 alias of [`NM000119`](eegdash.dataset.NM000119.md#eegdash.dataset.NM000119) ### eegdash.dataset.dataset.Omelyusik2026 alias of [`DS006136`](eegdash.dataset.DS006136.md#eegdash.dataset.DS006136) ### eegdash.dataset.dataset.Onton2024 alias of [`DS006695`](eegdash.dataset.DS006695.md#eegdash.dataset.DS006695) ### eegdash.dataset.dataset.OpenBMI_ERP alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ### eegdash.dataset.dataset.OpenBMI_MI alias of [`NM000338`](eegdash.dataset.NM000338.md#eegdash.dataset.NM000338) ### eegdash.dataset.dataset.OpenBMI_P300 alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ### eegdash.dataset.dataset.PAL alias of [`DS005059`](eegdash.dataset.DS005059.md#eegdash.dataset.DS005059) ### eegdash.dataset.dataset.PDEEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ### eegdash.dataset.dataset.PD_EEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ### eegdash.dataset.dataset.PEARLNeuro alias of [`DS004796`](eegdash.dataset.DS004796.md#eegdash.dataset.DS004796) ### eegdash.dataset.dataset.PEERS alias of [`DS004395`](eegdash.dataset.DS004395.md#eegdash.dataset.DS004395) ### eegdash.dataset.dataset.PRIOS alias of [`DS004370`](eegdash.dataset.DS004370.md#eegdash.dataset.DS004370) ### eegdash.dataset.dataset.PROMENADE alias of [`DS005946`](eegdash.dataset.DS005946.md#eegdash.dataset.DS005946) ### eegdash.dataset.dataset.PWIe alias of [`DS005932`](eegdash.dataset.DS005932.md#eegdash.dataset.DS005932) ### eegdash.dataset.dataset.Penalver2024 alias of [`DS004502`](eegdash.dataset.DS004502.md#eegdash.dataset.DS004502) ### 
eegdash.dataset.dataset.Peng2018 alias of [`DS005777`](eegdash.dataset.DS005777.md#eegdash.dataset.DS005777) ### eegdash.dataset.dataset.PerceiveImagine alias of [`DS005697`](eegdash.dataset.DS005697.md#eegdash.dataset.DS005697) ### eegdash.dataset.dataset.PhysionetMI alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362) ### eegdash.dataset.dataset.Podcast alias of [`DS005574`](eegdash.dataset.DS005574.md#eegdash.dataset.DS005574) ### eegdash.dataset.dataset.Pohle2019 alias of [`DS006374`](eegdash.dataset.DS006374.md#eegdash.dataset.DS006374) ### eegdash.dataset.dataset.RAM_catFR alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.dataset.RESPect_CCEP alias of [`DS004080`](eegdash.dataset.DS004080.md#eegdash.dataset.DS004080) ### eegdash.dataset.dataset.RESPect_intraop alias of [`DS003844`](eegdash.dataset.DS003844.md#eegdash.dataset.DS003844) ### eegdash.dataset.dataset.RESPect_longterm alias of [`DS003848`](eegdash.dataset.DS003848.md#eegdash.dataset.DS003848) ### eegdash.dataset.dataset.Ramzaoui2024 alias of [`DS006979`](eegdash.dataset.DS006979.md#eegdash.dataset.DS006979) ### eegdash.dataset.dataset.Rani2019 alias of [`DS004012`](eegdash.dataset.DS004012.md#eegdash.dataset.DS004012) ### eegdash.dataset.dataset.Rockhill2022 alias of [`DS004473`](eegdash.dataset.DS004473.md#eegdash.dataset.DS004473) ### eegdash.dataset.dataset.Rodrigues2017 alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.dataset.Romani2025 alias of [`NM000147`](eegdash.dataset.NM000147.md#eegdash.dataset.NM000147) ### eegdash.dataset.dataset.Romani2025_erp alias of [`NM000272`](eegdash.dataset.NM000272.md#eegdash.dataset.NM000272) ### eegdash.dataset.dataset.Runabout alias of [`DS003620`](eegdash.dataset.DS003620.md#eegdash.dataset.DS003620) ### eegdash.dataset.dataset.SINGSING alias of [`DS006629`](eegdash.dataset.DS006629.md#eegdash.dataset.DS006629) ### 
eegdash.dataset.dataset.SSVEPMAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.dataset.SSVEP_MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ### eegdash.dataset.dataset.STRONG alias of [`DS004849`](eegdash.dataset.DS004849.md#eegdash.dataset.DS004849) ### eegdash.dataset.dataset.STReEF alias of [`DS005448`](eegdash.dataset.DS005448.md#eegdash.dataset.DS005448) ### eegdash.dataset.dataset.Sakakura2024 alias of [`DS004859`](eegdash.dataset.DS004859.md#eegdash.dataset.DS004859) ### eegdash.dataset.dataset.Sakakura2025 alias of [`DS004551`](eegdash.dataset.DS004551.md#eegdash.dataset.DS004551) ### eegdash.dataset.dataset.Sato2024 alias of [`DS007602`](eegdash.dataset.DS007602.md#eegdash.dataset.DS007602) ### eegdash.dataset.dataset.Sato2025 alias of [`DS007591`](eegdash.dataset.DS007591.md#eegdash.dataset.DS007591) ### eegdash.dataset.dataset.SeizeIT2 alias of [`DS005873`](eegdash.dataset.DS005873.md#eegdash.dataset.DS005873) ### eegdash.dataset.dataset.Shalamberidze2025 alias of [`DS007609`](eegdash.dataset.DS007609.md#eegdash.dataset.DS007609) ### eegdash.dataset.dataset.Shin2017A alias of [`NM000267`](eegdash.dataset.NM000267.md#eegdash.dataset.NM000267) ### eegdash.dataset.dataset.Shin2017B alias of [`NM000268`](eegdash.dataset.NM000268.md#eegdash.dataset.NM000268) ### eegdash.dataset.dataset.SleepEDF alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ### eegdash.dataset.dataset.SleepEDFExpanded alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ### eegdash.dataset.dataset.Somato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.dataset.Surrey_cEEGrid_sleep alias of [`DS005207`](eegdash.dataset.DS005207.md#eegdash.dataset.DS005207) ### eegdash.dataset.dataset.THINGS alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ### eegdash.dataset.dataset.THINGSMEG alias 
of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ### eegdash.dataset.dataset.THINGS_EEG alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ### eegdash.dataset.dataset.THINGS_MEG alias of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ### eegdash.dataset.dataset.TMNRED alias of [`DS005383`](eegdash.dataset.DS005383.md#eegdash.dataset.DS005383) ### eegdash.dataset.dataset.TNO alias of [`DS004660`](eegdash.dataset.DS004660.md#eegdash.dataset.DS004660) ### eegdash.dataset.dataset.TX14 alias of [`DS004841`](eegdash.dataset.DS004841.md#eegdash.dataset.DS004841) ### eegdash.dataset.dataset.TX15 alias of [`DS004842`](eegdash.dataset.DS004842.md#eegdash.dataset.DS004842) ### eegdash.dataset.dataset.TX18 alias of [`DS004854`](eegdash.dataset.DS004854.md#eegdash.dataset.DS004854) ### eegdash.dataset.dataset.TX20 alias of [`DS004657`](eegdash.dataset.DS004657.md#eegdash.dataset.DS004657) ### eegdash.dataset.dataset.Todorovic2023 alias of [`DS005261`](eegdash.dataset.DS005261.md#eegdash.dataset.DS005261) ### eegdash.dataset.dataset.ToonFaces alias of [`DS004324`](eegdash.dataset.DS004324.md#eegdash.dataset.DS004324) ### eegdash.dataset.dataset.Touryan1999 alias of [`DS004118`](eegdash.dataset.DS004118.md#eegdash.dataset.DS004118) ### eegdash.dataset.dataset.Tripathy2024 alias of [`DS007473`](eegdash.dataset.DS007473.md#eegdash.dataset.DS007473) ### eegdash.dataset.dataset.VEPCON alias of [`DS003505`](eegdash.dataset.DS003505.md#eegdash.dataset.DS003505) ### eegdash.dataset.dataset.Veillette2019 alias of [`DS005403`](eegdash.dataset.DS005403.md#eegdash.dataset.DS005403) ### eegdash.dataset.dataset.Vianney2025 alias of [`DS007358`](eegdash.dataset.DS007358.md#eegdash.dataset.DS007358) ### eegdash.dataset.dataset.VisualContextTrajectory alias of [`DS004603`](eegdash.dataset.DS004603.md#eegdash.dataset.DS004603) ### eegdash.dataset.dataset.VisualContextTrajectory_v2 alias of 
[`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ### eegdash.dataset.dataset.WBCICSHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ### eegdash.dataset.dataset.WBCIC_SHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ### eegdash.dataset.dataset.WIRED_ICM alias of [`DS004993`](eegdash.dataset.DS004993.md#eegdash.dataset.DS004993) ### eegdash.dataset.dataset.Wakeman2015 alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ### eegdash.dataset.dataset.WakemanHenson alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ### eegdash.dataset.dataset.WakemanHenson_EEG_MEG alias of [`DS002718`](eegdash.dataset.DS002718.md#eegdash.dataset.DS002718) ### eegdash.dataset.dataset.Weibo2014 alias of [`NM000146`](eegdash.dataset.NM000146.md#eegdash.dataset.NM000146) ### eegdash.dataset.dataset.Weisend2007 alias of [`DS004107`](eegdash.dataset.DS004107.md#eegdash.dataset.DS004107) ### eegdash.dataset.dataset.Wimmer2024 alias of [`DS004398`](eegdash.dataset.DS004398.md#eegdash.dataset.DS004398) ### eegdash.dataset.dataset.Yang2025 alias of [`NM000348`](eegdash.dataset.NM000348.md#eegdash.dataset.NM000348) ### eegdash.dataset.dataset.Yu2019 alias of [`DS006386`](eegdash.dataset.DS006386.md#eegdash.dataset.DS006386) ### eegdash.dataset.dataset.Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ### eegdash.dataset.dataset.Yucel2015 alias of [`DS005776`](eegdash.dataset.DS005776.md#eegdash.dataset.DS005776) ### eegdash.dataset.dataset.Zhang2025 alias of [`NM000211`](eegdash.dataset.NM000211.md#eegdash.dataset.NM000211) ### eegdash.dataset.dataset.Zhao2024 alias of [`DS005473`](eegdash.dataset.DS005473.md#eegdash.dataset.DS005473) ### eegdash.dataset.dataset.Zhou2016_NEMAR alias of [`NM000226`](eegdash.dataset.NM000226.md#eegdash.dataset.NM000226) ### eegdash.dataset.dataset.Zhou2024 alias of 
[`DS007471`](eegdash.dataset.DS007471.md#eegdash.dataset.DS007471) ### eegdash.dataset.dataset.catFR_Categorized_Free_Recall alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ### eegdash.dataset.dataset.catFR_closed_loop alias of [`DS005558`](eegdash.dataset.DS005558.md#eegdash.dataset.DS005558) ### eegdash.dataset.dataset.catFR_open_loop alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.dataset.catFR_stim alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.dataset.eldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.dataset.emg2qwerty alias of [`NM000104`](eegdash.dataset.NM000104.md#eegdash.dataset.NM000104) ### eegdash.dataset.dataset.neuromorph alias of [`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241) ### eegdash.dataset.dataset.ocular_ldt alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312) ### eegdash.dataset.dataset.pyFR alias of [`DS004865`](eegdash.dataset.DS004865.md#eegdash.dataset.DS004865) # eldBETA: EEG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). 
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import eldBETA dataset = eldBETA(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = eldBETA(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = eldBETA( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{eldbeta, } ``` ## About This Dataset No README content is available for this dataset. ## Dataset Information | Dataset ID | `ELDBETA` | |----------------|-------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `ELDBETA` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/eldbeta) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [eldbeta](https://openneuro.org/datasets/eldbeta) - NeMAR: [eldbeta](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta) ## API Reference Use the `eldBETA` class to access this dataset programmatically. 
### eegdash.dataset.eldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/eldbeta) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=eldbeta) # emg2qwerty: EMG dataset Dataset from OpenNeuro. Access recordings and metadata through EEGDash. **Citation:** Unknown (—). Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%) ## Quickstart ### Get Started **Install** ```bash pip install eegdash ``` **Access the data** ```python from eegdash.dataset import emg2qwerty dataset = emg2qwerty(cache_dir="./data") # Get the raw object of the first recording raw = dataset.datasets[0].raw print(raw.info) ``` ### Query & Filter **Filter by subject** ```python dataset = emg2qwerty(cache_dir="./data", subject="01") ``` **Advanced query** ```python dataset = emg2qwerty( cache_dir="./data", query={"subject": {"$in": ["01", "02"]}}, ) ``` **Iterate recordings** ```python for rec in dataset: print(rec.subject, rec.raw.info['sfreq']) ``` ### Cite This Dataset If you use this dataset in your research, please cite the original authors. **BibTeX** ```bibtex @dataset{emg2qwerty, } ``` ## About This Dataset No README content is available for this dataset. 
## Dataset Information | Dataset ID | `EMG2QWERTY` | |----------------|-------------------------------------------------------------------------------------------------------------------------------| | Title | — | | Author (year) | — | | Canonical | — | | Importable as | `EMG2QWERTY` | | Year | — | | Authors | Unknown | | License | — | | Citation / DOI | Unknown | | Source links | [OpenNeuro](https://openneuro.org/datasets/emg2qwerty) | [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=emg2qwerty) | ## Technical Details - Subjects: — - Recordings: — - Tasks: — - Channels: Varies - Sampling rate (Hz): Varies - Duration (hours): Not calculated - Pathology: Not specified - Modality: — - Type: — - Size on disk: — - File count: — - Format: BIDS - License: See source - DOI: — - Source: OpenNeuro - OpenNeuro: [emg2qwerty](https://openneuro.org/datasets/emg2qwerty) - NeMAR: [emg2qwerty](https://nemar.org/dataexplorer/detail?dataset_id=emg2qwerty) ## API Reference Use the `emg2qwerty` class to access this dataset programmatically. ### eegdash.dataset.emg2qwerty alias of [`NM000104`](eegdash.dataset.NM000104.md#eegdash.dataset.NM000104) ## See Also * `eegdash.dataset.EEGDashDataset` * `eegdash.dataset` * [OpenNeuro dataset page](https://openneuro.org/datasets/emg2qwerty) * [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=emg2qwerty) # eegdash.dataset.exceptions module Custom exceptions for EEGDash. This module defines exceptions used throughout the EEGDash library to provide informative error messages for common issues. ### *exception* eegdash.dataset.exceptions.DataIntegrityError(message: str, record: dict[str, Any] | None = None, issues: list[str] | None = None, authors: list[str] | None = None, contact_info: list[str] | None = None, source_url: str | None = None) Bases: `EEGDashError` Raised when a dataset record has known data integrity issues. 
This exception is raised when attempting to load a record that has been flagged during ingestion as having missing companion files or other integrity problems. #### record The problematic record metadata. * **Type:** dict #### issues List of specific integrity issues found. * **Type:** list[str] #### authors Dataset authors who can be contacted about the issue. * **Type:** list[str] #### contact_info Contact information for reporting the issue. * **Type:** list[str] | None #### source_url URL to the dataset source for reporting issues. * **Type:** str | None ### Examples ```pycon >>> try: ... dataset.raw # Attempt to load data ... except DataIntegrityError as e: ... print(f"Cannot load: {e.issues}") ... print(f"Contact authors: {e.authors}") ``` #### *classmethod* from_record(record: dict[str, Any]) → DataIntegrityError Create a DataIntegrityError from a record with integrity issues. * **Parameters:** **record** (*dict*) – Record containing `_data_integrity_issues` and optionally `_dataset_authors`, `_dataset_contact`, `_source_url`. * **Returns:** Exception with all relevant context. * **Return type:** DataIntegrityError #### log_error() → None Log the error using the EEGDash logger with rich formatting. #### log_warning() → None Log the integrity issues as warnings (non-blocking). #### print_rich(console: Console | None = None) → None Print a rich formatted version of the error to the console. * **Parameters:** **console** (*Console* *,* *optional*) – Rich console to print to. If None, creates a new one. #### *classmethod* warn_from_record(record: dict[str, Any]) → None Log a warning about data integrity issues without raising an exception. Use this when you want to warn about issues but still allow loading. * **Parameters:** **record** (*dict*) – Record containing `_data_integrity_issues` and optionally `_dataset_authors`, `_dataset_contact`, `_source_url`. 
### *exception* eegdash.dataset.exceptions.EEGDashError Bases: `Exception` Base exception for all EEGDash errors. # eegdash.dataset.io module Input/Output utilities for EEG datasets. This module contains helper functions for managing EEG data files, specifically for fixing common issues in BIDS datasets and handling file system operations. # eegdash.dataset package ## Submodules * [eegdash.dataset.base module](eegdash.dataset.base.md) * [eegdash.dataset.bids_dataset module](eegdash.dataset.bids_dataset.md) * [eegdash.dataset.dataset module](eegdash.dataset.dataset.md) * [eegdash.dataset.exceptions module](eegdash.dataset.exceptions.md) * [eegdash.dataset.io module](eegdash.dataset.io.md) * [eegdash.dataset.registry module](eegdash.dataset.registry.md) ## Module contents Public API for dataset helpers and dynamically generated datasets. ### eegdash.dataset.ABSeqMEG alias of [`DS004483`](eegdash.dataset.DS004483.md#eegdash.dataset.DS004483) ### eegdash.dataset.ANDI alias of [`DS004661`](eegdash.dataset.DS004661.md#eegdash.dataset.DS004661) ### eegdash.dataset.APPLESEED alias of [`DS003710`](eegdash.dataset.DS003710.md#eegdash.dataset.DS003710) ### eegdash.dataset.AlexMI alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.AlexMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.AlexandreMotorImagery alias of [`NM000138`](eegdash.dataset.NM000138.md#eegdash.dataset.NM000138) ### eegdash.dataset.Alljoined alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ### eegdash.dataset.Alljoined1 alias of [`NM000133`](eegdash.dataset.NM000133.md#eegdash.dataset.NM000133) ### eegdash.dataset.Alljoined16M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.Alljoined1p6M alias of [`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.Alljoined_16M alias of 
[`NM000134`](eegdash.dataset.NM000134.md#eegdash.dataset.NM000134) ### eegdash.dataset.AlphaWaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.Alphawaves alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.ArEEG alias of [`DS005262`](eegdash.dataset.DS005262.md#eegdash.dataset.DS005262) ### eegdash.dataset.Ataseven2024 alias of [`DS007431`](eegdash.dataset.DS007431.md#eegdash.dataset.DS007431) ### eegdash.dataset.BCI2000_Intracranial alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ### eegdash.dataset.BCI2000_intraop alias of [`DS004944`](eegdash.dataset.DS004944.md#eegdash.dataset.DS004944) ### eegdash.dataset.BCIAUT alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.BCIAUTP300 alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.BCIAUT_P300 alias of [`NM000210`](eegdash.dataset.NM000210.md#eegdash.dataset.NM000210) ### eegdash.dataset.BCICIII_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.BCICIV1 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.BCICompIII_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.BCICompIV1 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.BCIT alias of [`DS004119`](eegdash.dataset.DS004119.md#eegdash.dataset.DS004119) ### eegdash.dataset.BCITAdvancedGuardDuty alias of [`DS004106`](eegdash.dataset.DS004106.md#eegdash.dataset.DS004106) ### eegdash.dataset.BCITBaselineDriving alias of [`DS004120`](eegdash.dataset.DS004120.md#eegdash.dataset.DS004120) ### eegdash.dataset.BCITMindWandering alias of [`DS004121`](eegdash.dataset.DS004121.md#eegdash.dataset.DS004121) ### eegdash.dataset.BCIT_Auditory_Cueing alias of 
[`DS004105`](eegdash.dataset.DS004105.md#eegdash.dataset.DS004105) ### eegdash.dataset.BCIT_Traffic_Complexity alias of [`DS004123`](eegdash.dataset.DS004123.md#eegdash.dataset.DS004123) ### eegdash.dataset.BETA alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.BETA_SSVEP alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.BI2012 alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ### eegdash.dataset.BI2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ### eegdash.dataset.BI2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ### eegdash.dataset.BI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.BI2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ### eegdash.dataset.BI2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ### eegdash.dataset.BMI_HDEEG_D1 alias of [`DS004444`](eegdash.dataset.DS004444.md#eegdash.dataset.DS004444) ### eegdash.dataset.BMI_HDEEG_D2 alias of [`DS004446`](eegdash.dataset.DS004446.md#eegdash.dataset.DS004446) ### eegdash.dataset.BMI_HDEEG_D3 alias of [`DS004447`](eegdash.dataset.DS004447.md#eegdash.dataset.DS004447) ### eegdash.dataset.BMI_HDEEG_D4 alias of [`DS004448`](eegdash.dataset.DS004448.md#eegdash.dataset.DS004448) ### eegdash.dataset.BNCI2003_IVa alias of [`NM000143`](eegdash.dataset.NM000143.md#eegdash.dataset.NM000143) ### eegdash.dataset.BNCI2014001 alias of [`NM000139`](eegdash.dataset.NM000139.md#eegdash.dataset.NM000139) ### eegdash.dataset.BNCI2014002 alias of [`NM000171`](eegdash.dataset.NM000171.md#eegdash.dataset.NM000171) ### eegdash.dataset.BNCI2014004 alias of [`NM000135`](eegdash.dataset.NM000135.md#eegdash.dataset.NM000135) ### eegdash.dataset.BNCI2014008 alias of [`NM000169`](eegdash.dataset.NM000169.md#eegdash.dataset.NM000169) ### 
eegdash.dataset.BNCI2014_009_P300 alias of [`NM000188`](eegdash.dataset.NM000188.md#eegdash.dataset.NM000188) ### eegdash.dataset.BNCI2015 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ### eegdash.dataset.BNCI2015001 alias of [`NM000140`](eegdash.dataset.NM000140.md#eegdash.dataset.NM000140) ### eegdash.dataset.BNCI2015_003_AMUSE alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.BNCI2015_003_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.BNCI2015_006_MusicBCI alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.BNCI2015_008_CenterSpeller alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ### eegdash.dataset.BNCI2015_008_P300 alias of [`NM000198`](eegdash.dataset.NM000198.md#eegdash.dataset.NM000198) ### eegdash.dataset.BNCI2015_BNCI_006_Music alias of [`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.BNCI2015_ERP alias of [`NM000234`](eegdash.dataset.NM000234.md#eegdash.dataset.NM000234) ### eegdash.dataset.BNCI2015_P300 alias of [`NM000189`](eegdash.dataset.NM000189.md#eegdash.dataset.NM000189) ### eegdash.dataset.BNCI2016 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ### eegdash.dataset.BNCI2016002 alias of [`NM000243`](eegdash.dataset.NM000243.md#eegdash.dataset.NM000243) ### eegdash.dataset.BNCI2020 alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.BNCI2020_002_AttentionShift alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.BNCI2020_002_CovertSpatialAttention alias of [`NM000219`](eegdash.dataset.NM000219.md#eegdash.dataset.NM000219) ### eegdash.dataset.BNCI2025 alias of [`NM000162`](eegdash.dataset.NM000162.md#eegdash.dataset.NM000162) ### eegdash.dataset.BNCI_2015_006_Music alias of 
[`NM000192`](eegdash.dataset.NM000192.md#eegdash.dataset.NM000192) ### eegdash.dataset.BOAS alias of [`DS005555`](eegdash.dataset.DS005555.md#eegdash.dataset.DS005555) ### eegdash.dataset.Barras2021 alias of [`DS007169`](eegdash.dataset.DS007169.md#eegdash.dataset.DS007169) ### eegdash.dataset.Barras2025 alias of [`DS007262`](eegdash.dataset.DS007262.md#eegdash.dataset.DS007262) ### eegdash.dataset.BetaSSVEP alias of [`NM000129`](eegdash.dataset.NM000129.md#eegdash.dataset.NM000129) ### eegdash.dataset.BigP3BCI_E alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ### eegdash.dataset.BigP3BCI_F alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ### eegdash.dataset.BigP3BCI_G alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ### eegdash.dataset.BigP3BCI_H alias of [`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ### eegdash.dataset.BigP3BCI_I alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ### eegdash.dataset.BigP3BCI_K alias of [`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ### eegdash.dataset.BigP3BCI_M alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ### eegdash.dataset.BigP3BCI_S1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ### eegdash.dataset.BigP3BCI_StudyE alias of [`NM000186`](eegdash.dataset.NM000186.md#eegdash.dataset.NM000186) ### eegdash.dataset.BigP3BCI_StudyF alias of [`NM000191`](eegdash.dataset.NM000191.md#eegdash.dataset.NM000191) ### eegdash.dataset.BigP3BCI_StudyG alias of [`NM000277`](eegdash.dataset.NM000277.md#eegdash.dataset.NM000277) ### eegdash.dataset.BigP3BCI_StudyH alias of [`NM000218`](eegdash.dataset.NM000218.md#eegdash.dataset.NM000218) ### eegdash.dataset.BigP3BCI_StudyI alias of [`NM000200`](eegdash.dataset.NM000200.md#eegdash.dataset.NM000200) ### eegdash.dataset.BigP3BCI_StudyK alias of 
[`NM000176`](eegdash.dataset.NM000176.md#eegdash.dataset.NM000176) ### eegdash.dataset.BigP3BCI_StudyM alias of [`NM000197`](eegdash.dataset.NM000197.md#eegdash.dataset.NM000197) ### eegdash.dataset.BigP3BCI_StudyN alias of [`NM000187`](eegdash.dataset.NM000187.md#eegdash.dataset.NM000187) ### eegdash.dataset.BigP3BCI_StudyS1 alias of [`NM000247`](eegdash.dataset.NM000247.md#eegdash.dataset.NM000247) ### eegdash.dataset.Bogacz2024 alias of [`DS002908`](eegdash.dataset.DS002908.md#eegdash.dataset.DS002908) ### eegdash.dataset.BrainInvaders alias of [`NM000260`](eegdash.dataset.NM000260.md#eegdash.dataset.NM000260) ### eegdash.dataset.BrainInvaders2013a alias of [`NM000264`](eegdash.dataset.NM000264.md#eegdash.dataset.NM000264) ### eegdash.dataset.BrainInvaders2014a alias of [`NM000244`](eegdash.dataset.NM000244.md#eegdash.dataset.NM000244) ### eegdash.dataset.BrainInvaders2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.BrainInvaders2015a alias of [`NM000216`](eegdash.dataset.NM000216.md#eegdash.dataset.NM000216) ### eegdash.dataset.BrainInvaders2015b alias of [`NM000217`](eegdash.dataset.NM000217.md#eegdash.dataset.NM000217) ### eegdash.dataset.BrainInvadersBI2014b alias of [`NM000215`](eegdash.dataset.NM000215.md#eegdash.dataset.NM000215) ### eegdash.dataset.BrainTreeBank alias of [`NM000253`](eegdash.dataset.NM000253.md#eegdash.dataset.NM000253) ### eegdash.dataset.Broitman2019 alias of [`DS005857`](eegdash.dataset.DS005857.md#eegdash.dataset.DS005857) ### eegdash.dataset.CARLA alias of [`DS004977`](eegdash.dataset.DS004977.md#eegdash.dataset.DS004977) ### eegdash.dataset.CHBMIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ### eegdash.dataset.CHB_MIT alias of [`NM000110`](eegdash.dataset.NM000110.md#eegdash.dataset.NM000110) ### eegdash.dataset.CHISCO20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.CPSEED alias of 
[`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ### eegdash.dataset.CPSEED_3M alias of [`DS006465`](eegdash.dataset.DS006465.md#eegdash.dataset.DS006465) ### eegdash.dataset.CastillosCVEP40 alias of [`NM000342`](eegdash.dataset.NM000342.md#eegdash.dataset.NM000342) ### eegdash.dataset.CatFR alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ### eegdash.dataset.Chandravadia2022 alias of [`DS005028`](eegdash.dataset.DS005028.md#eegdash.dataset.DS005028) ### eegdash.dataset.Chang2025 alias of [`NM000271`](eegdash.dataset.NM000271.md#eegdash.dataset.NM000271) ### eegdash.dataset.Chavarriaga2010 alias of [`NM000168`](eegdash.dataset.NM000168.md#eegdash.dataset.NM000168) ### eegdash.dataset.Chisco alias of [`DS005170`](eegdash.dataset.DS005170.md#eegdash.dataset.DS005170) ### eegdash.dataset.Chisco20 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.Chisco2_0 alias of [`DS006317`](eegdash.dataset.DS006317.md#eegdash.dataset.DS006317) ### eegdash.dataset.Cote2015 alias of [`DS003082`](eegdash.dataset.DS003082.md#eegdash.dataset.DS003082) ### eegdash.dataset.Couperus2017 alias of [`DS007096`](eegdash.dataset.DS007096.md#eegdash.dataset.DS007096) ### eegdash.dataset.Couperus2021_LRP alias of [`DS007139`](eegdash.dataset.DS007139.md#eegdash.dataset.DS007139) ### eegdash.dataset.Couperus2021_MMN alias of [`DS007069`](eegdash.dataset.DS007069.md#eegdash.dataset.DS007069) ### eegdash.dataset.Couperus2021_N2pc alias of [`DS007137`](eegdash.dataset.DS007137.md#eegdash.dataset.DS007137) ### eegdash.dataset.Couperus2021_N400 alias of [`DS007052`](eegdash.dataset.DS007052.md#eegdash.dataset.DS007052) ### eegdash.dataset.Couperus2021_P300 alias of [`DS007056`](eegdash.dataset.DS007056.md#eegdash.dataset.DS007056) ### eegdash.dataset.DENS alias of [`DS003751`](eegdash.dataset.DS003751.md#eegdash.dataset.DS003751) ### *class* eegdash.dataset.DS000117(cache_dir: str, query: dict | None = None, 
s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisubject, multimodal face processing * **Study:** `ds000117` (OpenNeuro) * **Author (year):** `Wakeman2018` * **Canonical:** `Wakeman2015`, `WakemanHenson` Also importable as: `DS000117`, `Wakeman2018`, `Wakeman2015`, `WakemanHenson`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 17; recordings: 104; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000117](https://openneuro.org/datasets/ds000117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000117](https://nemar.org/dataexplorer/detail?dataset_id=ds000117) DOI: [https://doi.org/10.18112/openneuro.ds000117.v1.1.0](https://doi.org/10.18112/openneuro.ds000117.v1.1.0) NEMAR citation count: 77 ### Examples ```pycon >>> from eegdash.dataset import DS000117 >>> dataset = DS000117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Wakeman2015', 'WakemanHenson']* ### *class* eegdash.dataset.DS000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS Brainstorm data sample * **Study:** `ds000246` (OpenNeuro) * **Author (year):** `Bock2018` * **Canonical:** — Also importable as: `DS000246`, `Bock2018`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000246](https://openneuro.org/datasets/ds000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000246](https://nemar.org/dataexplorer/detail?dataset_id=ds000246) DOI: [https://doi.org/10.18112/openneuro.ds000246.v1.0.1](https://doi.org/10.18112/openneuro.ds000246.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS000246 >>> dataset = DS000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-BIDS OMEGA RestingState_sample * **Study:** `ds000247` (OpenNeuro) * **Author (year):** `Niso2018` * **Canonical:** `OMEGA` Also importable as: `DS000247`, `Niso2018`, `OMEGA`. Modality: `meg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 6; recordings: 10; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000247](https://openneuro.org/datasets/ds000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000247](https://nemar.org/dataexplorer/detail?dataset_id=ds000247) DOI: [https://doi.org/10.18112/openneuro.ds000247.v1.0.2](https://doi.org/10.18112/openneuro.ds000247.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000247 >>> dataset = DS000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OMEGA']* ### *class* eegdash.dataset.DS000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-Sample-Data * **Study:** `ds000248` (OpenNeuro) * **Author (year):** `Gramfort2018` * **Canonical:** `MNE_Sample_Data` Also importable as: `DS000248`, `Gramfort2018`, `MNE_Sample_Data`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds000248](https://openneuro.org/datasets/ds000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds000248](https://nemar.org/dataexplorer/detail?dataset_id=ds000248) DOI: [https://doi.org/10.18112/openneuro.ds000248.v1.2.4](https://doi.org/10.18112/openneuro.ds000248.v1.2.4) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS000248 >>> dataset = DS000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MNE_Sample_Data']* ### *class* eegdash.dataset.DS001785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Evidence accumulation relates to perceptual consciousness and monitoring * **Study:** `ds001785` (OpenNeuro) * **Author (year):** `Pereira2019_Evidence` * **Canonical:** — Also importable as: `DS001785`, `Pereira2019_Evidence`. Modality: `eeg`. Subjects: 18; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001785](https://openneuro.org/datasets/ds001785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001785](https://nemar.org/dataexplorer/detail?dataset_id=ds001785) DOI: [https://doi.org/10.18112/openneuro.ds001785.v1.1.1](https://doi.org/10.18112/openneuro.ds001785.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001785 >>> dataset = DS001785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS001787(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG meditation study * **Study:** `ds001787` (OpenNeuro) * **Author (year):** `Delorme2019` * **Canonical:** — Also importable as: `DS001787`, `Delorme2019`. Modality: `eeg`. Subjects: 24; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001787](https://openneuro.org/datasets/ds001787) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001787](https://nemar.org/dataexplorer/detail?dataset_id=ds001787) DOI: [https://doi.org/10.18112/openneuro.ds001787.v1.1.1](https://doi.org/10.18112/openneuro.ds001787.v1.1.1) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001787 >>> dataset = DS001787(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS001810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG study of the attentional blink; before, during, and after transcranial Direct Current Stimulation (tDCS) * **Study:** `ds001810` (OpenNeuro) * **Author (year):** `Reteig2019` * **Canonical:** — Also importable as: `DS001810`, `Reteig2019`. Modality: `eeg`. Subjects: 47; recordings: 263; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001810](https://openneuro.org/datasets/ds001810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001810](https://nemar.org/dataexplorer/detail?dataset_id=ds001810) DOI: [https://doi.org/10.18112/openneuro.ds001810.v1.1.0](https://doi.org/10.18112/openneuro.ds001810.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS001810 >>> dataset = DS001810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS001849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RS_TMSEEG_Data * **Study:** `ds001849` (OpenNeuro) * **Author (year):** `Freedberg2019` * **Canonical:** — Also importable as: `DS001849`, `Freedberg2019`. Modality: `eeg`. Subjects: 20; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001849](https://openneuro.org/datasets/ds001849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001849](https://nemar.org/dataexplorer/detail?dataset_id=ds001849) DOI: [https://doi.org/10.18112/openneuro.ds001849.v1.0.2](https://doi.org/10.18112/openneuro.ds001849.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS001849 >>> dataset = DS001849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS001971(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Audiocue walking study * **Study:** `ds001971` (OpenNeuro) * **Author (year):** `Wagner2019` * **Canonical:** — Also importable as: `DS001971`, `Wagner2019`. Modality: `eeg`. Subjects: 20; recordings: 273; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds001971](https://openneuro.org/datasets/ds001971) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds001971](https://nemar.org/dataexplorer/detail?dataset_id=ds001971) DOI: [https://doi.org/10.18112/openneuro.ds001971.v1.1.1](https://doi.org/10.18112/openneuro.ds001971.v1.1.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS001971 >>> dataset = DS001971(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002001(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rivalry_Tagging * **Study:** `ds002001` (OpenNeuro) * **Author (year):** `Mendola2019` * **Canonical:** `Mendola2020` Also importable as: `DS002001`, `Mendola2019`, `Mendola2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 69; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002001](https://openneuro.org/datasets/ds002001) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002001](https://nemar.org/dataexplorer/detail?dataset_id=ds002001) DOI: [https://doi.org/10.18112/openneuro.ds002001.v1.0.0](https://doi.org/10.18112/openneuro.ds002001.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS002001 >>> dataset = DS002001(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mendola2020']* ### *class* eegdash.dataset.DS002034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time EEG feedback on alpha power lateralization leads to behavioral improvements in a covert attention task * **Study:** `ds002034` (OpenNeuro) * **Author (year):** `Schneider2019` * **Canonical:** — Also importable as: `DS002034`, `Schneider2019`. Modality: `eeg`. Subjects: 14; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002034](https://openneuro.org/datasets/ds002034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002034](https://nemar.org/dataexplorer/detail?dataset_id=ds002034) DOI: [https://doi.org/10.18112/openneuro.ds002034.v1.0.3](https://doi.org/10.18112/openneuro.ds002034.v1.0.3) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS002034 >>> dataset = DS002034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002094(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Single-pulse open-loop TMS-EEG dataset * **Study:** `ds002094` (OpenNeuro) * **Author (year):** `DS2094_Single_pulse` * **Canonical:** — Also importable as: `DS002094`, `DS2094_Single_pulse`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 20; recordings: 43; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002094](https://openneuro.org/datasets/ds002094) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002094](https://nemar.org/dataexplorer/detail?dataset_id=ds002094) NEMAR citation count: 30 ### Examples ```pycon >>> from eegdash.dataset import DS002094 >>> dataset = DS002094(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the origins of confidence in speeded perceptual judgments through multimodal imaging * **Study:** `ds002158` (OpenNeuro) * **Author (year):** `Pereira2019_Disentangling` * **Canonical:** — Also importable as: `DS002158`, `Pereira2019_Disentangling`. Modality: `eeg`. Subjects: 20; recordings: 117; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002158](https://openneuro.org/datasets/ds002158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002158](https://nemar.org/dataexplorer/detail?dataset_id=ds002158) DOI: [https://doi.org/10.18112/openneuro.ds002158.v1.0.2](https://doi.org/10.18112/openneuro.ds002158.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002158 >>> dataset = DS002158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CRYPTO and PROVIDE EEG Baseline Data * **Study:** `ds002181` (OpenNeuro) * **Author (year):** `Xie2019` * **Canonical:** — Also importable as: `DS002181`, `Xie2019`. 
Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Development`. Subjects: 226; recordings: 226; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002181](https://openneuro.org/datasets/ds002181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002181](https://nemar.org/dataexplorer/detail?dataset_id=ds002181) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002181 >>> dataset = DS002181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory and Visual Rhythm Omission EEG * **Study:** `ds002218` (OpenNeuro) * **Author (year):** `Comstock2019` * **Canonical:** — Also importable as: `DS002218`, `Comstock2019`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002218](https://openneuro.org/datasets/ds002218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002218](https://nemar.org/dataexplorer/detail?dataset_id=ds002218) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002218 >>> dataset = DS002218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002312(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OcularLDT * **Study:** `ds002312` (OpenNeuro) * **Author (year):** `Brooks2019` * **Canonical:** `OcularLDT`, `ocular_ldt` Also importable as: `DS002312`, `Brooks2019`, `OcularLDT`, `ocular_ldt`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002312](https://openneuro.org/datasets/ds002312) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002312](https://nemar.org/dataexplorer/detail?dataset_id=ds002312) DOI: [https://doi.org/10.18112/openneuro.ds002312.v1.0.0](https://doi.org/10.18112/openneuro.ds002312.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS002312 >>> dataset = DS002312(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OcularLDT', 'ocular_ldt']* ### *class* eegdash.dataset.DS002336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP1 * **Study:** `ds002336` (OpenNeuro) * **Author (year):** `Lioi2019_multi` * **Canonical:** — Also importable as: `DS002336`, `Lioi2019_multi`. Modality: `eeg`. Subjects: 10; recordings: 54; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002336](https://openneuro.org/datasets/ds002336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002336](https://nemar.org/dataexplorer/detail?dataset_id=ds002336) DOI: [https://doi.org/10.18112/openneuro.ds002336.v2.0.2](https://doi.org/10.18112/openneuro.ds002336.v2.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002336 >>> dataset = DS002336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-modal human neuroimaging dataset for data integration: simultaneous EEG and fMRI acquisition during a motor imagery neurofeedback task: XP2 * **Study:** `ds002338` (OpenNeuro) * **Author (year):** `Lioi2019_multi_modal` * **Canonical:** — Also importable as: `DS002338`, `Lioi2019_multi_modal`. Modality: `eeg`. Subjects: 17; recordings: 85; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002338](https://openneuro.org/datasets/ds002338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002338](https://nemar.org/dataexplorer/detail?dataset_id=ds002338) DOI: [https://doi.org/10.18112/openneuro.ds002338.v2.0.1](https://doi.org/10.18112/openneuro.ds002338.v2.0.1) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002338 >>> dataset = DS002338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002550(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Differential brain mechanisms of selection and maintenance of information during working memory (MEG data) * **Study:** `ds002550` (OpenNeuro) * **Author (year):** `Quentin2020` * **Canonical:** — Also importable as: `DS002550`, `Quentin2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 22; recordings: 377; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002550](https://openneuro.org/datasets/ds002550) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002550](https://nemar.org/dataexplorer/detail?dataset_id=ds002550) DOI: [https://doi.org/10.18112/openneuro.ds002550.v1.0.1](https://doi.org/10.18112/openneuro.ds002550.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002550 >>> dataset = DS002550(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002578(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Oddball Task (256 channels) * **Study:** `ds002578` (OpenNeuro) * **Author (year):** `Delorme2020_Visual_Oddball_256` * **Canonical:** — Also importable as: `DS002578`, `Delorme2020_Visual_Oddball_256`. Modality: `eeg`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002578](https://openneuro.org/datasets/ds002578) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002578](https://nemar.org/dataexplorer/detail?dataset_id=ds002578) DOI: [https://doi.org/10.18112/openneuro.ds002578.v1.1.0](https://doi.org/10.18112/openneuro.ds002578.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002578 >>> dataset = DS002578(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002680(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Go-nogo categorization and detection task * **Study:** `ds002680` (OpenNeuro) * **Author (year):** `Delorme2020_Go_nogo_categorization` * **Canonical:** — Also importable as: `DS002680`, `Delorme2020_Go_nogo_categorization`. Modality: `eeg`. Subjects: 14; recordings: 350; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002680](https://openneuro.org/datasets/ds002680) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002680](https://nemar.org/dataexplorer/detail?dataset_id=ds002680) DOI: [https://doi.org/10.18112/openneuro.ds002680.v1.2.0](https://doi.org/10.18112/openneuro.ds002680.v1.2.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS002680 >>> dataset = DS002680(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Internal attention study * **Study:** `ds002691` (OpenNeuro) * **Author (year):** `Delorme2020_Internal_attention` * **Canonical:** — Also importable as: `DS002691`, `Delorme2020_Internal_attention`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002691](https://openneuro.org/datasets/ds002691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002691](https://nemar.org/dataexplorer/detail?dataset_id=ds002691) DOI: [https://doi.org/10.18112/openneuro.ds002691.v1.1.0](https://doi.org/10.18112/openneuro.ds002691.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002691 >>> dataset = DS002691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002712(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers and Letters * **Study:** `ds002712` (OpenNeuro) * **Author (year):** `Aurtenetxe2020` * **Canonical:** — Also importable as: `DS002712`, `Aurtenetxe2020`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002712](https://openneuro.org/datasets/ds002712) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002712](https://nemar.org/dataexplorer/detail?dataset_id=ds002712) DOI: [https://doi.org/10.18112/openneuro.ds002712.v1.0.1](https://doi.org/10.18112/openneuro.ds002712.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002712 >>> dataset = DS002712(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Face processing EEG dataset for EEGLAB * **Study:** `ds002718` (OpenNeuro) * **Author (year):** `Wakeman2020` * **Canonical:** `WakemanHenson_EEG_MEG` Also importable as: `DS002718`, `Wakeman2020`, `WakemanHenson_EEG_MEG`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002718](https://openneuro.org/datasets/ds002718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002718](https://nemar.org/dataexplorer/detail?dataset_id=ds002718) DOI: [https://doi.org/10.18112/openneuro.ds002718.v1.1.0](https://doi.org/10.18112/openneuro.ds002718.v1.1.0) NEMAR citation count: 11 ### Examples ```pycon >>> from eegdash.dataset import DS002718 >>> dataset = DS002718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WakemanHenson_EEG_MEG']* ### *class* eegdash.dataset.DS002720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of a tempo-based brain-computer music interface * **Study:** `ds002720` (OpenNeuro) * **Author (year):** `Daly2020_recorded` * **Canonical:** — Also importable as: `DS002720`, `Daly2020_recorded`. Modality: `eeg`. Subjects: 18; recordings: 165; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002720](https://openneuro.org/datasets/ds002720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002720](https://nemar.org/dataexplorer/detail?dataset_id=ds002720) DOI: [https://doi.org/10.18112/openneuro.ds002720.v1.0.1](https://doi.org/10.18112/openneuro.ds002720.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002720 >>> dataset = DS002720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002721(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An EEG dataset recorded during affective music listening * **Study:** `ds002721` (OpenNeuro) * **Author (year):** `Daly2020_recorded_affective` * **Canonical:** — Also importable as: `DS002721`, `Daly2020_recorded_affective`. Modality: `eeg`. Subjects: 31; recordings: 185; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002721](https://openneuro.org/datasets/ds002721) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002721](https://nemar.org/dataexplorer/detail?dataset_id=ds002721) DOI: [https://doi.org/10.18112/openneuro.ds002721.v1.0.2](https://doi.org/10.18112/openneuro.ds002721.v1.0.2) NEMAR citation count: 10 ### Examples ```pycon >>> from eegdash.dataset import DS002721 >>> dataset = DS002721(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002722(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: calibration session * **Study:** `ds002722` (OpenNeuro) * **Author (year):** `Daly2020_recorded_development` * **Canonical:** — Also importable as: `DS002722`, `Daly2020_recorded_development`. Modality: `eeg`. Subjects: 19; recordings: 94; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002722](https://openneuro.org/datasets/ds002722) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002722](https://nemar.org/dataexplorer/detail?dataset_id=ds002722) DOI: [https://doi.org/10.18112/openneuro.ds002722.v1.0.1](https://doi.org/10.18112/openneuro.ds002722.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002722 >>> dataset = DS002722(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002723(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: testing session * **Study:** `ds002723` (OpenNeuro) * **Author (year):** `Daly2020_session` * **Canonical:** — Also importable as: `DS002723`, `Daly2020_session`. Modality: `eeg`. Subjects: 8; recordings: 44; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002723](https://openneuro.org/datasets/ds002723) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002723](https://nemar.org/dataexplorer/detail?dataset_id=ds002723) DOI: [https://doi.org/10.18112/openneuro.ds002723.v1.1.0](https://doi.org/10.18112/openneuro.ds002723.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002723 >>> dataset = DS002723(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002724(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recorded during development of an affective brain-computer music interface: training sessions * **Study:** `ds002724` (OpenNeuro) * **Author (year):** `Daly2020_sessions` * **Canonical:** — Also importable as: `DS002724`, `Daly2020_sessions`. Modality: `eeg`. Subjects: 10; recordings: 96; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002724](https://openneuro.org/datasets/ds002724) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002724](https://nemar.org/dataexplorer/detail?dataset_id=ds002724) DOI: [https://doi.org/10.18112/openneuro.ds002724.v1.0.1](https://doi.org/10.18112/openneuro.ds002724.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002724 >>> dataset = DS002724(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002725(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset recording joint EEG-fMRI during affective music listening * **Study:** `ds002725` (OpenNeuro) * **Author (year):** `Daly2020_joint` * **Canonical:** — Also importable as: `DS002725`, `Daly2020_joint`. Modality: `eeg`. Subjects: 21; recordings: 105; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002725](https://openneuro.org/datasets/ds002725) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002725](https://nemar.org/dataexplorer/detail?dataset_id=ds002725) DOI: [https://doi.org/10.18112/openneuro.ds002725.v1.0.0](https://doi.org/10.18112/openneuro.ds002725.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS002725 >>> dataset = DS002725(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) memoryreplay * **Study:** `ds002761` (OpenNeuro) * **Author (year):** `Wimmer2020` * **Canonical:** — Also importable as: `DS002761`, `Wimmer2020`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 25; recordings: 249; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002761](https://openneuro.org/datasets/ds002761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002761](https://nemar.org/dataexplorer/detail?dataset_id=ds002761) DOI: [https://doi.org/10.18112/openneuro.ds002761.v1.1.2](https://doi.org/10.18112/openneuro.ds002761.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002761 >>> dataset = DS002761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002778(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UC San Diego Resting State EEG Data from Patients with Parkinson’s Disease * **Study:** `ds002778` (OpenNeuro) * **Author (year):** `Rockhill2020` * **Canonical:** — Also importable as: `DS002778`, `Rockhill2020`. 
Modality: `eeg`. Subjects: 31; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
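The Notes also state that filters are limited to fields in `ALLOWED_QUERY_FIELDS`. A sketch of that kind of field validation, under the assumption that nested `$and`/`$or` clauses are checked recursively; the field set shown is illustrative, not the actual eegdash constant:

```python
# Illustrative subset; the real eegdash ALLOWED_QUERY_FIELDS may differ.
ALLOWED_QUERY_FIELDS = {"dataset", "subject", "task", "session", "modality"}

def validate_query_fields(query: dict) -> None:
    """Reject MongoDB-style filters that touch fields outside the allowed set."""
    for key, value in query.items():
        if key in ("$and", "$or"):
            # Logical operators wrap a list of sub-filters; recurse into each.
            for sub in value:
                validate_query_fields(sub)
        elif key not in ALLOWED_QUERY_FIELDS:
            raise KeyError(f"unsupported query field: {key!r}")

# A valid conjunction passes silently; an unknown field raises KeyError.
validate_query_fields({"$and": [{"dataset": "ds002778"}, {"task": "rest"}]})
```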
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002778](https://openneuro.org/datasets/ds002778) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002778](https://nemar.org/dataexplorer/detail?dataset_id=ds002778) DOI: [https://doi.org/10.18112/openneuro.ds002778.v1.0.5](https://doi.org/10.18112/openneuro.ds002778.v1.0.5) NEMAR citation count: 42 ### Examples ```pycon >>> from eegdash.dataset import DS002778 >>> dataset = DS002778(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002791(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet1 * **Study:** `ds002791` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet1` * **Canonical:** `Mheich2020` Also importable as: `DS002791`, `Mheich2020_DataSet1`, `Mheich2020`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002791](https://openneuro.org/datasets/ds002791) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002791](https://nemar.org/dataexplorer/detail?dataset_id=ds002791) DOI: [https://doi.org/10.18112/openneuro.ds002791.v1.0.0](https://doi.org/10.18112/openneuro.ds002791.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002791 >>> dataset = DS002791(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mheich2020']* ### *class* eegdash.dataset.DS002799(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human es-fMRI Resource: Concurrent deep-brain stimulation and whole-brain functional MRI * **Study:** `ds002799` (OpenNeuro) * **Author (year):** `Thompson2024` * **Canonical:** — Also importable as: `DS002799`, `Thompson2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 27; recordings: 16824; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002799](https://openneuro.org/datasets/ds002799) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002799](https://nemar.org/dataexplorer/detail?dataset_id=ds002799) DOI: [https://doi.org/10.18112/openneuro.ds002799.v1.0.4](https://doi.org/10.18112/openneuro.ds002799.v1.0.4) ### Examples ```pycon >>> from eegdash.dataset import DS002799 >>> dataset = DS002799(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002814(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Multimodal Neuroimaging Dataset to Study Spatiotemporal Dynamics of Visual Processing in Humans * **Study:** `ds002814` (OpenNeuro) * **Author (year):** `Ebrahiminia2020` * **Canonical:** — Also importable as: `DS002814`, `Ebrahiminia2020`. Modality: `eeg`. Subjects: 21; recordings: 168; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002814](https://openneuro.org/datasets/ds002814) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002814](https://nemar.org/dataexplorer/detail?dataset_id=ds002814) DOI: [https://doi.org/10.18112/openneuro.ds002814.v1.3.0](https://doi.org/10.18112/openneuro.ds002814.v1.3.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS002814 >>> dataset = DS002814(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002833(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DataSet2 * **Study:** `ds002833` (OpenNeuro) * **Author (year):** `Mheich2020_DataSet2` * **Canonical:** `Mheich2024` Also importable as: `DS002833`, `Mheich2020_DataSet2`, `Mheich2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002833](https://openneuro.org/datasets/ds002833) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002833](https://nemar.org/dataexplorer/detail?dataset_id=ds002833) DOI: [https://doi.org/10.18112/openneuro.ds002833.v1.0.0](https://doi.org/10.18112/openneuro.ds002833.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS002833 >>> dataset = DS002833(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mheich2024']* ### *class* eegdash.dataset.DS002885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) DBS Phantom Recordings * **Study:** `ds002885` (OpenNeuro) * **Author (year):** `Kandemir2020` * **Canonical:** — Also importable as: `DS002885`, `Kandemir2020`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 2; recordings: 7; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002885](https://openneuro.org/datasets/ds002885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002885](https://nemar.org/dataexplorer/detail?dataset_id=ds002885) DOI: [https://doi.org/10.18112/openneuro.ds002885.v1.0.1](https://doi.org/10.18112/openneuro.ds002885.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002885 >>> dataset = DS002885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002893(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory-Visual Shift Study * **Study:** `ds002893` (OpenNeuro) * **Author (year):** `Westerfield2022` * **Canonical:** — Also importable as: `DS002893`, `Westerfield2022`. Modality: `eeg`. Subjects: 49; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002893](https://openneuro.org/datasets/ds002893) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002893](https://nemar.org/dataexplorer/detail?dataset_id=ds002893) DOI: [https://doi.org/10.18112/openneuro.ds002893.v2.0.0](https://doi.org/10.18112/openneuro.ds002893.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002893 >>> dataset = DS002893(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS002908(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human MEG recordings during sequential conflict task * **Study:** `ds002908` (OpenNeuro) * **Author (year):** `Bogacz2020` * **Canonical:** `Bogacz2024` Also importable as: `DS002908`, `Bogacz2020`, `Bogacz2024`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 13; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds002908](https://openneuro.org/datasets/ds002908) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds002908](https://nemar.org/dataexplorer/detail?dataset_id=ds002908) DOI: [https://doi.org/10.18112/openneuro.ds002908.v1.0.0](https://doi.org/10.18112/openneuro.ds002908.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS002908 >>> dataset = DS002908(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Bogacz2024']* ### *class* eegdash.dataset.DS003004(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Imagined Emotion Study * **Study:** `ds003004` (OpenNeuro) * **Author (year):** `Onton2020` * **Canonical:** — Also importable as: `DS003004`, `Onton2020`. Modality: `eeg`. Subjects: 34; recordings: 34; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003004](https://openneuro.org/datasets/ds003004) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003004](https://nemar.org/dataexplorer/detail?dataset_id=ds003004) DOI: [https://doi.org/10.18112/openneuro.ds003004.v1.1.1](https://doi.org/10.18112/openneuro.ds003004.v1.1.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003004 >>> dataset = DS003004(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003029(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Multicenter-Dataset * **Study:** `ds003029` (OpenNeuro) * **Author (year):** `Li2020` * **Canonical:** — Also importable as: `DS003029`, `Li2020`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 35; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003029](https://openneuro.org/datasets/ds003029) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003029](https://nemar.org/dataexplorer/detail?dataset_id=ds003029) DOI: [https://doi.org/10.18112/openneuro.ds003029.v1.0.5](https://doi.org/10.18112/openneuro.ds003029.v1.0.5) NEMAR citation count: 19 ### Examples ```pycon >>> from eegdash.dataset import DS003029 >>> dataset = DS003029(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003039(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) free walking study * **Study:** `ds003039` (OpenNeuro) * **Author (year):** `Jacobsen2020` * **Canonical:** — Also importable as: `DS003039`, `Jacobsen2020`. Modality: `eeg`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003039](https://openneuro.org/datasets/ds003039) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003039](https://nemar.org/dataexplorer/detail?dataset_id=ds003039) DOI: [https://doi.org/10.18112/openneuro.ds003039.v1.0.2](https://doi.org/10.18112/openneuro.ds003039.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003039 >>> dataset = DS003039(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003061(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data from an auditory oddball task * **Study:** `ds003061` (OpenNeuro) * **Author (year):** `Delorme2020_auditory_oddball` * **Canonical:** `Delorme` Also importable as: `DS003061`, `Delorme2020_auditory_oddball`, `Delorme`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003061](https://openneuro.org/datasets/ds003061) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003061](https://nemar.org/dataexplorer/detail?dataset_id=ds003061) DOI: [https://doi.org/10.18112/openneuro.ds003061.v1.1.0](https://doi.org/10.18112/openneuro.ds003061.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003061 >>> dataset = DS003061(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Delorme']* ### *class* eegdash.dataset.DS003078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PROBE iEEG * **Study:** `ds003078` (OpenNeuro) * **Author (year):** `DOMENECH2020` * **Canonical:** — Also importable as: `DS003078`, `DOMENECH2020`. Modality: `ieeg`; Experiment type: `Unknown`; Subject type: `Surgery`. Subjects: 6; recordings: 72; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003078](https://openneuro.org/datasets/ds003078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003078](https://nemar.org/dataexplorer/detail?dataset_id=ds003078) DOI: [https://doi.org/10.18112/openneuro.ds003078.v1.0.0](https://doi.org/10.18112/openneuro.ds003078.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003078 >>> dataset = DS003078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003082(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Mapping Dataset * **Study:** `ds003082` (OpenNeuro) * **Author (year):** `Cote2020` * **Canonical:** `Cote2015` Also importable as: `DS003082`, `Cote2020`, `Cote2015`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003082](https://openneuro.org/datasets/ds003082) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003082](https://nemar.org/dataexplorer/detail?dataset_id=ds003082) DOI: [https://doi.org/10.18112/openneuro.ds003082.v1.0.0](https://doi.org/10.18112/openneuro.ds003082.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003082 >>> dataset = DS003082(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Cote2015']* ### *class* eegdash.dataset.DS003104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MNE-somato-data-bids (anonymized) * **Study:** `ds003104` (OpenNeuro) * **Author (year):** `Parkkonen2020` * **Canonical:** `MNESomato`, `Somato`, `MNESomatoData` Also importable as: `DS003104`, `Parkkonen2020`, `MNESomato`, `Somato`, `MNESomatoData`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003104](https://openneuro.org/datasets/ds003104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003104](https://nemar.org/dataexplorer/detail?dataset_id=ds003104) DOI: [https://doi.org/10.18112/openneuro.ds003104.v1.0.1](https://doi.org/10.18112/openneuro.ds003104.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003104 >>> dataset = DS003104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MNESomato', 'Somato', 'MNESomatoData']* ### *class* eegdash.dataset.DS003190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assesment of the visual stimuli properties in P300 paradigm * **Study:** `ds003190` (OpenNeuro) * **Author (year):** `MendozaMontoya2020` * **Canonical:** — Also importable as: `DS003190`, `MendozaMontoya2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 384; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003190](https://openneuro.org/datasets/ds003190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003190](https://nemar.org/dataexplorer/detail?dataset_id=ds003190) DOI: [https://doi.org/10.18112/openneuro.ds003190.v1.0.1](https://doi.org/10.18112/openneuro.ds003190.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003190 >>> dataset = DS003190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroepo multisession * **Study:** `ds003194` (OpenNeuro) * **Author (year):** `Vega2020_Neuroepo` * **Canonical:** — Also importable as: `DS003194`, `Vega2020_Neuroepo`. Modality: `eeg`. Subjects: 15; recordings: 29; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003194](https://openneuro.org/datasets/ds003194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003194](https://nemar.org/dataexplorer/detail?dataset_id=ds003194) DOI: [https://doi.org/10.18112/openneuro.ds003194.v1.0.3](https://doi.org/10.18112/openneuro.ds003194.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003194 >>> dataset = DS003194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Placebo Neuroepo multisession * **Study:** `ds003195` (OpenNeuro) * **Author (year):** `Vega2020_Placebo` * **Canonical:** — Also importable as: `DS003195`, `Vega2020_Placebo`. Modality: `eeg`. 
Subjects: 10; recordings: 20; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
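The Notes above say `query` accepts MongoDB-style filters over metadata records. As a rough illustration of what such a filter selects, here is a toy matcher supporting plain equality and the `$in` operator, applied to hypothetical record dicts (the field names and values are made up for the example; real records follow eegdash's metadata schema):

```python
def matches(record: dict, query: dict) -> bool:
    """Minimal MongoDB-style matcher: plain equality plus the `$in` operator."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True


# Hypothetical metadata records, for illustration only.
records = [
    {"dataset": "ds003195", "subject": "01", "task": "taskA"},
    {"dataset": "ds003195", "subject": "02", "task": "taskB"},
]
selected = [r for r in records if matches(r, {"subject": {"$in": ["01"]}})]
print(len(selected))  # 1
```

Real queries are evaluated server-side against the metadata database, not locally like this sketch, but the selection semantics are the same.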
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003195](https://openneuro.org/datasets/ds003195) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003195](https://nemar.org/dataexplorer/detail?dataset_id=ds003195) DOI: [https://doi.org/10.18112/openneuro.ds003195.v1.0.3](https://doi.org/10.18112/openneuro.ds003195.v1.0.3) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003195 >>> dataset = DS003195(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Disentangling the percepts of illusory movement and sensory stimulation during tendon vibration in the EEG * **Study:** `ds003343` (OpenNeuro) * **Author (year):** `Schneider2020` * **Canonical:** — Also importable as: `DS003343`, `Schneider2020`. Modality: `eeg`. Subjects: 20; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003343](https://openneuro.org/datasets/ds003343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003343](https://nemar.org/dataexplorer/detail?dataset_id=ds003343) DOI: [https://doi.org/10.18112/openneuro.ds003343.v2.0.1](https://doi.org/10.18112/openneuro.ds003343.v2.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003343 >>> dataset = DS003343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003352(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 1 - Light Pink Spiral * **Study:** `ds003352` (OpenNeuro) * **Author (year):** `Hermann2020` * **Canonical:** `Hermann2021` Also importable as: `DS003352`, `Hermann2020`, `Hermann2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 18; recordings: 138; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003352](https://openneuro.org/datasets/ds003352) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003352](https://nemar.org/dataexplorer/detail?dataset_id=ds003352) DOI: [https://doi.org/10.18112/openneuro.ds003352.v1.0.0](https://doi.org/10.18112/openneuro.ds003352.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003352 >>> dataset = DS003352(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hermann2021']* ### *class* eegdash.dataset.DS003374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of neurons and intracranial EEG from human amygdala during aversive dynamic visual stimulation * **Study:** `ds003374` (OpenNeuro) * **Author (year):** `Fedele2020` * **Canonical:** — Also importable as: `DS003374`, `Fedele2020`. Modality: `ieeg`; Experiment type: `Affect`; Subject type: `Epilepsy`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003374](https://openneuro.org/datasets/ds003374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003374](https://nemar.org/dataexplorer/detail?dataset_id=ds003374) DOI: [https://doi.org/10.18112/openneuro.ds003374.v1.1.1](https://doi.org/10.18112/openneuro.ds003374.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003374 >>> dataset = DS003374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003380(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Corticothalamic communication under analgesia, sedation and gradual ischemia: a multimodal model of controlled gradual cerebral ischemia in pig * **Study:** `ds003380` (OpenNeuro) * **Author (year):** `Frasch2020` * **Canonical:** — Also importable as: `DS003380`, `Frasch2020`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 1; recordings: 5; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003380](https://openneuro.org/datasets/ds003380) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003380](https://nemar.org/dataexplorer/detail?dataset_id=ds003380) DOI: [https://doi.org/10.18112/openneuro.ds003380.v1.0.0](https://doi.org/10.18112/openneuro.ds003380.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS003380 >>> dataset = DS003380(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroSpin hMT+ Localizer DATA (MEG & aMRI) * **Study:** `ds003392` (OpenNeuro) * **Author (year):** `Zilber2020` * **Canonical:** — Also importable as: `DS003392`, `Zilber2020`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 33; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003392](https://openneuro.org/datasets/ds003392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003392](https://nemar.org/dataexplorer/detail?dataset_id=ds003392) DOI: [https://doi.org/10.18112/openneuro.ds003392.v1.0.4](https://doi.org/10.18112/openneuro.ds003392.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003392 >>> dataset = DS003392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 1) * **Study:** `ds003420` (OpenNeuro) * **Author (year):** `Mheich2020_HD` * **Canonical:** — Also importable as: `DS003420`, `Mheich2020_HD`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 92; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003420](https://openneuro.org/datasets/ds003420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003420](https://nemar.org/dataexplorer/detail?dataset_id=ds003420) DOI: [https://doi.org/10.18112/openneuro.ds003420.v1.0.2](https://doi.org/10.18112/openneuro.ds003420.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003420 >>> dataset = DS003420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003421(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HD-EEGtask(Dataset 2) * **Study:** `ds003421` (OpenNeuro) * **Author (year):** `Mheich2020_HD_EEGtask` * **Canonical:** — Also importable as: `DS003421`, `Mheich2020_HD_EEGtask`. Modality: `eeg`. Subjects: 20; recordings: 80; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003421](https://openneuro.org/datasets/ds003421) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003421](https://nemar.org/dataexplorer/detail?dataset_id=ds003421) DOI: [https://doi.org/10.18112/openneuro.ds003421.v1.0.2](https://doi.org/10.18112/openneuro.ds003421.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003421 >>> dataset = DS003421(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003458(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three armed bandit gambling task * **Study:** `ds003458` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three` * **Canonical:** — Also importable as: `DS003458`, `Cavanagh2021_Three`. Modality: `eeg`. 
Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
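The Notes above say a user-supplied `query` is ANDed with the dataset filter and must not contain the key `dataset`. The merge below is a minimal sketch of that behavior, not the library's actual implementation; the field names `subject` and `task` are illustrative stand-ins for members of `ALLOWED_QUERY_FIELDS`.

```python
# Sketch: how an extra MongoDB-style query is combined with the
# fixed dataset filter. Field names here are illustrative only;
# the real allowed set is ALLOWED_QUERY_FIELDS in eegdash.
dataset_filter = {"dataset": "ds003458"}
user_query = {"subject": {"$in": ["sub-001", "sub-002"]}, "task": "bandit"}

# The constructor rejects queries that try to override the dataset key.
assert "dataset" not in user_query

# Conceptually, both filters apply at once (an implicit AND).
merged = {**dataset_filter, **user_query}
print(merged)
```

A query written this way would then be passed as `DS003458(cache_dir="./data", query=user_query)`.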
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003458](https://openneuro.org/datasets/ds003458) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003458](https://nemar.org/dataexplorer/detail?dataset_id=ds003458) DOI: [https://doi.org/10.18112/openneuro.ds003458.v1.1.0](https://doi.org/10.18112/openneuro.ds003458.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003458 >>> dataset = DS003458(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003474(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Selection and Depression * **Study:** `ds003474` (OpenNeuro) * **Author (year):** `Cavanagh2021_Probabilistic` * **Canonical:** — Also importable as: `DS003474`, `Cavanagh2021_Probabilistic`. Modality: `eeg`. Subjects: 122; recordings: 122; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003474](https://openneuro.org/datasets/ds003474) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003474](https://nemar.org/dataexplorer/detail?dataset_id=ds003474) DOI: [https://doi.org/10.18112/openneuro.ds003474.v1.1.0](https://doi.org/10.18112/openneuro.ds003474.v1.1.0) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003474 >>> dataset = DS003474(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003478(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Depression rest * **Study:** `ds003478` (OpenNeuro) * **Author (year):** `Cavanagh2021_Depression` * **Canonical:** — Also importable as: `DS003478`, `Cavanagh2021_Depression`. Modality: `eeg`. Subjects: 122; recordings: 243; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003478](https://openneuro.org/datasets/ds003478) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003478](https://nemar.org/dataexplorer/detail?dataset_id=ds003478) DOI: [https://doi.org/10.18112/openneuro.ds003478.v1.1.0](https://doi.org/10.18112/openneuro.ds003478.v1.1.0) NEMAR citation count: 22 ### Examples ```pycon >>> from eegdash.dataset import DS003478 >>> dataset = DS003478(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Logical reasoning study * **Study:** `ds003483` (OpenNeuro) * **Author (year):** `Cognitive2021` * **Canonical:** `Maestu2021` Also importable as: `DS003483`, `Cognitive2021`, `Maestu2021`. Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 41; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003483](https://openneuro.org/datasets/ds003483) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003483](https://nemar.org/dataexplorer/detail?dataset_id=ds003483) DOI: [https://doi.org/10.18112/openneuro.ds003483.v1.0.2](https://doi.org/10.18112/openneuro.ds003483.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003483 >>> dataset = DS003483(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Maestu2021']* ### *class* eegdash.dataset.DS003490(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: 3-Stim Auditory Oddball and Rest in Parkinson’s * **Study:** `ds003490` (OpenNeuro) * **Author (year):** `Cavanagh2021_3` * **Canonical:** — Also importable as: `DS003490`, `Cavanagh2021_3`. Modality: `eeg`. Subjects: 50; recordings: 75; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003490](https://openneuro.org/datasets/ds003490) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003490](https://nemar.org/dataexplorer/detail?dataset_id=ds003490) DOI: [https://doi.org/10.18112/openneuro.ds003490.v1.1.0](https://doi.org/10.18112/openneuro.ds003490.v1.1.0) NEMAR citation count: 13 ### Examples ```pycon >>> from eegdash.dataset import DS003490 >>> dataset = DS003490(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003498(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) interictal iEEG during slow-wave sleep with HFO markings * **Study:** `ds003498` (OpenNeuro) * **Author (year):** `Fedele2021` * **Canonical:** — Also importable as: `DS003498`, `Fedele2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 20; recordings: 385; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003498](https://openneuro.org/datasets/ds003498) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003498](https://nemar.org/dataexplorer/detail?dataset_id=ds003498) DOI: [https://doi.org/10.18112/openneuro.ds003498.v1.0.1](https://doi.org/10.18112/openneuro.ds003498.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003498 >>> dataset = DS003498(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes * **Study:** `ds003505` (OpenNeuro) * **Author (year):** `Pascucci2021` * **Canonical:** `VEPCON` Also importable as: `DS003505`, `Pascucci2021`, `VEPCON`. Modality: `eeg`. Subjects: 19; recordings: 37; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003505](https://openneuro.org/datasets/ds003505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003505](https://nemar.org/dataexplorer/detail?dataset_id=ds003505) DOI: [https://doi.org/10.18112/openneuro.ds003505.v1.1.1](https://doi.org/10.18112/openneuro.ds003505.v1.1.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003505 >>> dataset = DS003505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VEPCON']* ### *class* eegdash.dataset.DS003506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Reinforcement Learning in Parkinson’s * **Study:** `ds003506` (OpenNeuro) * **Author (year):** `Cavanagh2021_Reinforcement` * **Canonical:** — Also importable as: `DS003506`, `Cavanagh2021_Reinforcement`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003506](https://openneuro.org/datasets/ds003506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003506](https://nemar.org/dataexplorer/detail?dataset_id=ds003506) DOI: [https://doi.org/10.18112/openneuro.ds003506.v1.1.0](https://doi.org/10.18112/openneuro.ds003506.v1.1.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003506 >>> dataset = DS003506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict in Parkinson’s * **Study:** `ds003509` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon` * **Canonical:** — Also importable as: `DS003509`, `Cavanagh2021_Simon`. Modality: `eeg`. Subjects: 56; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003509](https://openneuro.org/datasets/ds003509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003509](https://nemar.org/dataexplorer/detail?dataset_id=ds003509) DOI: [https://doi.org/10.18112/openneuro.ds003509.v1.1.0](https://doi.org/10.18112/openneuro.ds003509.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003509 >>> dataset = DS003509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Attended Speaker Paradigm (Own Name in Ignored Stream) * **Study:** `ds003516` (OpenNeuro) * **Author (year):** `Holtze2021` * **Canonical:** — Also importable as: `DS003516`, `Holtze2021`. Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003516](https://openneuro.org/datasets/ds003516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003516](https://nemar.org/dataexplorer/detail?dataset_id=ds003516) DOI: [https://doi.org/10.18112/openneuro.ds003516.v1.1.1](https://doi.org/10.18112/openneuro.ds003516.v1.1.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003516 >>> dataset = DS003516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Continuous gameplay of an 8-bit style video game * **Study:** `ds003517` (OpenNeuro) * **Author (year):** `Cavanagh2021_Continuous` * **Canonical:** — Also importable as: `DS003517`, `Cavanagh2021_Continuous`. Modality: `eeg`. Subjects: 17; recordings: 34; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
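The `data_dir` attribute is documented as `cache_dir / dataset_id`. The snippet below is a minimal `pathlib` sketch of that layout for this dataset; it only mirrors the documented convention and does not touch the network or the eegdash cache logic.

```python
from pathlib import Path

# data_dir is documented as cache_dir / dataset_id; sketch for ds003517.
cache_dir = Path("./data")
dataset_id = "ds003517"
data_dir = cache_dir / dataset_id
print(data_dir.as_posix())  # data/ds003517
```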
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003517](https://openneuro.org/datasets/ds003517) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003517](https://nemar.org/dataexplorer/detail?dataset_id=ds003517) DOI: [https://doi.org/10.18112/openneuro.ds003517.v1.1.0](https://doi.org/10.18112/openneuro.ds003517.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003517 >>> dataset = DS003517(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003518(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Simon Conflict w/ Reinforcement + Cabergoline Challenge * **Study:** `ds003518` (OpenNeuro) * **Author (year):** `Cavanagh2021_Simon_Conflict` * **Canonical:** — Also importable as: `DS003518`, `Cavanagh2021_Simon_Conflict`. Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003518](https://openneuro.org/datasets/ds003518) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003518](https://nemar.org/dataexplorer/detail?dataset_id=ds003518) DOI: [https://doi.org/10.18112/openneuro.ds003518.v1.1.0](https://doi.org/10.18112/openneuro.ds003518.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003518 >>> dataset = DS003518(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory + Cabergoline Challenge * **Study:** `ds003519` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual` * **Canonical:** — Also importable as: `DS003519`, `Cavanagh2021_Visual`. Modality: `eeg`. Subjects: 27; recordings: 54; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003519](https://openneuro.org/datasets/ds003519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003519](https://nemar.org/dataexplorer/detail?dataset_id=ds003519) DOI: [https://doi.org/10.18112/openneuro.ds003519.v1.1.0](https://doi.org/10.18112/openneuro.ds003519.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003519 >>> dataset = DS003519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Three-Stim Auditory Oddball and Rest in Acute and Chronic TBI * **Study:** `ds003522` (OpenNeuro) * **Author (year):** `Cavanagh2021_Three_Stim` * **Canonical:** — Also importable as: `DS003522`, `Cavanagh2021_Three_Stim`. Modality: `eeg`. Subjects: 96; recordings: 200; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003522](https://openneuro.org/datasets/ds003522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003522](https://nemar.org/dataexplorer/detail?dataset_id=ds003522) DOI: [https://doi.org/10.18112/openneuro.ds003522.v1.1.0](https://doi.org/10.18112/openneuro.ds003522.v1.1.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003522 >>> dataset = DS003522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Visual Working Memory in Acute TBI * **Study:** `ds003523` (OpenNeuro) * **Author (year):** `Cavanagh2021_Visual_Working` * **Canonical:** — Also importable as: `DS003523`, `Cavanagh2021_Visual_Working`. Modality: `eeg`. Subjects: 91; recordings: 221; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003523](https://openneuro.org/datasets/ds003523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003523](https://nemar.org/dataexplorer/detail?dataset_id=ds003523) DOI: [https://doi.org/10.18112/openneuro.ds003523.v1.1.0](https://doi.org/10.18112/openneuro.ds003523.v1.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003523 >>> dataset = DS003523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of EEG recordings of pediatric patients with epilepsy based on the 10-20 system * **Study:** `ds003555` (OpenNeuro) * **Author (year):** `Cserpan2021` * **Canonical:** — Also importable as: `DS003555`, `Cserpan2021`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003555](https://openneuro.org/datasets/ds003555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003555](https://nemar.org/dataexplorer/detail?dataset_id=ds003555) DOI: [https://doi.org/10.18112/openneuro.ds003555.v1.0.1](https://doi.org/10.18112/openneuro.ds003555.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003555 >>> dataset = DS003555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003568(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood induction in MDD and healthy adolescents * **Study:** `ds003568` (OpenNeuro) * **Author (year):** `Liuzzi2021` * **Canonical:** — Also importable as: `DS003568`, `Liuzzi2021`. Modality: `meg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 118; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003568](https://openneuro.org/datasets/ds003568) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003568](https://nemar.org/dataexplorer/detail?dataset_id=ds003568) DOI: [https://doi.org/10.18112/openneuro.ds003568.v1.0.2](https://doi.org/10.18112/openneuro.ds003568.v1.0.2) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003568 >>> dataset = DS003568(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003570(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Improvisation and Musical Structures * **Study:** `ds003570` (OpenNeuro) * **Author (year):** `Goldman2021` * **Canonical:** — Also importable as: `DS003570`, `Goldman2021`. Modality: `eeg`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003570](https://openneuro.org/datasets/ds003570) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003570](https://nemar.org/dataexplorer/detail?dataset_id=ds003570) DOI: [https://doi.org/10.18112/openneuro.ds003570.v1.0.0](https://doi.org/10.18112/openneuro.ds003570.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003570 >>> dataset = DS003570(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward biases spontaneous neural reactivation during sleep * **Study:** `ds003574` (OpenNeuro) * **Author (year):** `Sterpenich2021` * **Canonical:** — Also importable as: `DS003574`, `Sterpenich2021`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003574](https://openneuro.org/datasets/ds003574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003574](https://nemar.org/dataexplorer/detail?dataset_id=ds003574) DOI: [https://doi.org/10.18112/openneuro.ds003574.v1.0.2](https://doi.org/10.18112/openneuro.ds003574.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003574 >>> dataset = DS003574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Childhood Sexual Abuse and problem drinking in women: Neurobehavioral mechanisms * **Study:** `ds003602` (OpenNeuro) * **Author (year):** `Korucuoglu2021` * **Canonical:** — Also importable as: `DS003602`, `Korucuoglu2021`. Modality: `eeg`. Subjects: 118; recordings: 699; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003602](https://openneuro.org/datasets/ds003602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003602](https://nemar.org/dataexplorer/detail?dataset_id=ds003602) DOI: [https://doi.org/10.18112/openneuro.ds003602.v1.0.0](https://doi.org/10.18112/openneuro.ds003602.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003602 >>> dataset = DS003602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Runabout: A mobile EEG study of auditory oddball processing in laboratory and real-world conditions * **Study:** `ds003620` (OpenNeuro) * **Author (year):** `Liebherr2021` * **Canonical:** `Runabout` Also importable as: `DS003620`, `Liebherr2021`, `Runabout`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003620](https://openneuro.org/datasets/ds003620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003620](https://nemar.org/dataexplorer/detail?dataset_id=ds003620) DOI: [https://doi.org/10.18112/openneuro.ds003620.v1.1.1](https://doi.org/10.18112/openneuro.ds003620.v1.1.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003620 >>> dataset = DS003620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Runabout']* ### *class* eegdash.dataset.DS003626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Inner Speech * **Study:** `ds003626` (OpenNeuro) * **Author (year):** `Nieto2021` * **Canonical:** — Also importable as: `DS003626`, `Nieto2021`. Modality: `eeg`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003626](https://openneuro.org/datasets/ds003626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003626](https://nemar.org/dataexplorer/detail?dataset_id=ds003626) DOI: [https://doi.org/10.18112/openneuro.ds003626.v2.0.0](https://doi.org/10.18112/openneuro.ds003626.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003626 >>> dataset = DS003626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003633(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ForrestGump-MEG * **Study:** `ds003633` (OpenNeuro) * **Author (year):** `Liu2021` * **Canonical:** `ForrestGump_MEG` Also importable as: `DS003633`, `Liu2021`, `ForrestGump_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 96; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003633](https://openneuro.org/datasets/ds003633) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003633](https://nemar.org/dataexplorer/detail?dataset_id=ds003633) DOI: [https://doi.org/10.18112/openneuro.ds003633.v1.0.3](https://doi.org/10.18112/openneuro.ds003633.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003633 >>> dataset = DS003633(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ForrestGump_MEG']* ### *class* eegdash.dataset.DS003638(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Electrophysiological biomarkers of behavioral dimensions from cross-species paradigms * **Study:** `ds003638` (OpenNeuro) * **Author (year):** `Cavanagh2021_Electrophysiological` * **Canonical:** — Also importable as: `DS003638`, `Cavanagh2021_Electrophysiological`. Modality: `eeg`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003638](https://openneuro.org/datasets/ds003638) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003638](https://nemar.org/dataexplorer/detail?dataset_id=ds003638) DOI: [https://doi.org/10.18112/openneuro.ds003638.v1.0.0](https://doi.org/10.18112/openneuro.ds003638.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003638 >>> dataset = DS003638(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003645(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Face processing MEEG dataset with HED annotation * **Study:** `ds003645` (OpenNeuro) * **Author (year):** `Wakeman2021` * **Canonical:** — Also importable as: `DS003645`, `Wakeman2021`. Modality: `eeg, meg`. Subjects: 19; recordings: 224; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003645](https://openneuro.org/datasets/ds003645) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003645](https://nemar.org/dataexplorer/detail?dataset_id=ds003645) DOI: [https://doi.org/10.18112/openneuro.ds003645.v2.0.2](https://doi.org/10.18112/openneuro.ds003645.v2.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003645 >>> dataset = DS003645(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003655(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VerbalWorkingMemory * **Study:** `ds003655` (OpenNeuro) * **Author (year):** `Pavlov2021_VerbalWorkingMemory` * **Canonical:** — Also importable as: `DS003655`, `Pavlov2021_VerbalWorkingMemory`. Modality: `eeg`. Subjects: 156; recordings: 156; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003655](https://openneuro.org/datasets/ds003655) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003655](https://nemar.org/dataexplorer/detail?dataset_id=ds003655) DOI: [https://doi.org/10.18112/openneuro.ds003655.v1.0.0](https://doi.org/10.18112/openneuro.ds003655.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003655 >>> dataset = DS003655(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Concurrent EEG, ECG, and Behavior with Multiple Doses of transcranial Electrical Stimulation - BIDS * **Study:** `ds003670` (OpenNeuro) * **Author (year):** `Gebodh2021` * **Canonical:** — Also importable as: `DS003670`, `Gebodh2021`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 25; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003670](https://openneuro.org/datasets/ds003670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003670](https://nemar.org/dataexplorer/detail?dataset_id=ds003670) DOI: [https://doi.org/10.18112/openneuro.ds003670.v1.1.0](https://doi.org/10.18112/openneuro.ds003670.v1.1.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS003670 >>> dataset = DS003670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003682(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Model-based aversive learning in humans is supported by preferential task state reactivation * **Study:** `ds003682` (OpenNeuro) * **Author (year):** `Wise2021` * **Canonical:** — Also importable as: `DS003682`, `Wise2021`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 28; recordings: 336; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003682](https://openneuro.org/datasets/ds003682) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003682](https://nemar.org/dataexplorer/detail?dataset_id=ds003682) DOI: [https://doi.org/10.18112/openneuro.ds003682.v1.0.0](https://doi.org/10.18112/openneuro.ds003682.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003682 >>> dataset = DS003682(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film * **Study:** `ds003688` (OpenNeuro) * **Author (year):** `Berezutskaya2021` * **Canonical:** — Also importable as: `DS003688`, `Berezutskaya2021`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 51; recordings: 107; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003688](https://openneuro.org/datasets/ds003688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003688](https://nemar.org/dataexplorer/detail?dataset_id=ds003688) DOI: [https://doi.org/10.18112/openneuro.ds003688.v1.0.7](https://doi.org/10.18112/openneuro.ds003688.v1.0.7) NEMAR citation count: 9 ### Examples ```pycon >>> from eegdash.dataset import DS003688 >>> dataset = DS003688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003690(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, ECG and pupil data from young and older adults: rest and auditory cued reaction time tasks * **Study:** `ds003690` (OpenNeuro) * **Author (year):** `Ribeiro2021` * **Canonical:** — Also importable as: `DS003690`, `Ribeiro2021`. Modality: `eeg`. Subjects: 75; recordings: 375; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003690](https://openneuro.org/datasets/ds003690) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003690](https://nemar.org/dataexplorer/detail?dataset_id=ds003690) DOI: [https://doi.org/10.18112/openneuro.ds003690.v1.0.0](https://doi.org/10.18112/openneuro.ds003690.v1.0.0) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003690 >>> dataset = DS003690(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003694(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEGMEM * **Study:** `ds003694` (OpenNeuro) * **Author (year):** `Griffiths2021` * **Canonical:** `MEGMEM` Also importable as: `DS003694`, `Griffiths2021`, `MEGMEM`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 28; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003694](https://openneuro.org/datasets/ds003694) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003694](https://nemar.org/dataexplorer/detail?dataset_id=ds003694) DOI: [https://doi.org/10.18112/openneuro.ds003694.v1.0.0](https://doi.org/10.18112/openneuro.ds003694.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003694 >>> dataset = DS003694(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MEGMEM']* ### *class* eegdash.dataset.DS003702(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Memory cuing * **Study:** `ds003702` (OpenNeuro) * **Author (year):** `Gregory2021` * **Canonical:** — Also importable as: `DS003702`, `Gregory2021`. Modality: `eeg`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003702](https://openneuro.org/datasets/ds003702) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003702](https://nemar.org/dataexplorer/detail?dataset_id=ds003702) DOI: [https://doi.org/10.18112/openneuro.ds003702.v1.0.1](https://doi.org/10.18112/openneuro.ds003702.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003702 >>> dataset = DS003702(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Frequency Tagging of Syntactic Structure or Lexical Properties * **Study:** `ds003703` (OpenNeuro) * **Author (year):** `Kalenkovich2021` * **Canonical:** `Kalenkovich2019` Also importable as: `DS003703`, `Kalenkovich2021`, `Kalenkovich2019`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 102; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003703](https://openneuro.org/datasets/ds003703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003703](https://nemar.org/dataexplorer/detail?dataset_id=ds003703) DOI: [https://doi.org/10.18112/openneuro.ds003703.v1.0.0](https://doi.org/10.18112/openneuro.ds003703.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003703 >>> dataset = DS003703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kalenkovich2019']* ### *class* eegdash.dataset.DS003708(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Basis profile curve identification to understand electrical stimulation effects in human brain networks * **Study:** `ds003708` (OpenNeuro) * **Author (year):** `Hermes2021` * **Canonical:** `Miller2021` Also importable as: `DS003708`, `Hermes2021`, `Miller2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003708](https://openneuro.org/datasets/ds003708) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003708](https://nemar.org/dataexplorer/detail?dataset_id=ds003708) DOI: [https://doi.org/10.18112/openneuro.ds003708.v1.0.0](https://doi.org/10.18112/openneuro.ds003708.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003708 >>> dataset = DS003708(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Miller2021']* ### *class* eegdash.dataset.DS003710(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) APPLESEED Example Dataset * **Study:** `ds003710` (OpenNeuro) * **Author (year):** `Williams2021` * **Canonical:** `APPLESEED` Also importable as: `DS003710`, `Williams2021`, `APPLESEED`. Modality: `eeg`. Subjects: 13; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003710](https://openneuro.org/datasets/ds003710) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003710](https://nemar.org/dataexplorer/detail?dataset_id=ds003710) DOI: [https://doi.org/10.18112/openneuro.ds003710.v1.0.2](https://doi.org/10.18112/openneuro.ds003710.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003710 >>> dataset = DS003710(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['APPLESEED']* ### *class* eegdash.dataset.DS003739(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Perturbed beam-walking task * **Study:** `ds003739` (OpenNeuro) * **Author (year):** `Peterson2021_Perturbed_beam_walking` * **Canonical:** — Also importable as: `DS003739`, `Peterson2021_Perturbed_beam_walking`. Modality: `eeg`. Subjects: 30; recordings: 120; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003739](https://openneuro.org/datasets/ds003739) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003739](https://nemar.org/dataexplorer/detail?dataset_id=ds003739) DOI: [https://doi.org/10.18112/openneuro.ds003739.v1.0.2](https://doi.org/10.18112/openneuro.ds003739.v1.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003739 >>> dataset = DS003739(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003751(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset on Emotion with Naturalistic Stimuli (DENS) * **Study:** `ds003751` (OpenNeuro) * **Author (year):** `Mishra2021` * **Canonical:** `DENS` Also importable as: `DS003751`, `Mishra2021`, `DENS`. Modality: `eeg`. Subjects: 38; recordings: 38; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003751](https://openneuro.org/datasets/ds003751) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003751](https://nemar.org/dataexplorer/detail?dataset_id=ds003751) DOI: [https://doi.org/10.18112/openneuro.ds003751.v1.0.2](https://doi.org/10.18112/openneuro.ds003751.v1.0.2) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003751 >>> dataset = DS003751(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['DENS']* ### *class* eegdash.dataset.DS003753(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp * **Study:** `ds003753` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic` * **Canonical:** — Also importable as: `DS003753`, `Brown2021_Probabilistic`. 
Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003753](https://openneuro.org/datasets/ds003753) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003753](https://nemar.org/dataexplorer/detail?dataset_id=ds003753) ### Examples ```pycon >>> from eegdash.dataset import DS003753 >>> dataset = DS003753(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003766(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking * **Study:** `ds003766` (OpenNeuro) * **Author (year):** `Chen2021` * **Canonical:** — Also importable as: `DS003766`, `Chen2021`. Modality: `eeg`. Subjects: 31; recordings: 124; tasks: 4. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003766](https://openneuro.org/datasets/ds003766) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003766](https://nemar.org/dataexplorer/detail?dataset_id=ds003766) DOI: [https://doi.org/10.18112/openneuro.ds003766.v2.0.3](https://doi.org/10.18112/openneuro.ds003766.v2.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003766 >>> dataset = DS003766(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simultaneous EEG and fMRI signals during sleep from humans * **Study:** `ds003768` (OpenNeuro) * **Author (year):** `Gu2021` * **Canonical:** — Also importable as: `DS003768`, `Gu2021`. Modality: `eeg`. 
Subjects: 33; recordings: 255; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003768](https://openneuro.org/datasets/ds003768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003768](https://nemar.org/dataexplorer/detail?dataset_id=ds003768) DOI: [https://doi.org/10.18112/openneuro.ds003768.v1.0.0](https://doi.org/10.18112/openneuro.ds003768.v1.0.0) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS003768 >>> dataset = DS003768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Music Listening- Genre EEG dataset (MUSIN-G) * **Study:** `ds003774` (OpenNeuro) * **Author (year):** `Miyapuram2021` * **Canonical:** `MUSING` Also importable as: `DS003774`, `Miyapuram2021`, `MUSING`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 20; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003774](https://openneuro.org/datasets/ds003774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003774](https://nemar.org/dataexplorer/detail?dataset_id=ds003774) DOI: [https://doi.org/10.18112/openneuro.ds003774.v1.0.0](https://doi.org/10.18112/openneuro.ds003774.v1.0.0) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003774 >>> dataset = DS003774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MUSING']* ### *class* eegdash.dataset.DS003775(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SRM Resting-state EEG * **Study:** `ds003775` (OpenNeuro) * **Author (year):** `HatlestadHall2021` * **Canonical:** — Also importable as: `DS003775`, `HatlestadHall2021`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 111; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003775](https://openneuro.org/datasets/ds003775) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003775](https://nemar.org/dataexplorer/detail?dataset_id=ds003775) DOI: [https://doi.org/10.18112/openneuro.ds003775.v1.2.1](https://doi.org/10.18112/openneuro.ds003775.v1.2.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003775 >>> dataset = DS003775(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003800(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Gamma Entrainment * **Study:** `ds003800` (OpenNeuro) * **Author (year):** `Lahijanian2021_Auditory` * **Canonical:** — Also importable as: `DS003800`, `Lahijanian2021_Auditory`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 13; recordings: 24; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003800](https://openneuro.org/datasets/ds003800) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003800](https://nemar.org/dataexplorer/detail?dataset_id=ds003800) DOI: [https://doi.org/10.18112/openneuro.ds003800.v1.0.0](https://doi.org/10.18112/openneuro.ds003800.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS003800 >>> dataset = DS003800(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural Tracking to go * **Study:** `ds003801` (OpenNeuro) * **Author (year):** `Straetmans2021` * **Canonical:** — Also importable as: `DS003801`, `Straetmans2021`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003801](https://openneuro.org/datasets/ds003801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003801](https://nemar.org/dataexplorer/detail?dataset_id=ds003801) DOI: [https://doi.org/10.18112/openneuro.ds003801.v1.0.0](https://doi.org/10.18112/openneuro.ds003801.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003801 >>> dataset = DS003801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003805(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Gamma Entrainment * **Study:** `ds003805` (OpenNeuro) * **Author (year):** `Lahijanian2021_Multisensory` * **Canonical:** — Also importable as: `DS003805`, `Lahijanian2021_Multisensory`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003805](https://openneuro.org/datasets/ds003805) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003805](https://nemar.org/dataexplorer/detail?dataset_id=ds003805) DOI: [https://doi.org/10.18112/openneuro.ds003805.v1.0.0](https://doi.org/10.18112/openneuro.ds003805.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003805 >>> dataset = DS003805(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery vs Rest - Low-Cost EEG System * **Study:** `ds003810` (OpenNeuro) * **Author (year):** `Peterson2021_Motor_Imagery_vs` * **Canonical:** — Also importable as: `DS003810`, `Peterson2021_Motor_Imagery_vs`. Modality: `eeg`. Subjects: 10; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003810](https://openneuro.org/datasets/ds003810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003810](https://nemar.org/dataexplorer/detail?dataset_id=ds003810) DOI: [https://doi.org/10.18112/openneuro.ds003810.v2.0.2](https://doi.org/10.18112/openneuro.ds003810.v2.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003810 >>> dataset = DS003810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Effect of Buddhism Derived Loving Kindness Meditation on Modulating EEG: Long-term and Short-term Effect * **Study:** `ds003816` (OpenNeuro) * **Author (year):** `Sun2024` * **Canonical:** — Also importable as: `DS003816`, `Sun2024`. Modality: `eeg`. Subjects: 48; recordings: 1077; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003816](https://openneuro.org/datasets/ds003816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003816](https://nemar.org/dataexplorer/detail?dataset_id=ds003816) DOI: [https://doi.org/10.18112/openneuro.ds003816.v1.0.1](https://doi.org/10.18112/openneuro.ds003816.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS003816 >>> dataset = DS003816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003822(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Learning with Affective Feedback: Exp * **Study:** `ds003822` (OpenNeuro) * **Author (year):** `Brown2021_Probabilistic_Learning` * **Canonical:** — Also importable as: `DS003822`, `Brown2021_Probabilistic_Learning`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003822](https://openneuro.org/datasets/ds003822) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003822](https://nemar.org/dataexplorer/detail?dataset_id=ds003822) ### Examples ```pycon >>> from eegdash.dataset import DS003822 >>> dataset = DS003822(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003825(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts * **Study:** `ds003825` (OpenNeuro) * **Author (year):** `Grootswagers2021` * **Canonical:** `THINGS`, `THINGS_EEG` Also importable as: `DS003825`, `Grootswagers2021`, `THINGS`, `THINGS_EEG`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003825](https://openneuro.org/datasets/ds003825) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003825](https://nemar.org/dataexplorer/detail?dataset_id=ds003825) DOI: [https://doi.org/10.18112/openneuro.ds003825.v1.1.0](https://doi.org/10.18112/openneuro.ds003825.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003825 >>> dataset = DS003825(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['THINGS', 'THINGS_EEG']* ### *class* eegdash.dataset.DS003838(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG, pupillometry, ECG and photoplethysmography, and behavioral data in the digit span task and rest * **Study:** `ds003838` (OpenNeuro) * **Author (year):** `Pavlov2021_pupillometry` * **Canonical:** — Also importable as: `DS003838`, `Pavlov2021_pupillometry`. Modality: `eeg`. Subjects: 65; recordings: 130; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003838](https://openneuro.org/datasets/ds003838) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003838](https://nemar.org/dataexplorer/detail?dataset_id=ds003838) DOI: [https://doi.org/10.18112/openneuro.ds003838.v1.0.6](https://doi.org/10.18112/openneuro.ds003838.v1.0.6) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003838 >>> dataset = DS003838(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS -RESPect_intraoperative_iEEG * **Study:** `ds003844` (OpenNeuro) * **Author (year):** `Zweiphenning2021` * **Canonical:** `RESPect_intraop` Also importable as: `DS003844`, `Zweiphenning2021`, `RESPect_intraop`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003844](https://openneuro.org/datasets/ds003844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003844](https://nemar.org/dataexplorer/detail?dataset_id=ds003844) DOI: [https://doi.org/10.18112/openneuro.ds003844.v1.0.1](https://doi.org/10.18112/openneuro.ds003844.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS003844 >>> dataset = DS003844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_intraop']* ### *class* eegdash.dataset.DS003846(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Prediction Error * **Study:** `ds003846` (OpenNeuro) * **Author (year):** `Gehrke2021` * **Canonical:** — Also importable as: `DS003846`, `Gehrke2021`. Modality: `eeg`. Subjects: 19; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003846](https://openneuro.org/datasets/ds003846) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003846](https://nemar.org/dataexplorer/detail?dataset_id=ds003846) DOI: [https://doi.org/10.18112/openneuro.ds003846.v2.0.2](https://doi.org/10.18112/openneuro.ds003846.v2.0.2) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS003846 >>> dataset = DS003846(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset Clinical Epilepsy iEEG to BIDS - RESPect_longterm_iEEG * **Study:** `ds003848` (OpenNeuro) * **Author (year):** `Blooijs2021` * **Canonical:** `RESPect_longterm` Also importable as: `DS003848`, `Blooijs2021`, `RESPect_longterm`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 6; recordings: 22; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003848](https://openneuro.org/datasets/ds003848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003848](https://nemar.org/dataexplorer/detail?dataset_id=ds003848) DOI: [https://doi.org/10.18112/openneuro.ds003848.v1.0.3](https://doi.org/10.18112/openneuro.ds003848.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003848 >>> dataset = DS003848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_longterm']* ### *class* eegdash.dataset.DS003876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Epilepsy-iEEG-Interictal-Multicenter-Dataset * **Study:** `ds003876` (OpenNeuro) * **Author (year):** `Gunnarsdottir2021` * **Canonical:** — Also importable as: `DS003876`, `Gunnarsdottir2021`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 39; recordings: 54; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003876](https://openneuro.org/datasets/ds003876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003876](https://nemar.org/dataexplorer/detail?dataset_id=ds003876) DOI: [https://doi.org/10.18112/openneuro.ds003876.v1.0.2](https://doi.org/10.18112/openneuro.ds003876.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003876 >>> dataset = DS003876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003885(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1 * **Study:** `ds003885` (OpenNeuro) * **Author (year):** `Shatek2021_E1` * **Canonical:** — Also importable as: `DS003885`, `Shatek2021_E1`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003885](https://openneuro.org/datasets/ds003885) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003885](https://nemar.org/dataexplorer/detail?dataset_id=ds003885) DOI: [https://doi.org/10.18112/openneuro.ds003885.v1.0.7](https://doi.org/10.18112/openneuro.ds003885.v1.0.7) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS003885 >>> dataset = DS003885(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003887(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2 * **Study:** `ds003887` (OpenNeuro) * **Author (year):** `Shatek2021_E2` * **Canonical:** — Also importable as: `DS003887`, `Shatek2021_E2`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003887](https://openneuro.org/datasets/ds003887) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003887](https://nemar.org/dataexplorer/detail?dataset_id=ds003887) DOI: [https://doi.org/10.18112/openneuro.ds003887.v1.2.2](https://doi.org/10.18112/openneuro.ds003887.v1.2.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS003887 >>> dataset = DS003887(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003922(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multisensory Correlation Detector * **Study:** `ds003922` (OpenNeuro) * **Author (year):** `Lerousseau2021` * **Canonical:** — Also importable as: `DS003922`, `Lerousseau2021`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 14; recordings: 164; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003922](https://openneuro.org/datasets/ds003922) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003922](https://nemar.org/dataexplorer/detail?dataset_id=ds003922) DOI: [https://doi.org/10.18112/openneuro.ds003922.v1.0.1](https://doi.org/10.18112/openneuro.ds003922.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003922 >>> dataset = DS003922(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. Control Resting Task 1 * **Study:** `ds003944` (OpenNeuro) * **Author (year):** `Salisbury2021_First` * **Canonical:** — Also importable as: `DS003944`, `Salisbury2021_First`. Modality: `eeg`. Subjects: 82; recordings: 82; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003944](https://openneuro.org/datasets/ds003944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003944](https://nemar.org/dataexplorer/detail?dataset_id=ds003944) DOI: [https://doi.org/10.18112/openneuro.ds003944.v1.0.1](https://doi.org/10.18112/openneuro.ds003944.v1.0.1) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003944 >>> dataset = DS003944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003947(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: First Episode Psychosis vs. 
Control Resting Task 2 * **Study:** `ds003947` (OpenNeuro) * **Author (year):** `Salisbury2021_First_Episode` * **Canonical:** — Also importable as: `DS003947`, `Salisbury2021_First_Episode`. Modality: `eeg`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
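The `data_dir` attribute documented above is simply the cache directory joined with the dataset ID (`cache_dir / dataset_id`). A quick `pathlib` illustration of that convention (the paths are placeholders):

```python
from pathlib import Path

cache_dir = Path("./data")          # value passed as cache_dir
dataset_id = "ds003947"
data_dir = cache_dir / dataset_id   # where this dataset's files are cached
print(data_dir)                     # e.g. data/ds003947 on POSIX
```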
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003947](https://openneuro.org/datasets/ds003947) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003947](https://nemar.org/dataexplorer/detail?dataset_id=ds003947) DOI: [https://doi.org/10.18112/openneuro.ds003947.v1.0.1](https://doi.org/10.18112/openneuro.ds003947.v1.0.1) NEMAR citation count: 8 ### Examples ```pycon >>> from eegdash.dataset import DS003947 >>> dataset = DS003947(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003969(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meditation vs thinking task * **Study:** `ds003969` (OpenNeuro) * **Author (year):** `Delorme2021` * **Canonical:** — Also importable as: `DS003969`, `Delorme2021`. Modality: `eeg`. Subjects: 98; recordings: 392; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
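Because `query` accepts MongoDB-style filters, a multi-task dataset such as DS003969 above (four tasks) can be narrowed to a subset of recordings. The sketch below shows how such a filter selects metadata records; the simplified matcher and the task names are illustrative only (equality and `$in` cover only a fraction of MongoDB's operators, and actual filtering is handled by the eegdash backend):

```python
def matches(record: dict, query: dict) -> bool:
    """Simplified MongoDB-style matcher supporting equality and $in."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

# Illustrative records with placeholder task names.
records = [
    {"dataset": "ds003969", "subject": "01", "task": "meditation"},
    {"dataset": "ds003969", "subject": "01", "task": "thinking"},
]
query = {"task": {"$in": ["meditation"]}}
print([r["task"] for r in records if matches(r, query)])  # ['meditation']
```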
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003969](https://openneuro.org/datasets/ds003969) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003969](https://nemar.org/dataexplorer/detail?dataset_id=ds003969) DOI: [https://doi.org/10.18112/openneuro.ds003969.v1.0.0](https://doi.org/10.18112/openneuro.ds003969.v1.0.0) NEMAR citation count: 7 ### Examples ```pycon >>> from eegdash.dataset import DS003969 >>> dataset = DS003969(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS003987(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Amphetamine trials 5CCPT and Probabilistic Learning * **Study:** `ds003987` (OpenNeuro) * **Author (year):** `Cavanagh2022_Amphetamine_trials_5` * **Canonical:** — Also importable as: `DS003987`, `Cavanagh2022_Amphetamine_trials_5`. Modality: `eeg`. Subjects: 23; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds003987](https://openneuro.org/datasets/ds003987) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds003987](https://nemar.org/dataexplorer/detail?dataset_id=ds003987) DOI: [https://doi.org/10.18112/openneuro.ds003987.v1.0.0](https://doi.org/10.18112/openneuro.ds003987.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS003987 >>> dataset = DS003987(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004000(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fribourg Ultimatum Game in Schizophrenia Study * **Study:** `ds004000` (OpenNeuro) * **Author (year):** `Padee2022` * **Canonical:** — Also importable as: `DS004000`, `Padee2022`. Modality: `eeg`. Subjects: 43; recordings: 86; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004000](https://openneuro.org/datasets/ds004000) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004000](https://nemar.org/dataexplorer/detail?dataset_id=ds004000) DOI: [https://doi.org/10.18112/openneuro.ds004000.v1.0.0](https://doi.org/10.18112/openneuro.ds004000.v1.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004000 >>> dataset = DS004000(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004010(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MAVIS * **Study:** `ds004010` (OpenNeuro) * **Author (year):** `Waschke2022` * **Canonical:** `MAVIS` Also importable as: `DS004010`, `Waschke2022`, `MAVIS`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004010](https://openneuro.org/datasets/ds004010) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004010](https://nemar.org/dataexplorer/detail?dataset_id=ds004010) DOI: [https://doi.org/10.18112/openneuro.ds004010.v1.0.0](https://doi.org/10.18112/openneuro.ds004010.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004010 >>> dataset = DS004010(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAVIS']* ### *class* eegdash.dataset.DS004011(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The nature of neural object representations during dynamic occlusion * **Study:** `ds004011` (OpenNeuro) * **Author (year):** `Teichmann2022` * **Canonical:** — Also importable as: `DS004011`, `Teichmann2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 22; recordings: 132; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004011](https://openneuro.org/datasets/ds004011) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004011](https://nemar.org/dataexplorer/detail?dataset_id=ds004011) DOI: [https://doi.org/10.18112/openneuro.ds004011.v1.0.3](https://doi.org/10.18112/openneuro.ds004011.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004011 >>> dataset = DS004011(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BRAR_NQ * **Study:** `ds004012` (OpenNeuro) * **Author (year):** `Rani2022` * **Canonical:** `Rani2019` Also importable as: `DS004012`, `Rani2022`, `Rani2019`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 30; recordings: 294; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004012](https://openneuro.org/datasets/ds004012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004012](https://nemar.org/dataexplorer/detail?dataset_id=ds004012) DOI: [https://doi.org/10.18112/openneuro.ds004012.v1.0.0](https://doi.org/10.18112/openneuro.ds004012.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004012 >>> dataset = DS004012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Rani2019']* ### *class* eegdash.dataset.DS004015(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Attended speaker paradigm (cEEGrid data) * **Study:** `ds004015` (OpenNeuro) * **Author (year):** `Holtze2022_Attended` * **Canonical:** — Also importable as: `DS004015`, `Holtze2022_Attended`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004015](https://openneuro.org/datasets/ds004015) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004015](https://nemar.org/dataexplorer/detail?dataset_id=ds004015) DOI: [https://doi.org/10.18112/openneuro.ds004015.v1.0.2](https://doi.org/10.18112/openneuro.ds004015.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004015 >>> dataset = DS004015(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004017(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Embodied Learning for Literacy EEG * **Study:** `ds004017` (OpenNeuro) * **Author (year):** `Damsgaard2022` * **Canonical:** — Also importable as: `DS004017`, `Damsgaard2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 63; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004017](https://openneuro.org/datasets/ds004017) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004017](https://nemar.org/dataexplorer/detail?dataset_id=ds004017) DOI: [https://doi.org/10.18112/openneuro.ds004017.v1.0.3](https://doi.org/10.18112/openneuro.ds004017.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004017 >>> dataset = DS004017(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings for 200 object images presented in RSVP sequences at 5Hz or 20Hz * **Study:** `ds004018` (OpenNeuro) * **Author (year):** `Grootswagers2022_RSVP` * **Canonical:** — Also importable as: `DS004018`, `Grootswagers2022_RSVP`. Modality: `eeg`. Subjects: 16; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004018](https://openneuro.org/datasets/ds004018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004018](https://nemar.org/dataexplorer/detail?dataset_id=ds004018) DOI: [https://doi.org/10.18112/openneuro.ds004018.v2.0.0](https://doi.org/10.18112/openneuro.ds004018.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004018 >>> dataset = DS004018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004019(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on arithmetic processing in preteens with high and low math skills. An event-related potentials study * **Study:** `ds004019` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect` * **Canonical:** — Also importable as: `DS004019`, `AlatorreCruz2022_Effect`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Obese`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004019](https://openneuro.org/datasets/ds004019) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004019](https://nemar.org/dataexplorer/detail?dataset_id=ds004019) DOI: [https://doi.org/10.18112/openneuro.ds004019.v1.0.0](https://doi.org/10.18112/openneuro.ds004019.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004019 >>> dataset = DS004019(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004022(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal EEG and fNIRS Biosignal Acquisition during Motor Imagery Tasks in Patients with Orthopedic Impairment * **Study:** `ds004022` (OpenNeuro) * **Author (year):** `Lee2022` * **Canonical:** — Also importable as: `DS004022`, `Lee2022`. Modality: `eeg`. Subjects: 7; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004022](https://openneuro.org/datasets/ds004022) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004022](https://nemar.org/dataexplorer/detail?dataset_id=ds004022) DOI: [https://doi.org/10.18112/openneuro.ds004022.v1.0.0](https://doi.org/10.18112/openneuro.ds004022.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004022 >>> dataset = DS004022(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004024(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMS-EEG-MRI-fMRI-DWI data on paired associative stimulation and connectivity (Shirley Ryan AbilityLab, Chicago, IL) * **Study:** `ds004024` (OpenNeuro) * **Author (year):** `Pavon2022` * **Canonical:** — Also importable as: `DS004024`, `Pavon2022`. Modality: `eeg`. Subjects: 13; recordings: 497; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004024](https://openneuro.org/datasets/ds004024) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004024](https://nemar.org/dataexplorer/detail?dataset_id=ds004024) DOI: [https://doi.org/10.18112/openneuro.ds004024.v1.0.1](https://doi.org/10.18112/openneuro.ds004024.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004024 >>> dataset = DS004024(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrode walking study * **Study:** `ds004033` (OpenNeuro) * **Author (year):** `Scanlon2022` * **Canonical:** — Also importable as: `DS004033`, `Scanlon2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 36; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004033](https://openneuro.org/datasets/ds004033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004033](https://nemar.org/dataexplorer/detail?dataset_id=ds004033) DOI: [https://doi.org/10.18112/openneuro.ds004033.v1.0.0](https://doi.org/10.18112/openneuro.ds004033.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004033 >>> dataset = DS004033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Trance channeling EEG study * **Study:** `ds004040` (OpenNeuro) * **Author (year):** `Cannard2022` * **Canonical:** — Also importable as: `DS004040`, `Cannard2022`. Modality: `eeg`. Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004040](https://openneuro.org/datasets/ds004040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004040](https://nemar.org/dataexplorer/detail?dataset_id=ds004040) DOI: [https://doi.org/10.18112/openneuro.ds004040.v1.0.0](https://doi.org/10.18112/openneuro.ds004040.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004040 >>> dataset = DS004040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004043(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes * **Study:** `ds004043` (OpenNeuro) * **Author (year):** `Moerel2022_time` * **Canonical:** — Also importable as: `DS004043`, `Moerel2022_time`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004043](https://openneuro.org/datasets/ds004043) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004043](https://nemar.org/dataexplorer/detail?dataset_id=ds004043) DOI: [https://doi.org/10.18112/openneuro.ds004043.v1.1.0](https://doi.org/10.18112/openneuro.ds004043.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004043 >>> dataset = DS004043(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004067(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Moral conviction and metacognitive ability shape multiple stages of information processing * **Study:** `ds004067` (OpenNeuro) * **Author (year):** `Yoder2022` * **Canonical:** — Also importable as: `DS004067`, `Yoder2022`. Modality: `eeg`. Subjects: 80; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004067](https://openneuro.org/datasets/ds004067) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004067](https://nemar.org/dataexplorer/detail?dataset_id=ds004067) DOI: [https://doi.org/10.18112/openneuro.ds004067.v1.0.1](https://doi.org/10.18112/openneuro.ds004067.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004067 >>> dataset = DS004067(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004075(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) what_are_we_talking_about * **Study:** `ds004075` (OpenNeuro) * **Author (year):** `Boncz2022` * **Canonical:** — Also importable as: `DS004075`, `Boncz2022`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 29; recordings: 116; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004075](https://openneuro.org/datasets/ds004075) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004075](https://nemar.org/dataexplorer/detail?dataset_id=ds004075) DOI: [https://doi.org/10.18112/openneuro.ds004075.v1.0.0](https://doi.org/10.18112/openneuro.ds004075.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004075 >>> dataset = DS004075(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004078(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A synchronized multimodal neuroimaging dataset to study brain language processing * **Study:** `ds004078` (OpenNeuro) * **Author (year):** `Wang2022_StudyBRAIN` * **Canonical:** — Also importable as: `DS004078`, `Wang2022_StudyBRAIN`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 12; recordings: 720; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004078](https://openneuro.org/datasets/ds004078) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004078](https://nemar.org/dataexplorer/detail?dataset_id=ds004078) DOI: [https://doi.org/10.18112/openneuro.ds004078.v1.0.4](https://doi.org/10.18112/openneuro.ds004078.v1.0.4) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004078 >>> dataset = DS004078(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004080(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CCEP ECoG dataset across age 4-51 * **Study:** `ds004080` (OpenNeuro) * **Author (year):** `Blooijs2023_CCEP_ECoG` * **Canonical:** `RESPect_CCEP` Also importable as: `DS004080`, `Blooijs2023_CCEP_ECoG`, `RESPect_CCEP`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. 
Subjects: 74; recordings: 117; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
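The "MongoDB-style filters" mentioned in the notes above combine exact field matches with operator documents. A toy matcher sketching how such a filter would select metadata records (the field names, record values, and the `$in`-only operator subset are illustrative assumptions; the real library delegates matching to its metadata backend):

```python
# Toy evaluator for a tiny subset of MongoDB-style filters, illustrating
# the query semantics described above. Only equality and "$in" are shown.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical metadata records for illustration only.
records = [
    {"dataset": "ds004080", "subject": "sub-01"},
    {"dataset": "ds004080", "subject": "sub-02"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["sub-01"]}})]
print(len(hits))  # 1
```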
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004080](https://openneuro.org/datasets/ds004080) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004080](https://nemar.org/dataexplorer/detail?dataset_id=ds004080) DOI: [https://doi.org/10.18112/openneuro.ds004080.v1.2.4](https://doi.org/10.18112/openneuro.ds004080.v1.2.4) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004080 >>> dataset = DS004080(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['RESPect_CCEP']* ### *class* eegdash.dataset.DS004100(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HUP iEEG Epilepsy Dataset * **Study:** `ds004100` (OpenNeuro) * **Author (year):** `Bernabei2022` * **Canonical:** `HUPiEEG` Also importable as: `DS004100`, `Bernabei2022`, `HUPiEEG`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 57; recordings: 319; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004100](https://openneuro.org/datasets/ds004100) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004100](https://nemar.org/dataexplorer/detail?dataset_id=ds004100) DOI: [https://doi.org/10.18112/openneuro.ds004100.v1.1.3](https://doi.org/10.18112/openneuro.ds004100.v1.1.3) NEMAR citation count: 21 ### Examples ```pycon >>> from eegdash.dataset import DS004100 >>> dataset = DS004100(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HUPiEEG']* ### *class* eegdash.dataset.DS004105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Auditory Cueing * **Study:** `ds004105` (OpenNeuro) * **Author (year):** `Garcia2022` * **Canonical:** `BCIT_Auditory_Cueing` Also importable as: `DS004105`, `Garcia2022`, `BCIT_Auditory_Cueing`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004105](https://openneuro.org/datasets/ds004105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004105](https://nemar.org/dataexplorer/detail?dataset_id=ds004105) DOI: [https://doi.org/10.18112/openneuro.ds004105.v1.0.0](https://doi.org/10.18112/openneuro.ds004105.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004105 >>> dataset = DS004105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT_Auditory_Cueing']* ### *class* eegdash.dataset.DS004106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Advanced Guard Duty * **Study:** `ds004106` (OpenNeuro) * **Author (year):** `Touryan2022` * **Canonical:** `BCITAdvancedGuardDuty` Also importable as: `DS004106`, `Touryan2022`, `BCITAdvancedGuardDuty`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 27; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004106](https://openneuro.org/datasets/ds004106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004106](https://nemar.org/dataexplorer/detail?dataset_id=ds004106) DOI: [https://doi.org/10.18112/openneuro.ds004106.v1.0.0](https://doi.org/10.18112/openneuro.ds004106.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004106 >>> dataset = DS004106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITAdvancedGuardDuty']* ### *class* eegdash.dataset.DS004107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MIND DATA * **Study:** `ds004107` (OpenNeuro) * **Author (year):** `Weisend2022` * **Canonical:** `Weisend2007` Also importable as: `DS004107`, `Weisend2022`, `Weisend2007`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 9; recordings: 89; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004107](https://openneuro.org/datasets/ds004107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004107](https://nemar.org/dataexplorer/detail?dataset_id=ds004107) DOI: [https://doi.org/10.18112/openneuro.ds004107.v1.0.0](https://doi.org/10.18112/openneuro.ds004107.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004107 >>> dataset = DS004107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Weisend2007']* ### *class* eegdash.dataset.DS004117(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sternberg Working Memory * **Study:** `ds004117` (OpenNeuro) * **Author (year):** `Onton2022` * **Canonical:** — Also importable as: `DS004117`, `Onton2022`. Modality: `eeg`. Subjects: 23; recordings: 85; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004117](https://openneuro.org/datasets/ds004117) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004117](https://nemar.org/dataexplorer/detail?dataset_id=ds004117) DOI: [https://doi.org/10.18112/openneuro.ds004117.v1.0.1](https://doi.org/10.18112/openneuro.ds004117.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004117 >>> dataset = DS004117(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Calibration Driving * **Study:** `ds004118` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Calibration` * **Canonical:** `Touryan1999` Also importable as: `DS004118`, `Touryan2022_BCIT_Calibration`, `Touryan1999`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 156; recordings: 247; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004118](https://openneuro.org/datasets/ds004118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004118](https://nemar.org/dataexplorer/detail?dataset_id=ds004118) DOI: [https://doi.org/10.18112/openneuro.ds004118.v1.0.1](https://doi.org/10.18112/openneuro.ds004118.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004118 >>> dataset = DS004118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Touryan1999']* ### *class* eegdash.dataset.DS004119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Basic Guard Duty * **Study:** `ds004119` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Basic` * **Canonical:** `BCIT` Also importable as: `DS004119`, `Touryan2022_BCIT_Basic`, `BCIT`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 22; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004119](https://openneuro.org/datasets/ds004119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004119](https://nemar.org/dataexplorer/detail?dataset_id=ds004119) DOI: [https://doi.org/10.18112/openneuro.ds004119.v1.0.0](https://doi.org/10.18112/openneuro.ds004119.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004119 >>> dataset = DS004119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT']* ### *class* eegdash.dataset.DS004120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Baseline Driving * **Study:** `ds004120` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Baseline` * **Canonical:** `BCITBaselineDriving` Also importable as: `DS004120`, `Touryan2022_BCIT_Baseline`, `BCITBaselineDriving`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 109; recordings: 131; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004120](https://openneuro.org/datasets/ds004120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004120](https://nemar.org/dataexplorer/detail?dataset_id=ds004120) DOI: [https://doi.org/10.18112/openneuro.ds004120.v1.0.0](https://doi.org/10.18112/openneuro.ds004120.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004120 >>> dataset = DS004120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITBaselineDriving']* ### *class* eegdash.dataset.DS004121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Mind Wandering * **Study:** `ds004121` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Mind` * **Canonical:** `BCITMindWandering` Also importable as: `DS004121`, `Touryan2022_BCIT_Mind`, `BCITMindWandering`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004121](https://openneuro.org/datasets/ds004121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004121](https://nemar.org/dataexplorer/detail?dataset_id=ds004121) DOI: [https://doi.org/10.18112/openneuro.ds004121.v1.0.0](https://doi.org/10.18112/openneuro.ds004121.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004121 >>> dataset = DS004121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCITMindWandering']* ### *class* eegdash.dataset.DS004122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Speed Control * **Study:** `ds004122` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Speed` * **Canonical:** — Also importable as: `DS004122`, `Touryan2022_BCIT_Speed`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 32; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004122](https://openneuro.org/datasets/ds004122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004122](https://nemar.org/dataexplorer/detail?dataset_id=ds004122) DOI: [https://doi.org/10.18112/openneuro.ds004122.v1.0.0](https://doi.org/10.18112/openneuro.ds004122.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004122 >>> dataset = DS004122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIT Traffic Complexity * **Study:** `ds004123` (OpenNeuro) * **Author (year):** `Touryan2022_BCIT_Traffic` * **Canonical:** `BCIT_Traffic_Complexity` Also importable as: `DS004123`, `Touryan2022_BCIT_Traffic`, `BCIT_Traffic_Complexity`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 29; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004123](https://openneuro.org/datasets/ds004123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004123](https://nemar.org/dataexplorer/detail?dataset_id=ds004123) DOI: [https://doi.org/10.18112/openneuro.ds004123.v1.0.0](https://doi.org/10.18112/openneuro.ds004123.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004123 >>> dataset = DS004123(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIT_Traffic_Complexity']* ### *class* eegdash.dataset.DS004127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory Cortex Rat DISC Data * **Study:** `ds004127` (OpenNeuro) * **Author (year):** `Abrego2022` * **Canonical:** — Also importable as: `DS004127`, `Abrego2022`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 8; recordings: 73; tasks: 11. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004127](https://openneuro.org/datasets/ds004127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004127](https://nemar.org/dataexplorer/detail?dataset_id=ds004127) DOI: [https://doi.org/10.18112/openneuro.ds004127.v3.0.0](https://doi.org/10.18112/openneuro.ds004127.v3.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004127 >>> dataset = DS004127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Average Task Value * **Study:** `ds004147` (OpenNeuro) * **Author (year):** `Hassall2022_Average` * **Canonical:** — Also importable as: `DS004147`, `Hassall2022_Average`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004147](https://openneuro.org/datasets/ds004147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004147](https://nemar.org/dataexplorer/detail?dataset_id=ds004147) DOI: [https://doi.org/10.18112/openneuro.ds004147.v1.0.2](https://doi.org/10.18112/openneuro.ds004147.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004147 >>> dataset = DS004147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A test-retest resting and cognitive state EEG dataset * **Study:** `ds004148` (OpenNeuro) * **Author (year):** `Wang2022_test_retest_resting` * **Canonical:** — Also importable as: `DS004148`, `Wang2022_test_retest_resting`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 60; recordings: 900; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004148](https://openneuro.org/datasets/ds004148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004148](https://nemar.org/dataexplorer/detail?dataset_id=ds004148) DOI: [https://doi.org/10.18112/openneuro.ds004148.v1.0.0](https://doi.org/10.18112/openneuro.ds004148.v1.0.0) NEMAR citation count: 12 ### Examples ```pycon >>> from eegdash.dataset import DS004148 >>> dataset = DS004148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effect of obesity on inhibitory control in preadolescents during stop-signal task. An event-related potentials study * **Study:** `ds004151` (OpenNeuro) * **Author (year):** `AlatorreCruz2022_Effect_obesity` * **Canonical:** — Also importable as: `DS004151`, `AlatorreCruz2022_Effect_obesity`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Obese`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004151](https://openneuro.org/datasets/ds004151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004151](https://nemar.org/dataexplorer/detail?dataset_id=ds004151) DOI: [https://doi.org/10.18112/openneuro.ds004151.v1.0.0](https://doi.org/10.18112/openneuro.ds004151.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004151 >>> dataset = DS004151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Drum Trainer * **Study:** `ds004152` (OpenNeuro) * **Author (year):** `Hassall2022_Drum` * **Canonical:** — Also importable as: `DS004152`, `Hassall2022_Drum`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004152](https://openneuro.org/datasets/ds004152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004152](https://nemar.org/dataexplorer/detail?dataset_id=ds004152) DOI: [https://doi.org/10.18112/openneuro.ds004152.v1.1.2](https://doi.org/10.18112/openneuro.ds004152.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004152 >>> dataset = DS004152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Effects of Forward and Backward Span Trainings on Working Memory: Evidence from a Randomized, Controlled Trial * **Study:** `ds004166` (OpenNeuro) * **Author (year):** `Li2022` * **Canonical:** — Also importable as: `DS004166`, `Li2022`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 71; recordings: 213; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004166](https://openneuro.org/datasets/ds004166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004166](https://nemar.org/dataexplorer/detail?dataset_id=ds004166) DOI: [https://doi.org/10.18112/openneuro.ds004166.v1.0.0](https://doi.org/10.18112/openneuro.ds004166.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004166 >>> dataset = DS004166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual ECoG dataset * **Study:** `ds004194` (OpenNeuro) * **Author (year):** `Groen2022` * **Canonical:** — Also importable as: `DS004194`, `Groen2022`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 14; recordings: 209; tasks: 7. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004194](https://openneuro.org/datasets/ds004194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004194](https://nemar.org/dataexplorer/detail?dataset_id=ds004194) DOI: [https://doi.org/10.18112/openneuro.ds004194.v3.0.0](https://doi.org/10.18112/openneuro.ds004194.v3.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004194 >>> dataset = DS004194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bimodal dataset on Inner speech * **Study:** `ds004196` (OpenNeuro) * **Author (year):** `Liwicki2022` * **Canonical:** — Also importable as: `DS004196`, `Liwicki2022`. Modality: `eeg`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004196](https://openneuro.org/datasets/ds004196) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004196](https://nemar.org/dataexplorer/detail?dataset_id=ds004196) DOI: [https://doi.org/10.18112/openneuro.ds004196.v2.0.2](https://doi.org/10.18112/openneuro.ds004196.v2.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004196 >>> dataset = DS004196(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Temporal Scaling * **Study:** `ds004200` (OpenNeuro) * **Author (year):** `Hassall2022_Temporal` * **Canonical:** — Also importable as: `DS004200`, `Hassall2022_Temporal`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004200](https://openneuro.org/datasets/ds004200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004200](https://nemar.org/dataexplorer/detail?dataset_id=ds004200) DOI: [https://doi.org/10.18112/openneuro.ds004200.v1.0.1](https://doi.org/10.18112/openneuro.ds004200.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004200 >>> dataset = DS004200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-MEG * **Study:** `ds004212` (OpenNeuro) * **Author (year):** `Hebart2022` * **Canonical:** `THINGS_MEG`, `THINGSMEG` Also importable as: `DS004212`, `Hebart2022`, `THINGS_MEG`, `THINGSMEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 500; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004212](https://openneuro.org/datasets/ds004212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004212](https://nemar.org/dataexplorer/detail?dataset_id=ds004212) DOI: [https://doi.org/10.18112/openneuro.ds004212.v3.0.0](https://doi.org/10.18112/openneuro.ds004212.v3.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004212 >>> dataset = DS004212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['THINGS_MEG', 'THINGSMEG']* ### *class* eegdash.dataset.DS004229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) amnoise * **Study:** `ds004229` (OpenNeuro) * **Author (year):** `Mittag2022` * **Canonical:** — Also importable as: `DS004229`, `Mittag2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Dyslexia`. Subjects: 2; recordings: 3; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004229](https://openneuro.org/datasets/ds004229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004229](https://nemar.org/dataexplorer/detail?dataset_id=ds004229) DOI: [https://doi.org/10.18112/openneuro.ds004229.v1.0.3](https://doi.org/10.18112/openneuro.ds004229.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004229 >>> dataset = DS004229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004252(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rotation-tolerant representations elucidate the time course of high-level object processing * **Study:** `ds004252` (OpenNeuro) * **Author (year):** `Moerel2022_Rotation` * **Canonical:** — Also importable as: `DS004252`, `Moerel2022_Rotation`. Modality: `eeg`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004252](https://openneuro.org/datasets/ds004252) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004252](https://nemar.org/dataexplorer/detail?dataset_id=ds004252) DOI: [https://doi.org/10.18112/openneuro.ds004252.v1.0.2](https://doi.org/10.18112/openneuro.ds004252.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004252 >>> dataset = DS004252(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Encoding of Sound Source Elevation in Human Cortex * **Study:** `ds004256` (OpenNeuro) * **Author (year):** `Bialas2022` * **Canonical:** — Also importable as: `DS004256`, `Bialas2022`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 53; recordings: 53; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004256](https://openneuro.org/datasets/ds004256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004256](https://nemar.org/dataexplorer/detail?dataset_id=ds004256) DOI: [https://doi.org/10.18112/openneuro.ds004256.v1.0.5](https://doi.org/10.18112/openneuro.ds004256.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004256 >>> dataset = DS004256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Continuous Feedback Processing * **Study:** `ds004262` (OpenNeuro) * **Author (year):** `Hassall2022_Continuous` * **Canonical:** — Also importable as: `DS004262`, `Hassall2022_Continuous`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004262](https://openneuro.org/datasets/ds004262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004262](https://nemar.org/dataexplorer/detail?dataset_id=ds004262) DOI: [https://doi.org/10.18112/openneuro.ds004262.v1.0.0](https://doi.org/10.18112/openneuro.ds004262.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004262 >>> dataset = DS004262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Steer the Ship * **Study:** `ds004264` (OpenNeuro) * **Author (year):** `Hassall2022_Steer` * **Canonical:** — Also importable as: `DS004264`, `Hassall2022_Steer`. Modality: `eeg`. Subjects: 21; recordings: 21; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004264](https://openneuro.org/datasets/ds004264) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004264](https://nemar.org/dataexplorer/detail?dataset_id=ds004264) DOI: [https://doi.org/10.18112/openneuro.ds004264.v1.1.0](https://doi.org/10.18112/openneuro.ds004264.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004264 >>> dataset = DS004264(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004276(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory single word recognition in MEG * **Study:** `ds004276` (OpenNeuro) * **Author (year):** `Gaston2022` * **Canonical:** — Also importable as: `DS004276`, `Gaston2022`. 
Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
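Since `query` accepts MongoDB-style filters, operators such as `$in` can narrow the selection. A small self-contained sketch of how such a filter matches metadata records (the matching itself happens server-side in EEGDash; the record fields and values below are hypothetical):

```python
# Illustrative MongoDB-style matching, covering plain equality and the
# "$in" operator. Not the library's implementation; a semantics sketch.

def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every clause in `query`."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

# Hypothetical records with task labels for illustration:
records = [
    {"dataset": "ds004276", "subject": "01", "task": "words"},
    {"dataset": "ds004276", "subject": "02", "task": "rest"},
]
hits = [r for r in records if matches(r, {"task": {"$in": ["words"]}})]
print(len(hits))  # 1
```

Passing such a dictionary as the `query` argument would restrict the dataset to the matching recordings, provided the filtered fields are in `ALLOWED_QUERY_FIELDS`.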
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004276](https://openneuro.org/datasets/ds004276) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004276](https://nemar.org/dataexplorer/detail?dataset_id=ds004276) DOI: [https://doi.org/10.18112/openneuro.ds004276.v1.0.0](https://doi.org/10.18112/openneuro.ds004276.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004276 >>> dataset = DS004276(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004278(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sustained Neural Representations of Personally Familiar People and Places During Cued Recall * **Study:** `ds004278` (OpenNeuro) * **Author (year):** `Kidder2022` * **Canonical:** `Kidder2024` Also importable as: `DS004278`, `Kidder2022`, `Kidder2024`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004278](https://openneuro.org/datasets/ds004278) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004278](https://nemar.org/dataexplorer/detail?dataset_id=ds004278) DOI: [https://doi.org/10.18112/openneuro.ds004278.v1.0.1](https://doi.org/10.18112/openneuro.ds004278.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004278 >>> dataset = DS004278(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kidder2024']* ### *class* eegdash.dataset.DS004279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Large Spanish EEG * **Study:** `ds004279` (OpenNeuro) * **Author (year):** `Araya2022` * **Canonical:** — Also importable as: `DS004279`, `Araya2022`. Modality: `eeg`. Subjects: 56; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004279](https://openneuro.org/datasets/ds004279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004279](https://nemar.org/dataexplorer/detail?dataset_id=ds004279) DOI: [https://doi.org/10.18112/openneuro.ds004279.v1.1.2](https://doi.org/10.18112/openneuro.ds004279.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004279 >>> dataset = DS004279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) eeg-neuroforecasting * **Study:** `ds004284` (OpenNeuro) * **Author (year):** `Veillette2022` * **Canonical:** — Also importable as: `DS004284`, `Veillette2022`. Modality: `eeg`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004284](https://openneuro.org/datasets/ds004284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004284](https://nemar.org/dataexplorer/detail?dataset_id=ds004284) DOI: [https://doi.org/10.18112/openneuro.ds004284.v1.0.0](https://doi.org/10.18112/openneuro.ds004284.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004284 >>> dataset = DS004284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004295(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reward gain and punishment avoidance reversal learning * **Study:** `ds004295` (OpenNeuro) * **Author (year):** `Stolz2022` * **Canonical:** — Also importable as: `DS004295`, `Stolz2022`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004295](https://openneuro.org/datasets/ds004295) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004295](https://nemar.org/dataexplorer/detail?dataset_id=ds004295) DOI: [https://doi.org/10.18112/openneuro.ds004295.v1.0.0](https://doi.org/10.18112/openneuro.ds004295.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004295 >>> dataset = DS004295(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004306(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Semantic Imagination and Perception Dataset * **Study:** `ds004306` (OpenNeuro) * **Author (year):** `Wilson2022` * **Canonical:** — Also importable as: `DS004306`, `Wilson2022`. Modality: `eeg`. Subjects: 12; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004306](https://openneuro.org/datasets/ds004306) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004306](https://nemar.org/dataexplorer/detail?dataset_id=ds004306) DOI: [https://doi.org/10.18112/openneuro.ds004306.v1.0.2](https://doi.org/10.18112/openneuro.ds004306.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004306 >>> dataset = DS004306(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 1 * **Study:** `ds004315` (OpenNeuro) * **Author (year):** `Cavanagh2022_E1` * **Canonical:** — Also importable as: `DS004315`, `Cavanagh2022_E1`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004315](https://openneuro.org/datasets/ds004315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004315](https://nemar.org/dataexplorer/detail?dataset_id=ds004315) DOI: [https://doi.org/10.18112/openneuro.ds004315.v1.0.0](https://doi.org/10.18112/openneuro.ds004315.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004315 >>> dataset = DS004315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mood Manipulation and PST, Experiment 2 * **Study:** `ds004317` (OpenNeuro) * **Author (year):** `Cavanagh2022_E2` * **Canonical:** — Also importable as: `DS004317`, `Cavanagh2022_E2`. Modality: `eeg`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004317](https://openneuro.org/datasets/ds004317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004317](https://nemar.org/dataexplorer/detail?dataset_id=ds004317) DOI: [https://doi.org/10.18112/openneuro.ds004317.v1.0.3](https://doi.org/10.18112/openneuro.ds004317.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004317 >>> dataset = DS004317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004324(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ToonFaces * **Study:** `ds004324` (OpenNeuro) * **Author (year):** `Chacon2022` * **Canonical:** `ToonFaces` Also importable as: `DS004324`, `Chacon2022`, `ToonFaces`. Modality: `eeg`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004324](https://openneuro.org/datasets/ds004324) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004324](https://nemar.org/dataexplorer/detail?dataset_id=ds004324) DOI: [https://doi.org/10.18112/openneuro.ds004324.v1.0.0](https://doi.org/10.18112/openneuro.ds004324.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004324 >>> dataset = DS004324(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ToonFaces']* ### *class* eegdash.dataset.DS004330(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The spatiotemporal neural dynamics of object recognition for natural images and line drawings (MEG) * **Study:** `ds004330` (OpenNeuro) * **Author (year):** `Singer2022` * **Canonical:** — Also importable as: `DS004330`, `Singer2022`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 270; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004330](https://openneuro.org/datasets/ds004330) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004330](https://nemar.org/dataexplorer/detail?dataset_id=ds004330) DOI: [https://doi.org/10.18112/openneuro.ds004330.v1.0.0](https://doi.org/10.18112/openneuro.ds004330.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004330 >>> dataset = DS004330(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FLUX: A pipeline for MEG analysis * **Study:** `ds004346` (OpenNeuro) * **Author (year):** `Ferrante2022` * **Canonical:** `FLUX` Also importable as: `DS004346`, `Ferrante2022`, `FLUX`. Modality: `meg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 3; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004346](https://openneuro.org/datasets/ds004346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004346](https://nemar.org/dataexplorer/detail?dataset_id=ds004346) DOI: [https://doi.org/10.18112/openneuro.ds004346.v1.0.8](https://doi.org/10.18112/openneuro.ds004346.v1.0.8) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004346 >>> dataset = DS004346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FLUX']* ### *class* eegdash.dataset.DS004347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Symmetry perception and affective responses: a combined EEG/EMG study * **Study:** `ds004347` (OpenNeuro) * **Author (year):** `Makin2022` * **Canonical:** — Also importable as: `DS004347`, `Makin2022`. 
Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
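The `query` attribute described in these entries is the user-supplied filter merged with the fixed dataset selection. A minimal sketch of that merge, assuming only the documented behavior (the user query must not contain the key `dataset`, so a plain dict merge cannot conflict; the field name below is an illustrative example, not necessarily in `ALLOWED_QUERY_FIELDS`):

```python
# Hypothetical illustration of the merged `query` attribute:
# the fixed dataset filter is combined with additional
# MongoDB-style filters supplied by the caller.
dataset_filter = {"dataset": "ds004347"}
user_query = {"subject": "01"}  # must not contain the key "dataset"

merged = {**dataset_filter, **user_query}
print(merged)
# {'dataset': 'ds004347', 'subject': '01'}
```

In an actual call this would correspond to something like `DS004347(cache_dir="./data", query={"subject": "01"})`, with permitted field names restricted to `ALLOWED_QUERY_FIELDS`.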
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004347](https://openneuro.org/datasets/ds004347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004347](https://nemar.org/dataexplorer/detail?dataset_id=ds004347) DOI: [https://doi.org/10.18112/openneuro.ds004347.v1.0.0](https://doi.org/10.18112/openneuro.ds004347.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004347 >>> dataset = DS004347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2017 (EESM17) * **Study:** `ds004348` (OpenNeuro) * **Author (year):** `Mikkelsen2022` * **Canonical:** `EESM17` Also importable as: `DS004348`, `Mikkelsen2022`, `EESM17`. Modality: `eeg`. Subjects: 9; recordings: 18; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004348](https://openneuro.org/datasets/ds004348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004348](https://nemar.org/dataexplorer/detail?dataset_id=ds004348) DOI: [https://doi.org/10.18112/openneuro.ds004348.v1.0.5](https://doi.org/10.18112/openneuro.ds004348.v1.0.5) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004348 >>> dataset = DS004348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM17']* ### *class* eegdash.dataset.DS004350(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Executive Functioning Study for Assessing the Effect of Neurofeedback * **Study:** `ds004350` (OpenNeuro) * **Author (year):** `Delorme2022` * **Canonical:** — Also importable as: `DS004350`, `Delorme2022`. Modality: `eeg`. Subjects: 24; recordings: 240; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004350](https://openneuro.org/datasets/ds004350) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004350](https://nemar.org/dataexplorer/detail?dataset_id=ds004350) DOI: [https://doi.org/10.18112/openneuro.ds004350.v2.0.0](https://doi.org/10.18112/openneuro.ds004350.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004350 >>> dataset = DS004350(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Subcortical responses to music and speech are alike while cortical responses diverge * **Study:** `ds004356` (OpenNeuro) * **Author (year):** `Shan2022` * **Canonical:** — Also importable as: `DS004356`, `Shan2022`. Modality: `eeg`. Subjects: 22; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004356](https://openneuro.org/datasets/ds004356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004356](https://nemar.org/dataexplorer/detail?dataset_id=ds004356) DOI: [https://doi.org/10.18112/openneuro.ds004356.v2.2.1](https://doi.org/10.18112/openneuro.ds004356.v2.2.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004356 >>> dataset = DS004356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004357(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Features-EEG * **Study:** `ds004357` (OpenNeuro) * **Author (year):** `Grootswagers2022_EEG` * **Canonical:** — Also importable as: `DS004357`, `Grootswagers2022_EEG`. Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004357](https://openneuro.org/datasets/ds004357) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004357](https://nemar.org/dataexplorer/detail?dataset_id=ds004357) DOI: [https://doi.org/10.18112/openneuro.ds004357.v1.0.1](https://doi.org/10.18112/openneuro.ds004357.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004357 >>> dataset = DS004357(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004362(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Motor Movement/Imagery Dataset * **Study:** `ds004362` (OpenNeuro) * **Author (year):** `Schalk2022` * **Canonical:** `PhysionetMI`, `EEGMotorMovementImagery` Also importable as: `DS004362`, `Schalk2022`, `PhysionetMI`, `EEGMotorMovementImagery`. Modality: `eeg`. Subjects: 109; recordings: 1526; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004362](https://openneuro.org/datasets/ds004362) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004362](https://nemar.org/dataexplorer/detail?dataset_id=ds004362) DOI: [https://doi.org/10.18112/openneuro.ds004362.v1.0.0](https://doi.org/10.18112/openneuro.ds004362.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004362 >>> dataset = DS004362(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PhysionetMI', 'EEGMotorMovementImagery']* ### *class* eegdash.dataset.DS004367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Raw EEG data * **Study:** `ds004367` (OpenNeuro) * **Author (year):** `Rouy2022_Meta` * **Canonical:** — Also importable as: `DS004367`, `Rouy2022_Meta`. Modality: `eeg`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004367](https://openneuro.org/datasets/ds004367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004367](https://nemar.org/dataexplorer/detail?dataset_id=ds004367) DOI: [https://doi.org/10.18112/openneuro.ds004367.v1.0.2](https://doi.org/10.18112/openneuro.ds004367.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004367 >>> dataset = DS004367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004368(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Meta-rdk: Preprocessed EEG data * **Study:** `ds004368` (OpenNeuro) * **Author (year):** `Rouy2022_Meta_rdk` * **Canonical:** — Also importable as: `DS004368`, `Rouy2022_Meta_rdk`. Modality: `eeg`. Subjects: 39; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004368](https://openneuro.org/datasets/ds004368) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004368](https://nemar.org/dataexplorer/detail?dataset_id=ds004368) DOI: [https://doi.org/10.18112/openneuro.ds004368.v1.0.2](https://doi.org/10.18112/openneuro.ds004368.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004368 >>> dataset = DS004368(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004369(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Blink-Pause-Relation (Competing Speaker Paradigm) * **Study:** `ds004369` (OpenNeuro) * **Author (year):** `Holtze2022_Blink` * **Canonical:** — Also importable as: `DS004369`, `Holtze2022_Blink`. Modality: `eeg`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004369](https://openneuro.org/datasets/ds004369) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004369](https://nemar.org/dataexplorer/detail?dataset_id=ds004369) DOI: [https://doi.org/10.18112/openneuro.ds004369.v1.0.1](https://doi.org/10.18112/openneuro.ds004369.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004369 >>> dataset = DS004369(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PRIOS * **Study:** `ds004370` (OpenNeuro) * **Author (year):** `Blooijs2022_PRIOS` * **Canonical:** `PRIOS` Also importable as: `DS004370`, `Blooijs2022_PRIOS`, `PRIOS`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 7; recordings: 15; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004370](https://openneuro.org/datasets/ds004370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004370](https://nemar.org/dataexplorer/detail?dataset_id=ds004370) DOI: [https://doi.org/10.18112/openneuro.ds004370.v1.0.2](https://doi.org/10.18112/openneuro.ds004370.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004370 >>> dataset = DS004370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PRIOS']* ### *class* eegdash.dataset.DS004381(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative EEG dataset during medianus-tibialis stimulation with 8 different rates * **Study:** `ds004381` (OpenNeuro) * **Author (year):** `Selmin2022` * **Canonical:** — Also importable as: `DS004381`, `Selmin2022`. Modality: `eeg`. Subjects: 18; recordings: 437; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
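The `data_dir` attribute documented for each class resolves to `cache_dir / dataset_id`. A minimal sketch of that path construction, assuming only what the docstring states (the concrete values here are illustrative):

```python
from pathlib import Path

# data_dir is documented as the cache directory joined with the
# dataset id; this mirrors that construction with pathlib.
cache_dir = Path("./data")
dataset_id = "ds004381"
data_dir = cache_dir / dataset_id
print(data_dir.as_posix())
# data/ds004381
```

Passing either a `str` or a `Path` as `cache_dir` yields the same layout, since `pathlib` normalizes the leading `./`.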
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004381](https://openneuro.org/datasets/ds004381) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004381](https://nemar.org/dataexplorer/detail?dataset_id=ds004381) DOI: [https://doi.org/10.18112/openneuro.ds004381.v1.0.2](https://doi.org/10.18112/openneuro.ds004381.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004381 >>> dataset = DS004381(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004388(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed nerve stimulation * **Study:** `ds004388` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory` * **Canonical:** — Also importable as: `DS004388`, `Nierula2023_Somatosensory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 399; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004388](https://openneuro.org/datasets/ds004388) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004388](https://nemar.org/dataexplorer/detail?dataset_id=ds004388) DOI: [https://doi.org/10.18112/openneuro.ds004388.v1.0.0](https://doi.org/10.18112/openneuro.ds004388.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004388 >>> dataset = DS004388(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004389(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation * **Study:** `ds004389` (OpenNeuro) * **Author (year):** `Nierula2023_Somatosensory_evoked` * **Canonical:** — Also importable as: `DS004389`, `Nierula2023_Somatosensory_evoked`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 26; recordings: 260; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004389](https://openneuro.org/datasets/ds004389) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004389](https://nemar.org/dataexplorer/detail?dataset_id=ds004389) DOI: [https://doi.org/10.18112/openneuro.ds004389.v1.0.0](https://doi.org/10.18112/openneuro.ds004389.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004389 >>> dataset = DS004389(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004395(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Penn Electrophysiology of Encoding and Retrieval Study (PEERS) * **Study:** `ds004395` (OpenNeuro) * **Author (year):** `Kahana2023` * **Canonical:** `PEERS` Also importable as: `DS004395`, `Kahana2023`, `PEERS`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 364; recordings: 6483; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004395](https://openneuro.org/datasets/ds004395) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004395](https://nemar.org/dataexplorer/detail?dataset_id=ds004395) DOI: [https://doi.org/10.18112/openneuro.ds004395.v2.0.0](https://doi.org/10.18112/openneuro.ds004395.v2.0.0) NEMAR citation count: 6 ### Examples ```pycon >>> from eegdash.dataset import DS004395 >>> dataset = DS004395(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PEERS']* ### *class* eegdash.dataset.DS004398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) planmemreplay * **Study:** `ds004398` (OpenNeuro) * **Author (year):** `Wimmer2023` * **Canonical:** `Wimmer2024` Also importable as: `DS004398`, `Wimmer2023`, `Wimmer2024`. Modality: `meg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004398](https://openneuro.org/datasets/ds004398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004398](https://nemar.org/dataexplorer/detail?dataset_id=ds004398) DOI: [https://doi.org/10.18112/openneuro.ds004398.v1.0.0](https://doi.org/10.18112/openneuro.ds004398.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004398 >>> dataset = DS004398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Wimmer2024']* ### *class* eegdash.dataset.DS004408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG responses to continuous naturalistic speech * **Study:** `ds004408` (OpenNeuro) * **Author (year):** `Liberto2023` * **Canonical:** — Also importable as: `DS004408`, `Liberto2023`. Modality: `eeg`. Subjects: 19; recordings: 380; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004408](https://openneuro.org/datasets/ds004408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004408](https://nemar.org/dataexplorer/detail?dataset_id=ds004408) DOI: [https://doi.org/10.18112/openneuro.ds004408.v1.0.8](https://doi.org/10.18112/openneuro.ds004408.v1.0.8) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004408 >>> dataset = DS004408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004444(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 1 * **Study:** `ds004444` (OpenNeuro) * **Author (year):** `Iwama2023_D1` * **Canonical:** `BMI_HDEEG_D1` Also importable as: `DS004444`, `Iwama2023_D1`, `BMI_HDEEG_D1`. Modality: `eeg`. Subjects: 30; recordings: 465; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004444](https://openneuro.org/datasets/ds004444) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004444](https://nemar.org/dataexplorer/detail?dataset_id=ds004444) DOI: [https://doi.org/10.18112/openneuro.ds004444.v1.0.1](https://doi.org/10.18112/openneuro.ds004444.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004444 >>> dataset = DS004444(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D1']* ### *class* eegdash.dataset.DS004446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 2 * **Study:** `ds004446` (OpenNeuro) * **Author (year):** `Iwama2023_D2` * **Canonical:** `BMI_HDEEG_D2` Also importable as: `DS004446`, `Iwama2023_D2`, `BMI_HDEEG_D2`. Modality: `eeg`. Subjects: 30; recordings: 237; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004446](https://openneuro.org/datasets/ds004446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004446](https://nemar.org/dataexplorer/detail?dataset_id=ds004446) DOI: [https://doi.org/10.18112/openneuro.ds004446.v1.0.1](https://doi.org/10.18112/openneuro.ds004446.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004446 >>> dataset = DS004446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D2']* ### *class* eegdash.dataset.DS004447(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 3 * **Study:** `ds004447` (OpenNeuro) * **Author (year):** `Iwama2023_D3` * **Canonical:** `BMI_HDEEG_D3` Also importable as: `DS004447`, `Iwama2023_D3`, `BMI_HDEEG_D3`. Modality: `eeg`. Subjects: 22; recordings: 418; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004447](https://openneuro.org/datasets/ds004447) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004447](https://nemar.org/dataexplorer/detail?dataset_id=ds004447) DOI: [https://doi.org/10.18112/openneuro.ds004447.v1.0.1](https://doi.org/10.18112/openneuro.ds004447.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004447 >>> dataset = DS004447(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D3']* ### *class* eegdash.dataset.DS004448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The BMI-HDEEG dataset 4 * **Study:** `ds004448` (OpenNeuro) * **Author (year):** `Iwama2023_D4` * **Canonical:** `BMI_HDEEG_D4` Also importable as: `DS004448`, `Iwama2023_D4`, `BMI_HDEEG_D4`. Modality: `eeg`. 
Subjects: 56; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004448](https://openneuro.org/datasets/ds004448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004448](https://nemar.org/dataexplorer/detail?dataset_id=ds004448) DOI: [https://doi.org/10.18112/openneuro.ds004448.v1.0.2](https://doi.org/10.18112/openneuro.ds004448.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004448 >>> dataset = DS004448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BMI_HDEEG_D4']* ### *class* eegdash.dataset.DS004457(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical stimulation of temporal and limbic circuitry produces distinct responses in human ventral temporal cortex * **Study:** `ds004457` (OpenNeuro) * **Author (year):** `Huang2023` * **Canonical:** `Huang2022` Also importable as: `DS004457`, `Huang2023`, `Huang2022`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004457](https://openneuro.org/datasets/ds004457) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004457](https://nemar.org/dataexplorer/detail?dataset_id=ds004457) DOI: [https://doi.org/10.18112/openneuro.ds004457.v1.0.1](https://doi.org/10.18112/openneuro.ds004457.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004457 >>> dataset = DS004457(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huang2022']* ### *class* eegdash.dataset.DS004460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG and motion capture data set for a full-body/joystick rotation task * **Study:** `ds004460` (OpenNeuro) * **Author (year):** `Gramann2023` * **Canonical:** — Also importable as: `DS004460`, `Gramann2023`. Modality: `eeg`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004460](https://openneuro.org/datasets/ds004460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004460](https://nemar.org/dataexplorer/detail?dataset_id=ds004460) DOI: [https://doi.org/10.18112/openneuro.ds004460.v1.1.0](https://doi.org/10.18112/openneuro.ds004460.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004460 >>> dataset = DS004460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Forced Two-Choice Task * **Study:** `ds004473` (OpenNeuro) * **Author (year):** `Rockhill2023` * **Canonical:** `Rockhill2022` Also importable as: `DS004473`, `Rockhill2023`, `Rockhill2022`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004473](https://openneuro.org/datasets/ds004473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004473](https://nemar.org/dataexplorer/detail?dataset_id=ds004473) DOI: [https://doi.org/10.18112/openneuro.ds004473.v1.0.1](https://doi.org/10.18112/openneuro.ds004473.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004473 >>> dataset = DS004473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Rockhill2022']* ### *class* eegdash.dataset.DS004475(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mobile EEG split-belt walking study * **Study:** `ds004475` (OpenNeuro) * **Author (year):** `Jacobsen2023` * **Canonical:** — Also importable as: `DS004475`, `Jacobsen2023`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004475](https://openneuro.org/datasets/ds004475) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004475](https://nemar.org/dataexplorer/detail?dataset_id=ds004475) DOI: [https://doi.org/10.18112/openneuro.ds004475.v1.0.3](https://doi.org/10.18112/openneuro.ds004475.v1.0.3) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004475 >>> dataset = DS004475(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PES - Pandemic Emergency Scenario * **Study:** `ds004477` (OpenNeuro) * **Author (year):** `Papastylianou2023` * **Canonical:** — Also importable as: `DS004477`, `Papastylianou2023`. Modality: `eeg`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004477](https://openneuro.org/datasets/ds004477) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004477](https://nemar.org/dataexplorer/detail?dataset_id=ds004477) DOI: [https://doi.org/10.18112/openneuro.ds004477.v1.0.2](https://doi.org/10.18112/openneuro.ds004477.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004477 >>> dataset = DS004477(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004483(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ABSeqMEG * **Study:** `ds004483` (OpenNeuro) * **Author (year):** `Planton2023` * **Canonical:** `ABSeqMEG` Also importable as: `DS004483`, `Planton2023`, `ABSeqMEG`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 19; recordings: 282; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004483](https://openneuro.org/datasets/ds004483) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004483](https://nemar.org/dataexplorer/detail?dataset_id=ds004483) DOI: [https://doi.org/10.18112/openneuro.ds004483.v1.0.0](https://doi.org/10.18112/openneuro.ds004483.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004483 >>> dataset = DS004483(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ABSeqMEG']* ### *class* eegdash.dataset.DS004502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Anticipatory differences between Attention and Expectation * **Study:** `ds004502` (OpenNeuro) * **Author (year):** `Penalver2023` * **Canonical:** `Penalver2024` Also importable as: `DS004502`, `Penalver2023`, `Penalver2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004502](https://openneuro.org/datasets/ds004502) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004502](https://nemar.org/dataexplorer/detail?dataset_id=ds004502) DOI: [https://doi.org/10.18112/openneuro.ds004502.v1.0.1](https://doi.org/10.18112/openneuro.ds004502.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004502 >>> dataset = DS004502(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Penalver2024']* ### *class* eegdash.dataset.DS004504(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset of EEG recordings from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects * **Study:** `ds004504` (OpenNeuro) * **Author (year):** `Miltiadous2023` * **Canonical:** — Also importable as: `DS004504`, `Miltiadous2023`. Modality: `eeg`. Subjects: 88; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004504](https://openneuro.org/datasets/ds004504) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004504](https://nemar.org/dataexplorer/detail?dataset_id=ds004504) DOI: [https://doi.org/10.18112/openneuro.ds004504.v1.0.8](https://doi.org/10.18112/openneuro.ds004504.v1.0.8) NEMAR citation count: 55 ### Examples ```pycon >>> from eegdash.dataset import DS004504 >>> dataset = DS004504(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real World Table Tennis * **Study:** `ds004505` (OpenNeuro) * **Author (year):** `Studnicki2023` * **Canonical:** — Also importable as: `DS004505`, `Studnicki2023`. Modality: `eeg`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004505](https://openneuro.org/datasets/ds004505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004505](https://nemar.org/dataexplorer/detail?dataset_id=ds004505) DOI: [https://doi.org/10.18112/openneuro.ds004505.v1.0.4](https://doi.org/10.18112/openneuro.ds004505.v1.0.4) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS004505 >>> dataset = DS004505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004511(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Deception_data * **Study:** `ds004511` (OpenNeuro) * **Author (year):** `Makowski2023_Deception` * **Canonical:** — Also importable as: `DS004511`, `Makowski2023_Deception`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 45; recordings: 134; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004511](https://openneuro.org/datasets/ds004511) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004511](https://nemar.org/dataexplorer/detail?dataset_id=ds004511) DOI: [https://doi.org/10.18112/openneuro.ds004511.v1.0.2](https://doi.org/10.18112/openneuro.ds004511.v1.0.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004511 >>> dataset = DS004511(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simultaneous EEG and fNIRS recordings for semantic decoding of imagined animals and tools * **Study:** `ds004514` (OpenNeuro) * **Author (year):** `Rybar2023_Simultaneous` * **Canonical:** — Also importable as: `DS004514`, `Rybar2023_Simultaneous`. Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 12; recordings: 24; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004514](https://openneuro.org/datasets/ds004514) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004514](https://nemar.org/dataexplorer/detail?dataset_id=ds004514) DOI: [https://doi.org/10.18112/openneuro.ds004514.v1.1.2](https://doi.org/10.18112/openneuro.ds004514.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS004514 >>> dataset = DS004514(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Alcohol imagery reinforcement learning task with light and heavy drinker participants * **Study:** `ds004515` (OpenNeuro) * **Author (year):** `Singh2023` * **Canonical:** — Also importable as: `DS004515`, `Singh2023`. Modality: `eeg`. Subjects: 54; recordings: 54; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004515](https://openneuro.org/datasets/ds004515) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004515](https://nemar.org/dataexplorer/detail?dataset_id=ds004515) DOI: [https://doi.org/10.18112/openneuro.ds004515.v1.0.0](https://doi.org/10.18112/openneuro.ds004515.v1.0.0) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004515 >>> dataset = DS004515(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004517(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings for semantic decoding of imagined animals and tools during auditory imagery task * **Study:** `ds004517` (OpenNeuro) * **Author (year):** `Rybar2023_semantic` * **Canonical:** — Also importable as: `DS004517`, `Rybar2023_semantic`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004517](https://openneuro.org/datasets/ds004517) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004517](https://nemar.org/dataexplorer/detail?dataset_id=ds004517) DOI: [https://doi.org/10.18112/openneuro.ds004517.v1.0.2](https://doi.org/10.18112/openneuro.ds004517.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS004517 >>> dataset = DS004517(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Internal selective attention is delayed by competition between endogenous and exogenous factors * **Study:** `ds004519` (OpenNeuro) * **Author (year):** `Ester2023_Internal` * **Canonical:** `Ester2022` Also importable as: `DS004519`, `Ester2023_Internal`, `Ester2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 40; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004519](https://openneuro.org/datasets/ds004519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004519](https://nemar.org/dataexplorer/detail?dataset_id=ds004519) DOI: [https://doi.org/10.18112/openneuro.ds004519.v1.0.1](https://doi.org/10.18112/openneuro.ds004519.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004519 >>> dataset = DS004519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ester2022']* ### *class* eegdash.dataset.DS004520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Changes in behavioral priority influence the accessibility of working memory content - Experiment 2 * **Study:** `ds004520` (OpenNeuro) * **Author (year):** `Ester2023_Changes` * **Canonical:** `Ester2024_E2` Also importable as: `DS004520`, `Ester2023_Changes`, `Ester2024_E2`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 33; recordings: 33; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
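Every class on this page shares the same constructor signature, `(cache_dir, query=None, s3_bucket=None, **kwargs)`, so a generic factory can treat them interchangeably. A minimal sketch, assuming only that shared signature (the `make_dataset` helper is illustrative, not an eegdash function):

```python
# Hypothetical factory over the uniform constructor signature documented above.
# Any keyword filters are collected into the MongoDB-style `query` dict;
# passing none at all forwards query=None, matching the documented default.
def make_dataset(dataset_cls, cache_dir, **filters):
    query = filters or None
    return dataset_cls(cache_dir=cache_dir, query=query)
```

For example, `make_dataset(DS004520, "./data", task="memory")` would construct the dataset with `query={"task": "memory"}` ANDed onto the `ds004520` selection.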
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004520](https://openneuro.org/datasets/ds004520) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004520](https://nemar.org/dataexplorer/detail?dataset_id=ds004520) DOI: [https://doi.org/10.18112/openneuro.ds004520.v1.0.1](https://doi.org/10.18112/openneuro.ds004520.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004520 >>> dataset = DS004520(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ester2024_E2']* ### *class* eegdash.dataset.DS004521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Changes in behavioral priority influence the accessibility of working memory content - Experiment 1 * **Study:** `ds004521` (OpenNeuro) * **Author (year):** `Ester2023_Changes_behavioral` * **Canonical:** `Ester2024_E1` Also importable as: `DS004521`, `Ester2023_Changes_behavioral`, `Ester2024_E1`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004521](https://openneuro.org/datasets/ds004521) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004521](https://nemar.org/dataexplorer/detail?dataset_id=ds004521) DOI: [https://doi.org/10.18112/openneuro.ds004521.v1.0.1](https://doi.org/10.18112/openneuro.ds004521.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004521 >>> dataset = DS004521(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ester2024_E1']* ### *class* eegdash.dataset.DS004532(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: Probabilistic Selection Task (PST) + PST with Cabergoline Challenge * **Study:** `ds004532` (OpenNeuro) * **Author (year):** `Cavanagh2023` * **Canonical:** — Also importable as: `DS004532`, `Cavanagh2023`. Modality: `eeg`. Subjects: 110; recordings: 137; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004532](https://openneuro.org/datasets/ds004532) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004532](https://nemar.org/dataexplorer/detail?dataset_id=ds004532) DOI: [https://doi.org/10.18112/openneuro.ds004532.v1.2.0](https://doi.org/10.18112/openneuro.ds004532.v1.2.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004532 >>> dataset = DS004532(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004541(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal EEG-fNIRS data from patients undergoing general anesthesia * **Study:** `ds004541` (OpenNeuro) * **Author (year):** `Ferron2023` * **Canonical:** `Ferron2019` Also importable as: `DS004541`, `Ferron2023`, `Ferron2019`. Modality: `eeg, fnirs`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 8; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004541](https://openneuro.org/datasets/ds004541) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004541](https://nemar.org/dataexplorer/detail?dataset_id=ds004541) DOI: [https://doi.org/10.18112/openneuro.ds004541.v1.0.0](https://doi.org/10.18112/openneuro.ds004541.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004541 >>> dataset = DS004541(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ferron2019']* ### *class* eegdash.dataset.DS004551(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during slow wave sleep * **Study:** `ds004551` (OpenNeuro) * **Author (year):** `Sakakura2023_children_slow_wave` * **Canonical:** `Sakakura2025` Also importable as: `DS004551`, `Sakakura2023_children_slow_wave`, `Sakakura2025`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 114; recordings: 125; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004551](https://openneuro.org/datasets/ds004551) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004551](https://nemar.org/dataexplorer/detail?dataset_id=ds004551) DOI: [https://doi.org/10.18112/openneuro.ds004551.v1.0.6](https://doi.org/10.18112/openneuro.ds004551.v1.0.6) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004551 >>> dataset = DS004551(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sakakura2025']* ### *class* eegdash.dataset.DS004554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Forced Picture Naming Task * **Study:** `ds004554` (OpenNeuro) * **Author (year):** `Volpert2023` * **Canonical:** — Also importable as: `DS004554`, `Volpert2023`. Modality: `eeg`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004554](https://openneuro.org/datasets/ds004554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004554](https://nemar.org/dataexplorer/detail?dataset_id=ds004554) DOI: [https://doi.org/10.18112/openneuro.ds004554.v1.0.4](https://doi.org/10.18112/openneuro.ds004554.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004554 >>> dataset = DS004554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004561(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Illusion of Agency over Electrically-Actuated Movements * **Study:** `ds004561` (OpenNeuro) * **Author (year):** `Veillette2023` * **Canonical:** — Also importable as: `DS004561`, `Veillette2023`. Modality: `eeg`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004561](https://openneuro.org/datasets/ds004561) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004561](https://nemar.org/dataexplorer/detail?dataset_id=ds004561) DOI: [https://doi.org/10.18112/openneuro.ds004561.v1.0.0](https://doi.org/10.18112/openneuro.ds004561.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004561 >>> dataset = DS004561(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Vicarious touch: overlapping neural patterns between seeing and feeling touch * **Study:** `ds004563` (OpenNeuro) * **Author (year):** `Smit2023` * **Canonical:** — Also importable as: `DS004563`, `Smit2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 40; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004563](https://openneuro.org/datasets/ds004563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004563](https://nemar.org/dataexplorer/detail?dataset_id=ds004563) DOI: [https://doi.org/10.18112/openneuro.ds004563.v1.0.1](https://doi.org/10.18112/openneuro.ds004563.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004563 >>> dataset = DS004563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004572(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effects of sham hypnosis techniques * **Study:** `ds004572` (OpenNeuro) * **Author (year):** `Kekecs2023` * **Canonical:** `Kekecs2024` Also importable as: `DS004572`, `Kekecs2023`, `Kekecs2024`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 52; recordings: 516; tasks: 10. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004572](https://openneuro.org/datasets/ds004572) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004572](https://nemar.org/dataexplorer/detail?dataset_id=ds004572) DOI: [https://doi.org/10.18112/openneuro.ds004572.v1.3.2](https://doi.org/10.18112/openneuro.ds004572.v1.3.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004572 >>> dataset = DS004572(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kekecs2024']* ### *class* eegdash.dataset.DS004574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cross-modal Oddball Task. * **Study:** `ds004574` (OpenNeuro) * **Author (year):** `Singh2023_Cross_modal` * **Canonical:** — Also importable as: `DS004574`, `Singh2023_Cross_modal`. Modality: `eeg`. 
Subjects: 146; recordings: 146; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004574](https://openneuro.org/datasets/ds004574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004574](https://nemar.org/dataexplorer/detail?dataset_id=ds004574) DOI: [https://doi.org/10.18112/openneuro.ds004574.v1.0.0](https://doi.org/10.18112/openneuro.ds004574.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004574 >>> dataset = DS004574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004577(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset containing resting EEG for a sample of 103 normal infants in the first year of life * **Study:** `ds004577` (OpenNeuro) * **Author (year):** `Unit2023` * **Canonical:** — Also importable as: `DS004577`, `Unit2023`. Modality: `eeg`. Subjects: 103; recordings: 130; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004577](https://openneuro.org/datasets/ds004577) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004577](https://nemar.org/dataexplorer/detail?dataset_id=ds004577) DOI: [https://doi.org/10.18112/openneuro.ds004577.v1.0.1](https://doi.org/10.18112/openneuro.ds004577.v1.0.1) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004577 >>> dataset = DS004577(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004579(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Interval Timing Task * **Study:** `ds004579` (OpenNeuro) * **Author (year):** `Singh2023_Interval_Timing` * **Canonical:** — Also importable as: `DS004579`, `Singh2023_Interval_Timing`. Modality: `eeg`. Subjects: 139; recordings: 139; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004579](https://openneuro.org/datasets/ds004579) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004579](https://nemar.org/dataexplorer/detail?dataset_id=ds004579) DOI: [https://doi.org/10.18112/openneuro.ds004579.v1.0.0](https://doi.org/10.18112/openneuro.ds004579.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004579 >>> dataset = DS004579(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004580(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Simon-conflict Task. * **Study:** `ds004580` (OpenNeuro) * **Author (year):** `Singh2023_Simon_conflict` * **Canonical:** — Also importable as: `DS004580`, `Singh2023_Simon_conflict`. Modality: `eeg`. Subjects: 147; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004580](https://openneuro.org/datasets/ds004580) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004580](https://nemar.org/dataexplorer/detail?dataset_id=ds004580) DOI: [https://doi.org/10.18112/openneuro.ds004580.v1.0.0](https://doi.org/10.18112/openneuro.ds004580.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004580 >>> dataset = DS004580(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004582(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FakeFaceEmo_data * **Study:** `ds004582` (OpenNeuro) * **Author (year):** `Makowski2023_FakeFaceEmo` * **Canonical:** — Also importable as: `DS004582`, `Makowski2023_FakeFaceEmo`. Modality: `eeg`. Subjects: 73; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004582](https://openneuro.org/datasets/ds004582) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004582](https://nemar.org/dataexplorer/detail?dataset_id=ds004582) DOI: [https://doi.org/10.18112/openneuro.ds004582.v1.0.0](https://doi.org/10.18112/openneuro.ds004582.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004582 >>> dataset = DS004582(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004584(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Rest eyes open * **Study:** `ds004584` (OpenNeuro) * **Author (year):** `Singh2023_Rest_eyes` * **Canonical:** — Also importable as: `DS004584`, `Singh2023_Rest_eyes`. Modality: `eeg`. Subjects: 149; recordings: 149; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004584](https://openneuro.org/datasets/ds004584) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004584](https://nemar.org/dataexplorer/detail?dataset_id=ds004584) DOI: [https://doi.org/10.18112/openneuro.ds004584.v1.0.0](https://doi.org/10.18112/openneuro.ds004584.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004584 >>> dataset = DS004584(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004587(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IllusionGameEEG_data * **Study:** `ds004587` (OpenNeuro) * **Author (year):** `Makowski2023_IllusionGameEEG` * **Canonical:** — Also importable as: `DS004587`, `Makowski2023_IllusionGameEEG`. Modality: `eeg`. Subjects: 103; recordings: 114; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004587](https://openneuro.org/datasets/ds004587) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004587](https://nemar.org/dataexplorer/detail?dataset_id=ds004587) DOI: [https://doi.org/10.18112/openneuro.ds004587.v1.0.0](https://doi.org/10.18112/openneuro.ds004587.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004587 >>> dataset = DS004587(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004588(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuma * **Study:** `ds004588` (OpenNeuro) * **Author (year):** `Georgiadis2023` * **Canonical:** `Neuma` Also importable as: `DS004588`, `Georgiadis2023`, `Neuma`. Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004588](https://openneuro.org/datasets/ds004588) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004588](https://nemar.org/dataexplorer/detail?dataset_id=ds004588) DOI: [https://doi.org/10.18112/openneuro.ds004588.v1.2.0](https://doi.org/10.18112/openneuro.ds004588.v1.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004588 >>> dataset = DS004588(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Neuma']* ### *class* eegdash.dataset.DS004595(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds004595` (OpenNeuro) * **Author (year):** `Campbell2023` * **Canonical:** — Also importable as: `DS004595`, `Campbell2023`. Modality: `eeg`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004595](https://openneuro.org/datasets/ds004595) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004595](https://nemar.org/dataexplorer/detail?dataset_id=ds004595) DOI: [https://doi.org/10.18112/openneuro.ds004595.v1.0.0](https://doi.org/10.18112/openneuro.ds004595.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004595 >>> dataset = DS004595(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004598(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LFP during linear track in 6-month old TgF344-AD rats * **Study:** `ds004598` (OpenNeuro) * **Author (year):** `Faraz2023` * **Canonical:** `Moradi2024` Also importable as: `DS004598`, `Faraz2023`, `Moradi2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Dementia`. Subjects: 9; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004598](https://openneuro.org/datasets/ds004598) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004598](https://nemar.org/dataexplorer/detail?dataset_id=ds004598) DOI: [https://doi.org/10.18112/openneuro.ds004598.v1.0.0](https://doi.org/10.18112/openneuro.ds004598.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004598 >>> dataset = DS004598(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moradi2024']* ### *class* eegdash.dataset.DS004602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registered Replication Report of ERN/Pe Psychometrics * **Study:** `ds004602` (OpenNeuro) * **Author (year):** `Clayson2023_Registered` * **Canonical:** — Also importable as: `DS004602`, `Clayson2023_Registered`. Modality: `eeg`. Subjects: 182; recordings: 546; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004602](https://openneuro.org/datasets/ds004602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004602](https://nemar.org/dataexplorer/detail?dataset_id=ds004602) DOI: [https://doi.org/10.18112/openneuro.ds004602.v1.0.1](https://doi.org/10.18112/openneuro.ds004602.v1.0.1) NEMAR citation count: 5 ### Examples ```pycon >>> from eegdash.dataset import DS004602 >>> dataset = DS004602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004603(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm * **Study:** `ds004603` (OpenNeuro) * **Author (year):** `Lowe2023` * **Canonical:** `VisualContextTrajectory` Also importable as: `DS004603`, `Lowe2023`, `VisualContextTrajectory`. Modality: `eeg`. Subjects: 37; recordings: 37; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
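As a rough illustration of the MongoDB-style matching the Notes above refer to, the sketch below evaluates a filter with literal values and an `$in` operator against plain record dicts. This is only a local approximation of the semantics, assuming literal-equality and `$in` behavior; the actual service may support more operators, and field names are restricted to `ALLOWED_QUERY_FIELDS`.

```python
def matches(record, query):
    """Sketch: does `record` satisfy a MongoDB-style filter `query`?

    Supports literal equality and the `$in` operator; top-level keys
    are combined with AND, as in MongoDB.
    """
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$in": ["rest", "oddball"]}.
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True


# Hypothetical records and field names, for illustration only.
records = [
    {"subject": "01", "task": "rest"},
    {"subject": "02", "task": "oddball"},
]
print([r["subject"] for r in records if matches(r, {"task": {"$in": ["rest"]}})])
```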
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004603](https://openneuro.org/datasets/ds004603) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004603](https://nemar.org/dataexplorer/detail?dataset_id=ds004603) DOI: [https://doi.org/10.18112/openneuro.ds004603.v1.1.0](https://doi.org/10.18112/openneuro.ds004603.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004603 >>> dataset = DS004603(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VisualContextTrajectory']* ### *class* eegdash.dataset.DS004621(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Nencki-Symfonia EEG/ERP dataset * **Study:** `ds004621` (OpenNeuro) * **Author (year):** `Patrycja2023_Nencki` * **Canonical:** `NenckiSymfonia` Also importable as: `DS004621`, `Patrycja2023_Nencki`, `NenckiSymfonia`. Modality: `eeg`. Subjects: 42; recordings: 167; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004621](https://openneuro.org/datasets/ds004621) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004621](https://nemar.org/dataexplorer/detail?dataset_id=ds004621) DOI: [https://doi.org/10.18112/openneuro.ds004621.v1.0.4](https://doi.org/10.18112/openneuro.ds004621.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004621 >>> dataset = DS004621(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NenckiSymfonia']* ### *class* eegdash.dataset.DS004624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intracranial recordings using BCI2000 and the CorTec BrainInterchange * **Study:** `ds004624` (OpenNeuro) * **Author (year):** `Mivalt2025` * **Canonical:** `Mivalt2024`, `BCI2000_Intracranial` Also importable as: `DS004624`, `Mivalt2025`, `Mivalt2024`, `BCI2000_Intracranial`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 3; recordings: 614; tasks: 28. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004624](https://openneuro.org/datasets/ds004624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004624](https://nemar.org/dataexplorer/detail?dataset_id=ds004624) DOI: [https://doi.org/10.18112/openneuro.ds004624.v2.0.0](https://doi.org/10.18112/openneuro.ds004624.v2.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004624 >>> dataset = DS004624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mivalt2024', 'BCI2000_Intracranial']* ### *class* eegdash.dataset.DS004625(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Young Adults Walking Over Uneven Terrain * **Study:** `ds004625` (OpenNeuro) * **Author (year):** `Liu2023` * **Canonical:** — Also importable as: `DS004625`, `Liu2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 543; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004625](https://openneuro.org/datasets/ds004625) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004625](https://nemar.org/dataexplorer/detail?dataset_id=ds004625) DOI: [https://doi.org/10.18112/openneuro.ds004625.v1.0.2](https://doi.org/10.18112/openneuro.ds004625.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004625 >>> dataset = DS004625(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004626(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Can we dissociate hypervigilance to social threats from altered perceptual decision-making processes in lonely individuals? An exploration with Drift Diffusion Modelling and event-related potentials. * **Study:** `ds004626` (OpenNeuro) * **Author (year):** `Maka2023` * **Canonical:** — Also importable as: `DS004626`, `Maka2023`. Modality: `eeg`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004626](https://openneuro.org/datasets/ds004626) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004626](https://nemar.org/dataexplorer/detail?dataset_id=ds004626) DOI: [https://doi.org/10.18112/openneuro.ds004626.v1.0.2](https://doi.org/10.18112/openneuro.ds004626.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004626 >>> dataset = DS004626(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004635(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates Reliability * **Study:** `ds004635` (OpenNeuro) * **Author (year):** `Bagdasarov2023` * **Canonical:** — Also importable as: `DS004635`, `Bagdasarov2023`. Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004635](https://openneuro.org/datasets/ds004635) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004635](https://nemar.org/dataexplorer/detail?dataset_id=ds004635) DOI: [https://doi.org/10.18112/openneuro.ds004635.v3.1.0](https://doi.org/10.18112/openneuro.ds004635.v3.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004635 >>> dataset = DS004635(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Intraoperative recordings of medianus stimulation with low and high impedance ECoG * **Study:** `ds004642` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_Intraoperative` * **Canonical:** — Also importable as: `DS004642`, `Dimakopoulos2023_Intraoperative`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 10; recordings: 10; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004642](https://openneuro.org/datasets/ds004642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004642](https://nemar.org/dataexplorer/detail?dataset_id=ds004642) DOI: [https://doi.org/10.18112/openneuro.ds004642.v1.0.1](https://doi.org/10.18112/openneuro.ds004642.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004642 >>> dataset = DS004642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004657(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Driving with Autonomous Aids * **Study:** `ds004657` (OpenNeuro) * **Author (year):** `Metcalfe2023_Driving` * **Canonical:** `TX20` Also importable as: `DS004657`, `Metcalfe2023_Driving`, `TX20`. 
Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 24; recordings: 119; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
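The notes above state that the user-supplied `query` is AND-combined with the dataset's own filter. A minimal sketch of that merge with plain dictionaries — an illustration of the documented behavior, not the library's internal code, and the field values are hypothetical:

```python
# User-supplied MongoDB-style filters (field values are for illustration only;
# allowed fields are listed in ALLOWED_QUERY_FIELDS).
user_query = {"task": "driving", "subject": {"$in": ["01", "02"]}}

# Each dataset class contributes its own `dataset` filter; the documented
# constraint is that the user query must never set this key itself.
merged = {"dataset": "ds004657", **user_query}

print(merged)
```

Because the dataset key is applied last by the class, the user query can only narrow the selection, never redirect it to another dataset.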
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004657](https://openneuro.org/datasets/ds004657) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004657](https://nemar.org/dataexplorer/detail?dataset_id=ds004657) DOI: [https://doi.org/10.18112/openneuro.ds004657.v1.0.3](https://doi.org/10.18112/openneuro.ds004657.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004657 >>> dataset = DS004657(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX20']* ### *class* eegdash.dataset.DS004660(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TNO * **Study:** `ds004660` (OpenNeuro) * **Author (year):** `Johnson2023_TNO` * **Canonical:** `TNO` Also importable as: `DS004660`, `Johnson2023_TNO`, `TNO`. Modality: `eeg`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004660](https://openneuro.org/datasets/ds004660) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004660](https://nemar.org/dataexplorer/detail?dataset_id=ds004660) DOI: [https://doi.org/10.18112/openneuro.ds004660.v1.0.2](https://doi.org/10.18112/openneuro.ds004660.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004660 >>> dataset = DS004660(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TNO']* ### *class* eegdash.dataset.DS004661(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ANDI * **Study:** `ds004661` (OpenNeuro) * **Author (year):** `Johnson2023_ANDI` * **Canonical:** `ANDI` Also importable as: `DS004661`, `Johnson2023_ANDI`, `ANDI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004661](https://openneuro.org/datasets/ds004661) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004661](https://nemar.org/dataexplorer/detail?dataset_id=ds004661) DOI: [https://doi.org/10.18112/openneuro.ds004661.v1.1.0](https://doi.org/10.18112/openneuro.ds004661.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004661 >>> dataset = DS004661(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ANDI']* ### *class* eegdash.dataset.DS004696(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAPwave_bids * **Study:** `ds004696` (OpenNeuro) * **Author (year):** `Valencia2023` * **Canonical:** — Also importable as: `DS004696`, `Valencia2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004696](https://openneuro.org/datasets/ds004696) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004696](https://nemar.org/dataexplorer/detail?dataset_id=ds004696) DOI: [https://doi.org/10.18112/openneuro.ds004696.v1.0.1](https://doi.org/10.18112/openneuro.ds004696.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004696 >>> dataset = DS004696(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004703(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sEEG Passive listening to natural speech * **Study:** `ds004703` (OpenNeuro) * **Author (year):** `Mai2023` * **Canonical:** — Also importable as: `DS004703`, `Mai2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 10; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004703](https://openneuro.org/datasets/ds004703) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004703](https://nemar.org/dataexplorer/detail?dataset_id=ds004703) DOI: [https://doi.org/10.18112/openneuro.ds004703.v1.1.0](https://doi.org/10.18112/openneuro.ds004703.v1.1.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004703 >>> dataset = DS004703(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004706(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial memory and non-invasive closed-loop stimulus timing * **Study:** `ds004706` (OpenNeuro) * **Author (year):** `Rudoler2023` * **Canonical:** — Also importable as: `DS004706`, `Rudoler2023`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 34; recordings: 298; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004706](https://openneuro.org/datasets/ds004706) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004706](https://nemar.org/dataexplorer/detail?dataset_id=ds004706) DOI: [https://doi.org/10.18112/openneuro.ds004706.v1.0.0](https://doi.org/10.18112/openneuro.ds004706.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004706 >>> dataset = DS004706(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004718(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince Hong Kong: Naturalistic fMRI and EEG dataset from older Cantonese speakers * **Study:** `ds004718` (OpenNeuro) * **Author (year):** `Momenian2023` * **Canonical:** — Also importable as: `DS004718`, `Momenian2023`. Modality: `eeg`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004718](https://openneuro.org/datasets/ds004718) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004718](https://nemar.org/dataexplorer/detail?dataset_id=ds004718) DOI: [https://doi.org/10.18112/openneuro.ds004718.v1.1.2](https://doi.org/10.18112/openneuro.ds004718.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004718 >>> dataset = DS004718(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004738(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) sfb_meg_phantom (B04/C01) * **Study:** `ds004738` (OpenNeuro) * **Author (year):** `Bahners2023` * **Canonical:** — Also importable as: `DS004738`, `Bahners2023`. Modality: `meg`; Experiment type: `Other`; Subject type: `Other`. Subjects: 4; recordings: 25; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004738](https://openneuro.org/datasets/ds004738) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004738](https://nemar.org/dataexplorer/detail?dataset_id=ds004738) DOI: [https://doi.org/10.18112/openneuro.ds004738.v1.0.1](https://doi.org/10.18112/openneuro.ds004738.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004738 >>> dataset = DS004738(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004745(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 8-Channel SSVEP EEG Dataset with Artifact Trials * **Study:** `ds004745` (OpenNeuro) * **Author (year):** `Kumaravel2023` * **Canonical:** — Also importable as: `DS004745`, `Kumaravel2023`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 6; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004745](https://openneuro.org/datasets/ds004745) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004745](https://nemar.org/dataexplorer/detail?dataset_id=ds004745) DOI: [https://doi.org/10.18112/openneuro.ds004745.v1.0.1](https://doi.org/10.18112/openneuro.ds004745.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004745 >>> dataset = DS004745(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task * **Study:** `ds004752` (OpenNeuro) * **Author (year):** `Dimakopoulos2023_intracranial` * **Canonical:** — Also importable as: `DS004752`, `Dimakopoulos2023_intracranial`. Modality: `eeg, ieeg`. Subjects: 15; recordings: 136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004752](https://openneuro.org/datasets/ds004752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004752](https://nemar.org/dataexplorer/detail?dataset_id=ds004752) DOI: [https://doi.org/10.18112/openneuro.ds004752.v1.0.1](https://doi.org/10.18112/openneuro.ds004752.v1.0.1) NEMAR citation count: 4 ### Examples ```pycon >>> from eegdash.dataset import DS004752 >>> dataset = DS004752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004770(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during gameplay * **Study:** `ds004770` (OpenNeuro) * **Author (year):** `Ueda2023` * **Canonical:** — Also importable as: `DS004770`, `Ueda2023`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 10; recordings: 22; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004770](https://openneuro.org/datasets/ds004770) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004770](https://nemar.org/dataexplorer/detail?dataset_id=ds004770) DOI: [https://doi.org/10.18112/openneuro.ds004770.v1.0.0](https://doi.org/10.18112/openneuro.ds004770.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004770 >>> dataset = DS004770(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004771(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG/ERP data from a Python Reading Task * **Study:** `ds004771` (OpenNeuro) * **Author (year):** `Kuo2023` * **Canonical:** — Also importable as: `DS004771`, `Kuo2023`. Modality: `eeg`. 
Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
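Every parameter description above repeats the constraint that `query` "must not contain the key `dataset`". A short sketch of how such a guard might be written — the helper name `validate_user_query` is hypothetical and not part of the eegdash API:

```python
def validate_user_query(query):
    # Reject attempts to override the dataset filter, per the documented
    # constraint; return an empty dict when no filters are given.
    if query is not None and "dataset" in query:
        raise ValueError("query must not contain the key 'dataset'")
    return dict(query) if query else {}

print(validate_user_query({"task": "reading"}))
```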
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004771](https://openneuro.org/datasets/ds004771) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004771](https://nemar.org/dataexplorer/detail?dataset_id=ds004771) DOI: [https://doi.org/10.18112/openneuro.ds004771.v1.0.0](https://doi.org/10.18112/openneuro.ds004771.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004771 >>> dataset = DS004771(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004774(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Automatic Evoked Response Detection (ER-Detect) dataset * **Study:** `ds004774` (OpenNeuro) * **Author (year):** `Boom2023` * **Canonical:** `ERDetect`, `ER_Detect` Also importable as: `DS004774`, `Boom2023`, `ERDetect`, `ER_Detect`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 14; recordings: 14; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004774](https://openneuro.org/datasets/ds004774) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004774](https://nemar.org/dataexplorer/detail?dataset_id=ds004774) DOI: [https://doi.org/10.18112/openneuro.ds004774.v1.0.0](https://doi.org/10.18112/openneuro.ds004774.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004774 >>> dataset = DS004774(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ERDetect', 'ER_Detect']* ### *class* eegdash.dataset.DS004784(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Phantom EEG Dataset with Motion, Muscle, and Eye Artifacts and Example Scripts * **Study:** `ds004784` (OpenNeuro) * **Author (year):** `Downey2023` * **Canonical:** — Also importable as: `DS004784`, `Downey2023`. Modality: `eeg`. Subjects: 1; recordings: 6; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004784](https://openneuro.org/datasets/ds004784) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004784](https://nemar.org/dataexplorer/detail?dataset_id=ds004784) DOI: [https://doi.org/10.18112/openneuro.ds004784.v1.0.4](https://doi.org/10.18112/openneuro.ds004784.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004784 >>> dataset = DS004784(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004785(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data for paper titled - Precise cortical contributions to feedback sensorimotor control during reactive balance * **Study:** `ds004785` (OpenNeuro) * **Author (year):** `Boebinger2023` * **Canonical:** — Also importable as: `DS004785`, `Boebinger2023`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004785](https://openneuro.org/datasets/ds004785) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004785](https://nemar.org/dataexplorer/detail?dataset_id=ds004785) DOI: [https://doi.org/10.18112/openneuro.ds004785.v1.0.1](https://doi.org/10.18112/openneuro.ds004785.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004785 >>> dataset = DS004785(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004789(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Free Recall of Word Lists * **Study:** `ds004789` (OpenNeuro) * **Author (year):** `Herrema2023_Delayed_Free_Recall` * **Canonical:** — Also importable as: `DS004789`, `Herrema2023_Delayed_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 273; recordings: 983; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004789](https://openneuro.org/datasets/ds004789) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004789](https://nemar.org/dataexplorer/detail?dataset_id=ds004789) DOI: [https://doi.org/10.18112/openneuro.ds004789.v3.1.0](https://doi.org/10.18112/openneuro.ds004789.v3.1.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004789 >>> dataset = DS004789(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004796(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Polish Electroencephalography, Alzheimer’s Risk-genes, Lifestyle and Neuroimaging (PEARL-Neuro) Database * **Study:** `ds004796` (OpenNeuro) * **Author (year):** `Patrycja2023_Polish` * **Canonical:** `PEARLNeuro` Also importable as: `DS004796`, `Patrycja2023_Polish`, `PEARLNeuro`. Modality: `eeg`. Subjects: 79; recordings: 235; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004796](https://openneuro.org/datasets/ds004796) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004796](https://nemar.org/dataexplorer/detail?dataset_id=ds004796) DOI: [https://doi.org/10.18112/openneuro.ds004796.v1.1.0](https://doi.org/10.18112/openneuro.ds004796.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004796 >>> dataset = DS004796(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PEARLNeuro']* ### *class* eegdash.dataset.DS004802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Pilot data for Loneliness in the Brain: Distinguishing Between Hypersensitivity and Hyperalertness * **Study:** `ds004802` (OpenNeuro) * **Author (year):** `Bathelt2023` * **Canonical:** — Also importable as: `DS004802`, `Bathelt2023`. Modality: `eeg`. Subjects: 39; recordings: 79; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004802](https://openneuro.org/datasets/ds004802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004802](https://nemar.org/dataexplorer/detail?dataset_id=ds004802) DOI: [https://doi.org/10.18112/openneuro.ds004802.v1.0.0](https://doi.org/10.18112/openneuro.ds004802.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004802 >>> dataset = DS004802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004809(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall: Delayed Free Recall of Word Lists Organized by Semantic Categories * **Study:** `ds004809` (OpenNeuro) * **Author (year):** `Herrema2023_Categorized_Free_Recall` * **Canonical:** `catFR_Categorized_Free_Recall`, `CatFR` Also importable as: `DS004809`, `Herrema2023_Categorized_Free_Recall`, `catFR_Categorized_Free_Recall`, `CatFR`. 
Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 252; recordings: 889; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
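The Notes above describe how an extra `query` is ANDed with the class's fixed dataset filter. A minimal sketch of that composition, with no network or library calls: the field names `task` and `subject` are illustrative (the valid set is whatever `ALLOWED_QUERY_FIELDS` contains), and the plain dict union below is an assumption that models, not reproduces, the library's internal merge.

```python
# Hypothetical extra filter a caller might pass as `query=`.
# MongoDB-style operators such as $in are supported per the Notes.
extra = {"task": "FreeRecall", "subject": {"$in": ["sub-01", "sub-02"]}}

# The constructor rejects a `dataset` key in `query` because the class
# supplies its own; a simple union models the merged filter.
assert "dataset" not in extra
merged = {"dataset": "ds004809", **extra}
print(merged["dataset"])  # -> ds004809
```

Because the class pins `dataset`, a user-supplied `query` can only narrow the selection, never redirect it to another dataset.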
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004809](https://openneuro.org/datasets/ds004809) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004809](https://nemar.org/dataexplorer/detail?dataset_id=ds004809) DOI: [https://doi.org/10.18112/openneuro.ds004809.v2.2.0](https://doi.org/10.18112/openneuro.ds004809.v2.2.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004809 >>> dataset = DS004809(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_Categorized_Free_Recall', 'CatFR']* ### *class* eegdash.dataset.DS004816(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp1 * **Study:** `ds004816` (OpenNeuro) * **Author (year):** `Grootswagers2023_E1` * **Canonical:** — Also importable as: `DS004816`, `Grootswagers2023_E1`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004816](https://openneuro.org/datasets/ds004816) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004816](https://nemar.org/dataexplorer/detail?dataset_id=ds004816) DOI: [https://doi.org/10.18112/openneuro.ds004816.v1.0.0](https://doi.org/10.18112/openneuro.ds004816.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004816 >>> dataset = DS004816(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-attention-rsvp-exp2 * **Study:** `ds004817` (OpenNeuro) * **Author (year):** `Grootswagers2023_E2` * **Canonical:** — Also importable as: `DS004817`, `Grootswagers2023_E2`. Modality: `eeg`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004817](https://openneuro.org/datasets/ds004817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004817](https://nemar.org/dataexplorer/detail?dataset_id=ds004817) DOI: [https://doi.org/10.18112/openneuro.ds004817.v1.0.0](https://doi.org/10.18112/openneuro.ds004817.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004817 >>> dataset = DS004817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004819(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flexible, Scalable, High Channel Count Stereo-Electrode for Recording in the Human Brain * **Study:** `ds004819` (OpenNeuro) * **Author (year):** `Lee2023` * **Canonical:** — Also importable as: `DS004819`, `Lee2023`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 1; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004819](https://openneuro.org/datasets/ds004819) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004819](https://nemar.org/dataexplorer/detail?dataset_id=ds004819) DOI: [https://doi.org/10.18112/openneuro.ds004819.v1.0.0](https://doi.org/10.18112/openneuro.ds004819.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004819 >>> dataset = DS004819(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004830(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Attention Decoding using fNIRS During Complex Scene Analysis * **Study:** `ds004830` (OpenNeuro) * **Author (year):** `Ning2023` * **Canonical:** `Ning2024` Also importable as: `DS004830`, `Ning2023`, `Ning2024`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004830](https://openneuro.org/datasets/ds004830) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004830](https://nemar.org/dataexplorer/detail?dataset_id=ds004830) DOI: [https://doi.org/10.18112/openneuro.ds004830.v2.0.0](https://doi.org/10.18112/openneuro.ds004830.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004830 >>> dataset = DS004830(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ning2024']* ### *class* eegdash.dataset.DS004837(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Magnetoencephalographic (MEG) Pitch and Duration Mismatch Negativity (MMN) in First-Episode Psychosis * **Study:** `ds004837` (OpenNeuro) * **Author (year):** `LopezCaballero2023` * **Canonical:** — Also importable as: `DS004837`, `LopezCaballero2023`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Schizophrenia/Psychosis`. Subjects: 60; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004837](https://openneuro.org/datasets/ds004837) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004837](https://nemar.org/dataexplorer/detail?dataset_id=ds004837) DOI: [https://doi.org/10.18112/openneuro.ds004837.v1.0.2](https://doi.org/10.18112/openneuro.ds004837.v1.0.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004837 >>> dataset = DS004837(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of electrophysiological signals (EEG, ECG, EMG) during Music therapy with adult burn patients in the Intensive Care Unit. * **Study:** `ds004840` (OpenNeuro) * **Author (year):** `CordobaSilva2023` * **Canonical:** — Also importable as: `DS004840`, `CordobaSilva2023`. Modality: `eeg`. Subjects: 9; recordings: 51; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004840](https://openneuro.org/datasets/ds004840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004840](https://nemar.org/dataexplorer/detail?dataset_id=ds004840) DOI: [https://doi.org/10.18112/openneuro.ds004840.v1.0.1](https://doi.org/10.18112/openneuro.ds004840.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004840 >>> dataset = DS004840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX14 * **Study:** `ds004841` (OpenNeuro) * **Author (year):** `Larkin2023_TX14` * **Canonical:** `TX14` Also importable as: `DS004841`, `Larkin2023_TX14`, `TX14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 147; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004841](https://openneuro.org/datasets/ds004841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004841](https://nemar.org/dataexplorer/detail?dataset_id=ds004841) DOI: [https://doi.org/10.18112/openneuro.ds004841.v1.0.1](https://doi.org/10.18112/openneuro.ds004841.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004841 >>> dataset = DS004841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX14']* ### *class* eegdash.dataset.DS004842(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX15 * **Study:** `ds004842` (OpenNeuro) * **Author (year):** `Larkin2023_TX15` * **Canonical:** `TX15` Also importable as: `DS004842`, `Larkin2023_TX15`, `TX15`. Modality: `eeg`. Subjects: 14; recordings: 102; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004842](https://openneuro.org/datasets/ds004842) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004842](https://nemar.org/dataexplorer/detail?dataset_id=ds004842) DOI: [https://doi.org/10.18112/openneuro.ds004842.v1.0.0](https://doi.org/10.18112/openneuro.ds004842.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004842 >>> dataset = DS004842(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX15']* ### *class* eegdash.dataset.DS004843(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T16 * **Study:** `ds004843` (OpenNeuro) * **Author (year):** `Johnson2023_T16` * **Canonical:** — Also importable as: `DS004843`, `Johnson2023_T16`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 92; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
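The `data_dir` attribute documented above follows a simple convention, `cache_dir / dataset_id`. A minimal `pathlib` sketch (directory names are illustrative; nothing is downloaded or created here):

```python
from pathlib import Path

cache_dir = Path("./data")   # the cache_dir passed to the constructor
dataset_id = "ds004843"      # the OpenNeuro accession for this class

# Per the attribute docs, the local cache lives at cache_dir / dataset_id.
data_dir = cache_dir / dataset_id
print(data_dir.name)  # -> ds004843
```

Knowing this layout lets you point other tools (e.g. a BIDS reader) at the cached copy, or pre-seed the cache before running offline.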
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004843](https://openneuro.org/datasets/ds004843) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004843](https://nemar.org/dataexplorer/detail?dataset_id=ds004843) DOI: [https://doi.org/10.18112/openneuro.ds004843.v1.0.0](https://doi.org/10.18112/openneuro.ds004843.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004843 >>> dataset = DS004843(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004844(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) T22 * **Study:** `ds004844` (OpenNeuro) * **Author (year):** `Metcalfe2023_T22` * **Canonical:** — Also importable as: `DS004844`, `Metcalfe2023_T22`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 17; recordings: 68; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004844](https://openneuro.org/datasets/ds004844) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004844](https://nemar.org/dataexplorer/detail?dataset_id=ds004844) DOI: [https://doi.org/10.18112/openneuro.ds004844.v1.0.0](https://doi.org/10.18112/openneuro.ds004844.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004844 >>> dataset = DS004844(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004849(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STRONG * **Study:** `ds004849` (OpenNeuro) * **Author (year):** `Johnson2023_STRONG` * **Canonical:** `STRONG` Also importable as: `DS004849`, `Johnson2023_STRONG`, `STRONG`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004849](https://openneuro.org/datasets/ds004849) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004849](https://nemar.org/dataexplorer/detail?dataset_id=ds004849) DOI: [https://doi.org/10.18112/openneuro.ds004849.v1.0.0](https://doi.org/10.18112/openneuro.ds004849.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004849 >>> dataset = DS004849(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['STRONG']* ### *class* eegdash.dataset.DS004850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ODE * **Study:** `ds004850` (OpenNeuro) * **Author (year):** `Johnson2023_ODE` * **Canonical:** `Johnson2024` Also importable as: `DS004850`, `Johnson2023_ODE`, `Johnson2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004850](https://openneuro.org/datasets/ds004850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004850](https://nemar.org/dataexplorer/detail?dataset_id=ds004850) DOI: [https://doi.org/10.18112/openneuro.ds004850.v1.0.0](https://doi.org/10.18112/openneuro.ds004850.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004850 >>> dataset = DS004850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Johnson2024']* ### *class* eegdash.dataset.DS004851(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HID * **Study:** `ds004851` (OpenNeuro) * **Author (year):** `Johnson2023_HID` * **Canonical:** `HID` Also importable as: `DS004851`, `Johnson2023_HID`, `HID`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 66; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004851](https://openneuro.org/datasets/ds004851) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004851](https://nemar.org/dataexplorer/detail?dataset_id=ds004851) DOI: [https://doi.org/10.18112/openneuro.ds004851.v2.1.0](https://doi.org/10.18112/openneuro.ds004851.v2.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004851 >>> dataset = DS004851(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HID']* ### *class* eegdash.dataset.DS004852(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InsurgentCivilian * **Study:** `ds004852` (OpenNeuro) * **Author (year):** `Johnson2023_InsurgentCivilian` * **Canonical:** `Johnson2025` Also importable as: `DS004852`, `Johnson2023_InsurgentCivilian`, `Johnson2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004852](https://openneuro.org/datasets/ds004852) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004852](https://nemar.org/dataexplorer/detail?dataset_id=ds004852) DOI: [https://doi.org/10.18112/openneuro.ds004852.v1.0.0](https://doi.org/10.18112/openneuro.ds004852.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004852 >>> dataset = DS004852(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Johnson2025']* ### *class* eegdash.dataset.DS004853(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX17 * **Study:** `ds004853` (OpenNeuro) * **Author (year):** `Johnson2023_TX17` * **Canonical:** — Also importable as: `DS004853`, `Johnson2023_TX17`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004853](https://openneuro.org/datasets/ds004853) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004853](https://nemar.org/dataexplorer/detail?dataset_id=ds004853) DOI: [https://doi.org/10.18112/openneuro.ds004853.v1.0.0](https://doi.org/10.18112/openneuro.ds004853.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004853 >>> dataset = DS004853(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004854(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TX18 * **Study:** `ds004854` (OpenNeuro) * **Author (year):** `Johnson2023_TX18` * **Canonical:** `TX18` Also importable as: `DS004854`, `Johnson2023_TX18`, `TX18`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004854](https://openneuro.org/datasets/ds004854) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004854](https://nemar.org/dataexplorer/detail?dataset_id=ds004854) DOI: [https://doi.org/10.18112/openneuro.ds004854.v1.0.0](https://doi.org/10.18112/openneuro.ds004854.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004854 >>> dataset = DS004854(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TX18']* ### *class* eegdash.dataset.DS004855(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FT * **Study:** `ds004855` (OpenNeuro) * **Author (year):** `Johnson2023_FT` * **Canonical:** — Also importable as: `DS004855`, `Johnson2023_FT`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004855](https://openneuro.org/datasets/ds004855) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004855](https://nemar.org/dataexplorer/detail?dataset_id=ds004855) DOI: [https://doi.org/10.18112/openneuro.ds004855.v1.0.0](https://doi.org/10.18112/openneuro.ds004855.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004855 >>> dataset = DS004855(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004859(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG on children during Stroop task * **Study:** `ds004859` (OpenNeuro) * **Author (year):** `Sakakura2023_children_Stroop` * **Canonical:** `Sakakura2024` Also importable as: `DS004859`, `Sakakura2023_children_Stroop`, `Sakakura2024`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 7; recordings: 9; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
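The `query` filtering described in the Notes can be sketched as follows. This is a minimal, offline sketch: the field names (`subject`, `task`) and their values are illustrative assumptions and must belong to `ALLOWED_QUERY_FIELDS`; the constructor call itself needs network access, so it is shown commented out.

```python
# Hypothetical sketch: a MongoDB-style filter for DS004859.
# The field names ("subject", "task") are assumptions for illustration;
# only keys listed in ALLOWED_QUERY_FIELDS are accepted, and "dataset"
# must not appear -- the class ANDs this filter with its own
# {"dataset": "ds004859"} selection.
query = {"subject": {"$in": ["01", "02"]}, "task": "stroop"}

# "dataset" is reserved; passing it in `query` is rejected.
assert "dataset" not in query

# Constructing the dataset downloads metadata, so it is left commented:
# from eegdash.dataset import DS004859
# dataset = DS004859(cache_dir="./data", query=query)
```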
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004859](https://openneuro.org/datasets/ds004859) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004859](https://nemar.org/dataexplorer/detail?dataset_id=ds004859) DOI: [https://doi.org/10.18112/openneuro.ds004859.v1.0.0](https://doi.org/10.18112/openneuro.ds004859.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004859 >>> dataset = DS004859(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sakakura2024']* ### *class* eegdash.dataset.DS004860(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Investigating the cognitive conflict triggered by moral judgment of accidental harm: an event-related potentials study * **Study:** `ds004860` (OpenNeuro) * **Author (year):** `Schwartz2023` * **Canonical:** — Also importable as: `DS004860`, `Schwartz2023`. Modality: `eeg`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`.
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004860](https://openneuro.org/datasets/ds004860) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004860](https://nemar.org/dataexplorer/detail?dataset_id=ds004860) DOI: [https://doi.org/10.18112/openneuro.ds004860.v1.0.0](https://doi.org/10.18112/openneuro.ds004860.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004860 >>> dataset = DS004860(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004865(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) pyFR: Delayed Free Recall of Word Lists, Preliminary Cognitive Electrophysiology Study * **Study:** `ds004865` (OpenNeuro) * **Author (year):** `Herrema2023_pyFR_Delayed_Free` * **Canonical:** `pyFR` Also importable as: `DS004865`, `Herrema2023_pyFR_Delayed_Free`, `pyFR`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 42; recordings: 172; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004865](https://openneuro.org/datasets/ds004865) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004865](https://nemar.org/dataexplorer/detail?dataset_id=ds004865) DOI: [https://doi.org/10.18112/openneuro.ds004865.v2.0.1](https://doi.org/10.18112/openneuro.ds004865.v2.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004865 >>> dataset = DS004865(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['pyFR']* ### *class* eegdash.dataset.DS004883(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Registerd Report of ERN During Three Versions of a Flanker Task * **Study:** `ds004883` (OpenNeuro) * **Author (year):** `Clayson2023_Registerd` * **Canonical:** — Also importable as: `DS004883`, `Clayson2023_Registerd`. Modality: `eeg`. Subjects: 172; recordings: 516; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004883](https://openneuro.org/datasets/ds004883) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004883](https://nemar.org/dataexplorer/detail?dataset_id=ds004883) DOI: [https://doi.org/10.18112/openneuro.ds004883.v1.0.0](https://doi.org/10.18112/openneuro.ds004883.v1.0.0) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004883 >>> dataset = DS004883(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Resting-state EEG Dataset for Sleep Deprivation * **Study:** `ds004902` (OpenNeuro) * **Author (year):** `Xiang2023` * **Canonical:** — Also importable as: `DS004902`, `Xiang2023`. Modality: `eeg`. Subjects: 71; recordings: 218; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004902](https://openneuro.org/datasets/ds004902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004902](https://nemar.org/dataexplorer/detail?dataset_id=ds004902) DOI: [https://doi.org/10.18112/openneuro.ds004902.v1.0.8](https://doi.org/10.18112/openneuro.ds004902.v1.0.8) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS004902 >>> dataset = DS004902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004917(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Probability Decision-making Task with ambiguity * **Study:** `ds004917` (OpenNeuro) * **Author (year):** `FigueroaVargas2024` * **Canonical:** — Also importable as: `DS004917`, `FigueroaVargas2024`. Modality: `eeg`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004917](https://openneuro.org/datasets/ds004917) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004917](https://nemar.org/dataexplorer/detail?dataset_id=ds004917) DOI: [https://doi.org/10.18112/openneuro.ds004917.v1.0.1](https://doi.org/10.18112/openneuro.ds004917.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004917 >>> dataset = DS004917(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD * **Study:** `ds004929` (OpenNeuro) * **Author (year):** `Gao2024` * **Canonical:** — Also importable as: `DS004929`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004929](https://openneuro.org/datasets/ds004929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004929](https://nemar.org/dataexplorer/detail?dataset_id=ds004929) DOI: [https://doi.org/10.18112/openneuro.ds004929.v1.0.0](https://doi.org/10.18112/openneuro.ds004929.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS004929 >>> dataset = DS004929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neurophysiological measures of covert semantic processing in neurotypical adolescents actively ignoring spoken sentence inputs: A high-density event-related potential (ERP) study. * **Study:** `ds004940` (OpenNeuro) * **Author (year):** `Toffolo2024` * **Canonical:** — Also importable as: `DS004940`, `Toffolo2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 22; recordings: 48; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004940](https://openneuro.org/datasets/ds004940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004940](https://nemar.org/dataexplorer/detail?dataset_id=ds004940) DOI: [https://doi.org/10.18112/openneuro.ds004940.v1.0.1](https://doi.org/10.18112/openneuro.ds004940.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004940 >>> dataset = DS004940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004942(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpatialMemory * **Study:** `ds004942` (OpenNeuro) * **Author (year):** `Kieffaber2024` * **Canonical:** — Also importable as: `DS004942`, `Kieffaber2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
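The `data_dir` attribute documented above resolves to `cache_dir / dataset_id`. A minimal sketch of that layout, using DS004942 and an illustrative cache path:

```python
from pathlib import Path

# data_dir is documented as cache_dir / dataset_id; mirror that rule here.
cache_dir = Path("./data")   # illustrative cache location
dataset_id = "ds004942"
data_dir = cache_dir / dataset_id

# Recording files for DS004942 would be cached under this directory.
print(data_dir.as_posix())
```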
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004942](https://openneuro.org/datasets/ds004942) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004942](https://nemar.org/dataexplorer/detail?dataset_id=ds004942) DOI: [https://doi.org/10.18112/openneuro.ds004942.v1.0.0](https://doi.org/10.18112/openneuro.ds004942.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004942 >>> dataset = DS004942(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004944(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of BCI2000-compatible intraoperative ECoG with neuromorphic encoding * **Study:** `ds004944` (OpenNeuro) * **Author (year):** `Costa2024` * **Canonical:** `BCI2000_intraop` Also importable as: `DS004944`, `Costa2024`, `BCI2000_intraop`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 22; recordings: 44; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004944](https://openneuro.org/datasets/ds004944) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004944](https://nemar.org/dataexplorer/detail?dataset_id=ds004944) DOI: [https://doi.org/10.18112/openneuro.ds004944.v1.1.0](https://doi.org/10.18112/openneuro.ds004944.v1.1.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004944 >>> dataset = DS004944(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCI2000_intraop']* ### *class* eegdash.dataset.DS004951(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Braille letters - EEG * **Study:** `ds004951` (OpenNeuro) * **Author (year):** `Haupt2024_Braille` * **Canonical:** `Haupt2025` Also importable as: `DS004951`, `Haupt2024_Braille`, `Haupt2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Other`. Subjects: 11; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004951](https://openneuro.org/datasets/ds004951) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004951](https://nemar.org/dataexplorer/detail?dataset_id=ds004951) DOI: [https://doi.org/10.18112/openneuro.ds004951.v1.0.0](https://doi.org/10.18112/openneuro.ds004951.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004951 >>> dataset = DS004951(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Haupt2025']* ### *class* eegdash.dataset.DS004952(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding * **Study:** `ds004952` (OpenNeuro) * **Author (year):** `Mou2024` * **Canonical:** — Also importable as: `DS004952`, `Mou2024`. Modality: `eeg`. Subjects: 10; recordings: 245; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004952](https://openneuro.org/datasets/ds004952) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004952](https://nemar.org/dataexplorer/detail?dataset_id=ds004952) DOI: [https://doi.org/10.18112/openneuro.ds004952.v1.2.2](https://doi.org/10.18112/openneuro.ds004952.v1.2.2) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004952 >>> dataset = DS004952(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004973(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) An fNIRS dataset for driving risk cognition of passengers in highly automated driving scenarios * **Study:** `ds004973` (OpenNeuro) * **Author (year):** `Zhang2024_driving_risk_cognition` * **Canonical:** — Also importable as: `DS004973`, `Zhang2024_driving_risk_cognition`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 222; tasks: 12. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004973](https://openneuro.org/datasets/ds004973) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004973](https://nemar.org/dataexplorer/detail?dataset_id=ds004973) DOI: [https://doi.org/10.18112/openneuro.ds004973.v1.0.1](https://doi.org/10.18112/openneuro.ds004973.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS004973 >>> dataset = DS004973(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004977(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CARLA: Adjusted common average referencing for cortico-cortical evoked potential data * **Study:** `ds004977` (OpenNeuro) * **Author (year):** `Huang2024` * **Canonical:** `CARLA` Also importable as: `DS004977`, `Huang2024`, `CARLA`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 4; recordings: 6; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004977](https://openneuro.org/datasets/ds004977) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004977](https://nemar.org/dataexplorer/detail?dataset_id=ds004977) DOI: [https://doi.org/10.18112/openneuro.ds004977.v1.2.0](https://doi.org/10.18112/openneuro.ds004977.v1.2.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS004977 >>> dataset = DS004977(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CARLA']* ### *class* eegdash.dataset.DS004980(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data set for a architectural affordances task * **Study:** `ds004980` (OpenNeuro) * **Author (year):** `Wang2024_architectural_affordances` * **Canonical:** — Also importable as: `DS004980`, `Wang2024_architectural_affordances`. Modality: `eeg`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004980](https://openneuro.org/datasets/ds004980) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004980](https://nemar.org/dataexplorer/detail?dataset_id=ds004980) DOI: [https://doi.org/10.18112/openneuro.ds004980.v1.0.0](https://doi.org/10.18112/openneuro.ds004980.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004980 >>> dataset = DS004980(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS004993(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) WIRED ICM Sample Dataset - Workshop on Intracranial Recordings in Humans, Epilepsy, DBS * **Study:** `ds004993` (OpenNeuro) * **Author (year):** `Hamilton2024` * **Canonical:** `WIRED_ICM` Also importable as: `DS004993`, `Hamilton2024`, `WIRED_ICM`. 
Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 3; recordings: 3; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004993](https://openneuro.org/datasets/ds004993) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004993](https://nemar.org/dataexplorer/detail?dataset_id=ds004993) DOI: [https://doi.org/10.18112/openneuro.ds004993.v1.1.2](https://doi.org/10.18112/openneuro.ds004993.v1.1.2) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS004993 >>> dataset = DS004993(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WIRED_ICM']* ### *class* eegdash.dataset.DS004995(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Time-Course of Food Representation in the Human Brain * **Study:** `ds004995` (OpenNeuro) * **Author (year):** `Moerel2024` * **Canonical:** `Moerel2023` Also importable as: `DS004995`, `Moerel2024`, `Moerel2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004995](https://openneuro.org/datasets/ds004995) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004995](https://nemar.org/dataexplorer/detail?dataset_id=ds004995) DOI: [https://doi.org/10.18112/openneuro.ds004995.v1.0.2](https://doi.org/10.18112/openneuro.ds004995.v1.0.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004995 >>> dataset = DS004995(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moerel2023']* ### *class* eegdash.dataset.DS004998(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exploring the electrophysiology of Parkinson’s disease - magnetoencephalography combined with deep brain recordings from the subthalamic nucleus. * **Study:** `ds004998` (OpenNeuro) * **Author (year):** `Rassoulou2024` * **Canonical:** — Also importable as: `DS004998`, `Rassoulou2024`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Parkinson's`. Subjects: 20; recordings: 145; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds004998](https://openneuro.org/datasets/ds004998) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds004998](https://nemar.org/dataexplorer/detail?dataset_id=ds004998) DOI: [https://doi.org/10.18112/openneuro.ds004998.v1.2.2](https://doi.org/10.18112/openneuro.ds004998.v1.2.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS004998 >>> dataset = DS004998(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005007(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming task with questions that begin or end with a wh-interrogative * **Study:** `ds005007` (OpenNeuro) * **Author (year):** `Kitazawa2024` * **Canonical:** `Kitazawa2025` Also importable as: `DS005007`, `Kitazawa2024`, `Kitazawa2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 40; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005007](https://openneuro.org/datasets/ds005007) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005007](https://nemar.org/dataexplorer/detail?dataset_id=ds005007) DOI: [https://doi.org/10.18112/openneuro.ds005007.v1.0.0](https://doi.org/10.18112/openneuro.ds005007.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005007 >>> dataset = DS005007(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kitazawa2025']* ### *class* eegdash.dataset.DS005021(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tilt Illusion by Phase * **Study:** `ds005021` (OpenNeuro) * **Author (year):** `Williams2024` * **Canonical:** — Also importable as: `DS005021`, `Williams2024`. Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005021](https://openneuro.org/datasets/ds005021) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005021](https://nemar.org/dataexplorer/detail?dataset_id=ds005021) DOI: [https://doi.org/10.18112/openneuro.ds005021.v1.2.1](https://doi.org/10.18112/openneuro.ds005021.v1.2.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005021 >>> dataset = DS005021(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comparing P300 Flashing paradigms in online typing with language models * **Study:** `ds005028` (OpenNeuro) * **Author (year):** `Chandravadia2024` * **Canonical:** `Chandravadia2022` Also importable as: `DS005028`, `Chandravadia2024`, `Chandravadia2022`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 11; recordings: 105; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005028](https://openneuro.org/datasets/ds005028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005028](https://nemar.org/dataexplorer/detail?dataset_id=ds005028) DOI: [https://doi.org/10.18112/openneuro.ds005028.v1.0.0](https://doi.org/10.18112/openneuro.ds005028.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005028 >>> dataset = DS005028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chandravadia2022']* ### *class* eegdash.dataset.DS005034(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of theta tACS on working memory * **Study:** `ds005034` (OpenNeuro) * **Author (year):** `Pavlov2024_effect_theta_tACS` * **Canonical:** — Also importable as: `DS005034`, `Pavlov2024_effect_theta_tACS`. Modality: `eeg`. Subjects: 25; recordings: 100; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005034](https://openneuro.org/datasets/ds005034) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005034](https://nemar.org/dataexplorer/detail?dataset_id=ds005034) DOI: [https://doi.org/10.18112/openneuro.ds005034.v1.0.1](https://doi.org/10.18112/openneuro.ds005034.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005034 >>> dataset = DS005034(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005048(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 40Hz Auditory Entrainment * **Study:** `ds005048` (OpenNeuro) * **Author (year):** `Lahijanian2024` * **Canonical:** — Also importable as: `DS005048`, `Lahijanian2024`. Modality: `eeg`. 
Subjects: 35; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005048](https://openneuro.org/datasets/ds005048) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005048](https://nemar.org/dataexplorer/detail?dataset_id=ds005048) DOI: [https://doi.org/10.18112/openneuro.ds005048.v1.0.1](https://doi.org/10.18112/openneuro.ds005048.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005048 >>> dataset = DS005048(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005059(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Paired Associates Learning: Memory for Word Pairs in Cued Recall * **Study:** `ds005059` (OpenNeuro) * **Author (year):** `Herrema2024_Paired` * **Canonical:** `PAL` Also importable as: `DS005059`, `Herrema2024_Paired`, `PAL`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 69; recordings: 282; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005059](https://openneuro.org/datasets/ds005059) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005059](https://nemar.org/dataexplorer/detail?dataset_id=ds005059) DOI: [https://doi.org/10.18112/openneuro.ds005059.v1.0.6](https://doi.org/10.18112/openneuro.ds005059.v1.0.6) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005059 >>> dataset = DS005059(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PAL']* ### *class* eegdash.dataset.DS005065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Heuristics in risky decision-making relate to preferential representation of information MEG data * **Study:** `ds005065` (OpenNeuro) * **Author (year):** `Russek2024` * **Canonical:** — Also importable as: `DS005065`, `Russek2024`. Modality: `meg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 21; recordings: 275; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005065](https://openneuro.org/datasets/ds005065)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005065](https://nemar.org/dataexplorer/detail?dataset_id=ds005065)
DOI: [https://doi.org/10.18112/openneuro.ds005065.v1.0.0](https://doi.org/10.18112/openneuro.ds005065.v1.0.0)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005065
>>> dataset = DS005065(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005079(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

The Effects of Directed Therapeutic Intent on Live and Damaged Cells

* **Study:** `ds005079` (OpenNeuro)
* **Author (year):** `Cohen2024`
* **Canonical:** —

Also importable as: `DS005079`, `Cohen2024`.

Modality: `eeg`. Subjects: 1; recordings: 60; tasks: 15.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005079](https://openneuro.org/datasets/ds005079)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005079](https://nemar.org/dataexplorer/detail?dataset_id=ds005079)
DOI: [https://doi.org/10.18112/openneuro.ds005079.v2.0.0](https://doi.org/10.18112/openneuro.ds005079.v2.0.0)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005079
>>> dataset = DS005079(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005083(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Safety and Accuracy of Stereoelectroencephalography for Pediatric Patients with Prior Craniotomy

* **Study:** `ds005083` (OpenNeuro)
* **Author (year):** `Yang2024`
* **Canonical:** —

Also importable as: `DS005083`, `Yang2024`.

Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 61; recordings: 1357; tasks: 3.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005083](https://openneuro.org/datasets/ds005083)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005083](https://nemar.org/dataexplorer/detail?dataset_id=ds005083)
DOI: [https://doi.org/10.18112/openneuro.ds005083.v1.0.0](https://doi.org/10.18112/openneuro.ds005083.v1.0.0)

### Examples

```pycon
>>> from eegdash.dataset import DS005083
>>> dataset = DS005083(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005087(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

rapid-hemifield-object-eeg

* **Study:** `ds005087` (OpenNeuro)
* **Author (year):** `Robinson2024_rapid`
* **Canonical:** —

Also importable as: `DS005087`, `Robinson2024_rapid`.

Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 60; tasks: 3.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005087](https://openneuro.org/datasets/ds005087)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005087](https://nemar.org/dataexplorer/detail?dataset_id=ds005087)
DOI: [https://doi.org/10.18112/openneuro.ds005087.v1.0.1](https://doi.org/10.18112/openneuro.ds005087.v1.0.1)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005087
>>> dataset = DS005087(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005089(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Proactive selective attention across competition contexts

* **Study:** `ds005089` (OpenNeuro)
* **Author (year):** `AguadoLopez2024`
* **Canonical:** —

Also importable as: `DS005089`, `AguadoLopez2024`.

Modality: `eeg`. Subjects: 36; recordings: 36; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005089](https://openneuro.org/datasets/ds005089)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005089](https://nemar.org/dataexplorer/detail?dataset_id=ds005089)
DOI: [https://doi.org/10.18112/openneuro.ds005089.v1.0.1](https://doi.org/10.18112/openneuro.ds005089.v1.0.1)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005089
>>> dataset = DS005089(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

STERNBERG DIFFICULT

* **Study:** `ds005095` (OpenNeuro)
* **Author (year):** `Zhozhikashvili2024`
* **Canonical:** —

Also importable as: `DS005095`, `Zhozhikashvili2024`.

Modality: `eeg`. Subjects: 48; recordings: 48; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005095](https://openneuro.org/datasets/ds005095)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005095](https://nemar.org/dataexplorer/detail?dataset_id=ds005095)
DOI: [https://doi.org/10.18112/openneuro.ds005095.v1.0.2](https://doi.org/10.18112/openneuro.ds005095.v1.0.2)
NEMAR citation count: 7

### Examples

```pycon
>>> from eegdash.dataset import DS005095
>>> dataset = DS005095(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

200 Objects Infants EEG

* **Study:** `ds005106` (OpenNeuro)
* **Author (year):** `Grootswagers2024`
* **Canonical:** —

Also importable as: `DS005106`, `Grootswagers2024`.

Modality: `eeg`. Subjects: 42; recordings: 42; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005106](https://openneuro.org/datasets/ds005106)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005106](https://nemar.org/dataexplorer/detail?dataset_id=ds005106)
DOI: [https://doi.org/10.18112/openneuro.ds005106.v1.5.0](https://doi.org/10.18112/openneuro.ds005106.v1.5.0)
NEMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS005106
>>> dataset = DS005106(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

FACE-DEC

* **Study:** `ds005107` (OpenNeuro)
* **Author (year):** `Xu2024_DEC`
* **Canonical:** —

Also importable as: `DS005107`, `Xu2024_DEC`.

Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 350; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005107](https://openneuro.org/datasets/ds005107)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005107](https://nemar.org/dataexplorer/detail?dataset_id=ds005107)
DOI: [https://doi.org/10.18112/openneuro.ds005107.v2.0.0](https://doi.org/10.18112/openneuro.ds005107.v2.0.0)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005107
>>> dataset = DS005107(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

EEG: DPX Cog Ctl Task in Acute Mild TBI

* **Study:** `ds005114` (OpenNeuro)
* **Author (year):** `Cavanagh2024`
* **Canonical:** —

Also importable as: `DS005114`, `Cavanagh2024`.

Modality: `eeg`. Subjects: 91; recordings: 223; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005114](https://openneuro.org/datasets/ds005114)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005114](https://nemar.org/dataexplorer/detail?dataset_id=ds005114)
DOI: [https://doi.org/10.18112/openneuro.ds005114.v1.0.0](https://doi.org/10.18112/openneuro.ds005114.v1.0.0)
NEMAR citation count: 0

### Examples

```pycon
>>> from eegdash.dataset import DS005114
>>> dataset = DS005114(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Siefert2024

* **Study:** `ds005121` (OpenNeuro)
* **Author (year):** `Siefert2024`
* **Canonical:** —

Also importable as: `DS005121`, `Siefert2024`.

Modality: `eeg`. Subjects: 34; recordings: 39; tasks: 1.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.

### References

OpenNeuro dataset: [https://openneuro.org/datasets/ds005121](https://openneuro.org/datasets/ds005121)
NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005121](https://nemar.org/dataexplorer/detail?dataset_id=ds005121)
DOI: [https://doi.org/10.18112/openneuro.ds005121.v1.0.2](https://doi.org/10.18112/openneuro.ds005121.v1.0.2)
NEMAR citation count: 1

### Examples

```pycon
>>> from eegdash.dataset import DS005121
>>> dataset = DS005121(cache_dir="./data")
>>> recording = dataset[0]
>>> raw = recording.load()
```

#### canonical_name *= []*

### *class* eegdash.dataset.DS005131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs)

Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset)

Evoked responses to elevated sounds

* **Study:** `ds005131` (OpenNeuro)
* **Author (year):** `Bialas2024`
* **Canonical:** —

Also importable as: `DS005131`, `Bialas2024`.

Modality: `eeg`. Subjects: 58; recordings: 63; tasks: 2.

* **Parameters:**
  * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally.
  * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
  * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
  * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset).

#### data_dir

Local dataset cache directory (`cache_dir / dataset_id`).

* **Type:** Path

#### query

Merged query with the dataset filter applied.

* **Type:** dict

#### records

Metadata records used to build the dataset, if pre-fetched.

* **Type:** list[dict] | None

### Notes

Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata.
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005131](https://openneuro.org/datasets/ds005131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005131](https://nemar.org/dataexplorer/detail?dataset_id=ds005131) DOI: [https://doi.org/10.18112/openneuro.ds005131.v1.0.1](https://doi.org/10.18112/openneuro.ds005131.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005131 >>> dataset = DS005131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulation evoking visual effects * **Study:** `ds005169` (OpenNeuro) * **Author (year):** `Barborica2024` * **Canonical:** — Also importable as: `DS005169`, `Barborica2024`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 20; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005169](https://openneuro.org/datasets/ds005169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005169](https://nemar.org/dataexplorer/detail?dataset_id=ds005169) DOI: [https://doi.org/10.18112/openneuro.ds005169.v1.0.0](https://doi.org/10.18112/openneuro.ds005169.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005169 >>> dataset = DS005169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco * **Study:** `ds005170` (OpenNeuro) * **Author (year):** `Zhang2024_Chisco` * **Canonical:** `Chisco` Also importable as: `DS005170`, `Zhang2024_Chisco`, `Chisco`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 225; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005170](https://openneuro.org/datasets/ds005170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005170](https://nemar.org/dataexplorer/detail?dataset_id=ds005170) DOI: [https://doi.org/10.18112/openneuro.ds005170.v1.1.2](https://doi.org/10.18112/openneuro.ds005170.v1.1.2) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005170 >>> dataset = DS005170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chisco']* ### *class* eegdash.dataset.DS005178(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2023 (EESM23) * **Study:** `ds005178` (OpenNeuro) * **Author (year):** `Tabar2024` * **Canonical:** `EESM23` Also importable as: `DS005178`, `Tabar2024`, `EESM23`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 10; recordings: 140; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005178](https://openneuro.org/datasets/ds005178) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005178](https://nemar.org/dataexplorer/detail?dataset_id=ds005178) DOI: [https://doi.org/10.18112/openneuro.ds005178.v1.0.0](https://doi.org/10.18112/openneuro.ds005178.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005178 >>> dataset = DS005178(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM23']* ### *class* eegdash.dataset.DS005185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG Sleep Monitoring 2019 (EESM19) * **Study:** `ds005185` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Ear_Sleep_Monitoring` * **Canonical:** `EESM19` Also importable as: `DS005185`, `Mikkelsen2024_Ear_Sleep_Monitoring`, `EESM19`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 20; recordings: 356; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005185](https://openneuro.org/datasets/ds005185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005185](https://nemar.org/dataexplorer/detail?dataset_id=ds005185) DOI: [https://doi.org/10.18112/openneuro.ds005185.v1.0.2](https://doi.org/10.18112/openneuro.ds005185.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005185 >>> dataset = DS005185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EESM19']* ### *class* eegdash.dataset.DS005189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Search Superiority Recollection Familiarity * **Study:** `ds005189` (OpenNeuro) * **Author (year):** `Helbing2024` * **Canonical:** — Also importable as: `DS005189`, `Helbing2024`. Modality: `eeg`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005189](https://openneuro.org/datasets/ds005189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005189](https://nemar.org/dataexplorer/detail?dataset_id=ds005189) DOI: [https://doi.org/10.18112/openneuro.ds005189.v1.0.1](https://doi.org/10.18112/openneuro.ds005189.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005189 >>> dataset = DS005189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Surrey cEEGrid sleep data set * **Study:** `ds005207` (OpenNeuro) * **Author (year):** `Mikkelsen2024_Surrey_cEEGrid_sleep` * **Canonical:** `Surrey_cEEGrid_sleep` Also importable as: `DS005207`, `Mikkelsen2024_Surrey_cEEGrid_sleep`, `Surrey_cEEGrid_sleep`. Modality: `eeg`. Subjects: 20; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005207](https://openneuro.org/datasets/ds005207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005207](https://nemar.org/dataexplorer/detail?dataset_id=ds005207) DOI: [https://doi.org/10.18112/openneuro.ds005207.v1.0.0](https://doi.org/10.18112/openneuro.ds005207.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005207 >>> dataset = DS005207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Surrey_cEEGrid_sleep']* ### *class* eegdash.dataset.DS005241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroMorph: A High-Temporal Resolution MEG Dataset for Morpheme-Based Linguistic Analysis * **Study:** `ds005241` (OpenNeuro) * **Author (year):** `Rodriguez2024` * **Canonical:** `NeuroMorph`, `neuromorph` Also importable as: `DS005241`, `Rodriguez2024`, `NeuroMorph`, `neuromorph`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 24; recordings: 117; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005241](https://openneuro.org/datasets/ds005241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005241](https://nemar.org/dataexplorer/detail?dataset_id=ds005241) DOI: [https://doi.org/10.18112/openneuro.ds005241.v1.1.0](https://doi.org/10.18112/openneuro.ds005241.v1.1.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005241 >>> dataset = DS005241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NeuroMorph', 'neuromorph']* ### *class* eegdash.dataset.DS005261(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gloups_MEG * **Study:** `ds005261` (OpenNeuro) * **Author (year):** `Todorovic2024` * **Canonical:** `Todorovic2023` Also importable as: `DS005261`, `Todorovic2024`, `Todorovic2023`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. 
Subjects: 17; recordings: 128; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
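The note above applies to every dataset class: the `query` argument is ANDed with a fixed `{"dataset": ...}` filter that the class supplies for you, which is why `query` must not contain the key `dataset`. A minimal sketch of the merge semantics, using plain dicts (the field name `subject` is an illustrative assumption; valid fields are listed in `ALLOWED_QUERY_FIELDS`, and the actual combination happens inside `EEGDashDataset`):

```python
# Fixed filter applied automatically by the dataset class.
dataset_filter = {"dataset": "ds005261"}

# Additional MongoDB-style filters supplied by the user.
# "subject" is an assumed field name for illustration only.
user_query = {"subject": {"$in": ["01", "02"]}}

# The user query must not set "dataset" itself; the class supplies it.
assert "dataset" not in user_query

# Conceptually, the merged query is the AND of both dicts.
merged = {**dataset_filter, **user_query}
print(merged)  # {'dataset': 'ds005261', 'subject': {'$in': ['01', '02']}}
```

Passing `query={"dataset": ...}` yourself is rejected precisely so this merge cannot silently select a different dataset.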
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005261](https://openneuro.org/datasets/ds005261) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005261](https://nemar.org/dataexplorer/detail?dataset_id=ds005261) DOI: [https://doi.org/10.18112/openneuro.ds005261.v3.0.0](https://doi.org/10.18112/openneuro.ds005261.v3.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005261 >>> dataset = DS005261(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Todorovic2023']* ### *class* eegdash.dataset.DS005262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ArEEG: Arabic Inner Speech EEG dataset * **Study:** `ds005262` (OpenNeuro) * **Author (year):** `Metwalli2024` * **Canonical:** `ArEEG` Also importable as: `DS005262`, `Metwalli2024`, `ArEEG`. Modality: `eeg`. Subjects: 12; recordings: 186; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005262](https://openneuro.org/datasets/ds005262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005262](https://nemar.org/dataexplorer/detail?dataset_id=ds005262) DOI: [https://doi.org/10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005262 >>> dataset = DS005262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ArEEG']* ### *class* eegdash.dataset.DS005273(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural representation of consciously seen and unseen information * **Study:** `ds005273` (OpenNeuro) * **Author (year):** `Esteban2024` * **Canonical:** — Also importable as: `DS005273`, `Esteban2024`. Modality: `eeg`. Subjects: 33; recordings: 33; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005273](https://openneuro.org/datasets/ds005273) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005273](https://nemar.org/dataexplorer/detail?dataset_id=ds005273) DOI: [https://doi.org/10.18112/openneuro.ds005273.v1.0.0](https://doi.org/10.18112/openneuro.ds005273.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005273 >>> dataset = DS005273(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005274(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) UV_EEG * **Study:** `ds005274` (OpenNeuro) * **Author (year):** `Ito2024` * **Canonical:** — Also importable as: `DS005274`, `Ito2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 22; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005274](https://openneuro.org/datasets/ds005274) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005274](https://nemar.org/dataexplorer/detail?dataset_id=ds005274) DOI: [https://doi.org/10.18112/openneuro.ds005274.v1.0.0](https://doi.org/10.18112/openneuro.ds005274.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005274 >>> dataset = DS005274(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005279(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture-Word Interference Dataset * **Study:** `ds005279` (OpenNeuro) * **Author (year):** `Wei2024` * **Canonical:** — Also importable as: `DS005279`, `Wei2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 90; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005279](https://openneuro.org/datasets/ds005279) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005279](https://nemar.org/dataexplorer/detail?dataset_id=ds005279) DOI: [https://doi.org/10.18112/openneuro.ds005279.v1.0.3](https://doi.org/10.18112/openneuro.ds005279.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005279 >>> dataset = DS005279(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005280(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 223 By BP * **Study:** `ds005280` (OpenNeuro) * **Author (year):** `Xiangyue2024_223_BP` * **Canonical:** — Also importable as: `DS005280`, `Xiangyue2024_223_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 223; recordings: 669; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005280](https://openneuro.org/datasets/ds005280) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005280](https://nemar.org/dataexplorer/detail?dataset_id=ds005280) DOI: [https://doi.org/10.18112/openneuro.ds005280.v1.0.0](https://doi.org/10.18112/openneuro.ds005280.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005280 >>> dataset = DS005280(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005284(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 26 By Biosemi * **Study:** `ds005284` (OpenNeuro) * **Author (year):** `Xiangyue2024_26_Biosemi` * **Canonical:** — Also importable as: `DS005284`, `Xiangyue2024_26_Biosemi`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005284](https://openneuro.org/datasets/ds005284) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005284](https://nemar.org/dataexplorer/detail?dataset_id=ds005284) DOI: [https://doi.org/10.18112/openneuro.ds005284.v1.0.0](https://doi.org/10.18112/openneuro.ds005284.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005284 >>> dataset = DS005284(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005285(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By ANT * **Study:** `ds005285` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_ANT` * **Canonical:** — Also importable as: `DS005285`, `Xiangyue2024_29_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005285](https://openneuro.org/datasets/ds005285) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005285](https://nemar.org/dataexplorer/detail?dataset_id=ds005285) DOI: [https://doi.org/10.18112/openneuro.ds005285.v1.0.0](https://doi.org/10.18112/openneuro.ds005285.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005285 >>> dataset = DS005285(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005286(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 30 By ANT * **Study:** `ds005286` (OpenNeuro) * **Author (year):** `Xiangyue2024_30_ANT` * **Canonical:** — Also importable as: `DS005286`, `Xiangyue2024_30_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005286](https://openneuro.org/datasets/ds005286) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005286](https://nemar.org/dataexplorer/detail?dataset_id=ds005286) DOI: [https://doi.org/10.18112/openneuro.ds005286.v1.0.0](https://doi.org/10.18112/openneuro.ds005286.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005286 >>> dataset = DS005286(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005289(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 39 By BP * **Study:** `ds005289` (OpenNeuro) * **Author (year):** `Xiangyue2024_39_BP` * **Canonical:** — Also importable as: `DS005289`, `Xiangyue2024_39_BP`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 39; recordings: 195; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005289](https://openneuro.org/datasets/ds005289) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005289](https://nemar.org/dataexplorer/detail?dataset_id=ds005289) DOI: [https://doi.org/10.18112/openneuro.ds005289.v1.0.0](https://doi.org/10.18112/openneuro.ds005289.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005289 >>> dataset = DS005289(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005291(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 65 By ANT * **Study:** `ds005291` (OpenNeuro) * **Author (year):** `Xiangyue2024_65_ANT` * **Canonical:** — Also importable as: `DS005291`, `Xiangyue2024_65_ANT`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 65; recordings: 65; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005291](https://openneuro.org/datasets/ds005291) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005291](https://nemar.org/dataexplorer/detail?dataset_id=ds005291) DOI: [https://doi.org/10.18112/openneuro.ds005291.v1.0.0](https://doi.org/10.18112/openneuro.ds005291.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005291 >>> dataset = DS005291(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005292(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 142 by Biosemi * **Study:** `ds005292` (OpenNeuro) * **Author (year):** `Xiangyue2024_142_Biosemi` * **Canonical:** — Also importable as: `DS005292`, `Xiangyue2024_142_Biosemi`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 142; recordings: 426; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005292](https://openneuro.org/datasets/ds005292) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005292](https://nemar.org/dataexplorer/detail?dataset_id=ds005292) DOI: [https://doi.org/10.18112/openneuro.ds005292.v1.0.0](https://doi.org/10.18112/openneuro.ds005292.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005292 >>> dataset = DS005292(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005293(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 95 By BP * **Study:** `ds005293` (OpenNeuro) * **Author (year):** `Xiangyue2024_95_BP` * **Canonical:** — Also importable as: `DS005293`, `Xiangyue2024_95_BP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. 
Subjects: 95; recordings: 570; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
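To make the "MongoDB-style filters" wording above concrete, here is a tiny evaluator for the two query shapes used throughout these docs (exact match and `$in`), applied to toy record dicts. This is illustrative only: the real matching runs server-side against the metadata database, and the field names below are assumptions, not a statement of `ALLOWED_QUERY_FIELDS`:

```python
def matches(record: dict, query: dict) -> bool:
    """Return True if `record` satisfies every condition in `query`.

    Supports exact-match conditions and the `$in` operator, the two
    forms shown in the dataset docstrings above.
    """
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True


# Toy metadata records; field names are illustrative assumptions.
records = [
    {"subject": "01", "task": "rest"},
    {"subject": "02", "task": "rest"},
    {"subject": "03", "task": "oddball"},
]

query = {"task": "rest", "subject": {"$in": ["01", "03"]}}
print([r["subject"] for r in records if matches(r, query)])  # ['01']
```

All conditions in a query are ANDed, mirroring how the per-dataset filter is combined with the user-supplied `query`.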
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005293](https://openneuro.org/datasets/ds005293) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005293](https://nemar.org/dataexplorer/detail?dataset_id=ds005293) DOI: [https://doi.org/10.18112/openneuro.ds005293.v1.0.0](https://doi.org/10.18112/openneuro.ds005293.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005293 >>> dataset = DS005293(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005296(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study * **Study:** `ds005296` (OpenNeuro) * **Author (year):** `Emmorey2024` * **Canonical:** — Also importable as: `DS005296`, `Emmorey2024`. Modality: `eeg`. Subjects: 62; recordings: 62; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005296](https://openneuro.org/datasets/ds005296) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005296](https://nemar.org/dataexplorer/detail?dataset_id=ds005296) DOI: [https://doi.org/10.18112/openneuro.ds005296.v1.0.1](https://doi.org/10.18112/openneuro.ds005296.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005296 >>> dataset = DS005296(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005305(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Resting-state Microstates Correlates of Executive Functions * **Study:** `ds005305` (OpenNeuro) * **Author (year):** `Quentin2024` * **Canonical:** — Also importable as: `DS005305`, `Quentin2024`. Modality: `eeg`. Subjects: 165; recordings: 165; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005305](https://openneuro.org/datasets/ds005305) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005305](https://nemar.org/dataexplorer/detail?dataset_id=ds005305) DOI: [https://doi.org/10.18112/openneuro.ds005305.v1.0.1](https://doi.org/10.18112/openneuro.ds005305.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005305 >>> dataset = DS005305(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005307(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Laser-evoked potentials in the human spinal cord and cortex * **Study:** `ds005307` (OpenNeuro) * **Author (year):** `Nierula2024` * **Canonical:** `Nierula2019` Also importable as: `DS005307`, `Nierula2024`, `Nierula2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 7; recordings: 73; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005307](https://openneuro.org/datasets/ds005307) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005307](https://nemar.org/dataexplorer/detail?dataset_id=ds005307) DOI: [https://doi.org/10.18112/openneuro.ds005307.v1.0.1](https://doi.org/10.18112/openneuro.ds005307.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005307 >>> dataset = DS005307(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Nierula2019']* ### *class* eegdash.dataset.DS005340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech * **Study:** `ds005340` (OpenNeuro) * **Author (year):** `Polonenko2024_Fundamental` * **Canonical:** — Also importable as: `DS005340`, `Polonenko2024_Fundamental`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005340](https://openneuro.org/datasets/ds005340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005340](https://nemar.org/dataexplorer/detail?dataset_id=ds005340) DOI: [https://doi.org/10.18112/openneuro.ds005340.v1.0.4](https://doi.org/10.18112/openneuro.ds005340.v1.0.4) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005340 >>> dataset = DS005340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data offline and online during motor imagery for standing and sitting * **Study:** `ds005342` (OpenNeuro) * **Author (year):** `TrianaGuzman2024` * **Canonical:** — Also importable as: `DS005342`, `TrianaGuzman2024`. Modality: `eeg`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005342](https://openneuro.org/datasets/ds005342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005342](https://nemar.org/dataexplorer/detail?dataset_id=ds005342) DOI: [https://doi.org/10.18112/openneuro.ds005342.v1.0.3](https://doi.org/10.18112/openneuro.ds005342.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005342 >>> dataset = DS005342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gaffrey Lab Infant Microstates and Attention * **Study:** `ds005343` (OpenNeuro) * **Author (year):** `Bagdasarov2024` * **Canonical:** — Also importable as: `DS005343`, `Bagdasarov2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Development`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005343](https://openneuro.org/datasets/ds005343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005343](https://nemar.org/dataexplorer/detail?dataset_id=ds005343) DOI: [https://doi.org/10.18112/openneuro.ds005343.v1.0.0](https://doi.org/10.18112/openneuro.ds005343.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005343 >>> dataset = DS005343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Le Petit Prince (LPP) Multi-talker: Naturalistic 7T fMRI and EEG Dataset * **Study:** `ds005345` (OpenNeuro) * **Author (year):** `Ma2024` * **Canonical:** `LPP` Also importable as: `DS005345`, `Ma2024`, `LPP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005345](https://openneuro.org/datasets/ds005345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005345](https://nemar.org/dataexplorer/detail?dataset_id=ds005345) DOI: [https://doi.org/10.18112/openneuro.ds005345.v1.0.1](https://doi.org/10.18112/openneuro.ds005345.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005345 >>> dataset = DS005345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LPP']* ### *class* eegdash.dataset.DS005346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic fMRI and MEG recordings during viewing of a reality TV show * **Study:** `ds005346` (OpenNeuro) * **Author (year):** `Li2024_Naturalistic_fMRI_viewing` * **Canonical:** — Also importable as: `DS005346`, `Li2024_Naturalistic_fMRI_viewing`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. 
Subjects: 30; recordings: 90; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005346](https://openneuro.org/datasets/ds005346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005346](https://nemar.org/dataexplorer/detail?dataset_id=ds005346) DOI: [https://doi.org/10.18112/openneuro.ds005346.v1.0.5](https://doi.org/10.18112/openneuro.ds005346.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS005346 >>> dataset = DS005346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005356(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG: Major Depression & Probabilistic Learning Task * **Study:** `ds005356` (OpenNeuro) * **Author (year):** `DS5356_MajorDepression` * **Canonical:** — Also importable as: `DS005356`, `DS5356_MajorDepression`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Depression`. Subjects: 85; recordings: 116; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005356](https://openneuro.org/datasets/ds005356) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005356](https://nemar.org/dataexplorer/detail?dataset_id=ds005356) DOI: [https://doi.org/10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0) ### Examples ```pycon >>> from eegdash.dataset import DS005356 >>> dataset = DS005356(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005363(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Object recognition in healthy aging (ORHA) - EEG * **Study:** `ds005363` (OpenNeuro) * **Author (year):** `Haupt2024_Object` * **Canonical:** `ORHA` Also importable as: `DS005363`, `Haupt2024_Object`, `ORHA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 43; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005363](https://openneuro.org/datasets/ds005363) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005363](https://nemar.org/dataexplorer/detail?dataset_id=ds005363) DOI: [https://doi.org/10.18112/openneuro.ds005363.v1.0.0](https://doi.org/10.18112/openneuro.ds005363.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005363 >>> dataset = DS005363(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ORHA']* ### *class* eegdash.dataset.DS005383(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TMNRED, A Chinese Language EEG Dataset for Fuzzy Semantic Target Identification in Natural Reading Environments * **Study:** `ds005383` (OpenNeuro) * **Author (year):** `Bai2024` * **Canonical:** `TMNRED` Also importable as: `DS005383`, `Bai2024`, `TMNRED`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005383](https://openneuro.org/datasets/ds005383) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005383](https://nemar.org/dataexplorer/detail?dataset_id=ds005383) DOI: [https://doi.org/10.18112/openneuro.ds005383.v1.0.0](https://doi.org/10.18112/openneuro.ds005383.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005383 >>> dataset = DS005383(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['TMNRED']* ### *class* eegdash.dataset.DS005385(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG data before and after cognitive activity across the adult lifespan and a 5-year follow-up * **Study:** `ds005385` (OpenNeuro) * **Author (year):** `Wascher2024` * **Canonical:** — Also importable as: `DS005385`, `Wascher2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 608; recordings: 3264; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005385](https://openneuro.org/datasets/ds005385) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005385](https://nemar.org/dataexplorer/detail?dataset_id=ds005385) DOI: [https://doi.org/10.18112/openneuro.ds005385.v1.0.3](https://doi.org/10.18112/openneuro.ds005385.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005385 >>> dataset = DS005385(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005397(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Affordances of stairs * **Study:** `ds005397` (OpenNeuro) * **Author (year):** `Hilton2024` * **Canonical:** — Also importable as: `DS005397`, `Hilton2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 26; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005397](https://openneuro.org/datasets/ds005397) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005397](https://nemar.org/dataexplorer/detail?dataset_id=ds005397) DOI: [https://doi.org/10.18112/openneuro.ds005397.v1.0.4](https://doi.org/10.18112/openneuro.ds005397.v1.0.4) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005397 >>> dataset = DS005397(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005398(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Open iEEG Dataset (Pediatric iEEG, Wayne State University and UCLA) * **Study:** `ds005398` (OpenNeuro) * **Author (year):** `Zhang2024_Open_Pediatric_Wayne` * **Canonical:** — Also importable as: `DS005398`, `Zhang2024_Open_Pediatric_Wayne`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 185; recordings: 185; tasks: 1. 
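The local cache layout shared by these dataset classes (`cache_dir / dataset_id`, documented as the `data_dir` attribute) can be sketched with `pathlib`; the paths below are purely illustrative.

```python
from pathlib import Path

# data_dir is documented as cache_dir / dataset_id; this mirrors that
# convention for an example dataset id (path is illustrative only).
cache_dir = Path("./data")
data_dir = cache_dir / "ds005398"
print(data_dir)
```

Passing the same `cache_dir` to several dataset classes therefore keeps each dataset in its own subdirectory.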
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005398](https://openneuro.org/datasets/ds005398) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005398](https://nemar.org/dataexplorer/detail?dataset_id=ds005398) DOI: [https://doi.org/10.18112/openneuro.ds005398.v1.1.1](https://doi.org/10.18112/openneuro.ds005398.v1.1.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005398 >>> dataset = DS005398(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005403(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delayed Auditory Feedback EEG/EGG * **Study:** `ds005403` (OpenNeuro) * **Author (year):** `Veillette2024` * **Canonical:** `Veillette2019` Also importable as: `DS005403`, `Veillette2024`, `Veillette2019`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
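The recording-level metadata mentioned in the Notes (each item is a recording, with metadata exposed via `dataset.description`) invites per-subject sanity checks. The sketch below uses a hand-made stand-in for those metadata rows; the field names (`subject`, `task`, `run`) are assumptions for demonstration, not guaranteed column names.

```python
from collections import Counter

# Illustrative stand-in for recording-level metadata rows, one per
# recording (field names assumed; real rows come from dataset.description).
records = [
    {"subject": "01", "task": "daf", "run": 1},
    {"subject": "01", "task": "daf", "run": 2},
    {"subject": "02", "task": "daf", "run": 1},
]

# Count recordings per subject -- a common first sanity check before
# splitting data for training or cross-validation.
per_subject = Counter(r["subject"] for r in records)
print(per_subject)
```

The same pattern applies to any of the dataset classes on this page once their metadata rows are in hand.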
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005403](https://openneuro.org/datasets/ds005403) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005403](https://nemar.org/dataexplorer/detail?dataset_id=ds005403) DOI: [https://doi.org/10.18112/openneuro.ds005403.v1.0.1](https://doi.org/10.18112/openneuro.ds005403.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005403 >>> dataset = DS005403(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Veillette2019']* ### *class* eegdash.dataset.DS005406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG frequency tagging reveals the integration of dissimilar observed actions * **Study:** `ds005406` (OpenNeuro) * **Author (year):** `Formica2024` * **Canonical:** `Formica2025` Also importable as: `DS005406`, `Formica2024`, `Formica2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005406](https://openneuro.org/datasets/ds005406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005406](https://nemar.org/dataexplorer/detail?dataset_id=ds005406) DOI: [https://doi.org/10.18112/openneuro.ds005406.v1.0.0](https://doi.org/10.18112/openneuro.ds005406.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005406 >>> dataset = DS005406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Formica2025']* ### *class* eegdash.dataset.DS005407(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005407` (OpenNeuro) * **Author (year):** `Polonenko2024_effect` * **Canonical:** — Also importable as: `DS005407`, `Polonenko2024_effect`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005407](https://openneuro.org/datasets/ds005407) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005407](https://nemar.org/dataexplorer/detail?dataset_id=ds005407) DOI: [https://doi.org/10.18112/openneuro.ds005407.v1.0.1](https://doi.org/10.18112/openneuro.ds005407.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005407 >>> dataset = DS005407(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005408(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of speech masking on the subcortical response to speech * **Study:** `ds005408` (OpenNeuro) * **Author (year):** `Polonenko2024_effect_speech` * **Canonical:** — Also importable as: `DS005408`, `Polonenko2024_effect_speech`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 25; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005408](https://openneuro.org/datasets/ds005408) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005408](https://nemar.org/dataexplorer/detail?dataset_id=ds005408) DOI: [https://doi.org/10.18112/openneuro.ds005408.v1.0.0](https://doi.org/10.18112/openneuro.ds005408.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005408 >>> dataset = DS005408(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005410(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Semantic_conditioning * **Study:** `ds005410` (OpenNeuro) * **Author (year):** `Pavlov2024_Semantic_conditioning` * **Canonical:** — Also importable as: `DS005410`, `Pavlov2024_Semantic_conditioning`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 81; recordings: 81; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005410](https://openneuro.org/datasets/ds005410) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005410](https://nemar.org/dataexplorer/detail?dataset_id=ds005410) DOI: [https://doi.org/10.18112/openneuro.ds005410.v1.0.1](https://doi.org/10.18112/openneuro.ds005410.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005410 >>> dataset = DS005410(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005411(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall of Word Lists with Repeated Items * **Study:** `ds005411` (OpenNeuro) * **Author (year):** `Herrema2024_Free` * **Canonical:** — Also importable as: `DS005411`, `Herrema2024_Free`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 47; recordings: 193; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005411](https://openneuro.org/datasets/ds005411) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005411](https://nemar.org/dataexplorer/detail?dataset_id=ds005411) DOI: [https://doi.org/10.18112/openneuro.ds005411.v1.0.0](https://doi.org/10.18112/openneuro.ds005411.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005411 >>> dataset = DS005411(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005415(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Numbers * **Study:** `ds005415` (OpenNeuro) * **Author (year):** `Rockhill2024` * **Canonical:** — Also importable as: `DS005415`, `Rockhill2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Epilepsy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005415](https://openneuro.org/datasets/ds005415) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005415](https://nemar.org/dataexplorer/detail?dataset_id=ds005415) DOI: [https://doi.org/10.18112/openneuro.ds005415.v1.0.0](https://doi.org/10.18112/openneuro.ds005415.v1.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005415 >>> dataset = DS005415(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005416(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Fatigue Characterization of EEG under Mixed Reality Stereo Vision * **Study:** `ds005416` (OpenNeuro) * **Author (year):** `Wu2024` * **Canonical:** — Also importable as: `DS005416`, `Wu2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005416](https://openneuro.org/datasets/ds005416) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005416](https://nemar.org/dataexplorer/detail?dataset_id=ds005416) DOI: [https://doi.org/10.18112/openneuro.ds005416.v1.0.1](https://doi.org/10.18112/openneuro.ds005416.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005416 >>> dataset = DS005416(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting state EEG with closed eyes and open eyes in females from 60 to 80 years old * **Study:** `ds005420` (OpenNeuro) * **Author (year):** `Gama2024` * **Canonical:** `Gama2019` Also importable as: `DS005420`, `Gama2024`, `Gama2019`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. 
Subjects: 37; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005420](https://openneuro.org/datasets/ds005420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005420](https://nemar.org/dataexplorer/detail?dataset_id=ds005420) DOI: [https://doi.org/10.18112/openneuro.ds005420.v1.0.0](https://doi.org/10.18112/openneuro.ds005420.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005420 >>> dataset = DS005420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gama2019']* ### *class* eegdash.dataset.DS005429(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory oddball comparison (Optimum-1, Learning-oddball, and the local–global paradigm) * **Study:** `ds005429` (OpenNeuro) * **Author (year):** `Rutiku2024` * **Canonical:** — Also importable as: `DS005429`, `Rutiku2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 61; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005429](https://openneuro.org/datasets/ds005429) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005429](https://nemar.org/dataexplorer/detail?dataset_id=ds005429) DOI: [https://doi.org/10.18112/openneuro.ds005429.v1.0.0](https://doi.org/10.18112/openneuro.ds005429.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005429 >>> dataset = DS005429(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005448(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) STReEF * **Study:** `ds005448` (OpenNeuro) * **Author (year):** `Jelsma2024` * **Canonical:** `STReEF` Also importable as: `DS005448`, `Jelsma2024`, `STReEF`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 13; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005448](https://openneuro.org/datasets/ds005448) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005448](https://nemar.org/dataexplorer/detail?dataset_id=ds005448) DOI: [https://doi.org/10.18112/openneuro.ds005448.v1.0.0](https://doi.org/10.18112/openneuro.ds005448.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005448 >>> dataset = DS005448(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['STReEF']* ### *class* eegdash.dataset.DS005473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 29 By BP * **Study:** `ds005473` (OpenNeuro) * **Author (year):** `Xiangyue2024_29_BP` * **Canonical:** `Zhao2024` Also importable as: `DS005473`, `Xiangyue2024_29_BP`, `Zhao2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 29; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005473](https://openneuro.org/datasets/ds005473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005473](https://nemar.org/dataexplorer/detail?dataset_id=ds005473) DOI: [https://doi.org/10.18112/openneuro.ds005473.v1.0.0](https://doi.org/10.18112/openneuro.ds005473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005473 >>> dataset = DS005473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhao2024']* ### *class* eegdash.dataset.DS005486(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PREDICT * **Study:** `ds005486` (OpenNeuro) * **Author (year):** `Chowdhury2024` * **Canonical:** — Also importable as: `DS005486`, `Chowdhury2024`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 159; recordings: 445; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005486](https://openneuro.org/datasets/ds005486) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005486](https://nemar.org/dataexplorer/detail?dataset_id=ds005486) DOI: [https://doi.org/10.18112/openneuro.ds005486.v1.0.1](https://doi.org/10.18112/openneuro.ds005486.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005486 >>> dataset = DS005486(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005489(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005489` (OpenNeuro) * **Author (year):** `Herrema2024_Free_Recall` * **Canonical:** — Also importable as: `DS005489`, `Herrema2024_Free_Recall`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 37; recordings: 154; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005489](https://openneuro.org/datasets/ds005489) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005489](https://nemar.org/dataexplorer/detail?dataset_id=ds005489) DOI: [https://doi.org/10.18112/openneuro.ds005489.v1.0.3](https://doi.org/10.18112/openneuro.ds005489.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005489 >>> dataset = DS005489(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005491(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Open-Loop Stimulation at Encoding * **Study:** `ds005491` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized` * **Canonical:** `catFR_open_loop`, `RAM_catFR`, `catFR_stim` Also importable as: `DS005491`, `Herrema2024_Categorized`, `catFR_open_loop`, `RAM_catFR`, `catFR_stim`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 19; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005491](https://openneuro.org/datasets/ds005491) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005491](https://nemar.org/dataexplorer/detail?dataset_id=ds005491) DOI: [https://doi.org/10.18112/openneuro.ds005491.v1.0.0](https://doi.org/10.18112/openneuro.ds005491.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005491 >>> dataset = DS005491(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_open_loop', 'RAM_catFR', 'catFR_stim']* ### *class* eegdash.dataset.DS005494(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cued Recall of Paired Associates with Open-Loop Stimulation at Encoding or Retrieval * **Study:** `ds005494` (OpenNeuro) * **Author (year):** `Herrema2024_Cued` * **Canonical:** `Herrema2024` Also importable as: `DS005494`, `Herrema2024_Cued`, `Herrema2024`. 
Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. Subjects: 20; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
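The Notes describe `query` as MongoDB-style filters over fields in `ALLOWED_QUERY_FIELDS`. To illustrate what that syntax means, here is a toy matcher supporting plain equality plus the `$in` and `$gte` operators; this illustrates the filter syntax only and is not eegdash's implementation, and the field names below are assumptions:

```python
def matches(record: dict, query: dict) -> bool:
    """Toy MongoDB-style matcher: equality plus $in and $gte operators."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"$gte": 64} or {"$in": [...]}.
            if "$in" in cond and value not in cond["$in"]:
                return False
            if "$gte" in cond and (value is None or value < cond["$gte"]):
                return False
        elif value != cond:
            # Bare value means exact equality.
            return False
    return True

records = [
    {"subject": "sub-01", "nchans": 64},
    {"subject": "sub-02", "nchans": 32},
]
hits = [r for r in records if matches(r, {"nchans": {"$gte": 64}})]
print([r["subject"] for r in hits])  # ['sub-01']
```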
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005494](https://openneuro.org/datasets/ds005494) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005494](https://nemar.org/dataexplorer/detail?dataset_id=ds005494) DOI: [https://doi.org/10.18112/openneuro.ds005494.v1.0.1](https://doi.org/10.18112/openneuro.ds005494.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005494 >>> dataset = DS005494(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Herrema2024']* ### *class* eegdash.dataset.DS005505(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 * **Study:** `ds005505` (OpenNeuro) * **Author (year):** `Shirazi2024_R1` * **Canonical:** `HBN_r1` Also importable as: `DS005505`, `Shirazi2024_R1`, `HBN_r1`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005505](https://openneuro.org/datasets/ds005505) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005505](https://nemar.org/dataexplorer/detail?dataset_id=ds005505) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005505 >>> dataset = DS005505(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1']* ### *class* eegdash.dataset.DS005506(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 * **Study:** `ds005506` (OpenNeuro) * **Author (year):** `Shirazi2024_R2` * **Canonical:** `HBN_r2` Also importable as: `DS005506`, `Shirazi2024_R2`, `HBN_r2`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005506](https://openneuro.org/datasets/ds005506) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005506](https://nemar.org/dataexplorer/detail?dataset_id=ds005506) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005506 >>> dataset = DS005506(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2']* ### *class* eegdash.dataset.DS005507(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 * **Study:** `ds005507` (OpenNeuro) * **Author (year):** `Shirazi2024_R3` * **Canonical:** `HBN_r3` Also importable as: `DS005507`, `Shirazi2024_R3`, `HBN_r3`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005507](https://openneuro.org/datasets/ds005507) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005507](https://nemar.org/dataexplorer/detail?dataset_id=ds005507) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005507 >>> dataset = DS005507(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3']* ### *class* eegdash.dataset.DS005508(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 * **Study:** `ds005508` (OpenNeuro) * **Author (year):** `Shirazi2024_R4` * **Canonical:** `HBN_r4` Also importable as: `DS005508`, `Shirazi2024_R4`, `HBN_r4`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 324; recordings: 3342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005508](https://openneuro.org/datasets/ds005508) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005508](https://nemar.org/dataexplorer/detail?dataset_id=ds005508) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005508 >>> dataset = DS005508(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4']* ### *class* eegdash.dataset.DS005509(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 * **Study:** `ds005509` (OpenNeuro) * **Author (year):** `Shirazi2024_R5` * **Canonical:** `HBN_r5` Also importable as: `DS005509`, `Shirazi2024_R5`, `HBN_r5`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005509](https://openneuro.org/datasets/ds005509) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005509](https://nemar.org/dataexplorer/detail?dataset_id=ds005509) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005509 >>> dataset = DS005509(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5']* ### *class* eegdash.dataset.DS005510(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 * **Study:** `ds005510` (OpenNeuro) * **Author (year):** `Shirazi2024_R6` * **Canonical:** `HBN_r6` Also importable as: `DS005510`, `Shirazi2024_R6`, `HBN_r6`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. 
Subjects: 135; recordings: 1227; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005510](https://openneuro.org/datasets/ds005510) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005510](https://nemar.org/dataexplorer/detail?dataset_id=ds005510) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005510 >>> dataset = DS005510(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6']* ### *class* eegdash.dataset.DS005512(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 * **Study:** `ds005512` (OpenNeuro) * **Author (year):** `Shirazi2024_R8` * **Canonical:** `HBN_r8` Also importable as: `DS005512`, `Shirazi2024_R8`, `HBN_r8`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005512](https://openneuro.org/datasets/ds005512) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005512](https://nemar.org/dataexplorer/detail?dataset_id=ds005512) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005512 >>> dataset = DS005512(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8']* ### *class* eegdash.dataset.DS005514(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 * **Study:** `ds005514` (OpenNeuro) * **Author (year):** `Shirazi2024_R9` * **Canonical:** `HBN_r9` Also importable as: `DS005514`, `Shirazi2024_R9`, `HBN_r9`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005514](https://openneuro.org/datasets/ds005514) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005514](https://nemar.org/dataexplorer/detail?dataset_id=ds005514) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005514 >>> dataset = DS005514(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9']* ### *class* eegdash.dataset.DS005515(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 * **Study:** `ds005515` (OpenNeuro) * **Author (year):** `Shirazi2024_R10` * **Canonical:** `HBN_r10` Also importable as: `DS005515`, `Shirazi2024_R10`, `HBN_r10`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005515](https://openneuro.org/datasets/ds005515) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005515](https://nemar.org/dataexplorer/detail?dataset_id=ds005515) DOI: [https://doi.org/10.18112/openneuro.ds005515.v1.0.1](https://doi.org/10.18112/openneuro.ds005515.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005515 >>> dataset = DS005515(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10']* ### *class* eegdash.dataset.DS005516(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 * **Study:** `ds005516` (OpenNeuro) * **Author (year):** `Shirazi2024_R11` * **Canonical:** `HBN_r11` Also importable as: `DS005516`, `Shirazi2024_R11`, `HBN_r11`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 430; recordings: 3397; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005516](https://openneuro.org/datasets/ds005516) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005516](https://nemar.org/dataexplorer/detail?dataset_id=ds005516) DOI: [https://doi.org/10.18112/openneuro.ds005516.v1.0.1](https://doi.org/10.18112/openneuro.ds005516.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005516 >>> dataset = DS005516(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11']* ### *class* eegdash.dataset.DS005520(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Research data supporting ‘EEG recording during playing MOBA game’ * **Study:** `ds005520` (OpenNeuro) * **Author (year):** `Li2024_Research_supporting_playing` * **Canonical:** — Also importable as: `DS005520`, `Li2024_Research_supporting_playing`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 23; recordings: 69; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005520](https://openneuro.org/datasets/ds005520) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005520](https://nemar.org/dataexplorer/detail?dataset_id=ds005520) DOI: [https://doi.org/10.18112/openneuro.ds005520.v1.0.1](https://doi.org/10.18112/openneuro.ds005520.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005520 >>> dataset = DS005520(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005522(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Navigation Memory of Object Locations * **Study:** `ds005522` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial` * **Canonical:** — Also importable as: `DS005522`, `Herrema2024_Spatial`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 55; recordings: 176; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005522](https://openneuro.org/datasets/ds005522) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005522](https://nemar.org/dataexplorer/detail?dataset_id=ds005522) DOI: [https://doi.org/10.18112/openneuro.ds005522.v1.0.0](https://doi.org/10.18112/openneuro.ds005522.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005522 >>> dataset = DS005522(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Spatial Memory of Object Locations with Open-Loop Stimulation at Encoding * **Study:** `ds005523` (OpenNeuro) * **Author (year):** `Herrema2024_Spatial_Memory` * **Canonical:** — Also importable as: `DS005523`, `Herrema2024_Spatial_Memory`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 21; recordings: 102; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005523](https://openneuro.org/datasets/ds005523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005523](https://nemar.org/dataexplorer/detail?dataset_id=ds005523) DOI: [https://doi.org/10.18112/openneuro.ds005523.v1.0.1](https://doi.org/10.18112/openneuro.ds005523.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005523 >>> dataset = DS005523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005530(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Depotentiation of emotional reactivity using TMR during REM sleep * **Study:** `ds005530` (OpenNeuro) * **Author (year):** `Greco2024` * **Canonical:** — Also importable as: `DS005530`, `Greco2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 17; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005530](https://openneuro.org/datasets/ds005530) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005530](https://nemar.org/dataexplorer/detail?dataset_id=ds005530) DOI: [https://doi.org/10.18112/openneuro.ds005530.v1.0.9](https://doi.org/10.18112/openneuro.ds005530.v1.0.9) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005530 >>> dataset = DS005530(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005540(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EmoEEG-MC: A Multi-Context Emotional EEG Dataset for Cross-Context Emotion Decoding * **Study:** `ds005540` (OpenNeuro) * **Author (year):** `Xin2024` * **Canonical:** — Also importable as: `DS005540`, `Xin2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 59; recordings: 103; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005540](https://openneuro.org/datasets/ds005540) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005540](https://nemar.org/dataexplorer/detail?dataset_id=ds005540) DOI: [https://doi.org/10.18112/openneuro.ds005540.v1.0.7](https://doi.org/10.18112/openneuro.ds005540.v1.0.7) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005540 >>> dataset = DS005540(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds005545` (OpenNeuro) * **Author (year):** `Kanno2024` * **Canonical:** `Kanno2025` Also importable as: `DS005545`, `Kanno2024`, `Kanno2025`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 106; recordings: 336; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005545](https://openneuro.org/datasets/ds005545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005545](https://nemar.org/dataexplorer/detail?dataset_id=ds005545) DOI: [https://doi.org/10.18112/openneuro.ds005545.v1.0.3](https://doi.org/10.18112/openneuro.ds005545.v1.0.3) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005545 >>> dataset = DS005545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kanno2025']* ### *class* eegdash.dataset.DS005555(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Bitbrain Open Access Sleep (BOAS) dataset * **Study:** `ds005555` (OpenNeuro) * **Author (year):** `LopezLarraz2024` * **Canonical:** `BOAS` Also importable as: `DS005555`, `LopezLarraz2024`, `BOAS`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 128; recordings: 256; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005555](https://openneuro.org/datasets/ds005555) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005555](https://nemar.org/dataexplorer/detail?dataset_id=ds005555) DOI: [https://doi.org/10.18112/openneuro.ds005555.v1.1.1](https://doi.org/10.18112/openneuro.ds005555.v1.1.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005555 >>> dataset = DS005555(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BOAS']* ### *class* eegdash.dataset.DS005557(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005557` (OpenNeuro) * **Author (year):** `Herrema2024_Classifier` * **Canonical:** — Also importable as: `DS005557`, `Herrema2024_Classifier`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Other`. Subjects: 16; recordings: 58; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005557](https://openneuro.org/datasets/ds005557) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005557](https://nemar.org/dataexplorer/detail?dataset_id=ds005557) DOI: [https://doi.org/10.18112/openneuro.ds005557.v1.0.0](https://doi.org/10.18112/openneuro.ds005557.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005557 >>> dataset = DS005557(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Categorized Free Recall with Closed-Loop Stimulation at Encoding (Encoding Classifier) * **Study:** `ds005558` (OpenNeuro) * **Author (year):** `Herrema2024_Categorized_Free` * **Canonical:** `catFR_closed_loop` Also importable as: `DS005558`, `Herrema2024_Categorized_Free`, `catFR_closed_loop`. 
Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Surgery`. Subjects: 7; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
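As a rough illustration of what "MongoDB-style filters" means here: a plain value matches by equality, while an operator form like `{"$in": [...]}` matches by set membership. The toy matcher below mimics those two cases for intuition only; it is a hedged sketch, not EEGDash's query engine, and real queries are limited to the fields in `ALLOWED_QUERY_FIELDS` (the record fields and values shown are hypothetical):

```python
# Toy evaluator for a small subset of MongoDB-style filters:
# plain values mean equality, {"$in": [...]} means membership.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "ds005558", "subject": "01", "task": "catFR"},
    {"dataset": "ds005558", "subject": "02", "task": "rest"},
]
hits = [r for r in records if matches(r, {"task": {"$in": ["catFR"]}})]
print([r["subject"] for r in hits])  # ['01']
```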
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005558](https://openneuro.org/datasets/ds005558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005558](https://nemar.org/dataexplorer/detail?dataset_id=ds005558) DOI: [https://doi.org/10.18112/openneuro.ds005558.v1.0.0](https://doi.org/10.18112/openneuro.ds005558.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005558 >>> dataset = DS005558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['catFR_closed_loop']* ### *class* eegdash.dataset.DS005565(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural associations between fingerspelling, print, and signs: An ERP priming study with deaf readers * **Study:** `ds005565` (OpenNeuro) * **Author (year):** `Lee2024_StudyWITH` * **Canonical:** — Also importable as: `DS005565`, `Lee2024_StudyWITH`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005565](https://openneuro.org/datasets/ds005565) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005565](https://nemar.org/dataexplorer/detail?dataset_id=ds005565) DOI: [https://doi.org/10.18112/openneuro.ds005565.v1.0.3](https://doi.org/10.18112/openneuro.ds005565.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005565 >>> dataset = DS005565(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005571(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation of Conflict Stimuli * **Study:** `ds005571` (OpenNeuro) * **Author (year):** `MartinezMolina2024` * **Canonical:** — Also importable as: `DS005571`, `MartinezMolina2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 45; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005571](https://openneuro.org/datasets/ds005571) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005571](https://nemar.org/dataexplorer/detail?dataset_id=ds005571) DOI: [https://doi.org/10.18112/openneuro.ds005571.v1.0.1](https://doi.org/10.18112/openneuro.ds005571.v1.0.1) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005571 >>> dataset = DS005571(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005574(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The “Podcast” ECoG dataset * **Study:** `ds005574` (OpenNeuro) * **Author (year):** `Zada2024` * **Canonical:** `Podcast` Also importable as: `DS005574`, `Zada2024`, `Podcast`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005574](https://openneuro.org/datasets/ds005574) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005574](https://nemar.org/dataexplorer/detail?dataset_id=ds005574) DOI: [https://doi.org/10.18112/openneuro.ds005574.v1.0.2](https://doi.org/10.18112/openneuro.ds005574.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS005574 >>> dataset = DS005574(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Podcast']* ### *class* eegdash.dataset.DS005586(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electroencephalographic responses to the number of objects in partially occluded and uncovered scenes * **Study:** `ds005586` (OpenNeuro) * **Author (year):** `Baykan2024` * **Canonical:** — Also importable as: `DS005586`, `Baykan2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 23; recordings: 23; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005586](https://openneuro.org/datasets/ds005586) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005586](https://nemar.org/dataexplorer/detail?dataset_id=ds005586) DOI: [https://doi.org/10.18112/openneuro.ds005586.v2.0.0](https://doi.org/10.18112/openneuro.ds005586.v2.0.0) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005586 >>> dataset = DS005586(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005594(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphabetic Decision Task (Arial Light Font) * **Study:** `ds005594` (OpenNeuro) * **Author (year):** `Taylor2024` * **Canonical:** — Also importable as: `DS005594`, `Taylor2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005594](https://openneuro.org/datasets/ds005594) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005594](https://nemar.org/dataexplorer/detail?dataset_id=ds005594) DOI: [https://doi.org/10.18112/openneuro.ds005594.v1.0.3](https://doi.org/10.18112/openneuro.ds005594.v1.0.3) NEMAR citation count: 1 ### Examples ```pycon >>> from eegdash.dataset import DS005594 >>> dataset = DS005594(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005620(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A repeated awakening study exploring the capacity of complexity measures to capture dreaming during propofol sedation * **Study:** `ds005620` (OpenNeuro) * **Author (year):** `Bajwa2024` * **Canonical:** — Also importable as: `DS005620`, `Bajwa2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 21; recordings: 202; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005620](https://openneuro.org/datasets/ds005620) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005620](https://nemar.org/dataexplorer/detail?dataset_id=ds005620) DOI: [https://doi.org/10.18112/openneuro.ds005620.v1.0.0](https://doi.org/10.18112/openneuro.ds005620.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005620 >>> dataset = DS005620(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005624(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Color Change Detection Task * **Study:** `ds005624` (OpenNeuro) * **Author (year):** `DS5624_ColorChangeDetection` * **Canonical:** — Also importable as: `DS005624`, `DS5624_ColorChangeDetection`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 24; recordings: 35; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005624](https://openneuro.org/datasets/ds005624) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005624](https://nemar.org/dataexplorer/detail?dataset_id=ds005624) DOI: [https://doi.org/10.18112/openneuro.ds005624.v1.0.0](https://doi.org/10.18112/openneuro.ds005624.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005624 >>> dataset = DS005624(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005628(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Visual and Audiovisual Stimuli in Virtual Reality from the Edzna Archaeological Site * **Study:** `ds005628` (OpenNeuro) * **Author (year):** `RosadoAiza2024` * **Canonical:** — Also importable as: `DS005628`, `RosadoAiza2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 102; recordings: 306; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005628](https://openneuro.org/datasets/ds005628) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005628](https://nemar.org/dataexplorer/detail?dataset_id=ds005628) DOI: [https://doi.org/10.18112/openneuro.ds005628.v1.0.0](https://doi.org/10.18112/openneuro.ds005628.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005628 >>> dataset = DS005628(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005642(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) illusory-face-eeg * **Study:** `ds005642` (OpenNeuro) * **Author (year):** `Robinson2024_illusory` * **Canonical:** — Also importable as: `DS005642`, `Robinson2024_illusory`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005642](https://openneuro.org/datasets/ds005642) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005642](https://nemar.org/dataexplorer/detail?dataset_id=ds005642) DOI: [https://doi.org/10.18112/openneuro.ds005642.v1.0.1](https://doi.org/10.18112/openneuro.ds005642.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005642 >>> dataset = DS005642(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mapping object space dimensions: new insights from temporal dynamics * **Study:** `ds005648` (OpenNeuro) * **Author (year):** `Kidder2024` * **Canonical:** — Also importable as: `DS005648`, `Kidder2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005648](https://openneuro.org/datasets/ds005648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005648](https://nemar.org/dataexplorer/detail?dataset_id=ds005648) DOI: [https://doi.org/10.18112/openneuro.ds005648.v1.0.3](https://doi.org/10.18112/openneuro.ds005648.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS005648 >>> dataset = DS005648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005662(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A comprehensive EEG dataset for investigating visual touch perception * **Study:** `ds005662` (OpenNeuro) * **Author (year):** `Smit2024` * **Canonical:** — Also importable as: `DS005662`, `Smit2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 80; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005662](https://openneuro.org/datasets/ds005662) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005662](https://nemar.org/dataexplorer/detail?dataset_id=ds005662) DOI: [https://doi.org/10.18112/openneuro.ds005662.v2.0.1](https://doi.org/10.18112/openneuro.ds005662.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005662 >>> dataset = DS005662(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005670(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SEEG Resting State Recording * **Study:** `ds005670` (OpenNeuro) * **Author (year):** `Xu2024_SEEG_Resting_State` * **Canonical:** — Also importable as: `DS005670`, `Xu2024_SEEG_Resting_State`. Modality: `ieeg`; Experiment type: `Resting-state`; Subject type: `Epilepsy`. Subjects: 2; recordings: 2; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005670](https://openneuro.org/datasets/ds005670) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005670](https://nemar.org/dataexplorer/detail?dataset_id=ds005670) DOI: [https://doi.org/10.18112/openneuro.ds005670.v1.0.0](https://doi.org/10.18112/openneuro.ds005670.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005670 >>> dataset = DS005670(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005672(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005672` (OpenNeuro) * **Author (year):** `Zhiyuan2024` * **Canonical:** — Also importable as: `DS005672`, `Zhiyuan2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005672](https://openneuro.org/datasets/ds005672) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005672](https://nemar.org/dataexplorer/detail?dataset_id=ds005672) DOI: [https://doi.org/10.18112/openneuro.ds005672.v1.0.0](https://doi.org/10.18112/openneuro.ds005672.v1.0.0) NEMAR citation count: 2 ### Examples ```pycon >>> from eegdash.dataset import DS005672 >>> dataset = DS005672(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005688(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) visStim * **Study:** `ds005688` (OpenNeuro) * **Author (year):** `Tan2024` * **Canonical:** — Also importable as: `DS005688`, `Tan2024`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 20; recordings: 89; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005688](https://openneuro.org/datasets/ds005688) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005688](https://nemar.org/dataexplorer/detail?dataset_id=ds005688) DOI: [https://doi.org/10.18112/openneuro.ds005688.v1.0.1](https://doi.org/10.18112/openneuro.ds005688.v1.0.1) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005688 >>> dataset = DS005688(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005691(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_Invasive * **Study:** `ds005691` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect` * **Canonical:** — Also importable as: `DS005691`, `Stenner2024_SpinalExpect`. Modality: `ieeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005691](https://openneuro.org/datasets/ds005691) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005691](https://nemar.org/dataexplorer/detail?dataset_id=ds005691) DOI: [https://doi.org/10.18112/openneuro.ds005691.v1.0.0](https://doi.org/10.18112/openneuro.ds005691.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005691 >>> dataset = DS005691(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005692(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SpinalExpect_NonInvasive * **Study:** `ds005692` (OpenNeuro) * **Author (year):** `Stenner2024_SpinalExpect_NonInvasive` * **Canonical:** — Also importable as: `DS005692`, `Stenner2024_SpinalExpect_NonInvasive`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 30; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
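Since each dataset item is a recording and `dataset.description` exposes recording-level metadata, a common pattern is to select item indices from that table and then index the dataset. The sketch below stands in for `description` with a plain list of per-recording rows; the column names `subject` and `run` are illustrative assumptions, not a guaranteed schema:

```python
# Stand-in for recording-level metadata (dataset.description):
# assumed table-like, one row per recording, in dataset order.
description = [
    {"subject": "01", "run": 1},
    {"subject": "01", "run": 2},
    {"subject": "02", "run": 1},
]

# Indices of recordings for subject "01"; these indices could then be
# used to index into the dataset itself (e.g. dataset[i]).
selected = [i for i, row in enumerate(description) if row["subject"] == "01"]
print(selected)
# → [0, 1]
```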
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005692](https://openneuro.org/datasets/ds005692) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005692](https://nemar.org/dataexplorer/detail?dataset_id=ds005692) DOI: [https://doi.org/10.18112/openneuro.ds005692.v1.0.0](https://doi.org/10.18112/openneuro.ds005692.v1.0.0) NEMAR citation count: 0 ### Examples ```pycon >>> from eegdash.dataset import DS005692 >>> dataset = DS005692(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005697(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PerceiveImagine * **Study:** `ds005697` (OpenNeuro) * **Author (year):** `Li2024_PerceiveImagine` * **Canonical:** `PerceiveImagine` Also importable as: `DS005697`, `Li2024_PerceiveImagine`, `PerceiveImagine`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005697](https://openneuro.org/datasets/ds005697) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005697](https://nemar.org/dataexplorer/detail?dataset_id=ds005697) DOI: [https://doi.org/10.18112/openneuro.ds005697.v1.0.2](https://doi.org/10.18112/openneuro.ds005697.v1.0.2) NEMAR citation count: 3 ### Examples ```pycon >>> from eegdash.dataset import DS005697 >>> dataset = DS005697(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PerceiveImagine']* ### *class* eegdash.dataset.DS005752(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The NIMH Healthy Research Volunteer Dataset * **Study:** `ds005752` (OpenNeuro) * **Author (year):** `Nugent2024` * **Canonical:** — Also importable as: `DS005752`, `Nugent2024`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 123; recordings: 1055; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005752](https://openneuro.org/datasets/ds005752) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005752](https://nemar.org/dataexplorer/detail?dataset_id=ds005752) DOI: [https://doi.org/10.18112/openneuro.ds005752.v2.1.0](https://doi.org/10.18112/openneuro.ds005752.v2.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005752 >>> dataset = DS005752(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005776(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Thermal_FingerTapping_2015 * **Study:** `ds005776` (OpenNeuro) * **Author (year):** `Yucel2025_Electrical` * **Canonical:** `Yucel2015` Also importable as: `DS005776`, `Yucel2025_Electrical`, `Yucel2015`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 11; recordings: 46; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005776](https://openneuro.org/datasets/ds005776) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005776](https://nemar.org/dataexplorer/detail?dataset_id=ds005776) DOI: [https://doi.org/10.18112/openneuro.ds005776.v1.0.1](https://doi.org/10.18112/openneuro.ds005776.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005776 >>> dataset = DS005776(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yucel2015']* ### *class* eegdash.dataset.DS005777(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrical_Morphine_Placebo_2018 * **Study:** `ds005777` (OpenNeuro) * **Author (year):** `Peng2025` * **Canonical:** `Peng2018` Also importable as: `DS005777`, `Peng2025`, `Peng2018`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 14; recordings: 113; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005777](https://openneuro.org/datasets/ds005777) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005777](https://nemar.org/dataexplorer/detail?dataset_id=ds005777) DOI: [https://doi.org/10.18112/openneuro.ds005777.v1.0.1](https://doi.org/10.18112/openneuro.ds005777.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005777 >>> dataset = DS005777(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Peng2018']* ### *class* eegdash.dataset.DS005779(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Real-time personalized brain state-dependent TMS in healthy adults * **Study:** `ds005779` (OpenNeuro) * **Author (year):** `Khatri2025` * **Canonical:** — Also importable as: `DS005779`, `Khatri2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 19; recordings: 250; tasks: 16. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005779](https://openneuro.org/datasets/ds005779) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005779](https://nemar.org/dataexplorer/detail?dataset_id=ds005779) DOI: [https://doi.org/10.18112/openneuro.ds005779.v1.0.1](https://doi.org/10.18112/openneuro.ds005779.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005779 >>> dataset = DS005779(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005795(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MULTI-CLARID (Multimodal Category Learning and Resting-state Imaging Data) * **Study:** `ds005795` (OpenNeuro) * **Author (year):** `Stadler2025` * **Canonical:** — Also importable as: `DS005795`, `Stadler2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 34; recordings: 39; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005795](https://openneuro.org/datasets/ds005795) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005795](https://nemar.org/dataexplorer/detail?dataset_id=ds005795) DOI: [https://doi.org/10.18112/openneuro.ds005795.v1.0.0](https://doi.org/10.18112/openneuro.ds005795.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005795 >>> dataset = DS005795(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005810(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-MEG * **Study:** `ds005810` (OpenNeuro) * **Author (year):** `Zhang2025_MEG` * **Canonical:** `NOD_MEG` Also importable as: `DS005810`, `Zhang2025_MEG`, `NOD_MEG`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 305; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005810](https://openneuro.org/datasets/ds005810) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005810](https://nemar.org/dataexplorer/detail?dataset_id=ds005810) DOI: [https://doi.org/10.18112/openneuro.ds005810.v2.0.0](https://doi.org/10.18112/openneuro.ds005810.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005810 >>> dataset = DS005810(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NOD_MEG']* ### *class* eegdash.dataset.DS005811(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NOD-EEG * **Study:** `ds005811` (OpenNeuro) * **Author (year):** `Zhang2025_EEG` * **Canonical:** `NOD_EEG` Also importable as: `DS005811`, `Zhang2025_EEG`, `NOD_EEG`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 448; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005811](https://openneuro.org/datasets/ds005811) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005811](https://nemar.org/dataexplorer/detail?dataset_id=ds005811) DOI: [https://doi.org/10.18112/openneuro.ds005811.v1.0.9](https://doi.org/10.18112/openneuro.ds005811.v1.0.9) ### Examples ```pycon >>> from eegdash.dataset import DS005811 >>> dataset = DS005811(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['NOD_EEG']* ### *class* eegdash.dataset.DS005815(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Human EEG Dataset for Multisensory Perception and Mental Imagery * **Study:** `ds005815` (OpenNeuro) * **Author (year):** `Chang2025` * **Canonical:** — Also importable as: `DS005815`, `Chang2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 20; recordings: 103; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005815](https://openneuro.org/datasets/ds005815) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005815](https://nemar.org/dataexplorer/detail?dataset_id=ds005815) DOI: [https://doi.org/10.18112/openneuro.ds005815.v2.0.1](https://doi.org/10.18112/openneuro.ds005815.v2.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005815 >>> dataset = DS005815(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005841(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Experiment measuring ERPs in VR * **Study:** `ds005841` (OpenNeuro) * **Author (year):** `Karakashevska2025` * **Canonical:** — Also importable as: `DS005841`, `Karakashevska2025`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 48; recordings: 288; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005841](https://openneuro.org/datasets/ds005841) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005841](https://nemar.org/dataexplorer/detail?dataset_id=ds005841) DOI: [https://doi.org/10.18112/openneuro.ds005841.v1.0.0](https://doi.org/10.18112/openneuro.ds005841.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005841 >>> dataset = DS005841(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005857(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ltpDelayRepFRReadOnly * **Study:** `ds005857` (OpenNeuro) * **Author (year):** `Broitman2025` * **Canonical:** `Broitman2019` Also importable as: `DS005857`, `Broitman2025`, `Broitman2019`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Unknown`. Subjects: 29; recordings: 110; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005857](https://openneuro.org/datasets/ds005857) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005857](https://nemar.org/dataexplorer/detail?dataset_id=ds005857) DOI: [https://doi.org/10.18112/openneuro.ds005857.v1.0.0](https://doi.org/10.18112/openneuro.ds005857.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005857 >>> dataset = DS005857(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Broitman2019']* ### *class* eegdash.dataset.DS005863(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood * **Study:** `ds005863` (OpenNeuro) * **Author (year):** `Isbell2025_Cognitive` * **Canonical:** — Also importable as: `DS005863`, `Isbell2025_Cognitive`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005863](https://openneuro.org/datasets/ds005863) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005863](https://nemar.org/dataexplorer/detail?dataset_id=ds005863) DOI: [https://doi.org/10.18112/openneuro.ds005863.v2.0.0](https://doi.org/10.18112/openneuro.ds005863.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005863 >>> dataset = DS005863(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-NEAR * **Study:** `ds005866` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_NEAR` * **Canonical:** `Flankers_NEAR` Also importable as: `DS005866`, `TerhuneCotter2025_NEAR`, `Flankers_NEAR`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 60; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005866](https://openneuro.org/datasets/ds005866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005866](https://nemar.org/dataexplorer/detail?dataset_id=ds005866) DOI: [https://doi.org/10.18112/openneuro.ds005866.v1.0.1](https://doi.org/10.18112/openneuro.ds005866.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005866 >>> dataset = DS005866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Flankers_NEAR']* ### *class* eegdash.dataset.DS005868(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Flankers-FAR * **Study:** `ds005868` (OpenNeuro) * **Author (year):** `TerhuneCotter2025_FAR` * **Canonical:** `Flankers_FAR` Also importable as: `DS005868`, `TerhuneCotter2025_FAR`, `Flankers_FAR`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 48; recordings: 48; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005868](https://openneuro.org/datasets/ds005868) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005868](https://nemar.org/dataexplorer/detail?dataset_id=ds005868) DOI: [https://doi.org/10.18112/openneuro.ds005868.v1.0.1](https://doi.org/10.18112/openneuro.ds005868.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005868 >>> dataset = DS005868(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Flankers_FAR']* ### *class* eegdash.dataset.DS005872(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds005872` (OpenNeuro) * **Author (year):** `Plomecka2025` * **Canonical:** `EEGEyeNet` Also importable as: `DS005872`, `Plomecka2025`, `EEGEyeNet`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005872](https://openneuro.org/datasets/ds005872) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005872](https://nemar.org/dataexplorer/detail?dataset_id=ds005872) DOI: [https://doi.org/10.18112/openneuro.ds005872.v1.0.0](https://doi.org/10.18112/openneuro.ds005872.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005872 >>> dataset = DS005872(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGEyeNet']* ### *class* eegdash.dataset.DS005873(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SeizeIT2 * **Study:** `ds005873` (OpenNeuro) * **Author (year):** `Bhagubai2025` * **Canonical:** `SeizeIT2` Also importable as: `DS005873`, `Bhagubai2025`, `SeizeIT2`. Modality: `eeg, emg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 125; recordings: 5654; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005873](https://openneuro.org/datasets/ds005873) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005873](https://nemar.org/dataexplorer/detail?dataset_id=ds005873) DOI: [https://doi.org/10.18112/openneuro.ds005873.v1.1.0](https://doi.org/10.18112/openneuro.ds005873.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS005873 >>> dataset = DS005873(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SeizeIT2']* ### *class* eegdash.dataset.DS005876(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Song Familiarity * **Study:** `ds005876` (OpenNeuro) * **Author (year):** `Girard2025` * **Canonical:** — Also importable as: `DS005876`, `Girard2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005876](https://openneuro.org/datasets/ds005876) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005876](https://nemar.org/dataexplorer/detail?dataset_id=ds005876) DOI: [https://doi.org/10.18112/openneuro.ds005876.v1.0.1](https://doi.org/10.18112/openneuro.ds005876.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005876 >>> dataset = DS005876(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005907(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG: RL Task (3-Armed Bandit) with alcohol cues in hazardous drinkers and ctls * **Study:** `ds005907` (OpenNeuro) * **Author (year):** `Campbell2025` * **Canonical:** — Also importable as: `DS005907`, `Campbell2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Alcohol`. Subjects: 53; recordings: 53; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005907](https://openneuro.org/datasets/ds005907) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005907](https://nemar.org/dataexplorer/detail?dataset_id=ds005907) DOI: [https://doi.org/10.18112/openneuro.ds005907.v1.0.0](https://doi.org/10.18112/openneuro.ds005907.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005907 >>> dataset = DS005907(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005929(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motion-Yucel2014 * **Study:** `ds005929` (OpenNeuro) * **Author (year):** `MotionYucel2014` * **Canonical:** `Yucel2014`, `Motion_Yucel2014` Also importable as: `DS005929`, `MotionYucel2014`, `Yucel2014`, `Motion_Yucel2014`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 7; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005929](https://openneuro.org/datasets/ds005929) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005929](https://nemar.org/dataexplorer/detail?dataset_id=ds005929) DOI: [https://doi.org/10.18112/openneuro.ds005929.v1.0.1](https://doi.org/10.18112/openneuro.ds005929.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005929 >>> dataset = DS005929(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yucel2014', 'Motion_Yucel2014']* ### *class* eegdash.dataset.DS005930(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BallSqueezingHD_Gao2023 * **Study:** `ds005930` (OpenNeuro) * **Author (year):** `Gao2023` * **Canonical:** — Also importable as: `DS005930`, `Gao2023`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 12; recordings: 36; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005930](https://openneuro.org/datasets/ds005930) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005930](https://nemar.org/dataexplorer/detail?dataset_id=ds005930) DOI: [https://doi.org/10.18112/openneuro.ds005930.v1.0.1](https://doi.org/10.18112/openneuro.ds005930.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005930 >>> dataset = DS005930(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005931(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visuomotor_task * **Study:** `ds005931` (OpenNeuro) * **Author (year):** `Ueda2025` * **Canonical:** — Also importable as: `DS005931`, `Ueda2025`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Epilepsy`. 
Subjects: 8; recordings: 16; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005931](https://openneuro.org/datasets/ds005931) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005931](https://nemar.org/dataexplorer/detail?dataset_id=ds005931) DOI: [https://doi.org/10.18112/openneuro.ds005931.v1.0.0](https://doi.org/10.18112/openneuro.ds005931.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005931 >>> dataset = DS005931(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005932(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PWIe * **Study:** `ds005932` (OpenNeuro) * **Author (year):** `Holcomb2025` * **Canonical:** `PWIe` Also importable as: `DS005932`, `Holcomb2025`, `PWIe`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005932](https://openneuro.org/datasets/ds005932) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005932](https://nemar.org/dataexplorer/detail?dataset_id=ds005932) DOI: [https://doi.org/10.18112/openneuro.ds005932.v1.0.0](https://doi.org/10.18112/openneuro.ds005932.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005932 >>> dataset = DS005932(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PWIe']* ### *class* eegdash.dataset.DS005935(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mirror Neuron Study * **Study:** `ds005935` (OpenNeuro) * **Author (year):** `Li2025` * **Canonical:** — Also importable as: `DS005935`, `Li2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 21; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005935](https://openneuro.org/datasets/ds005935) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005935](https://nemar.org/dataexplorer/detail?dataset_id=ds005935) DOI: [https://doi.org/10.18112/openneuro.ds005935.v1.0.0](https://doi.org/10.18112/openneuro.ds005935.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005935 >>> dataset = DS005935(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005946(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERC_CoG PROMENADE - WP2 - MetaImagery (Metaphor and Mental Imagery) * **Study:** `ds005946` (OpenNeuro) * **Author (year):** `Frau2025` * **Canonical:** `PROMENADE` Also importable as: `DS005946`, `Frau2025`, `PROMENADE`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 39; recordings: 39; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005946](https://openneuro.org/datasets/ds005946) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005946](https://nemar.org/dataexplorer/detail?dataset_id=ds005946) DOI: [https://doi.org/10.18112/openneuro.ds005946.v1.0.1](https://doi.org/10.18112/openneuro.ds005946.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS005946 >>> dataset = DS005946(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PROMENADE']* ### *class* eegdash.dataset.DS005953(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_visual * **Study:** `ds005953` (OpenNeuro) * **Author (year):** `Winawer2025` * **Canonical:** — Also importable as: `DS005953`, `Winawer2025`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Surgery`. Subjects: 2; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005953](https://openneuro.org/datasets/ds005953) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005953](https://nemar.org/dataexplorer/detail?dataset_id=ds005953) DOI: [https://doi.org/10.18112/openneuro.ds005953.v1.0.0](https://doi.org/10.18112/openneuro.ds005953.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005953 >>> dataset = DS005953(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005960(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) General Info: inst-comp-eeg * **Study:** `ds005960` (OpenNeuro) * **Author (year):** `Pena2025` * **Canonical:** — Also importable as: `DS005960`, `Pena2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005960](https://openneuro.org/datasets/ds005960) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005960](https://nemar.org/dataexplorer/detail?dataset_id=ds005960) DOI: [https://doi.org/10.18112/openneuro.ds005960.v1.0.0](https://doi.org/10.18112/openneuro.ds005960.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005960 >>> dataset = DS005960(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS005963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Motor Dataset * **Study:** `ds005963` (OpenNeuro) * **Author (year):** `Mesquita2025` * **Canonical:** `Mesquita2019` Also importable as: `DS005963`, `Mesquita2025`, `Mesquita2019`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 10; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005963](https://openneuro.org/datasets/ds005963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005963](https://nemar.org/dataexplorer/detail?dataset_id=ds005963) DOI: [https://doi.org/10.18112/openneuro.ds005963.v1.0.0](https://doi.org/10.18112/openneuro.ds005963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005963 >>> dataset = DS005963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mesquita2019']* ### *class* eegdash.dataset.DS005964(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRESH Audio Dataset * **Study:** `ds005964` (OpenNeuro) * **Author (year):** `Luke2025` * **Canonical:** `Luke2019` Also importable as: `DS005964`, `Luke2025`, `Luke2019`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds005964](https://openneuro.org/datasets/ds005964) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds005964](https://nemar.org/dataexplorer/detail?dataset_id=ds005964) DOI: [https://doi.org/10.18112/openneuro.ds005964.v1.0.0](https://doi.org/10.18112/openneuro.ds005964.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS005964 >>> dataset = DS005964(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Luke2019']* ### *class* eegdash.dataset.DS006012(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A geometric shape regularity effect in the human brain: MEG dataset * **Study:** `ds006012` (OpenNeuro) * **Author (year):** `SableMeyer2025` * **Canonical:** — Also importable as: `DS006012`, `SableMeyer2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 21; recordings: 193; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006012](https://openneuro.org/datasets/ds006012) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006012](https://nemar.org/dataexplorer/detail?dataset_id=ds006012) DOI: [https://doi.org/10.18112/openneuro.ds006012.v1.0.1](https://doi.org/10.18112/openneuro.ds006012.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006012 >>> dataset = DS006012(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006018(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset * **Study:** `ds006018` (OpenNeuro) * **Author (year):** `Isbell2025_Adulthood` * **Canonical:** — Also importable as: `DS006018`, `Isbell2025_Adulthood`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 127; recordings: 357; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006018](https://openneuro.org/datasets/ds006018) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006018](https://nemar.org/dataexplorer/detail?dataset_id=ds006018) DOI: [https://doi.org/10.18112/openneuro.ds006018.v1.2.2](https://doi.org/10.18112/openneuro.ds006018.v1.2.2) ### Examples ```pycon >>> from eegdash.dataset import DS006018 >>> dataset = DS006018(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006033(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Synchronous EEG and fMRI dataset on inner speech * **Study:** `ds006033` (OpenNeuro) * **Author (year):** `Liwicki2025` * **Canonical:** — Also importable as: `DS006033`, `Liwicki2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 3; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006033](https://openneuro.org/datasets/ds006033) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006033](https://nemar.org/dataexplorer/detail?dataset_id=ds006033) DOI: [https://doi.org/10.18112/openneuro.ds006033.v1.0.1](https://doi.org/10.18112/openneuro.ds006033.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006033 >>> dataset = DS006033(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006035(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) somatomotor * **Study:** `ds006035` (OpenNeuro) * **Author (year):** `Lin2025` * **Canonical:** `Lin2019` Also importable as: `DS006035`, `Lin2025`, `Lin2019`. Modality: `meg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 15; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
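To make the MongoDB-style filtering described in the notes above concrete, here is a minimal in-memory matcher supporting plain equality and the `$in` operator. The records and field values below are made up for illustration; real matching runs against the metadata database and is restricted to `ALLOWED_QUERY_FIELDS`:

```python
records = [
    {"dataset": "ds006035", "subject": "01", "task": "somatomotor"},
    {"dataset": "ds006035", "subject": "02", "task": "somatomotor"},
]

def matches(record: dict, query: dict) -> bool:
    # Supports only equality and `$in`; real MongoDB queries are richer.
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            return False
    return True

hits = [r for r in records if matches(r, {"subject": {"$in": ["02"]}})]
```

A filter of this shape would be passed to the constructor as, e.g., `DS006035(cache_dir="./data", query={"subject": {"$in": ["02"]}})`.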
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006035](https://openneuro.org/datasets/ds006035) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006035](https://nemar.org/dataexplorer/detail?dataset_id=ds006035) DOI: [https://doi.org/10.18112/openneuro.ds006035.v1.0.0](https://doi.org/10.18112/openneuro.ds006035.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006035 >>> dataset = DS006035(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Lin2019']* ### *class* eegdash.dataset.DS006036(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A complementary dataset of open-eyes EEG recordings in a photo-stimulation setting from: Alzheimer’s disease, Frontotemporal dementia and Healthy subjects * **Study:** `ds006036` (OpenNeuro) * **Author (year):** `Ntetska2025` * **Canonical:** — Also importable as: `DS006036`, `Ntetska2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 88; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006036](https://openneuro.org/datasets/ds006036) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006036](https://nemar.org/dataexplorer/detail?dataset_id=ds006036) DOI: [https://doi.org/10.18112/openneuro.ds006036.v1.0.6](https://doi.org/10.18112/openneuro.ds006036.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS006036 >>> dataset = DS006036(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006040(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sustained Attention Task (gradCPT) Dataset using simultaneous EEG-fMRI and DTI * **Study:** `ds006040` (OpenNeuro) * **Author (year):** `Cha2025` * **Canonical:** — Also importable as: `DS006040`, `Cha2025`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 28; recordings: 392; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006040](https://openneuro.org/datasets/ds006040) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006040](https://nemar.org/dataexplorer/detail?dataset_id=ds006040) DOI: [https://doi.org/10.18112/openneuro.ds006040.v1.0.2](https://doi.org/10.18112/openneuro.ds006040.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006040 >>> dataset = DS006040(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006065(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TSS_iEEG * **Study:** `ds006065` (OpenNeuro) * **Author (year):** `Kragel2025` * **Canonical:** — Also importable as: `DS006065`, `Kragel2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Surgery`. Subjects: 7; recordings: 45; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006065](https://openneuro.org/datasets/ds006065) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006065](https://nemar.org/dataexplorer/detail?dataset_id=ds006065) DOI: [https://doi.org/10.18112/openneuro.ds006065.v1.0.0](https://doi.org/10.18112/openneuro.ds006065.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006065 >>> dataset = DS006065(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mind in Motion Older Adults Walking Over Uneven Terrain * **Study:** `ds006095` (OpenNeuro) * **Author (year):** `Liu2025_Mind_Motion_Older` * **Canonical:** — Also importable as: `DS006095`, `Liu2025_Mind_Motion_Older`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 71; recordings: 1182; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006095](https://openneuro.org/datasets/ds006095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006095](https://nemar.org/dataexplorer/detail?dataset_id=ds006095) DOI: [https://doi.org/10.18112/openneuro.ds006095.v1.0.0](https://doi.org/10.18112/openneuro.ds006095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006095 >>> dataset = DS006095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset for speech decoding * **Study:** `ds006104` (OpenNeuro) * **Author (year):** `Moreira2025` * **Canonical:** — Also importable as: `DS006104`, `Moreira2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006104](https://openneuro.org/datasets/ds006104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006104](https://nemar.org/dataexplorer/detail?dataset_id=ds006104) DOI: [https://doi.org/10.18112/openneuro.ds006104.v1.0.1](https://doi.org/10.18112/openneuro.ds006104.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006104 >>> dataset = DS006104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_Neural_spatial_volatility * **Study:** `ds006107` (OpenNeuro) * **Author (year):** `Kuroda2025` * **Canonical:** `Kuroda2024` Also importable as: `DS006107`, `Kuroda2025`, `Kuroda2024`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 166; recordings: 167; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006107](https://openneuro.org/datasets/ds006107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006107](https://nemar.org/dataexplorer/detail?dataset_id=ds006107) DOI: [https://doi.org/10.18112/openneuro.ds006107.v1.0.0](https://doi.org/10.18112/openneuro.ds006107.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006107 >>> dataset = DS006107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kuroda2024']* ### *class* eegdash.dataset.DS006126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TDCS Modulation of Visual Cortex in Motor Imagery * **Study:** `ds006126` (OpenNeuro) * **Author (year):** `Mensah2025` * **Canonical:** — Also importable as: `DS006126`, `Mensah2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 90; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006126](https://openneuro.org/datasets/ds006126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006126](https://nemar.org/dataexplorer/detail?dataset_id=ds006126) DOI: [https://doi.org/10.18112/openneuro.ds006126.v1.0.0](https://doi.org/10.18112/openneuro.ds006126.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006126 >>> dataset = DS006126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) OWM-Dataset * **Study:** `ds006136` (OpenNeuro) * **Author (year):** `Omelyusik2025` * **Canonical:** `Omelyusik2026` Also importable as: `DS006136`, `Omelyusik2025`, `Omelyusik2026`. Modality: `ieeg`; Experiment type: `Memory`; Subject type: `Epilepsy`. Subjects: 13; recordings: 14; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006136](https://openneuro.org/datasets/ds006136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006136](https://nemar.org/dataexplorer/detail?dataset_id=ds006136) DOI: [https://doi.org/10.18112/openneuro.ds006136.v1.0.1](https://doi.org/10.18112/openneuro.ds006136.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006136 >>> dataset = DS006136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Omelyusik2026']* ### *class* eegdash.dataset.DS006142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Essex EEG Movie Memory dataset * **Study:** `ds006142` (OpenNeuro) * **Author (year):** `MatranFernandez2025` * **Canonical:** — Also importable as: `DS006142`, `MatranFernandez2025`. 
Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006142](https://openneuro.org/datasets/ds006142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006142](https://nemar.org/dataexplorer/detail?dataset_id=ds006142) DOI: [https://doi.org/10.18112/openneuro.ds006142.v1.0.2](https://doi.org/10.18112/openneuro.ds006142.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006142 >>> dataset = DS006142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Implicit Learning EEG (BioSemi) * **Study:** `ds006159` (OpenNeuro) * **Author (year):** `LeganesFonteneau2025` * **Canonical:** `LeganesFonteneau2024` Also importable as: `DS006159`, `LeganesFonteneau2025`, `LeganesFonteneau2024`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 61; recordings: 61; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006159](https://openneuro.org/datasets/ds006159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006159](https://nemar.org/dataexplorer/detail?dataset_id=ds006159) DOI: [https://doi.org/10.18112/openneuro.ds006159.v1.0.0](https://doi.org/10.18112/openneuro.ds006159.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006159 >>> dataset = DS006159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LeganesFonteneau2024']* ### *class* eegdash.dataset.DS006171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG data during three near-threshold visual detection tasks: a no-cue task, a noninformative cue task (50% validity), and an informative cue task (100% validity) * **Study:** `ds006171` (OpenNeuro) * **Author (year):** `Melcon2025` * **Canonical:** `Melcon2024` Also importable as: `DS006171`, `Melcon2025`, `Melcon2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 36; recordings: 104; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006171](https://openneuro.org/datasets/ds006171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006171](https://nemar.org/dataexplorer/detail?dataset_id=ds006171) DOI: [https://doi.org/10.18112/openneuro.ds006171.v1.0.0](https://doi.org/10.18112/openneuro.ds006171.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006171 >>> dataset = DS006171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Melcon2024']* ### *class* eegdash.dataset.DS006222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MultisensoryFlickerHealthyYoungAdults_AllSubjectsRawData * **Study:** `ds006222` (OpenNeuro) * **Author (year):** `Attokaren2025` * **Canonical:** — Also importable as: `DS006222`, `Attokaren2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 69; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006222](https://openneuro.org/datasets/ds006222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006222](https://nemar.org/dataexplorer/detail?dataset_id=ds006222) DOI: [https://doi.org/10.18112/openneuro.ds006222.v1.0.1](https://doi.org/10.18112/openneuro.ds006222.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006222 >>> dataset = DS006222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006233(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Picture naming * **Study:** `ds006233` (OpenNeuro) * **Author (year):** `Kochi2025_Picture_naming` * **Canonical:** — Also importable as: `DS006233`, `Kochi2025_Picture_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 108; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006233](https://openneuro.org/datasets/ds006233) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006233](https://nemar.org/dataexplorer/detail?dataset_id=ds006233) DOI: [https://doi.org/10.18112/openneuro.ds006233.v1.0.0](https://doi.org/10.18112/openneuro.ds006233.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006233 >>> dataset = DS006233(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory naming * **Study:** `ds006234` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_naming` * **Canonical:** — Also importable as: `DS006234`, `Kochi2025_Auditory_naming`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Surgery`. Subjects: 119; recordings: 378; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006234](https://openneuro.org/datasets/ds006234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006234](https://nemar.org/dataexplorer/detail?dataset_id=ds006234) DOI: [https://doi.org/10.18112/openneuro.ds006234.v1.0.0](https://doi.org/10.18112/openneuro.ds006234.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006234 >>> dataset = DS006234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MetaRDK * **Study:** `ds006253` (OpenNeuro) * **Author (year):** `Goueytes2024` * **Canonical:** `MetaRDK` Also importable as: `DS006253`, `Goueytes2024`, `MetaRDK`. Modality: `ieeg`; Experiment type: `Decision-making`; Subject type: `Epilepsy`. Subjects: 23; recordings: 201; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006253](https://openneuro.org/datasets/ds006253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006253](https://nemar.org/dataexplorer/detail?dataset_id=ds006253) DOI: [https://doi.org/10.18112/openneuro.ds006253.v1.0.3](https://doi.org/10.18112/openneuro.ds006253.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006253 >>> dataset = DS006253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MetaRDK']* ### *class* eegdash.dataset.DS006260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of psychophysiological data from children with learning difficulties who strengthen reading and math skills through assistive technology * **Study:** `ds006260` (OpenNeuro) * **Author (year):** `CoronaGonzalez2025` * **Canonical:** — Also importable as: `DS006260`, `CoronaGonzalez2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 76; recordings: 366; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006260](https://openneuro.org/datasets/ds006260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006260](https://nemar.org/dataexplorer/detail?dataset_id=ds006260) DOI: [https://doi.org/10.18112/openneuro.ds006260.v1.0.1](https://doi.org/10.18112/openneuro.ds006260.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006260 >>> dataset = DS006260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006269(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Tethered EEG Recordings in Syngap1 rats * **Study:** `ds006269` (OpenNeuro) * **Author (year):** `Pritchard2025` * **Canonical:** — Also importable as: `DS006269`, `Pritchard2025`. 
Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Other`. Subjects: 24; recordings: 40; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006269](https://openneuro.org/datasets/ds006269) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006269](https://nemar.org/dataexplorer/detail?dataset_id=ds006269) DOI: [https://doi.org/10.18112/openneuro.ds006269.v1.0.0](https://doi.org/10.18112/openneuro.ds006269.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006269 >>> dataset = DS006269(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006317(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chisco-2.0 * **Study:** `ds006317` (OpenNeuro) * **Author (year):** `Zhang2025_Chisco_2_0` * **Canonical:** `Chisco2_0`, `Chisco20`, `CHISCO20` Also importable as: `DS006317`, `Zhang2025_Chisco_2_0`, `Chisco2_0`, `Chisco20`, `CHISCO20`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 64; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006317](https://openneuro.org/datasets/ds006317) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006317](https://nemar.org/dataexplorer/detail?dataset_id=ds006317) DOI: [https://doi.org/10.18112/openneuro.ds006317.v1.1.0](https://doi.org/10.18112/openneuro.ds006317.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006317 >>> dataset = DS006317(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chisco2_0', 'Chisco20', 'CHISCO20']* ### *class* eegdash.dataset.DS006334(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neocortical and Hippocampal Theta Oscillations Track Audiovisual Integration and Replay of Speech Memories * **Study:** `ds006334` (OpenNeuro) * **Author (year):** `Biau2025` * **Canonical:** — Also importable as: `DS006334`, `Biau2025`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006334](https://openneuro.org/datasets/ds006334) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006334](https://nemar.org/dataexplorer/detail?dataset_id=ds006334) DOI: [https://doi.org/10.18112/openneuro.ds006334.v1.0.0](https://doi.org/10.18112/openneuro.ds006334.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006334 >>> dataset = DS006334(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006366(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mouse Sleep Staging Validation dataset (MSSV) * **Study:** `ds006366` (OpenNeuro) * **Author (year):** `Rose2025` * **Canonical:** `MSSV` Also importable as: `DS006366`, `Rose2025`, `MSSV`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 92; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006366](https://openneuro.org/datasets/ds006366) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006366](https://nemar.org/dataexplorer/detail?dataset_id=ds006366) DOI: [https://doi.org/10.18112/openneuro.ds006366.v1.0.1](https://doi.org/10.18112/openneuro.ds006366.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006366 >>> dataset = DS006366(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MSSV']* ### *class* eegdash.dataset.DS006367(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference * **Study:** `ds006367` (OpenNeuro) * **Author (year):** `DS6367_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006367`, `DS6367_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006367](https://openneuro.org/datasets/ds006367) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006367](https://nemar.org/dataexplorer/detail?dataset_id=ds006367) DOI: [https://doi.org/10.18112/openneuro.ds006367.v1.0.1](https://doi.org/10.18112/openneuro.ds006367.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006367 >>> dataset = DS006367(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006370(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Memory Reactivation Levels Remain Unaffected by Anticipated Interference Experiment 2 Dataset * **Study:** `ds006370` (OpenNeuro) * **Author (year):** `DS6370_Memory_Reactivation` * **Canonical:** — Also importable as: `DS006370`, `DS6370_Memory_Reactivation`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 56; recordings: 56; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006370](https://openneuro.org/datasets/ds006370) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006370](https://nemar.org/dataexplorer/detail?dataset_id=ds006370) DOI: [https://doi.org/10.18112/openneuro.ds006370.v1.0.1](https://doi.org/10.18112/openneuro.ds006370.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006370 >>> dataset = DS006370(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006374(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Expectation effects on repetition suppression in nociception * **Study:** `ds006374` (OpenNeuro) * **Author (year):** `Pohle2025` * **Canonical:** `Pohle2019` Also importable as: `DS006374`, `Pohle2025`, `Pohle2019`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 36; recordings: 358; tasks: 2. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006374](https://openneuro.org/datasets/ds006374) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006374](https://nemar.org/dataexplorer/detail?dataset_id=ds006374) DOI: [https://doi.org/10.18112/openneuro.ds006374.v1.0.0](https://doi.org/10.18112/openneuro.ds006374.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006374 >>> dataset = DS006374(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Pohle2019']* ### *class* eegdash.dataset.DS006377(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) InclusionStudy * **Study:** `ds006377` (OpenNeuro) * **Author (year):** `Yucel2025_InclusionStudy` * **Canonical:** — Also importable as: `DS006377`, `Yucel2025_InclusionStudy`. 
Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 115; recordings: 690; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
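The `data_dir` attribute above is documented as `cache_dir / dataset_id`. A short sketch of how that path resolves with `pathlib` (directory names are illustrative):

```python
from pathlib import Path

def dataset_cache_path(cache_dir, dataset_id):
    """Mirror the documented layout: data_dir = cache_dir / dataset_id."""
    return Path(cache_dir) / dataset_id

print(dataset_cache_path("./data", "ds006377"))  # data/ds006377 (on POSIX)
```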
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006377](https://openneuro.org/datasets/ds006377) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006377](https://nemar.org/dataexplorer/detail?dataset_id=ds006377) DOI: [https://doi.org/10.18112/openneuro.ds006377.v1.0.2](https://doi.org/10.18112/openneuro.ds006377.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006377 >>> dataset = DS006377(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006386(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioMotion_Artifact * **Study:** `ds006386` (OpenNeuro) * **Author (year):** `Yu2025` * **Canonical:** `Yu2019` Also importable as: `DS006386`, `Yu2025`, `Yu2019`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006386](https://openneuro.org/datasets/ds006386) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006386](https://nemar.org/dataexplorer/detail?dataset_id=ds006386) DOI: [https://doi.org/10.18112/openneuro.ds006386.v1.0.1](https://doi.org/10.18112/openneuro.ds006386.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006386 >>> dataset = DS006386(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yu2019']* ### *class* eegdash.dataset.DS006392(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HED schema library for SCORE annotations example * **Study:** `ds006392` (OpenNeuro) * **Author (year):** `Attia2025` * **Canonical:** `Hermes2024` Also importable as: `DS006392`, `Attia2025`, `Hermes2024`. Modality: `ieeg`; Experiment type: `Perception`; Subject type: `Unknown`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006392](https://openneuro.org/datasets/ds006392) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006392](https://nemar.org/dataexplorer/detail?dataset_id=ds006392) DOI: [https://doi.org/10.18112/openneuro.ds006392.v1.0.1](https://doi.org/10.18112/openneuro.ds006392.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006392 >>> dataset = DS006392(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hermes2024']* ### *class* eegdash.dataset.DS006394(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electrophysiological markers of surprise-induced failures of visual and auditory awareness * **Study:** `ds006394` (OpenNeuro) * **Author (year):** `Leong2025` * **Canonical:** — Also importable as: `DS006394`, `Leong2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 33; recordings: 60; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006394](https://openneuro.org/datasets/ds006394) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006394](https://nemar.org/dataexplorer/detail?dataset_id=ds006394) DOI: [https://doi.org/10.18112/openneuro.ds006394.v1.0.3](https://doi.org/10.18112/openneuro.ds006394.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006394 >>> dataset = DS006394(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006434(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The auditory brainstem response to natural speech is not affected by selective attention * **Study:** `ds006434` (OpenNeuro) * **Author (year):** `Stoll2025` * **Canonical:** — Also importable as: `DS006434`, `Stoll2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 118; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006434](https://openneuro.org/datasets/ds006434) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006434](https://nemar.org/dataexplorer/detail?dataset_id=ds006434) DOI: [https://doi.org/10.18112/openneuro.ds006434.v1.2.0](https://doi.org/10.18112/openneuro.ds006434.v1.2.0) ### Examples ```pycon >>> from eegdash.dataset import DS006434 >>> dataset = DS006434(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006437(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LIGHT Hypnotherapy * **Study:** `ds006437` (OpenNeuro) * **Author (year):** `DS6437_LIGHT_Hypnotherapy` * **Canonical:** — Also importable as: `DS006437`, `DS6437_LIGHT_Hypnotherapy`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Healthy`. Subjects: 9; recordings: 63; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006437](https://openneuro.org/datasets/ds006437) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006437](https://nemar.org/dataexplorer/detail?dataset_id=ds006437) DOI: [https://doi.org/10.18112/openneuro.ds006437.v1.1.0](https://doi.org/10.18112/openneuro.ds006437.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006437 >>> dataset = DS006437(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006446(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cueing the future to reduce temporal discounting * **Study:** `ds006446` (OpenNeuro) * **Author (year):** `Kinley2025` * **Canonical:** `Kinley2019` Also importable as: `DS006446`, `Kinley2025`, `Kinley2019`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 29; recordings: 29; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006446](https://openneuro.org/datasets/ds006446) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006446](https://nemar.org/dataexplorer/detail?dataset_id=ds006446) DOI: [https://doi.org/10.18112/openneuro.ds006446.v1.0.0](https://doi.org/10.18112/openneuro.ds006446.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006446 >>> dataset = DS006446(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kinley2019']* ### *class* eegdash.dataset.DS006459(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_Sparse_Anderson_2025 * **Study:** `ds006459` (OpenNeuro) * **Author (year):** `Anderson2025_Sparse` * **Canonical:** — Also importable as: `DS006459`, `Anderson2025_Sparse`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006459](https://openneuro.org/datasets/ds006459) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006459](https://nemar.org/dataexplorer/detail?dataset_id=ds006459) DOI: [https://doi.org/10.18112/openneuro.ds006459.v1.0.0](https://doi.org/10.18112/openneuro.ds006459.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006459 >>> dataset = DS006459(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006460(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-DensityvSparsefNIRS_WordColorStroop_HD_Anderson_2025 * **Study:** `ds006460` (OpenNeuro) * **Author (year):** `Anderson2025_HD` * **Canonical:** — Also importable as: `DS006460`, `Anderson2025_HD`. Modality: `fnirs`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006460](https://openneuro.org/datasets/ds006460) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006460](https://nemar.org/dataexplorer/detail?dataset_id=ds006460) DOI: [https://doi.org/10.18112/openneuro.ds006460.v1.0.0](https://doi.org/10.18112/openneuro.ds006460.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006460 >>> dataset = DS006460(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006465(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 3M-CPSEED:An EEG-based Dataset for Chinese Pinyin Production in Overt, Silent-intended, and Imagined Speech * **Study:** `ds006465` (OpenNeuro) * **Author (year):** `Ma2025` * **Canonical:** `CPSEED_3M`, `CPSEED` Also importable as: `DS006465`, `Ma2025`, `CPSEED_3M`, `CPSEED`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 80; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
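The Notes repeatedly mention that `query` only supports filters on fields in `ALLOWED_QUERY_FIELDS`. A hedged sketch of what such validation could look like (the field set below is hypothetical; the real `ALLOWED_QUERY_FIELDS` lives in eegdash and may differ):

```python
# Hypothetical field set for illustration; the real ALLOWED_QUERY_FIELDS
# is defined inside eegdash and may contain different entries.
ALLOWED_QUERY_FIELDS = {"subject", "session", "task", "run"}

def validate_query(query):
    """Reject MongoDB-style filters that use fields outside the allowed set."""
    unknown = set(query) - ALLOWED_QUERY_FIELDS
    if unknown:
        raise ValueError(f"unsupported query fields: {sorted(unknown)}")
    return query

validate_query({"task": "overt"})  # passes silently
```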
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006465](https://openneuro.org/datasets/ds006465) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006465](https://nemar.org/dataexplorer/detail?dataset_id=ds006465) DOI: [https://doi.org/10.18112/openneuro.ds006465.v2.0.0](https://doi.org/10.18112/openneuro.ds006465.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006465 >>> dataset = DS006465(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CPSEED_3M', 'CPSEED']* ### *class* eegdash.dataset.DS006466(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HeartBEAM: Older Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006466` (OpenNeuro) * **Author (year):** `Kim2025_HeartBEAM_Older_Adult` * **Canonical:** `HeartBEAM` Also importable as: `DS006466`, `Kim2025_HeartBEAM_Older_Adult`, `HeartBEAM`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 66; recordings: 1257; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006466](https://openneuro.org/datasets/ds006466) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006466](https://nemar.org/dataexplorer/detail?dataset_id=ds006466) DOI: [https://doi.org/10.18112/openneuro.ds006466.v1.0.1](https://doi.org/10.18112/openneuro.ds006466.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006466 >>> dataset = DS006466(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HeartBEAM']* ### *class* eegdash.dataset.DS006468(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MEG-SCANS - A comprehensive magnetoencephalography speech dataset with Stories, Chirps And Noisy Sentences. * **Study:** `ds006468` (OpenNeuro) * **Author (year):** `Habersetzer2025` * **Canonical:** `MEG_SCANS` Also importable as: `DS006468`, `Habersetzer2025`, `MEG_SCANS`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 189; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006468](https://openneuro.org/datasets/ds006468) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006468](https://nemar.org/dataexplorer/detail?dataset_id=ds006468) DOI: [https://doi.org/10.18112/openneuro.ds006468.v1.1.2](https://doi.org/10.18112/openneuro.ds006468.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS006468 >>> dataset = DS006468(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MEG_SCANS']* ### *class* eegdash.dataset.DS006480(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Young Adult Resting State and Auditory Oddball Task EEG Data * **Study:** `ds006480` (OpenNeuro) * **Author (year):** `Kim2025_Young_Adult_Resting` * **Canonical:** — Also importable as: `DS006480`, `Kim2025_Young_Adult_Resting`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 68; recordings: 68; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006480](https://openneuro.org/datasets/ds006480) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006480](https://nemar.org/dataexplorer/detail?dataset_id=ds006480) DOI: [https://doi.org/10.18112/openneuro.ds006480.v1.0.1](https://doi.org/10.18112/openneuro.ds006480.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006480 >>> dataset = DS006480(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006502(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Skill learning and consolidation in healthy humans * **Study:** `ds006502` (OpenNeuro) * **Author (year):** `Bonstrup2025` * **Canonical:** — Also importable as: `DS006502`, `Bonstrup2025`. Modality: `meg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 31; recordings: 380; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006502](https://openneuro.org/datasets/ds006502) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006502](https://nemar.org/dataexplorer/detail?dataset_id=ds006502) DOI: [https://doi.org/10.18112/openneuro.ds006502.v1.0.0](https://doi.org/10.18112/openneuro.ds006502.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006502 >>> dataset = DS006502(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006519(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of intracranial EEG during cortical stimulations evoking negative motor responses * **Study:** `ds006519` (OpenNeuro) * **Author (year):** `Barborica2025` * **Canonical:** — Also importable as: `DS006519`, `Barborica2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 21; recordings: 35; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006519](https://openneuro.org/datasets/ds006519) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006519](https://nemar.org/dataexplorer/detail?dataset_id=ds006519) DOI: [https://doi.org/10.18112/openneuro.ds006519.v1.0.0](https://doi.org/10.18112/openneuro.ds006519.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006519 >>> dataset = DS006519(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006525(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting EEG * **Study:** `ds006525` (OpenNeuro) * **Author (year):** `Neuroimaging2025` * **Canonical:** — Also importable as: `DS006525`, `Neuroimaging2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006525](https://openneuro.org/datasets/ds006525) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006525](https://nemar.org/dataexplorer/detail?dataset_id=ds006525) DOI: [https://doi.org/10.18112/openneuro.ds006525.v1.0.0](https://doi.org/10.18112/openneuro.ds006525.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006525 >>> dataset = DS006525(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006545(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Reliability-Dubois2024 * **Study:** `ds006545` (OpenNeuro) * **Author (year):** `ReliabilityDubois2024` * **Canonical:** `Dubois2024` Also importable as: `DS006545`, `ReliabilityDubois2024`, `Dubois2024`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 49; recordings: 98; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006545](https://openneuro.org/datasets/ds006545) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006545](https://nemar.org/dataexplorer/detail?dataset_id=ds006545) DOI: [https://doi.org/10.18112/openneuro.ds006545.v1.0.0](https://doi.org/10.18112/openneuro.ds006545.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006545 >>> dataset = DS006545(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Dubois2024']* ### *class* eegdash.dataset.DS006547(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual EEG Study (BrainVision → BIDS) * **Study:** `ds006547` (OpenNeuro) * **Author (year):** `Ghaffari2025` * **Canonical:** `Ghaffari2024` Also importable as: `DS006547`, `Ghaffari2025`, `Ghaffari2024`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
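The Notes state that `query` accepts MongoDB-style filters restricted to fields in `ALLOWED_QUERY_FIELDS`. A minimal sketch of such matching is below; the field whitelist, the `matches` helper, and the record values are all hypothetical stand-ins, not EEGDash internals — the real library validates against its own `ALLOWED_QUERY_FIELDS` and delegates matching to the metadata backend.

```python
# Illustrative sketch only: MongoDB-style matching over whitelisted fields.
# ALLOWED_QUERY_FIELDS here is an invented example subset, not the real list.

ALLOWED_QUERY_FIELDS = {"subject", "task", "session", "run"}

def matches(record: dict, query: dict) -> bool:
    """Check one metadata record against a query.

    Supports plain equality and the MongoDB `$in` operator.
    """
    for field, cond in query.items():
        if field not in ALLOWED_QUERY_FIELDS:
            raise KeyError(f"unsupported query field: {field!r}")
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"subject": "01", "task": "oddball"},
    {"subject": "02", "task": "rest"},
]
hits = [r for r in records if matches(r, {"task": {"$in": ["rest"]}})]
```

Equality filters (`{"subject": "01"}`) and operator filters (`{"task": {"$in": [...]}}`) can be freely mixed, mirroring how MongoDB ANDs top-level query keys.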
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006547](https://openneuro.org/datasets/ds006547) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006547](https://nemar.org/dataexplorer/detail?dataset_id=ds006547) DOI: [https://doi.org/10.18112/openneuro.ds006547.v1.0.0](https://doi.org/10.18112/openneuro.ds006547.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006547 >>> dataset = DS006547(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ghaffari2024']* ### *class* eegdash.dataset.DS006554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Social Observation EEG raw data * **Study:** `ds006554` (OpenNeuro) * **Author (year):** `Su2025` * **Canonical:** — Also importable as: `DS006554`, `Su2025`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006554](https://openneuro.org/datasets/ds006554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006554](https://nemar.org/dataexplorer/detail?dataset_id=ds006554) DOI: [https://doi.org/10.18112/openneuro.ds006554.v1.0.0](https://doi.org/10.18112/openneuro.ds006554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006554 >>> dataset = DS006554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006563(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dimension-based attention modulates early visual processing * **Study:** `ds006563` (OpenNeuro) * **Author (year):** `Gramann2025` * **Canonical:** — Also importable as: `DS006563`, `Gramann2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006563](https://openneuro.org/datasets/ds006563) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006563](https://nemar.org/dataexplorer/detail?dataset_id=ds006563) DOI: [https://doi.org/10.18112/openneuro.ds006563.v1.0.0](https://doi.org/10.18112/openneuro.ds006563.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006563 >>> dataset = DS006563(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006576(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The role of REM sleep in neural differentiation of memories in the hippocampus * **Study:** `ds006576` (OpenNeuro) * **Author (year):** `McDevitt2025` * **Canonical:** — Also importable as: `DS006576`, `McDevitt2025`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006576](https://openneuro.org/datasets/ds006576) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006576](https://nemar.org/dataexplorer/detail?dataset_id=ds006576) DOI: [https://doi.org/10.18112/openneuro.ds006576.v1.0.3](https://doi.org/10.18112/openneuro.ds006576.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006576 >>> dataset = DS006576(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006593(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) cBCI Matrix Multimodal Dataset * **Study:** `ds006593` (OpenNeuro) * **Author (year):** `Celik2025` * **Canonical:** — Also importable as: `DS006593`, `Celik2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 21; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006593](https://openneuro.org/datasets/ds006593) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006593](https://nemar.org/dataexplorer/detail?dataset_id=ds006593) DOI: [https://doi.org/10.18112/openneuro.ds006593.v1.0.0](https://doi.org/10.18112/openneuro.ds006593.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006593 >>> dataset = DS006593(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006629(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SINGSING * **Study:** `ds006629` (OpenNeuro) * **Author (year):** `Chanoine2025` * **Canonical:** `SINGSING` Also importable as: `DS006629`, `Chanoine2025`, `SINGSING`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 19; recordings: 38; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006629](https://openneuro.org/datasets/ds006629) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006629](https://nemar.org/dataexplorer/detail?dataset_id=ds006629) DOI: [https://doi.org/10.18112/openneuro.ds006629.v1.0.1](https://doi.org/10.18112/openneuro.ds006629.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006629 >>> dataset = DS006629(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SINGSING']* ### *class* eegdash.dataset.DS006647(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 2 * **Study:** `ds006647` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D2` * **Canonical:** — Also importable as: `DS006647`, `Chaudhuri2025_D2`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 4; recordings: 4; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006647](https://openneuro.org/datasets/ds006647) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006647](https://nemar.org/dataexplorer/detail?dataset_id=ds006647) DOI: [https://doi.org/10.18112/openneuro.ds006647.v1.0.1](https://doi.org/10.18112/openneuro.ds006647.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006647 >>> dataset = DS006647(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006648(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Poetry Assessment EEG Dataset 1 * **Study:** `ds006648` (OpenNeuro) * **Author (year):** `Chaudhuri2025_D1` * **Canonical:** — Also importable as: `DS006648`, `Chaudhuri2025_D1`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006648](https://openneuro.org/datasets/ds006648) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006648](https://nemar.org/dataexplorer/detail?dataset_id=ds006648) DOI: [https://doi.org/10.18112/openneuro.ds006648.v1.0.0](https://doi.org/10.18112/openneuro.ds006648.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006648 >>> dataset = DS006648(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006673(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_Carlton_2025 * **Study:** `ds006673` (OpenNeuro) * **Author (year):** `Carlton2025` * **Canonical:** — Also importable as: `DS006673`, `Carlton2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006673](https://openneuro.org/datasets/ds006673) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006673](https://nemar.org/dataexplorer/detail?dataset_id=ds006673) DOI: [https://doi.org/10.18112/openneuro.ds006673.v1.0.2](https://doi.org/10.18112/openneuro.ds006673.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006673 >>> dataset = DS006673(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006695(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Validation of Sleep Staging with Forehead EEG Patch * **Study:** `ds006695` (OpenNeuro) * **Author (year):** `Onton2025` * **Canonical:** `Onton2024` Also importable as: `DS006695`, `Onton2025`, `Onton2024`. Modality: `eeg`; Experiment type: `Sleep`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006695](https://openneuro.org/datasets/ds006695) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006695](https://nemar.org/dataexplorer/detail?dataset_id=ds006695) DOI: [https://doi.org/10.18112/openneuro.ds006695.v1.0.2](https://doi.org/10.18112/openneuro.ds006695.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006695 >>> dataset = DS006695(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Onton2024']* ### *class* eegdash.dataset.DS006720(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alpha power indexes working memory load for durations * **Study:** `ds006720` (OpenNeuro) * **Author (year):** `Herbst2025` * **Canonical:** — Also importable as: `DS006720`, `Herbst2025`. Modality: `meg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 24; recordings: 246; tasks: 3. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006720](https://openneuro.org/datasets/ds006720) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006720](https://nemar.org/dataexplorer/detail?dataset_id=ds006720) DOI: [https://doi.org/10.18112/openneuro.ds006720.v1.0.0](https://doi.org/10.18112/openneuro.ds006720.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006720 >>> dataset = DS006720(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006735(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding * **Study:** `ds006735` (OpenNeuro) * **Author (year):** `Shan2025` * **Canonical:** — Also importable as: `DS006735`, `Shan2025`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 27; recordings: 27; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006735](https://openneuro.org/datasets/ds006735) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006735](https://nemar.org/dataexplorer/detail?dataset_id=ds006735) DOI: [https://doi.org/10.18112/openneuro.ds006735.v2.0.0](https://doi.org/10.18112/openneuro.ds006735.v2.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006735 >>> dataset = DS006735(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006761(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neural decoding of competitive decision-making in Rock-Paper-Scissors * **Study:** `ds006761` (OpenNeuro) * **Author (year):** `Moerel2025_Neural` * **Canonical:** — Also importable as: `DS006761`, `Moerel2025_Neural`. Modality: `eeg`; Experiment type: `Decision-making`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006761](https://openneuro.org/datasets/ds006761) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006761](https://nemar.org/dataexplorer/detail?dataset_id=ds006761) DOI: [https://doi.org/10.18112/openneuro.ds006761.v1.0.0](https://doi.org/10.18112/openneuro.ds006761.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006761 >>> dataset = DS006761(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006768(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multiple Object Monitoring (EEG) * **Study:** `ds006768` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** — Also importable as: `DS006768`, `Lowe2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 210; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006768](https://openneuro.org/datasets/ds006768) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006768](https://nemar.org/dataexplorer/detail?dataset_id=ds006768) DOI: [https://doi.org/10.18112/openneuro.ds006768.v1.1.0](https://doi.org/10.18112/openneuro.ds006768.v1.1.0) ### Examples ```pycon >>> from eegdash.dataset import DS006768 >>> dataset = DS006768(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006801(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-state EEG before and after different study methods * **Study:** `ds006801` (OpenNeuro) * **Author (year):** `Alves2025` * **Canonical:** — Also importable as: `DS006801`, `Alves2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006801](https://openneuro.org/datasets/ds006801) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006801](https://nemar.org/dataexplorer/detail?dataset_id=ds006801) DOI: [https://doi.org/10.18112/openneuro.ds006801.v1.0.0](https://doi.org/10.18112/openneuro.ds006801.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006801 >>> dataset = DS006801(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006802(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Collaborative rule learning promotes interbrain information alignment * **Study:** `ds006802` (OpenNeuro) * **Author (year):** `Moerel2025_Collaborative` * **Canonical:** — Also importable as: `DS006802`, `Moerel2025_Collaborative`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 24; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006802](https://openneuro.org/datasets/ds006802) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006802](https://nemar.org/dataexplorer/detail?dataset_id=ds006802) DOI: [https://doi.org/10.18112/openneuro.ds006802.v1.0.0](https://doi.org/10.18112/openneuro.ds006802.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006802 >>> dataset = DS006802(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006803(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NeuroTechs Dataset for Stem Skills * **Study:** `ds006803` (OpenNeuro) * **Author (year):** `PechCanul2025` * **Canonical:** — Also importable as: `DS006803`, `PechCanul2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006803](https://openneuro.org/datasets/ds006803) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006803](https://nemar.org/dataexplorer/detail?dataset_id=ds006803) DOI: [https://doi.org/10.18112/openneuro.ds006803.v1.1.1](https://doi.org/10.18112/openneuro.ds006803.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006803 >>> dataset = DS006803(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006817(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Attribute-Specific Contextual Trajectory Paradigm 2.0 * **Study:** `ds006817` (OpenNeuro) * **Author (year):** `Lowe2025` * **Canonical:** `VisualContextTrajectory_v2` Also importable as: `DS006817`, `Lowe2025`, `VisualContextTrajectory_v2`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 34; recordings: 34; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006817](https://openneuro.org/datasets/ds006817) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006817](https://nemar.org/dataexplorer/detail?dataset_id=ds006817) DOI: [https://doi.org/10.18112/openneuro.ds006817.v1.0.0](https://doi.org/10.18112/openneuro.ds006817.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006817 >>> dataset = DS006817(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['VisualContextTrajectory_v2', 'Lowe2025']* ### *class* eegdash.dataset.DS006839(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG recordings during sham neurofeedback in virtual reality * **Study:** `ds006839` (OpenNeuro) * **Author (year):** `Gonzales2025` * **Canonical:** — Also importable as: `DS006839`, `Gonzales2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 36; recordings: 144; tasks: 4. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006839](https://openneuro.org/datasets/ds006839) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006839](https://nemar.org/dataexplorer/detail?dataset_id=ds006839) DOI: [https://doi.org/10.18112/openneuro.ds006839.v1.0.0](https://doi.org/10.18112/openneuro.ds006839.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006839 >>> dataset = DS006839(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006840(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) IACKD: Intention Action Conflict EEG-Hand Kinematics Dataset * **Study:** `ds006840` (OpenNeuro) * **Author (year):** `Cai2025` * **Canonical:** `IACKD` Also importable as: `DS006840`, `Cai2025`, `IACKD`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006840](https://openneuro.org/datasets/ds006840) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006840](https://nemar.org/dataexplorer/detail?dataset_id=ds006840) DOI: [https://doi.org/10.18112/openneuro.ds006840.v1.0.0](https://doi.org/10.18112/openneuro.ds006840.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006840 >>> dataset = DS006840(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['IACKD']* ### *class* eegdash.dataset.DS006848(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) AlphaDirection1: EEG, ECG, PPG in the resting state and working memory for sequentially and simultaneously presented digits * **Study:** `ds006848` (OpenNeuro) * **Author (year):** `Kosachenko2025` * **Canonical:** — Also importable as: `DS006848`, `Kosachenko2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 30; recordings: 52; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006848](https://openneuro.org/datasets/ds006848) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006848](https://nemar.org/dataexplorer/detail?dataset_id=ds006848) DOI: [https://doi.org/10.18112/openneuro.ds006848.v1.0.0](https://doi.org/10.18112/openneuro.ds006848.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006848 >>> dataset = DS006848(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006850(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Urban Appraisal: Physiological Recording during Rating of Different Urban Environments * **Study:** `ds006850` (OpenNeuro) * **Author (year):** `Zaehme2025` * **Canonical:** — Also importable as: `DS006850`, `Zaehme2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 63; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006850](https://openneuro.org/datasets/ds006850) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006850](https://nemar.org/dataexplorer/detail?dataset_id=ds006850) DOI: [https://doi.org/10.18112/openneuro.ds006850.v1.0.0](https://doi.org/10.18112/openneuro.ds006850.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006850 >>> dataset = DS006850(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006861(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Targeted Neuromodulation of the Left Dorsolateral Prefrontal Cortex Alleviates Altered Affective Response Evaluation in Lonely Individuals * **Study:** `ds006861` (OpenNeuro) * **Author (year):** `Maka2025_Targeted` * **Canonical:** — Also importable as: `DS006861`, `Maka2025_Targeted`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 120; recordings: 239; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006861](https://openneuro.org/datasets/ds006861) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006861](https://nemar.org/dataexplorer/detail?dataset_id=ds006861) DOI: [https://doi.org/10.18112/openneuro.ds006861.v1.0.2](https://doi.org/10.18112/openneuro.ds006861.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS006861 >>> dataset = DS006861(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006866(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Discrepancy between self-report and neurophysiological markers of socio-affective responses in lonely individuals * **Study:** `ds006866` (OpenNeuro) * **Author (year):** `Maka2025_Discrepancy` * **Canonical:** — Also importable as: `DS006866`, `Maka2025_Discrepancy`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 148; recordings: 148; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006866](https://openneuro.org/datasets/ds006866) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006866](https://nemar.org/dataexplorer/detail?dataset_id=ds006866) DOI: [https://doi.org/10.18112/openneuro.ds006866.v1.0.0](https://doi.org/10.18112/openneuro.ds006866.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006866 >>> dataset = DS006866(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006890(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal Multitask Wireless ECoG Data from Two Fully Implanted Macaca fuscata * **Study:** `ds006890` (OpenNeuro) * **Author (year):** `Yang2025_Longitudinal` * **Canonical:** — Also importable as: `DS006890`, `Yang2025_Longitudinal`. Modality: `ieeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 2; recordings: 870; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006890](https://openneuro.org/datasets/ds006890) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006890](https://nemar.org/dataexplorer/detail?dataset_id=ds006890) DOI: [https://doi.org/10.18112/openneuro.ds006890.v1.0.0](https://doi.org/10.18112/openneuro.ds006890.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006890 >>> dataset = DS006890(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006902(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Profound neuronal differences during Exercise-Induced Hypoalgesia between athletes and non-athletes revealed by functional near-infrared spectroscopy * **Study:** `ds006902` (OpenNeuro) * **Author (year):** `Geisler2025` * **Canonical:** — Also importable as: `DS006902`, `Geisler2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006902](https://openneuro.org/datasets/ds006902) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006902](https://nemar.org/dataexplorer/detail?dataset_id=ds006902) DOI: [https://doi.org/10.18112/openneuro.ds006902.v1.1.1](https://doi.org/10.18112/openneuro.ds006902.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006902 >>> dataset = DS006902(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006903(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ball_squeeze_2025 * **Study:** `ds006903` (OpenNeuro) * **Author (year):** `here2025` * **Canonical:** — Also importable as: `DS006903`, `here2025`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. 
Subjects: 17; recordings: 67; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006903](https://openneuro.org/datasets/ds006903) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006903](https://nemar.org/dataexplorer/detail?dataset_id=ds006903) DOI: [https://doi.org/10.18112/openneuro.ds006903.v1.0.0](https://doi.org/10.18112/openneuro.ds006903.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006903 >>> dataset = DS006903(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006910(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Naming EC * **Study:** `ds006910` (OpenNeuro) * **Author (year):** `Kochi2025_Auditory_Naming_EC` * **Canonical:** — Also importable as: `DS006910`, `Kochi2025_Auditory_Naming_EC`. 
Modality: `ieeg`; Experiment type: `Other`; Subject type: `Unknown`. Subjects: 121; recordings: 384; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006910](https://openneuro.org/datasets/ds006910) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006910](https://nemar.org/dataexplorer/detail?dataset_id=ds006910) DOI: [https://doi.org/10.18112/openneuro.ds006910.v1.0.1](https://doi.org/10.18112/openneuro.ds006910.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006910 >>> dataset = DS006910(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006914(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual Naming EC * **Study:** `ds006914` (OpenNeuro) * **Author (year):** `Kochi2025_Visual_Naming_EC` * **Canonical:** — Also importable as: `DS006914`, `Kochi2025_Visual_Naming_EC`. Modality: `ieeg`; Experiment type: `Other`; Subject type: `Epilepsy`. Subjects: 110; recordings: 353; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006914](https://openneuro.org/datasets/ds006914) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006914](https://nemar.org/dataexplorer/detail?dataset_id=ds006914) DOI: [https://doi.org/10.18112/openneuro.ds006914.v1.0.3](https://doi.org/10.18112/openneuro.ds006914.v1.0.3) ### Examples ```pycon >>> from eegdash.dataset import DS006914 >>> dataset = DS006914(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006921(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High Density Resting State EEG of Phantom Limb Pain and Controls * **Study:** `ds006921` (OpenNeuro) * **Author (year):** `Ramne2025` * **Canonical:** — Also importable as: `DS006921`, `Ramne2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 38; recordings: 152; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006921](https://openneuro.org/datasets/ds006921) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006921](https://nemar.org/dataexplorer/detail?dataset_id=ds006921) DOI: [https://doi.org/10.18112/openneuro.ds006921.v1.1.1](https://doi.org/10.18112/openneuro.ds006921.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS006921 >>> dataset = DS006921(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006923(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of Electroencephalograms of Juvenile Offenders * **Study:** `ds006923` (OpenNeuro) * **Author (year):** `Polo2025` * **Canonical:** — Also importable as: `DS006923`, `Polo2025`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 140; recordings: 280; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006923](https://openneuro.org/datasets/ds006923) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006923](https://nemar.org/dataexplorer/detail?dataset_id=ds006923) DOI: [https://doi.org/10.18112/openneuro.ds006923.v1.0.0](https://doi.org/10.18112/openneuro.ds006923.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006923 >>> dataset = DS006923(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006940(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: EEG-Controlled Exoskeleton for Walking and Standing - A Longitudinal Study of Healthy Individuals * **Study:** `ds006940` (OpenNeuro) * **Author (year):** `Sarkar2025_StudyOF` * **Canonical:** — Also importable as: `DS006940`, `Sarkar2025_StudyOF`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 935; tasks: 15. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006940](https://openneuro.org/datasets/ds006940) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006940](https://nemar.org/dataexplorer/detail?dataset_id=ds006940) DOI: [https://doi.org/10.18112/openneuro.ds006940.v1.0.0](https://doi.org/10.18112/openneuro.ds006940.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006940 >>> dataset = DS006940(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006945(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset: T1-Weighted Structural MRI and fMRI of Participants Viewing Self-Avatar Exoskeleton Walking (11 SWS Cycles) * **Study:** `ds006945` (OpenNeuro) * **Author (year):** `Sarkar2025_T1_Weighted_Structural` * **Canonical:** — Also importable as: `DS006945`, `Sarkar2025_T1_Weighted_Structural`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 14; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006945](https://openneuro.org/datasets/ds006945) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006945](https://nemar.org/dataexplorer/detail?dataset_id=ds006945) DOI: [https://doi.org/10.18112/openneuro.ds006945.v1.2.1](https://doi.org/10.18112/openneuro.ds006945.v1.2.1) ### Examples ```pycon >>> from eegdash.dataset import DS006945 >>> dataset = DS006945(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006963(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Control Processes Moderate Visual Working Memory Gating Dataset * **Study:** `ds006963` (OpenNeuro) * **Author (year):** `Ozdemir2025` * **Canonical:** — Also importable as: `DS006963`, `Ozdemir2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 32; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006963](https://openneuro.org/datasets/ds006963) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006963](https://nemar.org/dataexplorer/detail?dataset_id=ds006963) DOI: [https://doi.org/10.18112/openneuro.ds006963.v1.0.0](https://doi.org/10.18112/openneuro.ds006963.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS006963 >>> dataset = DS006963(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS006979(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Examining Perceptual Grouping on Stages of Processing in Visual Working Memory: An ERP Study * **Study:** `ds006979` (OpenNeuro) * **Author (year):** `Ramzaoui2025` * **Canonical:** `Ramzaoui2024` Also importable as: `DS006979`, `Ramzaoui2025`, `Ramzaoui2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 53; recordings: 56; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds006979](https://openneuro.org/datasets/ds006979) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds006979](https://nemar.org/dataexplorer/detail?dataset_id=ds006979) DOI: [https://doi.org/10.18112/openneuro.ds006979.v1.0.1](https://doi.org/10.18112/openneuro.ds006979.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS006979 >>> dataset = DS006979(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ramzaoui2024']* ### *class* eegdash.dataset.DS007006(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) VR-Compassion Cultivation Training * **Study:** `ds007006` (OpenNeuro) * **Author (year):** `Wu2025` * **Canonical:** — Also importable as: `DS007006`, `Wu2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 10; recordings: 50; tasks: 5. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007006](https://openneuro.org/datasets/ds007006) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007006](https://nemar.org/dataexplorer/detail?dataset_id=ds007006) DOI: [https://doi.org/10.18112/openneuro.ds007006.v1.0.0](https://doi.org/10.18112/openneuro.ds007006.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007006 >>> dataset = DS007006(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007020(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Mortality Dataset in Parkinson’s Disease * **Study:** `ds007020` (OpenNeuro) * **Author (year):** `Jamshidi2025` * **Canonical:** — Also importable as: `DS007020`, `Jamshidi2025`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 94; recordings: 94; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007020](https://openneuro.org/datasets/ds007020) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007020](https://nemar.org/dataexplorer/detail?dataset_id=ds007020) DOI: [https://doi.org/10.18112/openneuro.ds007020.v1.0.0](https://doi.org/10.18112/openneuro.ds007020.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007020 >>> dataset = DS007020(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007028(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Auditory Cortex Macaque Monkey DISC Data * **Study:** `ds007028` (OpenNeuro) * **Author (year):** `Kajikawa2025` * **Canonical:** `Kajikawa2000` Also importable as: `DS007028`, `Kajikawa2025`, `Kajikawa2000`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 3; recordings: 3; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007028](https://openneuro.org/datasets/ds007028) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007028](https://nemar.org/dataexplorer/detail?dataset_id=ds007028) DOI: [https://doi.org/10.18112/openneuro.ds007028.v1.0.0](https://doi.org/10.18112/openneuro.ds007028.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007028 >>> dataset = DS007028(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kajikawa2000']* ### *class* eegdash.dataset.DS007052(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N400 Word Processing * **Study:** `ds007052` (OpenNeuro) * **Author (year):** `Couperus2025_N400` * **Canonical:** `Couperus2021_N400` Also importable as: `DS007052`, `Couperus2025_N400`, `Couperus2021_N400`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 288; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007052](https://openneuro.org/datasets/ds007052) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007052](https://nemar.org/dataexplorer/detail?dataset_id=ds007052) DOI: [https://doi.org/10.18112/openneuro.ds007052.v1.1.2](https://doi.org/10.18112/openneuro.ds007052.v1.1.2) ### Examples ```pycon >>> from eegdash.dataset import DS007052 >>> dataset = DS007052(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_N400']* ### *class* eegdash.dataset.DS007056(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE P300 Visual Oddball * **Study:** `ds007056` (OpenNeuro) * **Author (year):** `Couperus2025_P300` * **Canonical:** `Couperus2021_P300` Also importable as: `DS007056`, `Couperus2025_P300`, `Couperus2021_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 286; recordings: 286; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007056](https://openneuro.org/datasets/ds007056) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007056](https://nemar.org/dataexplorer/detail?dataset_id=ds007056) DOI: [https://doi.org/10.18112/openneuro.ds007056.v1.1.1](https://doi.org/10.18112/openneuro.ds007056.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007056 >>> dataset = DS007056(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_P300']* ### *class* eegdash.dataset.DS007069(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE MMN Auditory Oddball * **Study:** `ds007069` (OpenNeuro) * **Author (year):** `Couperus2025_MMN` * **Canonical:** `Couperus2021_MMN` Also importable as: `DS007069`, `Couperus2025_MMN`, `Couperus2021_MMN`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 281; recordings: 281; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007069](https://openneuro.org/datasets/ds007069) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007069](https://nemar.org/dataexplorer/detail?dataset_id=ds007069) DOI: [https://doi.org/10.18112/openneuro.ds007069.v1.0.0](https://doi.org/10.18112/openneuro.ds007069.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007069 >>> dataset = DS007069(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_MMN']* ### *class* eegdash.dataset.DS007081(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Passive but accessible: Studied information is not actively stored in working memory, yet attended regardless of anticipated load * **Study:** `ds007081` (OpenNeuro) * **Author (year):** `Ylmaz2025` * **Canonical:** — Also importable as: `DS007081`, `Ylmaz2025`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007081](https://openneuro.org/datasets/ds007081) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007081](https://nemar.org/dataexplorer/detail?dataset_id=ds007081) DOI: [https://doi.org/10.18112/openneuro.ds007081.v1.0.0](https://doi.org/10.18112/openneuro.ds007081.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007081 >>> dataset = DS007081(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007095(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RNS_Epilepsy-iBIDS * **Study:** `ds007095` (OpenNeuro) * **Author (year):** `Feng2025` * **Canonical:** — Also importable as: `DS007095`, `Feng2025`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 8; recordings: 6019; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007095](https://openneuro.org/datasets/ds007095) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007095](https://nemar.org/dataexplorer/detail?dataset_id=ds007095) DOI: [https://doi.org/10.18112/openneuro.ds007095.v1.0.0](https://doi.org/10.18112/openneuro.ds007095.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007095 >>> dataset = DS007095(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007096(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N170 Face Perception * **Study:** `ds007096` (OpenNeuro) * **Author (year):** `Couperus2025_PURSUE_N170_Face` * **Canonical:** `Couperus2017` Also importable as: `DS007096`, `Couperus2025_PURSUE_N170_Face`, `Couperus2017`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007096](https://openneuro.org/datasets/ds007096) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007096](https://nemar.org/dataexplorer/detail?dataset_id=ds007096) DOI: [https://doi.org/10.18112/openneuro.ds007096.v1.0.0](https://doi.org/10.18112/openneuro.ds007096.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007096 >>> dataset = DS007096(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2017']* ### *class* eegdash.dataset.DS007118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part1 * **Study:** `ds007118` (OpenNeuro) * **Author (year):** `Hatano2025_part1` * **Canonical:** `Hatano` Also importable as: `DS007118`, `Hatano2025_part1`, `Hatano`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 65; recordings: 82; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007118](https://openneuro.org/datasets/ds007118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007118](https://nemar.org/dataexplorer/detail?dataset_id=ds007118) DOI: [https://doi.org/10.18112/openneuro.ds007118.v1.0.0](https://doi.org/10.18112/openneuro.ds007118.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007118 >>> dataset = DS007118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hatano']* ### *class* eegdash.dataset.DS007119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part3 * **Study:** `ds007119` (OpenNeuro) * **Author (year):** `Hatano2025_part3` * **Canonical:** — Also importable as: `DS007119`, `Hatano2025_part3`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Unknown`. Subjects: 103; recordings: 106; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007119](https://openneuro.org/datasets/ds007119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007119](https://nemar.org/dataexplorer/detail?dataset_id=ds007119) DOI: [https://doi.org/10.18112/openneuro.ds007119.v1.0.0](https://doi.org/10.18112/openneuro.ds007119.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007119 >>> dataset = DS007119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) iEEG_comprehensive_HFA_model_part2 * **Study:** `ds007120` (OpenNeuro) * **Author (year):** `Hatano2025_part2` * **Canonical:** — Also importable as: `DS007120`, `Hatano2025_part2`. Modality: `ieeg`; Experiment type: `Sleep`; Subject type: `Epilepsy`. Subjects: 65; recordings: 70; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007120](https://openneuro.org/datasets/ds007120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007120](https://nemar.org/dataexplorer/detail?dataset_id=ds007120) DOI: [https://doi.org/10.18112/openneuro.ds007120.v1.0.0](https://doi.org/10.18112/openneuro.ds007120.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007120 >>> dataset = DS007120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE N2pc Visual Search * **Study:** `ds007137` (OpenNeuro) * **Author (year):** `Couperus2025_N2PC` * **Canonical:** `Couperus2021_N2pc` Also importable as: `DS007137`, `Couperus2025_N2PC`, `Couperus2021_N2pc`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 294; recordings: 294; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007137](https://openneuro.org/datasets/ds007137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007137](https://nemar.org/dataexplorer/detail?dataset_id=ds007137) DOI: [https://doi.org/10.18112/openneuro.ds007137.v1.0.0](https://doi.org/10.18112/openneuro.ds007137.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007137 >>> dataset = DS007137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_N2pc']* ### *class* eegdash.dataset.DS007139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PURSUE LRP/ERN Flanker * **Study:** `ds007139` (OpenNeuro) * **Author (year):** `Couperus2025_LRP` * **Canonical:** `Couperus2021_LRP` Also importable as: `DS007139`, `Couperus2025_LRP`, `Couperus2021_LRP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 292; recordings: 292; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007139](https://openneuro.org/datasets/ds007139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007139](https://nemar.org/dataexplorer/detail?dataset_id=ds007139) DOI: [https://doi.org/10.18112/openneuro.ds007139.v1.0.0](https://doi.org/10.18112/openneuro.ds007139.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007139 >>> dataset = DS007139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Couperus2021_LRP']* ### *class* eegdash.dataset.DS007162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Adaptive recruitment of cortex-wide recurrence for visual object recognition (EEG) * **Study:** `ds007162` (OpenNeuro) * **Author (year):** `DS7162_VisualRecognition` * **Canonical:** — Also importable as: `DS007162`, `DS7162_VisualRecognition`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 69; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007162](https://openneuro.org/datasets/ds007162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007162](https://nemar.org/dataexplorer/detail?dataset_id=ds007162) DOI: [https://doi.org/10.18112/openneuro.ds007162.v1.0.0](https://doi.org/10.18112/openneuro.ds007162.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007162 >>> dataset = DS007162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal Cognitive Workload n-back Task, 4 Difficulties * **Study:** `ds007169` (OpenNeuro) * **Author (year):** `Barras2026_Multimodal` * **Canonical:** `Barras2021` Also importable as: `DS007169`, `Barras2026_Multimodal`, `Barras2021`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. 
Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007169](https://openneuro.org/datasets/ds007169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007169](https://nemar.org/dataexplorer/detail?dataset_id=ds007169) DOI: [https://doi.org/10.18112/openneuro.ds007169.v1.0.5](https://doi.org/10.18112/openneuro.ds007169.v1.0.5) ### Examples ```pycon >>> from eegdash.dataset import DS007169 >>> dataset = DS007169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Barras2021']* ### *class* eegdash.dataset.DS007172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Asymmetries Dataset * **Study:** `ds007172` (OpenNeuro) * **Author (year):** `Reinke2026` * **Canonical:** `EEGAsymmetries` Also importable as: `DS007172`, `Reinke2026`, `EEGAsymmetries`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 100; recordings: 501; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007172](https://openneuro.org/datasets/ds007172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007172](https://nemar.org/dataexplorer/detail?dataset_id=ds007172) DOI: [https://doi.org/10.18112/openneuro.ds007172.v1.0.0](https://doi.org/10.18112/openneuro.ds007172.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007172 >>> dataset = DS007172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGAsymmetries']* ### *class* eegdash.dataset.DS007175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FFR-active-listening * **Study:** `ds007175` (OpenNeuro) * **Author (year):** `DS7175_FFR_ActiveListening` * **Canonical:** — Also importable as: `DS007175`, `DS7175_FFR_ActiveListening`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 41; recordings: 41; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007175](https://openneuro.org/datasets/ds007175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007175](https://nemar.org/dataexplorer/detail?dataset_id=ds007175) DOI: [https://doi.org/10.18112/openneuro.ds007175.v1.0.1](https://doi.org/10.18112/openneuro.ds007175.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007175 >>> dataset = DS007175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Longitudinal EEG Test-Retest Reliability in Healthy Individuals * **Study:** `ds007176` (OpenNeuro) * **Author (year):** `Isaza2026_Longitudinal` * **Canonical:** — Also importable as: `DS007176`, `Isaza2026_Longitudinal`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 45; recordings: 300; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007176](https://openneuro.org/datasets/ds007176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007176](https://nemar.org/dataexplorer/detail?dataset_id=ds007176) DOI: [https://doi.org/10.18112/openneuro.ds007176.v1.0.1](https://doi.org/10.18112/openneuro.ds007176.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007176 >>> dataset = DS007176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Exo-EEG Experiment * **Study:** `ds007180` (OpenNeuro) * **Author (year):** `FuentesGuerra2026` * **Canonical:** `FuentesGuerra2024` Also importable as: `DS007180`, `FuentesGuerra2026`, `FuentesGuerra2024`. Modality: `eeg`; Experiment type: `Unknown`; Subject type: `Healthy`. Subjects: 25; recordings: 25; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007180](https://openneuro.org/datasets/ds007180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007180](https://nemar.org/dataexplorer/detail?dataset_id=ds007180) DOI: [https://doi.org/10.18112/openneuro.ds007180.v1.0.0](https://doi.org/10.18112/openneuro.ds007180.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007180 >>> dataset = DS007180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FuentesGuerra2024']* ### *class* eegdash.dataset.DS007181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Structural MRI, Resting-state fMRI, and PSG/EEG Dataset of Zoster-associated Neuralgia * **Study:** `ds007181` (OpenNeuro) * **Author (year):** `Li2026` * **Canonical:** — Also importable as: `DS007181`, `Li2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 59; recordings: 59; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007181](https://openneuro.org/datasets/ds007181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007181](https://nemar.org/dataexplorer/detail?dataset_id=ds007181) DOI: [https://doi.org/10.18112/openneuro.ds007181.v1.0.1](https://doi.org/10.18112/openneuro.ds007181.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007181 >>> dataset = DS007181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A multi-session simultaneous EEG-fMRI dataset with online experience sampling * **Study:** `ds007216` (OpenNeuro) * **Author (year):** `Kucyi2026` * **Canonical:** `Kucyi2024` Also importable as: `DS007216`, `Kucyi2026`, `Kucyi2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 187; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007216](https://openneuro.org/datasets/ds007216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007216](https://nemar.org/dataexplorer/detail?dataset_id=ds007216) DOI: [https://doi.org/10.18112/openneuro.ds007216.v1.0.0](https://doi.org/10.18112/openneuro.ds007216.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007216 >>> dataset = DS007216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kucyi2024']* ### *class* eegdash.dataset.DS007221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cross-Environment Multi-Paradigm Motor Imagery EEG Dataset * **Study:** `ds007221` (OpenNeuro) * **Author (year):** `Xinwei2026` * **Canonical:** — Also importable as: `DS007221`, `Xinwei2026`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 84; recordings: 1265; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007221](https://openneuro.org/datasets/ds007221) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007221](https://nemar.org/dataexplorer/detail?dataset_id=ds007221) DOI: [https://doi.org/10.18112/openneuro.ds007221.v1.0.1](https://doi.org/10.18112/openneuro.ds007221.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007221 >>> dataset = DS007221(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007262(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cognitive Workload 8-level arithmetic * **Study:** `ds007262` (OpenNeuro) * **Author (year):** `Barras2026_Cognitive` * **Canonical:** `Barras2025` Also importable as: `DS007262`, `Barras2026_Cognitive`, `Barras2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007262](https://openneuro.org/datasets/ds007262) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007262](https://nemar.org/dataexplorer/detail?dataset_id=ds007262) DOI: [https://doi.org/10.18112/openneuro.ds007262.v1.0.6](https://doi.org/10.18112/openneuro.ds007262.v1.0.6) ### Examples ```pycon >>> from eegdash.dataset import DS007262 >>> dataset = DS007262(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Barras2025']* ### *class* eegdash.dataset.DS007314(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007314` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS` * **Canonical:** `Martzoukou2024_Post` Also importable as: `DS007314`, `Martzoukou2026_tACS`, `Martzoukou2024_Post`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. 
Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
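The Notes above state that the user-supplied `query` is ANDed with the dataset filter and must not contain the key `dataset`. A minimal sketch of that merging rule (not the library's actual implementation; the field name `task` is an illustrative assumption, the real filterable fields are those in `ALLOWED_QUERY_FIELDS`):

```python
def merge_query(dataset_id, user_query=None):
    """Combine a user query with the fixed dataset filter (sketch)."""
    if user_query and "dataset" in user_query:
        # Documented constraint: callers may not override the dataset filter.
        raise ValueError("query must not contain the key 'dataset'")
    merged = {"dataset": dataset_id}
    merged.update(user_query or {})
    return merged

merged = merge_query("ds007314", {"task": "naming"})
```

The resulting dict corresponds to the `query` attribute described above ("merged query with the dataset filter applied").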
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007314](https://openneuro.org/datasets/ds007314) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007314](https://nemar.org/dataexplorer/detail?dataset_id=ds007314) DOI: [https://doi.org/10.18112/openneuro.ds007314.v1.0.0](https://doi.org/10.18112/openneuro.ds007314.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007314 >>> dataset = DS007314(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Martzoukou2024_Post']* ### *class* eegdash.dataset.DS007315(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) tACS for Patients with Post-Stroke Anomia * **Study:** `ds007315` (OpenNeuro) * **Author (year):** `Martzoukou2026_tACS_Patients` * **Canonical:** `Martzoukou2024_Post_A` Also importable as: `DS007315`, `Martzoukou2026_tACS_Patients`, `Martzoukou2024_Post_A`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 2; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007315](https://openneuro.org/datasets/ds007315) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007315](https://nemar.org/dataexplorer/detail?dataset_id=ds007315) DOI: [https://doi.org/10.18112/openneuro.ds007315.v1.0.1](https://doi.org/10.18112/openneuro.ds007315.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007315 >>> dataset = DS007315(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Martzoukou2024_Post_A']* ### *class* eegdash.dataset.DS007322(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Personalized smartphone notifications bias auditory salience across processing stages * **Study:** `ds007322` (OpenNeuro) * **Author (year):** `Mishra2026` * **Canonical:** `Mishra2024` Also importable as: `DS007322`, `Mishra2026`, `Mishra2024`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 57; recordings: 57; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007322](https://openneuro.org/datasets/ds007322) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007322](https://nemar.org/dataexplorer/detail?dataset_id=ds007322) DOI: [https://doi.org/10.18112/openneuro.ds007322.v1.0.1](https://doi.org/10.18112/openneuro.ds007322.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007322 >>> dataset = DS007322(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Mishra2024']* ### *class* eegdash.dataset.DS007338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEGEyeNet Dataset * **Study:** `ds007338` (OpenNeuro) * **Author (year):** `Plomecka2026` * **Canonical:** `EEGEyeNet_v2`, `EEGEYENET` Also importable as: `DS007338`, `Plomecka2026`, `EEGEyeNet_v2`, `EEGEYENET`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 1; recordings: 1; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007338](https://openneuro.org/datasets/ds007338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007338](https://nemar.org/dataexplorer/detail?dataset_id=ds007338) DOI: [https://doi.org/10.18112/openneuro.ds007338.v1.0.0](https://doi.org/10.18112/openneuro.ds007338.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007338 >>> dataset = DS007338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EEGEyeNet_v2', 'EEGEYENET']* ### *class* eegdash.dataset.DS007347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Stereotactic Focused Ultrasound Mesencephalotomy for the Treatment of Head and Neck Cancer Pain * **Study:** `ds007347` (OpenNeuro) * **Author (year):** `Elias2026` * **Canonical:** — Also importable as: `DS007347`, `Elias2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Cancer`. Subjects: 5; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007347](https://openneuro.org/datasets/ds007347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007347](https://nemar.org/dataexplorer/detail?dataset_id=ds007347) DOI: [https://doi.org/10.18112/openneuro.ds007347.v1.0.0](https://doi.org/10.18112/openneuro.ds007347.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007347 >>> dataset = DS007347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007353(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HAD-MEEG * **Study:** `ds007353` (OpenNeuro) * **Author (year):** `Zhang2026` * **Canonical:** `HAD_MEEG`, `HADMEEG` Also importable as: `DS007353`, `Zhang2026`, `HAD_MEEG`, `HADMEEG`. Modality: `eeg, meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 32; recordings: 473; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007353](https://openneuro.org/datasets/ds007353) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007353](https://nemar.org/dataexplorer/detail?dataset_id=ds007353) DOI: [https://doi.org/10.18112/openneuro.ds007353.v1.0.0](https://doi.org/10.18112/openneuro.ds007353.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007353 >>> dataset = DS007353(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HAD_MEEG', 'HADMEEG']* ### *class* eegdash.dataset.DS007358(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A subset of large-scale EEG dataset (India + Tanzania) * **Study:** `ds007358` (OpenNeuro) * **Author (year):** `Vianney2026` * **Canonical:** `Vianney2025` Also importable as: `DS007358`, `Vianney2026`, `Vianney2025`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 2000; recordings: 6000; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007358](https://openneuro.org/datasets/ds007358) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007358](https://nemar.org/dataexplorer/detail?dataset_id=ds007358) DOI: [https://doi.org/10.18112/openneuro.ds007358.v1.0.0](https://doi.org/10.18112/openneuro.ds007358.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007358 >>> dataset = DS007358(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Vianney2025']* ### *class* eegdash.dataset.DS007406(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG dataset on consumer responses to extreme versus traditional marketing videos * **Study:** `ds007406` (OpenNeuro) * **Author (year):** `Edit2026` * **Canonical:** `Edit2024` Also importable as: `DS007406`, `Edit2026`, `Edit2024`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. 
Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
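"MongoDB-style filters" in the Notes means queries may use operator documents such as `{"$in": [...]}` in addition to plain equality. A toy matcher illustrating the semantics against an in-memory record (EEGDash evaluates such filters against its metadata database; the `subject` field here is an illustrative assumption):

```python
def matches(record, query):
    """Evaluate a tiny subset of MongoDB query semantics (sketch)."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$in" in cond:
            # Operator document: field value must be one of the listed values.
            if record.get(field) not in cond["$in"]:
                return False
        elif record.get(field) != cond:
            # Plain value: exact equality.
            return False
    return True

record = {"dataset": "ds007406", "subject": "01"}
```

For example, `matches(record, {"subject": {"$in": ["01", "02"]}})` is true, while `matches(record, {"subject": "03"})` is false.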
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007406](https://openneuro.org/datasets/ds007406) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007406](https://nemar.org/dataexplorer/detail?dataset_id=ds007406) DOI: [https://doi.org/10.18112/openneuro.ds007406.v1.0.0](https://doi.org/10.18112/openneuro.ds007406.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007406 >>> dataset = DS007406(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Edit2024']* ### *class* eegdash.dataset.DS007420(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A Light Weight Multi-Distance fNIRS Dataset for Ball-Squeezing Task and Purposeful Motion Artifact Creation Task * **Study:** `ds007420` (OpenNeuro) * **Author (year):** `Gao2026_Light_Weight_Multi` * **Canonical:** `Gao2024` Also importable as: `DS007420`, `Gao2026_Light_Weight_Multi`, `Gao2024`. Modality: `fnirs`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 60; tasks: 4. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007420](https://openneuro.org/datasets/ds007420) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007420](https://nemar.org/dataexplorer/detail?dataset_id=ds007420) DOI: [https://doi.org/10.18112/openneuro.ds007420.v1.0.2](https://doi.org/10.18112/openneuro.ds007420.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007420 >>> dataset = DS007420(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gao2024']* ### *class* eegdash.dataset.DS007427(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Comprehensive methodology for sample enrichment in EEG biomarker studies for Alzheimer’s risk classification * **Study:** `ds007427` (OpenNeuro) * **Author (year):** `Isaza2026_Comprehensive` * **Canonical:** `HenaoIsaza2026` Also importable as: `DS007427`, `Isaza2026_Comprehensive`, `HenaoIsaza2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Dementia`. Subjects: 44; recordings: 44; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007427](https://openneuro.org/datasets/ds007427) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007427](https://nemar.org/dataexplorer/detail?dataset_id=ds007427) DOI: [https://doi.org/10.18112/openneuro.ds007427.v1.0.1](https://doi.org/10.18112/openneuro.ds007427.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007427 >>> dataset = DS007427(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HenaoIsaza2026']* ### *class* eegdash.dataset.DS007431(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Diffuse predictions stabilize and reshape neural code during memory encoding * **Study:** `ds007431` (OpenNeuro) * **Author (year):** `Ataseven2026` * **Canonical:** `Ataseven2024` Also importable as: `DS007431`, `Ataseven2026`, `Ataseven2024`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 47; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007431](https://openneuro.org/datasets/ds007431) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007431](https://nemar.org/dataexplorer/detail?dataset_id=ds007431) DOI: [https://doi.org/10.18112/openneuro.ds007431.v1.0.0](https://doi.org/10.18112/openneuro.ds007431.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007431 >>> dataset = DS007431(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Ataseven2024']* ### *class* eegdash.dataset.DS007445(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Thalamocortical ictal iEEG dataset * **Study:** `ds007445` (OpenNeuro) * **Author (year):** `Panchavati2026` * **Canonical:** — Also importable as: `DS007445`, `Panchavati2026`. Modality: `ieeg`; Experiment type: `Clinical/Intervention`; Subject type: `Epilepsy`. Subjects: 19; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007445](https://openneuro.org/datasets/ds007445) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007445](https://nemar.org/dataexplorer/detail?dataset_id=ds007445) DOI: [https://doi.org/10.18112/openneuro.ds007445.v1.0.2](https://doi.org/10.18112/openneuro.ds007445.v1.0.2) ### Examples ```pycon >>> from eegdash.dataset import DS007445 >>> dataset = DS007445(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007454(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A common neural mechanism underlies experiences of passage of time * **Study:** `ds007454` (OpenNeuro) * **Author (year):** `DS7454_TimePerception` * **Canonical:** — Also importable as: `DS007454`, `DS7454_TimePerception`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 42; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007454](https://openneuro.org/datasets/ds007454) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007454](https://nemar.org/dataexplorer/detail?dataset_id=ds007454) DOI: [https://doi.org/10.18112/openneuro.ds007454.v1.0.1](https://doi.org/10.18112/openneuro.ds007454.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007454 >>> dataset = DS007454(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007463(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Very-High-Density Diffuse Optical Tomography System Validation Dataset * **Study:** `ds007463` (OpenNeuro) * **Author (year):** `Fogarty2026_Very` * **Canonical:** `Fogarty2025` Also importable as: `DS007463`, `Fogarty2026_Very`, `Fogarty2025`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 14. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007463](https://openneuro.org/datasets/ds007463) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007463](https://nemar.org/dataexplorer/detail?dataset_id=ds007463) DOI: [https://doi.org/10.18112/openneuro.ds007463.v1.1.1](https://doi.org/10.18112/openneuro.ds007463.v1.1.1) ### Examples ```pycon >>> from eegdash.dataset import DS007463 >>> dataset = DS007463(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Fogarty2025']* ### *class* eegdash.dataset.DS007471(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Joint agency EEG dataset * **Study:** `ds007471` (OpenNeuro) * **Author (year):** `Zhou2026` * **Canonical:** `Zhou2024` Also importable as: `DS007471`, `Zhou2026`, `Zhou2024`. 
Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 31; recordings: 31; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007471](https://openneuro.org/datasets/ds007471) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007471](https://nemar.org/dataexplorer/detail?dataset_id=ds007471) DOI: [https://doi.org/10.18112/openneuro.ds007471.v1.0.0](https://doi.org/10.18112/openneuro.ds007471.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007471 >>> dataset = DS007471(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhou2024']* ### *class* eegdash.dataset.DS007473(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-Density Diffuse Optical Tomography Audiovisual Movie Viewing Dataset * **Study:** `ds007473` (OpenNeuro) * **Author (year):** `Fogarty2026_High` * **Canonical:** `Tripathy2024` Also importable as: `DS007473`, `Fogarty2026_High`, `Tripathy2024`. Modality: `fnirs`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 189; tasks: 19. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007473](https://openneuro.org/datasets/ds007473) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007473](https://nemar.org/dataexplorer/detail?dataset_id=ds007473) DOI: [https://doi.org/10.18112/openneuro.ds007473.v1.0.0](https://doi.org/10.18112/openneuro.ds007473.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007473 >>> dataset = DS007473(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Tripathy2024']* ### *class* eegdash.dataset.DS007477(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) TimeSeries BIDS converted * **Study:** `ds007477` (OpenNeuro) * **Author (year):** `Niu2026` * **Canonical:** — Also importable as: `DS007477`, `Niu2026`. Modality: `fnirs`; Experiment type: `Unknown`; Subject type: `Unknown`. Subjects: 18; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007477](https://openneuro.org/datasets/ds007477) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007477](https://nemar.org/dataexplorer/detail?dataset_id=ds007477) DOI: [https://doi.org/10.18112/openneuro.ds007477.v1.0.1](https://doi.org/10.18112/openneuro.ds007477.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007477 >>> dataset = DS007477(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007521(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The effect of hunger and state preferences on the neural processing of food images * **Study:** `ds007521` (OpenNeuro) * **Author (year):** `Moerel2026` * **Canonical:** `Moerel2025` Also importable as: `DS007521`, `Moerel2026`, `Moerel2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007521](https://openneuro.org/datasets/ds007521) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007521](https://nemar.org/dataexplorer/detail?dataset_id=ds007521) DOI: [https://doi.org/10.18112/openneuro.ds007521.v1.0.1](https://doi.org/10.18112/openneuro.ds007521.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007521 >>> dataset = DS007521(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Moerel2025']* ### *class* eegdash.dataset.DS007523(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LPP MEG Listen * **Study:** `ds007523` (OpenNeuro) * **Author (year):** `Bel2026` * **Canonical:** `Dascoli2025` Also importable as: `DS007523`, `Bel2026`, `Dascoli2025`. Modality: `meg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 58; recordings: 579; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007523](https://openneuro.org/datasets/ds007523) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007523](https://nemar.org/dataexplorer/detail?dataset_id=ds007523) DOI: [https://doi.org/10.18112/openneuro.ds007523.v1.0.0](https://doi.org/10.18112/openneuro.ds007523.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007523 >>> dataset = DS007523(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Dascoli2025']* ### *class* eegdash.dataset.DS007524(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LittlePrince_MEG_French_Read_Pallier2025 * **Study:** `ds007524` (OpenNeuro) * **Author (year):** `Pallier2025` * **Canonical:** `LittlePrince` Also importable as: `DS007524`, `Pallier2025`, `LittlePrince`. Modality: `meg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 50; recordings: 500; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007524](https://openneuro.org/datasets/ds007524) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007524](https://nemar.org/dataexplorer/detail?dataset_id=ds007524) DOI: [https://doi.org/10.18112/openneuro.ds007524.v1.0.1](https://doi.org/10.18112/openneuro.ds007524.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007524 >>> dataset = DS007524(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LittlePrince']* ### *class* eegdash.dataset.DS007526(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PD-EEG: Resting-State & Walking EEG in Parkinson’s Disease * **Study:** `ds007526` (OpenNeuro) * **Author (year):** `Katzir2026` * **Canonical:** `PD_EEG`, `PDEEG` Also importable as: `DS007526`, `Katzir2026`, `PD_EEG`, `PDEEG`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Parkinson's`. Subjects: 144; recordings: 277; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007526](https://openneuro.org/datasets/ds007526) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007526](https://nemar.org/dataexplorer/detail?dataset_id=ds007526) DOI: [https://doi.org/10.18112/openneuro.ds007526.v1.0.0](https://doi.org/10.18112/openneuro.ds007526.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007526 >>> dataset = DS007526(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['PD_EEG', 'PDEEG']* ### *class* eegdash.dataset.DS007554(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal dataset from the CMx7-MM Experiment * **Study:** `ds007554` (OpenNeuro) * **Author (year):** `Ajra2026` * **Canonical:** — Also importable as: `DS007554`, `Ajra2026`. Modality: `eeg, fnirs`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 30; recordings: 1034; tasks: 7. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007554](https://openneuro.org/datasets/ds007554) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007554](https://nemar.org/dataexplorer/detail?dataset_id=ds007554) DOI: [https://doi.org/10.18112/openneuro.ds007554.v1.0.0](https://doi.org/10.18112/openneuro.ds007554.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007554 >>> dataset = DS007554(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007558(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG Pre/Post Intervention Dataset * **Study:** `ds007558` (OpenNeuro) * **Author (year):** `Qi2026` * **Canonical:** — Also importable as: `DS007558`, `Qi2026`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Unknown`. 
Subjects: 67; recordings: 121; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
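The `data_dir` attribute documented above is derived as `cache_dir / dataset_id`. A small sketch of that layout using `pathlib` (the lowercase id string here is an assumption about the exact casing used on disk):

```python
from pathlib import Path

# Sketch of the documented cache layout: data_dir = cache_dir / dataset_id.
cache_dir = Path("./data")
dataset_id = "ds007558"
data_dir = cache_dir / dataset_id
# data_dir points at data/ds007558 under the current working directory
```

This is why distinct dataset classes sharing one `cache_dir` do not collide: each writes into its own id-named subdirectory.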
### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007558](https://openneuro.org/datasets/ds007558) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007558](https://nemar.org/dataexplorer/detail?dataset_id=ds007558) DOI: [https://doi.org/10.18112/openneuro.ds007558.v1.0.0](https://doi.org/10.18112/openneuro.ds007558.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007558 >>> dataset = DS007558(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.DS007591(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Delineating neural contributions to EEG-based speech decoding * **Study:** `ds007591` (OpenNeuro) * **Author (year):** `Sato2026_Delineating` * **Canonical:** `Sato2025` Also importable as: `DS007591`, `Sato2026_Delineating`, `Sato2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 21; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007591](https://openneuro.org/datasets/ds007591) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007591](https://nemar.org/dataexplorer/detail?dataset_id=ds007591) DOI: [https://doi.org/10.18112/openneuro.ds007591.v1.0.1](https://doi.org/10.18112/openneuro.ds007591.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007591 >>> dataset = DS007591(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sato2025']* ### *class* eegdash.dataset.DS007602(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG-Speech Brain Decoding Dataset * **Study:** `ds007602` (OpenNeuro) * **Author (year):** `Sato2026_Speech` * **Canonical:** `Sato2024` Also importable as: `DS007602`, `Sato2026_Speech`, `Sato2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 3; recordings: 113; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007602](https://openneuro.org/datasets/ds007602) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007602](https://nemar.org/dataexplorer/detail?dataset_id=ds007602) DOI: [https://doi.org/10.18112/openneuro.ds007602.v1.0.1](https://doi.org/10.18112/openneuro.ds007602.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import DS007602 >>> dataset = DS007602(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Sato2024']* ### *class* eegdash.dataset.DS007609(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Resting-State EEG and Trait Anxiety * **Study:** `ds007609` (OpenNeuro) * **Author (year):** `Shalamberidze2026` * **Canonical:** `Shalamberidze2025` Also importable as: `DS007609`, `Shalamberidze2026`, `Shalamberidze2025`. Modality: `eeg`; Experiment type: `Affect`; Subject type: `Healthy`. Subjects: 51; recordings: 51; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007609](https://openneuro.org/datasets/ds007609) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007609](https://nemar.org/dataexplorer/detail?dataset_id=ds007609) DOI: [https://doi.org/10.18112/openneuro.ds007609.v1.0.0](https://doi.org/10.18112/openneuro.ds007609.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007609 >>> dataset = DS007609(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shalamberidze2025']* ### *class* eegdash.dataset.DS007615(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LDAEP and resting-state EEG in healthy women * **Study:** `ds007615` (OpenNeuro) * **Author (year):** `Normannseth2026` * **Canonical:** — Also importable as: `DS007615`, `Normannseth2026`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 69; recordings: 192; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/ds007615](https://openneuro.org/datasets/ds007615) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=ds007615](https://nemar.org/dataexplorer/detail?dataset_id=ds007615) DOI: [https://doi.org/10.18112/openneuro.ds007615.v1.0.0](https://doi.org/10.18112/openneuro.ds007615.v1.0.0) ### Examples ```pycon >>> from eegdash.dataset import DS007615 >>> dataset = DS007615(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Normannseth2026']* ### eegdash.dataset.Dascoli2025 alias of [`DS007523`](eegdash.dataset.DS007523.md#eegdash.dataset.DS007523) ### eegdash.dataset.Delorme alias of [`DS003061`](eegdash.dataset.DS003061.md#eegdash.dataset.DS003061) ### eegdash.dataset.Dubois2024 alias of [`DS006545`](eegdash.dataset.DS006545.md#eegdash.dataset.DS006545) ### *class* eegdash.dataset.EEG2025R1(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf` * **Canonical:** `HBN_r1_bdf` Also importable as: `EEG2025R1`, `Shirazi2024_R1_bdf`, 
`HBN_r1_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 136; recordings: 1342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
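The HBN releases documented on this page follow a regular naming pattern: `EEG2025R<N>` for a full release, with a `MINI` suffix for the 20-subject subsets. A hedged helper for constructing those class names; note that not every release necessarily ships a mini variant, so a generated name should be checked against `eegdash.dataset` (e.g. with `getattr`/`hasattr`) before use:

```python
# Illustrative helper following the naming pattern seen in this reference
# (EEG2025R1, EEG2025R10, EEG2025R10MINI, ...). Not part of the eegdash API.
def hbn_class_name(release: int, mini: bool = False) -> str:
    return f"EEG2025R{release}" + ("MINI" if mini else "")

name = hbn_class_name(10, mini=True)
# name == "EEG2025R10MINI"
```

Resolving the class would then look like `cls = getattr(eegdash.dataset, name)` followed by `cls(cache_dir="./data")`.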
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1](https://openneuro.org/datasets/EEG2025r1) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1 >>> dataset = EEG2025R1(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1_bdf']* ### *class* eegdash.dataset.EEG2025R10(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf` * **Canonical:** `HBN_r10_bdf` Also importable as: `EEG2025R10`, `Shirazi2025_R10_bdf`, `HBN_r10_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 533; recordings: 2516; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10](https://openneuro.org/datasets/EEG2025r10) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10 >>> dataset = EEG2025R10(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10_bdf']* ### *class* eegdash.dataset.EEG2025R10MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 10 (BDF Converted) * **Study:** `EEG2025r10mini` (NeMAR) * **Author (year):** `Shirazi2025_R10_bdf_mini` * **Canonical:** `HBN_r10_bdf_mini` Also importable as: `EEG2025R10MINI`, `Shirazi2025_R10_bdf_mini`, `HBN_r10_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r10mini](https://openneuro.org/datasets/EEG2025r10mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r10mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R10MINI >>> dataset = EEG2025R10MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r10_bdf_mini']* ### *class* eegdash.dataset.EEG2025R11(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf` * **Canonical:** `HBN_r11_bdf` Also importable as: `EEG2025R11`, `Shirazi2025_R11_bdf`, `HBN_r11_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 430; recordings: 3397; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11](https://openneuro.org/datasets/EEG2025r11) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11 >>> dataset = EEG2025R11(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11_bdf']* ### *class* eegdash.dataset.EEG2025R11MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 11 (BDF Converted) * **Study:** `EEG2025r11mini` (NeMAR) * **Author (year):** `Shirazi2025_R11_bdf_mini` * **Canonical:** `HBN_r11_bdf_mini` Also importable as: `EEG2025R11MINI`, `Shirazi2025_R11_bdf_mini`, `HBN_r11_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 220; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r11mini](https://openneuro.org/datasets/EEG2025r11mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r11mini) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R11MINI >>> dataset = EEG2025R11MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r11_bdf_mini']* ### *class* eegdash.dataset.EEG2025R1MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 1 (BDF Converted) * **Study:** `EEG2025r1mini` (NeMAR) * **Author (year):** `Shirazi2024_R1_bdf_mini` * **Canonical:** `HBN_r1_bdf_mini` Also importable as: `EEG2025R1MINI`, `Shirazi2024_R1_bdf_mini`, `HBN_r1_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r1mini](https://openneuro.org/datasets/EEG2025r1mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r1mini) DOI: [https://doi.org/10.18112/openneuro.ds005505.v1.0.1](https://doi.org/10.18112/openneuro.ds005505.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R1MINI >>> dataset = EEG2025R1MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r1_bdf_mini']* ### *class* eegdash.dataset.EEG2025R2(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf` * **Canonical:** `HBN_r2_bdf` Also importable as: `EEG2025R2`, `Shirazi2024_R2_bdf`, `HBN_r2_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 150; recordings: 1405; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2](https://openneuro.org/datasets/EEG2025r2) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2 >>> dataset = EEG2025R2(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2_bdf']* ### *class* eegdash.dataset.EEG2025R2MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 2 (BDF Converted) * **Study:** `EEG2025r2mini` (NeMAR) * **Author (year):** `Shirazi2024_R2_bdf_mini` * **Canonical:** `HBN_r2_bdf_mini` Also importable as: `EEG2025R2MINI`, `Shirazi2024_R2_bdf_mini`, `HBN_r2_bdf_mini`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
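The `query` parameter above takes MongoDB-style filters. A minimal sketch of such a filter (pure Python; the field names `task` and `subject` and the task values are illustrative assumptions — the accepted fields are whatever `ALLOWED_QUERY_FIELDS` contains):

```python
# MongoDB-style filter to AND with the dataset selection.
# Field names and values here are illustrative; valid fields are
# listed in eegdash's ALLOWED_QUERY_FIELDS.
extra_query = {
    "task": {"$in": ["RestingState", "SurroundSupp"]},
    "subject": {"$regex": "^NDAR"},
}

# The dataset classes add the `dataset` filter themselves, so the
# user-supplied query must not contain that key.
assert "dataset" not in extra_query

# Hypothetical usage (requires eegdash and network access):
# from eegdash.dataset import EEG2025R2MINI
# dataset = EEG2025R2MINI(cache_dir="./data", query=extra_query)
```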
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r2mini](https://openneuro.org/datasets/EEG2025r2mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r2mini) DOI: [https://doi.org/10.18112/openneuro.ds005506.v1.0.1](https://doi.org/10.18112/openneuro.ds005506.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R2MINI >>> dataset = EEG2025R2MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r2_bdf_mini']* ### *class* eegdash.dataset.EEG2025R3(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf` * **Canonical:** `HBN_r3_bdf` Also importable as: `EEG2025R3`, `Shirazi2024_R3_bdf`, `HBN_r3_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 184; recordings: 1812; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3](https://openneuro.org/datasets/EEG2025r3) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3 >>> dataset = EEG2025R3(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3_bdf']* ### *class* eegdash.dataset.EEG2025R3MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 3 (BDF Converted) * **Study:** `EEG2025r3mini` (NeMAR) * **Author (year):** `Shirazi2024_R3_bdf_mini` * **Canonical:** `HBN_r3_bdf_mini` Also importable as: `EEG2025R3MINI`, `Shirazi2024_R3_bdf_mini`, `HBN_r3_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r3mini](https://openneuro.org/datasets/EEG2025r3mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r3mini) DOI: [https://doi.org/10.18112/openneuro.ds005507.v1.0.1](https://doi.org/10.18112/openneuro.ds005507.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R3MINI >>> dataset = EEG2025R3MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r3_bdf_mini']* ### *class* eegdash.dataset.EEG2025R4(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf` * **Canonical:** `HBN_r4_bdf` Also importable as: `EEG2025R4`, `Shirazi2024_R4_bdf`, `HBN_r4_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 324; recordings: 3342; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4](https://openneuro.org/datasets/EEG2025r4) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4 >>> dataset = EEG2025R4(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4_bdf']* ### *class* eegdash.dataset.EEG2025R4MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 4 (BDF Converted) * **Study:** `EEG2025r4mini` (NeMAR) * **Author (year):** `Shirazi2024_R4_bdf_mini` * **Canonical:** `HBN_r4_bdf_mini` Also importable as: `EEG2025R4MINI`, `Shirazi2024_R4_bdf_mini`, `HBN_r4_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r4mini](https://openneuro.org/datasets/EEG2025r4mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r4mini) DOI: [https://doi.org/10.18112/openneuro.ds005508.v1.0.1](https://doi.org/10.18112/openneuro.ds005508.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R4MINI >>> dataset = EEG2025R4MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r4_bdf_mini']* ### *class* eegdash.dataset.EEG2025R5(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf` * **Canonical:** `HBN_r5_bdf` Also importable as: `EEG2025R5`, `Shirazi2024_R5_bdf`, `HBN_r5_bdf`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 330; recordings: 3326; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
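The `data_dir` attribute is documented as `cache_dir / dataset_id`. A small sketch of that layout (pure `pathlib`; the accession-style id is a hypothetical placeholder, not taken from this dataset's metadata):

```python
from pathlib import Path

# data_dir resolves to cache_dir / dataset_id.
cache_dir = Path("./data")
dataset_id = "ds005509"  # illustrative accession-style id
data_dir = cache_dir / dataset_id

print(data_dir.as_posix())  # data/ds005509
```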
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5](https://openneuro.org/datasets/EEG2025r5) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5 >>> dataset = EEG2025R5(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5_bdf']* ### *class* eegdash.dataset.EEG2025R5MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 5 (BDF Converted) * **Study:** `EEG2025r5mini` (NeMAR) * **Author (year):** `Shirazi2024_R5_bdf_mini` * **Canonical:** `HBN_r5_bdf_mini` Also importable as: `EEG2025R5MINI`, `Shirazi2024_R5_bdf_mini`, `HBN_r5_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 240; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r5mini](https://openneuro.org/datasets/EEG2025r5mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r5mini) DOI: [https://doi.org/10.18112/openneuro.ds005509.v1.0.1](https://doi.org/10.18112/openneuro.ds005509.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R5MINI >>> dataset = EEG2025R5MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r5_bdf_mini']* ### *class* eegdash.dataset.EEG2025R6(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf` * **Canonical:** `HBN_r6_bdf` Also importable as: `EEG2025R6`, `Shirazi2024_R6_bdf`, `HBN_r6_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 135; recordings: 1227; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6](https://openneuro.org/datasets/EEG2025r6) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6 >>> dataset = EEG2025R6(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6_bdf']* ### *class* eegdash.dataset.EEG2025R6MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 6 (BDF Converted) * **Study:** `EEG2025r6mini` (NeMAR) * **Author (year):** `Shirazi2024_R6_bdf_mini` * **Canonical:** `HBN_r6_bdf_mini` Also importable as: `EEG2025R6MINI`, `Shirazi2024_R6_bdf_mini`, `HBN_r6_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r6mini](https://openneuro.org/datasets/EEG2025r6mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r6mini) DOI: [https://doi.org/10.18112/openneuro.ds005510.v1.0.1](https://doi.org/10.18112/openneuro.ds005510.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R6MINI >>> dataset = EEG2025R6MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r6_bdf_mini']* ### *class* eegdash.dataset.EEG2025R7(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf` * **Canonical:** `HBN_r7_bdf` Also importable as: `EEG2025R7`, `Shirazi2024_R7_bdf`, `HBN_r7_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 381; recordings: 3100; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7](https://openneuro.org/datasets/EEG2025r7) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7 >>> dataset = EEG2025R7(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r7_bdf']* ### *class* eegdash.dataset.EEG2025R7MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 7 (BDF Converted) * **Study:** `EEG2025r7mini` (NeMAR) * **Author (year):** `Shirazi2024_R7_bdf_mini` * **Canonical:** `HBN_r7_bdf_mini` Also importable as: `EEG2025R7MINI`, `Shirazi2024_R7_bdf_mini`, `HBN_r7_bdf_mini`. 
Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 239; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
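Since each dataset item is a recording, the single-item access shown in the per-class Examples extends directly to iteration. A package-free sketch of that access pattern (the `StubRecording` class is a hypothetical stand-in; with eegdash installed, iterating the real dataset and calling `recording.load()` yields MNE `Raw` objects instead of strings):

```python
# Hypothetical stand-in for an eegdash recording: real dataset items
# expose .load() the same way, returning an mne.io.Raw object.
class StubRecording:
    def __init__(self, subject):
        self.subject = subject

    def load(self):
        return f"raw:{self.subject}"

# A dataset behaves like a sequence of recordings.
dataset = [StubRecording(f"sub-{i:03d}") for i in range(5)]

# Load the first few recordings, mirroring `raw = recording.load()`.
raws = [recording.load() for recording in dataset[:3]]
print(raws)  # ['raw:sub-000', 'raw:sub-001', 'raw:sub-002']
```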
### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r7mini](https://openneuro.org/datasets/EEG2025r7mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r7mini) DOI: [https://doi.org/10.18112/openneuro.ds005511.v1.0.1](https://doi.org/10.18112/openneuro.ds005511.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R7MINI >>> dataset = EEG2025R7MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r7_bdf_mini']* ### *class* eegdash.dataset.EEG2025R8(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf` * **Canonical:** `HBN_r8_bdf` Also importable as: `EEG2025R8`, `Shirazi2024_R8_bdf`, `HBN_r8_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 257; recordings: 2320; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8](https://openneuro.org/datasets/EEG2025r8) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8 >>> dataset = EEG2025R8(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8_bdf']* ### *class* eegdash.dataset.EEG2025R8MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 8 (BDF Converted) * **Study:** `EEG2025r8mini` (NeMAR) * **Author (year):** `Shirazi2024_R8_bdf_mini` * **Canonical:** `HBN_r8_bdf_mini` Also importable as: `EEG2025R8MINI`, `Shirazi2024_R8_bdf_mini`, `HBN_r8_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 238; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r8mini](https://openneuro.org/datasets/EEG2025r8mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r8mini) DOI: [https://doi.org/10.18112/openneuro.ds005512.v1.0.1](https://doi.org/10.18112/openneuro.ds005512.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R8MINI >>> dataset = EEG2025R8MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r8_bdf_mini']* ### *class* eegdash.dataset.EEG2025R9(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf` * **Canonical:** `HBN_r9_bdf` Also importable as: `EEG2025R9`, `Shirazi2024_R9_bdf`, `HBN_r9_bdf`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 295; recordings: 2885; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9](https://openneuro.org/datasets/EEG2025r9) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9 >>> dataset = EEG2025R9(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9_bdf']* ### *class* eegdash.dataset.EEG2025R9MINI(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network (HBN) EEG - Release 9 (BDF Converted) * **Study:** `EEG2025r9mini` (NeMAR) * **Author (year):** `Shirazi2024_R9_bdf_mini` * **Canonical:** `HBN_r9_bdf_mini` Also importable as: `EEG2025R9MINI`, `Shirazi2024_R9_bdf_mini`, `HBN_r9_bdf_mini`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 20; recordings: 237; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/EEG2025r9mini](https://openneuro.org/datasets/EEG2025r9mini) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini](https://nemar.org/dataexplorer/detail?dataset_id=EEG2025r9mini) DOI: [https://doi.org/10.18112/openneuro.ds005514.v1.0.1](https://doi.org/10.18112/openneuro.ds005514.v1.0.1) ### Examples ```pycon >>> from eegdash.dataset import EEG2025R9MINI >>> dataset = EEG2025R9MINI(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HBN_r9_bdf_mini']* ### eegdash.dataset.EEGAsymmetries alias of [`DS007172`](eegdash.dataset.DS007172.md#eegdash.dataset.DS007172) ### *class* eegdash.dataset.EEGBIDSDataset(data_dir=None, dataset='', allow_symlinks=False, modalities=None) Bases: `object` An interface to a local BIDS dataset containing electrophysiology recordings. This class centralizes interactions with a BIDS dataset on the local filesystem, providing methods to parse metadata, find files, and retrieve BIDS-related information. 
Supports multiple modalities including EEG, MEG, iEEG, and NIRS. The class uses MNE-BIDS constants to stay synchronized with the BIDS specification and automatically supports all file formats recognized by MNE. * **Parameters:** * **data_dir** (*str* *or* *Path*) – The path to the local BIDS dataset directory. * **dataset** (*str*) – A name for the dataset (e.g., “ds002718”). * **allow_symlinks** (*bool* *,* *default False*) – If True, accept broken symlinks (e.g., git-annex) for metadata extraction. If False, require actual readable files for data loading. Set to True when doing metadata digestion without loading raw data. * **modalities** (*list* *of* *str* *or* *None* *,* *default None*) – List of modalities to search for (e.g., [“eeg”, “meg”]). If None, defaults to all electrophysiology modalities from MNE-BIDS: [‘meg’, ‘eeg’, ‘ieeg’, ‘nirs’]. #### RAW_EXTENSIONS Mapping of file extensions to their companion files, dynamically built from mne_bids.config.reader. * **Type:** dict #### files List of all recording file paths found in the dataset. * **Type:** list of str #### detected_modality The modality of the first file found (e.g., ‘eeg’, ‘meg’). * **Type:** str ### Examples ```pycon >>> # Load EEG-only dataset >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/ds002718", ... dataset="ds002718", ... modalities=["eeg"] ... ) ``` ```pycon >>> # Load dataset with multiple modalities >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/ds005810", ... dataset="ds005810", ... modalities=["meg", "eeg"] ... ) ``` ```pycon >>> # Metadata extraction from git-annex (symlinks) >>> dataset = EEGBIDSDataset( ... data_dir="/path/to/dataset", ... dataset="ds000001", ... allow_symlinks=True ... 
) ``` #### RAW_EXTENSIONS *= {'.CNT': ['.CNT'], '.EDF': ['.EDF'], '.EEG': ['.EEG'], '.bdf': ['.bdf'], '.bin': ['.bin'], '.cdt': ['.cdt'], '.cnt': ['.cnt'], '.con': ['.con'], '.ds': ['.ds'], '.edf': ['.edf'], '.fif': ['.fif'], '.lay': ['.lay'], '.pdf': ['.pdf'], '.set': ['.set', '.fdt'], '.snirf': ['.snirf'], '.sqd': ['.sqd'], '.vhdr': ['.vhdr', '.eeg', '.vmrk', '.dat']}* #### channel_labels(data_filepath: str) → list[str] Get a list of channel labels from channels.tsv. #### channel_types(data_filepath: str) → list[str] Get a list of channel types from channels.tsv. #### check_eeg_dataset() → bool Check if the BIDS dataset contains EEG data. * **Returns:** True if the dataset’s modality is EEG, False otherwise. * **Return type:** bool #### eeg_json(data_filepath: str) → dict[str, Any] Get the merged eeg.json metadata for a data file. * **Parameters:** **data_filepath** (*str*) – The path to the data file. * **Returns:** The merged eeg.json metadata. * **Return type:** dict #### get_all_participants_tsv() → dict[str, dict[str, Any]] Get all rows from participants.tsv as a dictionary. * **Returns:** A dictionary mapping participant_id to a dict of column values. Returns `{}` if no participants.tsv exists or it is empty. * **Return type:** dict #### get_bids_file_attribute(attribute: str, data_filepath: str) → Any Retrieve a specific attribute from BIDS metadata. * **Parameters:** * **attribute** (*str*) – The name of the attribute to retrieve (e.g., “sfreq”, “subject”). * **data_filepath** (*str*) – The path to the data file. * **Returns:** The value of the requested attribute, or None if not found. * **Return type:** Any #### get_bids_metadata_files(filepath: str | Path, metadata_file_extension: str) → list[Path] Retrieve all metadata files that apply to a given data file. Follows the BIDS inheritance principle to find all relevant metadata files (e.g., `channels.tsv`, `eeg.json`) for a specific recording. 
* **Parameters:** * **filepath** (*str* *or* *Path*) – The path to the data file. * **metadata_file_extension** (*str*) – The extension of the metadata file to search for (e.g., “channels.tsv”). * **Returns:** A list of paths to the matching metadata files. * **Return type:** list of Path #### get_files() → list[str] Get all EEG recording file paths in the BIDS dataset. * **Returns:** A list of file paths for all valid EEG recordings. * **Return type:** list of str #### get_orphan_participants() → dict[str, dict[str, Any]] Get participant rows that have no matching file in the dataset. Identifies subjects present in `participants.tsv` but with no corresponding recording file in `self.files`. * **Returns:** A dictionary mapping orphan participant_id to their TSV data. Returns `{}` if there are no orphans, no TSV, or no files. * **Return type:** dict #### get_relative_bidspath(filepath: str | Path) → str Get the dataset-relative path for a file. * **Parameters:** **filepath** (*str* *or* *Path*) – The absolute or relative path to a file in the BIDS dataset. * **Returns:** The path relative to the dataset root, prefixed with the dataset name. e.g., “ds004477/sub-001/eeg/sub-001_task-PES_eeg.json” * **Return type:** str #### num_times(data_filepath: str) → int Get the number of time points in the recording. Calculated from `SamplingFrequency` and `RecordingDuration` in the modality-specific JSON sidecar (e.g., `eeg.json` or `meg.json`). * **Parameters:** **data_filepath** (*str*) – The path to the data file. * **Returns:** The approximate number of time points. * **Return type:** int #### subject_participant_tsv(data_filepath: str) → dict[str, Any] Get the participants.tsv record for a subject. * **Parameters:** **data_filepath** (*str*) – The path to a data file belonging to the subject. * **Returns:** A dictionary of the subject’s information from participants.tsv. 
* **Return type:** dict ### *class* eegdash.dataset.EEGChallengeDataset(release: str, cache_dir: str, mini: bool = True, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset helper for the EEG 2025 Challenge. This class simplifies access to the EEG 2025 Challenge datasets. It is a specialized version of `EEGDashDataset` that is pre-configured for the challenge’s data releases. It automatically maps a release name (e.g., “R1”) to the corresponding OpenNeuro dataset and handles the selection of subject subsets (e.g., “mini” release). * **Parameters:** * **release** (*str*) – The name of the challenge release to load. Must be one of the keys in `RELEASE_TO_OPENNEURO_DATASET_MAP` (e.g., “R1”, “R2”, …, “R11”). * **cache_dir** (*str*) – The local directory where the dataset will be downloaded and cached. * **mini** (*bool* *,* *default True*) – If True, the dataset is restricted to the official “mini” subset of subjects for the specified release. If False, all subjects for the release are included. * **query** (*dict* *,* *optional*) – An additional MongoDB-style query to apply as a filter. This query is combined with the release and subject filters using a logical AND. The query must not contain the `dataset` key, as this is determined by the `release` parameter. * **s3_bucket** (*str* *,* *optional*) – The base S3 bucket URI where the challenge data is stored. Defaults to the official challenge bucket. * **\*\*kwargs** – Additional keyword arguments that are passed directly to the `EEGDashDataset` constructor. * **Raises:** **ValueError** – If the specified `release` is unknown, or if the `query` argument contains a `dataset` key. Also raised if `mini` is True and a requested subject is not part of the official mini-release subset. #### SEE ALSO `EEGDashDataset` : The base class for creating datasets from queries. 
### *class* eegdash.dataset.EEGDashDataset(cache_dir: str | Path, query: dict[str, Any] = None, description_fields: list[str] | None = None, s3_bucket: str | None = None, records: list[dict] | None = None, download: bool = True, n_jobs: int = -1, eeg_dash_instance: Any = None, database: str | None = None, auth_token: str | None = None, on_error: str = 'raise', \*\*kwargs) Bases: `BaseConcatDataset` Create a new EEGDashDataset from a given query or local BIDS dataset directory and dataset name. An EEGDashDataset is a pooled collection of EEGDashBaseDataset instances (individual recordings) and is a subclass of braindecode’s BaseConcatDataset. ### Examples Basic usage with dataset and subject filtering: ```pycon >>> from eegdash import EEGDashDataset >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject="012" ... ) >>> print(f"Number of recordings: {len(dataset)}") ``` Filter by multiple subjects and specific task: ```pycon >>> subjects = ["012", "013", "014"] >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject=subjects, ... task="RestingState" ... ) ``` Load and inspect EEG data from recordings: ```pycon >>> if len(dataset) > 0: ... recording = dataset[0] ... raw = recording.load() ... print(f"Sampling rate: {raw.info['sfreq']} Hz") ... print(f"Number of channels: {len(raw.ch_names)}") ... print(f"Duration: {raw.times[-1]:.1f} seconds") ``` Advanced filtering with raw MongoDB queries: ```pycon >>> from eegdash import EEGDashDataset >>> query = { ... "dataset": "ds002718", ... "subject": {"$in": ["012", "013"]}, ... "task": "RestingState" ... } >>> dataset = EEGDashDataset(cache_dir="./data", query=query) ``` Working with dataset collections and braindecode integration: ```pycon >>> # EEGDashDataset is a braindecode BaseConcatDataset >>> for i, recording in enumerate(dataset): ... if i >= 2: # limit output ... break ... print(f"Recording {i}: {recording.description}") ... 
raw = recording.load() ... print(f" Channels: {len(raw.ch_names)}, Duration: {raw.times[-1]:.1f}s") ``` * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Raw MongoDB query to filter records. If provided, it is merged with keyword filtering arguments (see `**kwargs`) using logical AND. You must provide at least a `dataset` (either in `query` or as a keyword argument). Only fields in `ALLOWED_QUERY_FIELDS` are considered for filtering. * **dataset** (*str*) – Dataset identifier (e.g., `"ds002718"`). Required if `query` does not already specify a dataset. * **task** (*str* *|* *list* *[**str* *]*) – Task name(s) to filter by (e.g., `"RestingState"`). * **subject** (*str* *|* *list* *[**str* *]*) – Subject identifier(s) to filter by (e.g., `"NDARCA153NKE"`). * **session** (*str* *|* *list* *[**str* *]*) – Session identifier(s) to filter by (e.g., `"1"`). * **run** (*str* *|* *list* *[**str* *]*) – Run identifier(s) to filter by (e.g., `"1"`). * **description_fields** (*list* *[**str* *]*) – Fields to extract from each record and include in dataset descriptions (e.g., “subject”, “session”, “run”, “task”). * **s3_bucket** (*str* *|* *None*) – Optional S3 bucket URI (e.g., “s3://mybucket”) to use instead of the default OpenNeuro bucket when downloading data files. * **records** (*list* *[**dict* *]* *|* *None*) – Pre-fetched metadata records. If provided, the dataset is constructed directly from these records and no MongoDB query is performed. * **download** (*bool* *,* *default True*) – If False, load from local BIDS files only. Local data are expected under `cache_dir / dataset`; no DB or S3 access is attempted. * **n_jobs** (*int*) – Number of parallel jobs to use where applicable (-1 uses all cores). * **eeg_dash_instance** (*EEGDash* *|* *None*) – Optional existing EEGDash client to reuse for DB queries. If None, a new client is created on demand; it is not used when `download=False`. 
* **database** (*str* *|* *None*) – Database name to use (e.g., “eegdash”, “eegdash_staging”). If None, uses the default database. * **auth_token** (*str* *|* *None*) – Authentication token for accessing protected databases. Required for staging or admin operations. * **on_error** (*str* *,* *default "raise"*) – How to handle `DataIntegrityError` when accessing `.raw` on individual recordings: - `"raise"` (default): propagate the exception. - `"warn"`: log the error as a warning and set `.raw` to `None`. - `"skip"`: silently set `.raw` to `None`. Use `drop_bad()` after iteration to remove skipped recordings. * **\*\*kwargs** (*dict*) – Additional keyword arguments serving two purposes: - Filtering: any keys present in `ALLOWED_QUERY_FIELDS` are treated as query filters (e.g., `dataset`, `subject`, `task`, …). - Dataset options: remaining keys are forwarded to `EEGDashRaw`. #### *property* cumulative_sizes *: list[int]* Recompute cumulative sizes from current dataset lengths. Overrides the cached version from BaseConcatDataset because individual dataset lengths can change after lazy raw loading (estimated ntimes from JSON metadata may differ from actual n_times in the raw file). #### download_all(n_jobs: int | None = None) → None Download missing remote files in parallel. * **Parameters:** **n_jobs** (*int* *|* *None*) – Number of parallel workers to use. If None, defaults to `self.n_jobs`. #### drop_bad() → list[dict] Remove skipped datasets and return their records. Call after accessing `.raw` on all datasets (e.g. after iteration or preprocessing) to clean up the dataset list. * **Returns:** Records that were removed because loading failed. * **Return type:** list of dict #### drop_short(min_samples: int) → list[dict] Remove recordings shorter than *min_samples* and return their records. This is useful when downstream processing (e.g., fixed-length windowing) requires a minimum number of samples per recording. 
Recordings whose `.raw` is `None` (failed to load) are also dropped. * **Parameters:** **min_samples** (*int*) – Minimum number of time-domain samples a recording must have to be kept. * **Returns:** Records that were removed. * **Return type:** list of dict #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. * **Return type:** None ### eegdash.dataset.EEGEYENET alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338) ### eegdash.dataset.EEGEyeNet alias of [`DS005872`](eegdash.dataset.DS005872.md#eegdash.dataset.DS005872) ### eegdash.dataset.EEGEyeNet_v2 alias of [`DS007338`](eegdash.dataset.DS007338.md#eegdash.dataset.DS007338) ### eegdash.dataset.EEGMotorMovementImagery alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362) ### eegdash.dataset.EESM17 alias of [`DS004348`](eegdash.dataset.DS004348.md#eegdash.dataset.DS004348) ### eegdash.dataset.EESM19 alias of [`DS005185`](eegdash.dataset.DS005185.md#eegdash.dataset.DS005185) ### eegdash.dataset.EESM23 alias of [`DS005178`](eegdash.dataset.DS005178.md#eegdash.dataset.DS005178) ### eegdash.dataset.EPFLP300 alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.EPFLP300Dataset alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.EPFL_P300 alias of [`NM000231`](eegdash.dataset.NM000231.md#eegdash.dataset.NM000231) ### eegdash.dataset.ERDetect alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774) ### eegdash.dataset.ERPCORE alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132) ### eegdash.dataset.ERP_CORE alias of [`NM000132`](eegdash.dataset.NM000132.md#eegdash.dataset.NM000132) ### eegdash.dataset.ER_Detect alias of [`DS004774`](eegdash.dataset.DS004774.md#eegdash.dataset.DS004774) ### 
eegdash.dataset.Edit2024 alias of [`DS007406`](eegdash.dataset.DS007406.md#eegdash.dataset.DS007406) ### eegdash.dataset.EldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.Ester2022 alias of [`DS004519`](eegdash.dataset.DS004519.md#eegdash.dataset.DS004519) ### eegdash.dataset.Ester2024_E1 alias of [`DS004521`](eegdash.dataset.DS004521.md#eegdash.dataset.DS004521) ### eegdash.dataset.Ester2024_E2 alias of [`DS004520`](eegdash.dataset.DS004520.md#eegdash.dataset.DS004520) ### eegdash.dataset.FACED alias of [`NM000112`](eegdash.dataset.NM000112.md#eegdash.dataset.NM000112) ### eegdash.dataset.FLUX alias of [`DS004346`](eegdash.dataset.DS004346.md#eegdash.dataset.DS004346) ### eegdash.dataset.FRL_DiscreteGestures alias of [`NM000105`](eegdash.dataset.NM000105.md#eegdash.dataset.NM000105) ### eegdash.dataset.FRL_Handwriting alias of [`NM000106`](eegdash.dataset.NM000106.md#eegdash.dataset.NM000106) ### eegdash.dataset.FRL_WristControl alias of [`NM000107`](eegdash.dataset.NM000107.md#eegdash.dataset.NM000107) ### eegdash.dataset.FernandezRodriguez2023 alias of [`NM000240`](eegdash.dataset.NM000240.md#eegdash.dataset.NM000240) ### eegdash.dataset.Ferron2019 alias of [`DS004541`](eegdash.dataset.DS004541.md#eegdash.dataset.DS004541) ### eegdash.dataset.Flankers_FAR alias of [`DS005868`](eegdash.dataset.DS005868.md#eegdash.dataset.DS005868) ### eegdash.dataset.Flankers_NEAR alias of [`DS005866`](eegdash.dataset.DS005866.md#eegdash.dataset.DS005866) ### eegdash.dataset.Fogarty2025 alias of [`DS007463`](eegdash.dataset.DS007463.md#eegdash.dataset.DS007463) ### eegdash.dataset.Formica2025 alias of [`DS005406`](eegdash.dataset.DS005406.md#eegdash.dataset.DS005406) ### eegdash.dataset.ForrestGump_MEG alias of [`DS003633`](eegdash.dataset.DS003633.md#eegdash.dataset.DS003633) ### eegdash.dataset.FuentesGuerra2024 alias of [`DS007180`](eegdash.dataset.DS007180.md#eegdash.dataset.DS007180) ### eegdash.dataset.Gama2019 alias of 
[`DS005420`](eegdash.dataset.DS005420.md#eegdash.dataset.DS005420) ### eegdash.dataset.Gao2024 alias of [`DS007420`](eegdash.dataset.DS007420.md#eegdash.dataset.DS007420) ### eegdash.dataset.Gao2026 alias of [`NM000242`](eegdash.dataset.NM000242.md#eegdash.dataset.NM000242) ### eegdash.dataset.Ghaffari2024 alias of [`DS006547`](eegdash.dataset.DS006547.md#eegdash.dataset.DS006547) ### eegdash.dataset.GuttmannFlury2025_ME alias of [`NM000227`](eegdash.dataset.NM000227.md#eegdash.dataset.NM000227) ### eegdash.dataset.GuttmannFlury2025_MIME alias of [`NM000235`](eegdash.dataset.NM000235.md#eegdash.dataset.NM000235) ### eegdash.dataset.HADMEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ### eegdash.dataset.HAD_MEEG alias of [`DS007353`](eegdash.dataset.DS007353.md#eegdash.dataset.DS007353) ### eegdash.dataset.HBN_EEG_NC alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.HBN_NoCommercial alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.HBN_r1 alias of [`DS005505`](eegdash.dataset.DS005505.md#eegdash.dataset.DS005505) ### eegdash.dataset.HBN_r10 alias of [`DS005515`](eegdash.dataset.DS005515.md#eegdash.dataset.DS005515) ### eegdash.dataset.HBN_r10_bdf alias of [`EEG2025R10`](eegdash.dataset.EEG2025R10.md#eegdash.dataset.EEG2025R10) ### eegdash.dataset.HBN_r10_bdf_mini alias of [`EEG2025R10MINI`](eegdash.dataset.EEG2025R10MINI.md#eegdash.dataset.EEG2025R10MINI) ### eegdash.dataset.HBN_r11 alias of [`DS005516`](eegdash.dataset.DS005516.md#eegdash.dataset.DS005516) ### eegdash.dataset.HBN_r11_bdf alias of [`EEG2025R11`](eegdash.dataset.EEG2025R11.md#eegdash.dataset.EEG2025R11) ### eegdash.dataset.HBN_r11_bdf_mini alias of [`EEG2025R11MINI`](eegdash.dataset.EEG2025R11MINI.md#eegdash.dataset.EEG2025R11MINI) ### eegdash.dataset.HBN_r1_bdf alias of [`EEG2025R1`](eegdash.dataset.EEG2025R1.md#eegdash.dataset.EEG2025R1) ### eegdash.dataset.HBN_r1_bdf_mini 
alias of [`EEG2025R1MINI`](eegdash.dataset.EEG2025R1MINI.md#eegdash.dataset.EEG2025R1MINI) ### eegdash.dataset.HBN_r2 alias of [`DS005506`](eegdash.dataset.DS005506.md#eegdash.dataset.DS005506) ### eegdash.dataset.HBN_r2_bdf alias of [`EEG2025R2`](eegdash.dataset.EEG2025R2.md#eegdash.dataset.EEG2025R2) ### eegdash.dataset.HBN_r2_bdf_mini alias of [`EEG2025R2MINI`](eegdash.dataset.EEG2025R2MINI.md#eegdash.dataset.EEG2025R2MINI) ### eegdash.dataset.HBN_r3 alias of [`DS005507`](eegdash.dataset.DS005507.md#eegdash.dataset.DS005507) ### eegdash.dataset.HBN_r3_bdf alias of [`EEG2025R3`](eegdash.dataset.EEG2025R3.md#eegdash.dataset.EEG2025R3) ### eegdash.dataset.HBN_r3_bdf_mini alias of [`EEG2025R3MINI`](eegdash.dataset.EEG2025R3MINI.md#eegdash.dataset.EEG2025R3MINI) ### eegdash.dataset.HBN_r4 alias of [`DS005508`](eegdash.dataset.DS005508.md#eegdash.dataset.DS005508) ### eegdash.dataset.HBN_r4_bdf alias of [`EEG2025R4`](eegdash.dataset.EEG2025R4.md#eegdash.dataset.EEG2025R4) ### eegdash.dataset.HBN_r4_bdf_mini alias of [`EEG2025R4MINI`](eegdash.dataset.EEG2025R4MINI.md#eegdash.dataset.EEG2025R4MINI) ### eegdash.dataset.HBN_r5 alias of [`DS005509`](eegdash.dataset.DS005509.md#eegdash.dataset.DS005509) ### eegdash.dataset.HBN_r5_bdf alias of [`EEG2025R5`](eegdash.dataset.EEG2025R5.md#eegdash.dataset.EEG2025R5) ### eegdash.dataset.HBN_r5_bdf_mini alias of [`EEG2025R5MINI`](eegdash.dataset.EEG2025R5MINI.md#eegdash.dataset.EEG2025R5MINI) ### eegdash.dataset.HBN_r6 alias of [`DS005510`](eegdash.dataset.DS005510.md#eegdash.dataset.DS005510) ### eegdash.dataset.HBN_r6_bdf alias of [`EEG2025R6`](eegdash.dataset.EEG2025R6.md#eegdash.dataset.EEG2025R6) ### eegdash.dataset.HBN_r6_bdf_mini alias of [`EEG2025R6MINI`](eegdash.dataset.EEG2025R6MINI.md#eegdash.dataset.EEG2025R6MINI) ### eegdash.dataset.HBN_r7_bdf alias of [`EEG2025R7`](eegdash.dataset.EEG2025R7.md#eegdash.dataset.EEG2025R7) ### eegdash.dataset.HBN_r7_bdf_mini alias of 
[`EEG2025R7MINI`](eegdash.dataset.EEG2025R7MINI.md#eegdash.dataset.EEG2025R7MINI) ### eegdash.dataset.HBN_r8 alias of [`DS005512`](eegdash.dataset.DS005512.md#eegdash.dataset.DS005512) ### eegdash.dataset.HBN_r8_bdf alias of [`EEG2025R8`](eegdash.dataset.EEG2025R8.md#eegdash.dataset.EEG2025R8) ### eegdash.dataset.HBN_r8_bdf_mini alias of [`EEG2025R8MINI`](eegdash.dataset.EEG2025R8MINI.md#eegdash.dataset.EEG2025R8MINI) ### eegdash.dataset.HBN_r9 alias of [`DS005514`](eegdash.dataset.DS005514.md#eegdash.dataset.DS005514) ### eegdash.dataset.HBN_r9_bdf alias of [`EEG2025R9`](eegdash.dataset.EEG2025R9.md#eegdash.dataset.EEG2025R9) ### eegdash.dataset.HBN_r9_bdf_mini alias of [`EEG2025R9MINI`](eegdash.dataset.EEG2025R9MINI.md#eegdash.dataset.EEG2025R9MINI) ### eegdash.dataset.HEFMIICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ### eegdash.dataset.HEFMI_ICH alias of [`NM000347`](eegdash.dataset.NM000347.md#eegdash.dataset.NM000347) ### eegdash.dataset.HID alias of [`DS004851`](eegdash.dataset.DS004851.md#eegdash.dataset.DS004851) ### eegdash.dataset.HUPiEEG alias of [`DS004100`](eegdash.dataset.DS004100.md#eegdash.dataset.DS004100) ### eegdash.dataset.Hatano alias of [`DS007118`](eegdash.dataset.DS007118.md#eegdash.dataset.DS007118) ### eegdash.dataset.Haupt2025 alias of [`DS004951`](eegdash.dataset.DS004951.md#eegdash.dataset.DS004951) ### eegdash.dataset.HealthyBrainNetwork alias of [`NM000103`](eegdash.dataset.NM000103.md#eegdash.dataset.NM000103) ### eegdash.dataset.HeartBEAM alias of [`DS006466`](eegdash.dataset.DS006466.md#eegdash.dataset.DS006466) ### eegdash.dataset.HenaoIsaza2026 alias of [`DS007427`](eegdash.dataset.DS007427.md#eegdash.dataset.DS007427) ### eegdash.dataset.Hermann2021 alias of [`DS003352`](eegdash.dataset.DS003352.md#eegdash.dataset.DS003352) ### eegdash.dataset.Hermes2024 alias of [`DS006392`](eegdash.dataset.DS006392.md#eegdash.dataset.DS006392) ### eegdash.dataset.Herrema2024 alias of 
[`DS005494`](eegdash.dataset.DS005494.md#eegdash.dataset.DS005494) ### eegdash.dataset.Hinss2021 alias of [`NM000206`](eegdash.dataset.NM000206.md#eegdash.dataset.NM000206) ### eegdash.dataset.Hinss2021_v2 alias of [`NM000343`](eegdash.dataset.NM000343.md#eegdash.dataset.NM000343) ### eegdash.dataset.Huang2022 alias of [`DS004457`](eegdash.dataset.DS004457.md#eegdash.dataset.DS004457) ### eegdash.dataset.Huebner2017 alias of [`NM000199`](eegdash.dataset.NM000199.md#eegdash.dataset.NM000199) ### eegdash.dataset.Huebner2018 alias of [`NM000195`](eegdash.dataset.NM000195.md#eegdash.dataset.NM000195) ### eegdash.dataset.HySER alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ### eegdash.dataset.Hyser alias of [`NM000108`](eegdash.dataset.NM000108.md#eegdash.dataset.NM000108) ### eegdash.dataset.IACKD alias of [`DS006840`](eegdash.dataset.DS006840.md#eegdash.dataset.DS006840) ### eegdash.dataset.Jao2020 alias of [`NM000249`](eegdash.dataset.NM000249.md#eegdash.dataset.NM000249) ### eegdash.dataset.Johnson2024 alias of [`DS004850`](eegdash.dataset.DS004850.md#eegdash.dataset.DS004850) ### eegdash.dataset.Johnson2025 alias of [`DS004852`](eegdash.dataset.DS004852.md#eegdash.dataset.DS004852) ### eegdash.dataset.Kajikawa2000 alias of [`DS007028`](eegdash.dataset.DS007028.md#eegdash.dataset.DS007028) ### eegdash.dataset.Kalenkovich2019 alias of [`DS003703`](eegdash.dataset.DS003703.md#eegdash.dataset.DS003703) ### eegdash.dataset.Kanno2025 alias of [`DS005545`](eegdash.dataset.DS005545.md#eegdash.dataset.DS005545) ### eegdash.dataset.Kekecs2024 alias of [`DS004572`](eegdash.dataset.DS004572.md#eegdash.dataset.DS004572) ### eegdash.dataset.Kidder2024 alias of [`DS004278`](eegdash.dataset.DS004278.md#eegdash.dataset.DS004278) ### eegdash.dataset.Kim2025 alias of [`NM000127`](eegdash.dataset.NM000127.md#eegdash.dataset.NM000127) ### eegdash.dataset.Kinley2019 alias of [`DS006446`](eegdash.dataset.DS006446.md#eegdash.dataset.DS006446) ### 
eegdash.dataset.Kitazawa2025 alias of [`DS005007`](eegdash.dataset.DS005007.md#eegdash.dataset.DS005007) ### eegdash.dataset.Kucyi2024 alias of [`DS007216`](eegdash.dataset.DS007216.md#eegdash.dataset.DS007216) ### eegdash.dataset.Kuroda2024 alias of [`DS006107`](eegdash.dataset.DS006107.md#eegdash.dataset.DS006107) ### eegdash.dataset.LEMON alias of [`NM000179`](eegdash.dataset.NM000179.md#eegdash.dataset.NM000179) ### eegdash.dataset.LPP alias of [`DS005345`](eegdash.dataset.DS005345.md#eegdash.dataset.DS005345) ### eegdash.dataset.LeganesFonteneau2024 alias of [`DS006159`](eegdash.dataset.DS006159.md#eegdash.dataset.DS006159) ### eegdash.dataset.Lin2019 alias of [`DS006035`](eegdash.dataset.DS006035.md#eegdash.dataset.DS006035) ### eegdash.dataset.LittlePrince alias of [`DS007524`](eegdash.dataset.DS007524.md#eegdash.dataset.DS007524) ### eegdash.dataset.Liu2022EldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.Lowe2025 alias of [`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ### eegdash.dataset.Luke2019 alias of [`DS005964`](eegdash.dataset.DS005964.md#eegdash.dataset.DS005964) ### eegdash.dataset.MAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.MAMEM2_SSVEP alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ### eegdash.dataset.MASC_MEG alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ### eegdash.dataset.MAVIS alias of [`DS004010`](eegdash.dataset.DS004010.md#eegdash.dataset.DS004010) ### eegdash.dataset.MEGMEM alias of [`DS003694`](eegdash.dataset.DS003694.md#eegdash.dataset.DS003694) ### eegdash.dataset.MEG_MASC alias of [`NM000229`](eegdash.dataset.NM000229.md#eegdash.dataset.NM000229) ### eegdash.dataset.MEG_SCANS alias of 
[`DS006468`](eegdash.dataset.DS006468.md#eegdash.dataset.DS006468) ### eegdash.dataset.MNESomato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.MNESomatoData alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.MNE_Sample_Data alias of [`DS000248`](eegdash.dataset.DS000248.md#eegdash.dataset.DS000248) ### eegdash.dataset.MSSV alias of [`DS006366`](eegdash.dataset.DS006366.md#eegdash.dataset.DS006366) ### eegdash.dataset.MUSING alias of [`DS003774`](eegdash.dataset.DS003774.md#eegdash.dataset.DS003774) ### eegdash.dataset.Maestu2021 alias of [`DS003483`](eegdash.dataset.DS003483.md#eegdash.dataset.DS003483) ### eegdash.dataset.Martzoukou2024_Post alias of [`DS007314`](eegdash.dataset.DS007314.md#eegdash.dataset.DS007314) ### eegdash.dataset.Martzoukou2024_Post_A alias of [`DS007315`](eegdash.dataset.DS007315.md#eegdash.dataset.DS007315) ### eegdash.dataset.Melcon2024 alias of [`DS006171`](eegdash.dataset.DS006171.md#eegdash.dataset.DS006171) ### eegdash.dataset.Mendola2020 alias of [`DS002001`](eegdash.dataset.DS002001.md#eegdash.dataset.DS002001) ### eegdash.dataset.Mesquita2019 alias of [`DS005963`](eegdash.dataset.DS005963.md#eegdash.dataset.DS005963) ### eegdash.dataset.MetaRDK alias of [`DS006253`](eegdash.dataset.DS006253.md#eegdash.dataset.DS006253) ### eegdash.dataset.Mheich2020 alias of [`DS002791`](eegdash.dataset.DS002791.md#eegdash.dataset.DS002791) ### eegdash.dataset.Mheich2024 alias of [`DS002833`](eegdash.dataset.DS002833.md#eegdash.dataset.DS002833) ### eegdash.dataset.Miller2021 alias of [`DS003708`](eegdash.dataset.DS003708.md#eegdash.dataset.DS003708) ### eegdash.dataset.Mishra2024 alias of [`DS007322`](eegdash.dataset.DS007322.md#eegdash.dataset.DS007322) ### eegdash.dataset.Mivalt2024 alias of [`DS004624`](eegdash.dataset.DS004624.md#eegdash.dataset.DS004624) ### eegdash.dataset.Moerel2023 alias of 
[`DS004995`](eegdash.dataset.DS004995.md#eegdash.dataset.DS004995) ### eegdash.dataset.Moerel2025 alias of [`DS007521`](eegdash.dataset.DS007521.md#eegdash.dataset.DS007521) ### eegdash.dataset.Moradi2024 alias of [`DS004598`](eegdash.dataset.DS004598.md#eegdash.dataset.DS004598) ### eegdash.dataset.Motion_Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ### *class* eegdash.dataset.NM000103(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Healthy Brain Network EEG - Not for Commercial Use * **Study:** `nm000103` (NeMAR) * **Author (year):** `Shirazi2017` * **Canonical:** `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial` Also importable as: `NM000103`, `Shirazi2017`, `HealthyBrainNetwork`, `HBN_EEG_NC`, `HBN_NoCommercial`. Modality: `eeg`. Subjects: 447; recordings: 3522; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000103](https://openneuro.org/datasets/nm000103) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000103](https://nemar.org/dataexplorer/detail?dataset_id=nm000103) DOI: [https://doi.org/10.82901/nemar.nm000103](https://doi.org/10.82901/nemar.nm000103) ### Examples ```pycon >>> from eegdash.dataset import NM000103 >>> dataset = NM000103(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HealthyBrainNetwork', 'HBN_EEG_NC', 'HBN_NoCommercial']* ### *class* eegdash.dataset.NM000104(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography * **Study:** `nm000104` (NeMAR) * **Author (year):** `Sivakumar2024` * **Canonical:** `emg2qwerty` Also importable as: `NM000104`, `Sivakumar2024`, `emg2qwerty`. Modality: `emg`. Subjects: 108; recordings: 1136; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000104](https://openneuro.org/datasets/nm000104) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000104](https://nemar.org/dataexplorer/detail?dataset_id=nm000104) DOI: [https://doi.org/10.82901/nemar.nm000104](https://doi.org/10.82901/nemar.nm000104) ### Examples ```pycon >>> from eegdash.dataset import NM000104 >>> dataset = NM000104(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['emg2qwerty']* ### *class* eegdash.dataset.NM000105(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Discrete Gestures: Hand Gesture Recognition from Surface Electromyography * **Study:** `nm000105` (NeMAR) * **Author (year):** `Kaifosh2025` * **Canonical:** `FRL_DiscreteGestures` Also importable as: `NM000105`, `Kaifosh2025`, `FRL_DiscreteGestures`. Modality: `emg`. Subjects: 100; recordings: 100; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000105](https://openneuro.org/datasets/nm000105) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000105](https://nemar.org/dataexplorer/detail?dataset_id=nm000105) DOI: [https://doi.org/10.82901/nemar.nm000105](https://doi.org/10.82901/nemar.nm000105) ### Examples ```pycon >>> from eegdash.dataset import NM000105 >>> dataset = NM000105(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_DiscreteGestures']* ### *class* eegdash.dataset.NM000106(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Handwriting: Handwriting Decoding from Surface Electromyography * **Study:** `nm000106` (NeMAR) * **Author (year):** `Kaifosh2025_106` * **Canonical:** `FRL_Handwriting` Also importable as: `NM000106`, `Kaifosh2025_106`, `FRL_Handwriting`. Modality: `emg`. Subjects: 100; recordings: 807; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000106](https://openneuro.org/datasets/nm000106) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000106](https://nemar.org/dataexplorer/detail?dataset_id=nm000106) DOI: [https://doi.org/10.82901/nemar.nm000106](https://doi.org/10.82901/nemar.nm000106) ### Examples ```pycon >>> from eegdash.dataset import NM000106 >>> dataset = NM000106(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_Handwriting']* ### *class* eegdash.dataset.NM000107(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FRL Wrist Control: Wrist Movement Decoding from Surface Electromyography * **Study:** `nm000107` (NeMAR) * **Author (year):** `Kaifosh2025_107` * **Canonical:** `FRL_WristControl` Also importable as: `NM000107`, `Kaifosh2025_107`, `FRL_WristControl`. Modality: `emg`. Subjects: 100; recordings: 182; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000107](https://openneuro.org/datasets/nm000107) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000107](https://nemar.org/dataexplorer/detail?dataset_id=nm000107) DOI: [https://doi.org/10.82901/nemar.nm000107](https://doi.org/10.82901/nemar.nm000107) ### Examples ```pycon >>> from eegdash.dataset import NM000107 >>> dataset = NM000107(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FRL_WristControl']* ### *class* eegdash.dataset.NM000108(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HySER: High-Density Surface Electromyogram Recordings * **Study:** `nm000108` (NeMAR) * **Author (year):** `Jiang2021` * **Canonical:** `HySER`, `Hyser` Also importable as: `NM000108`, `Jiang2021`, `HySER`, `Hyser`. Modality: `emg`. Subjects: 20; recordings: 1514; tasks: 38. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000108](https://openneuro.org/datasets/nm000108) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000108](https://nemar.org/dataexplorer/detail?dataset_id=nm000108) DOI: [https://doi.org/10.82901/nemar.nm000108](https://doi.org/10.82901/nemar.nm000108) ### Examples ```pycon >>> from eegdash.dataset import NM000108 >>> dataset = NM000108(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HySER', 'Hyser']* ### *class* eegdash.dataset.NM000109(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) EEG During Mental Arithmetic Tasks * **Study:** `nm000109` (NeMAR) * **Author (year):** `Zyma2019` * **Canonical:** — Also importable as: `NM000109`, `Zyma2019`. Modality: `eeg`. Subjects: 36; recordings: 72; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000109](https://openneuro.org/datasets/nm000109) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000109](https://nemar.org/dataexplorer/detail?dataset_id=nm000109) DOI: [https://doi.org/10.82901/nemar.nm000109](https://doi.org/10.82901/nemar.nm000109) ### Examples ```pycon >>> from eegdash.dataset import NM000109 >>> dataset = NM000109(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000110(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CHB-MIT * **Study:** `nm000110` (NeMAR) * **Author (year):** `Connolly2010` * **Canonical:** `CHBMIT`, `CHB_MIT` Also importable as: `NM000110`, `Connolly2010`, `CHBMIT`, `CHB_MIT`. Modality: `eeg`. Subjects: 24; recordings: 686; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000110](https://openneuro.org/datasets/nm000110) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000110](https://nemar.org/dataexplorer/detail?dataset_id=nm000110) DOI: [https://doi.org/10.82901/nemar.nm000110](https://doi.org/10.82901/nemar.nm000110) ### Examples ```pycon >>> from eegdash.dataset import NM000110 >>> dataset = NM000110(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CHBMIT', 'CHB_MIT']* ### *class* eegdash.dataset.NM000112(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) FACED - Finer-grained Affective Computing EEG Dataset * **Study:** `nm000112` (NeMAR) * **Author (year):** `Liu2024_112` * **Canonical:** `FACED` Also importable as: `NM000112`, `Liu2024_112`, `FACED`. Modality: `eeg`. Subjects: 123; recordings: 123; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000112](https://openneuro.org/datasets/nm000112) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000112](https://nemar.org/dataexplorer/detail?dataset_id=nm000112) DOI: [https://doi.org/10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112) ### Examples ```pycon >>> from eegdash.dataset import NM000112 >>> dataset = NM000112(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FACED']* ### *class* eegdash.dataset.NM000113(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 2020 BCI competition, track 3 * **Study:** `nm000113` (NeMAR) * **Author (year):** `Lee2020` * **Canonical:** — Also importable as: `NM000113`, `Lee2020`. Modality: `eeg`. Subjects: 15; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000113](https://openneuro.org/datasets/nm000113) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000113](https://nemar.org/dataexplorer/detail?dataset_id=nm000113) DOI: [https://doi.org/10.82901/nemar.nm000113](https://doi.org/10.82901/nemar.nm000113) ### Examples ```pycon >>> from eegdash.dataset import NM000113 >>> dataset = NM000113(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000114(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MDD Patients and Healthy Controls EEG Data * **Study:** `nm000114` (NeMAR) * **Author (year):** `Mumtaz2017` * **Canonical:** — Also importable as: `NM000114`, `Mumtaz2017`. Modality: `eeg`. Subjects: 64; recordings: 181; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000114](https://openneuro.org/datasets/nm000114) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000114](https://nemar.org/dataexplorer/detail?dataset_id=nm000114) DOI: [https://doi.org/10.82901/nemar.nm000114](https://doi.org/10.82901/nemar.nm000114) ### Examples ```pycon >>> from eegdash.dataset import NM000114 >>> dataset = NM000114(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000115(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000115` (NeMAR) * **Author (year):** `Zhou2016` * **Canonical:** — Also importable as: `NM000115`, `Zhou2016`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000115](https://openneuro.org/datasets/nm000115) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000115](https://nemar.org/dataexplorer/detail?dataset_id=nm000115) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000115 >>> dataset = NM000115(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000118(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nakanishi2015 – SSVEP Nakanishi 2015 dataset * **Study:** `nm000118` (NeMAR) * **Author (year):** `Nakanishi2015` * **Canonical:** — Also importable as: `NM000118`, `Nakanishi2015`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 9; recordings: 9; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000118](https://openneuro.org/datasets/nm000118) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000118](https://nemar.org/dataexplorer/detail?dataset_id=nm000118) ### Examples ```pycon >>> from eegdash.dataset import NM000118 >>> dataset = NM000118(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000119(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 1 dataset * **Study:** `nm000119` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM1` * **Canonical:** `Oikonomou2016` Also importable as: `NM000119`, `Oikonomou2016_MAMEM1`, `Oikonomou2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 47; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000119](https://openneuro.org/datasets/nm000119) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000119](https://nemar.org/dataexplorer/detail?dataset_id=nm000119) ### Examples ```pycon >>> from eegdash.dataset import NM000119 >>> dataset = NM000119(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Oikonomou2016']* ### *class* eegdash.dataset.NM000120(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 2 dataset * **Study:** `nm000120` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM2` * **Canonical:** `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP` Also importable as: `NM000120`, `Oikonomou2016_MAMEM2`, `MAMEM2`, `SSVEPMAMEM2`, `MAMEM2_SSVEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 55; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000120](https://openneuro.org/datasets/nm000120) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000120](https://nemar.org/dataexplorer/detail?dataset_id=nm000120) ### Examples ```pycon >>> from eegdash.dataset import NM000120 >>> dataset = NM000120(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAMEM2', 'SSVEPMAMEM2', 'MAMEM2_SSVEP']* ### *class* eegdash.dataset.NM000121(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Oikonomou2016 – SSVEP MAMEM 3 dataset * **Study:** `nm000121` (NeMAR) * **Author (year):** `Oikonomou2016_MAMEM3` * **Canonical:** `MAMEM3`, `SSVEP_MAMEM3` Also importable as: `NM000121`, `Oikonomou2016_MAMEM3`, `MAMEM3`, `SSVEP_MAMEM3`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 110; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000121](https://openneuro.org/datasets/nm000121) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000121](https://nemar.org/dataexplorer/detail?dataset_id=nm000121) ### Examples ```pycon >>> from eegdash.dataset import NM000121 >>> dataset = NM000121(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MAMEM3', 'SSVEP_MAMEM3']* ### *class* eegdash.dataset.NM000122(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Chen2017 – Single-flicker online SSVEP BCI dataset * **Study:** `nm000122` (NeMAR) * **Author (year):** `Chen2017` * **Canonical:** — Also importable as: `NM000122`, `Chen2017`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000122](https://openneuro.org/datasets/nm000122) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000122](https://nemar.org/dataexplorer/detail?dataset_id=nm000122) ### Examples ```pycon >>> from eegdash.dataset import NM000122 >>> dataset = NM000122(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000123(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kalunga2016 – SSVEP Exo dataset * **Study:** `nm000123` (NeMAR) * **Author (year):** `Kalunga2016` * **Canonical:** — Also importable as: `NM000123`, `Kalunga2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 12; recordings: 30; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000123](https://openneuro.org/datasets/nm000123) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000123](https://nemar.org/dataexplorer/detail?dataset_id=nm000123) ### Examples ```pycon >>> from eegdash.dataset import NM000123 >>> dataset = NM000123(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000124(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Han2024 – SSVEP fatigue dataset with two frequency paradigms * **Study:** `nm000124` (NeMAR) * **Author (year):** `Han2024` * **Canonical:** — Also importable as: `NM000124`, `Han2024`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 48; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000124](https://openneuro.org/datasets/nm000124) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000124](https://nemar.org/dataexplorer/detail?dataset_id=nm000124) ### Examples ```pycon >>> from eegdash.dataset import NM000124 >>> dataset = NM000124(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000125(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2021 – SSVEP paradigm of the Mobile BCI dataset * **Study:** `nm000125` (NeMAR) * **Author (year):** `Lee2021_SSVEP` * **Canonical:** — Also importable as: `NM000125`, `Lee2021_SSVEP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 23; recordings: 85; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000125](https://openneuro.org/datasets/nm000125) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000125](https://nemar.org/dataexplorer/detail?dataset_id=nm000125) ### Examples ```pycon >>> from eegdash.dataset import NM000125 >>> dataset = NM000125(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000126(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2016 – SSVEP Wang 2016 dataset * **Study:** `nm000126` (NeMAR) * **Author (year):** `Wang2016` * **Canonical:** — Also importable as: `NM000126`, `Wang2016`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 34; recordings: 34; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000126](https://openneuro.org/datasets/nm000126) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000126](https://nemar.org/dataexplorer/detail?dataset_id=nm000126) ### Examples ```pycon >>> from eegdash.dataset import NM000126 >>> dataset = NM000126(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000127(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kim2025 – 40-class beta-range SSVEP speller dataset * **Study:** `nm000127` (NeMAR) * **Author (year):** `Kim2025_SSVEP` * **Canonical:** `Kim2025` Also importable as: `NM000127`, `Kim2025_SSVEP`, `Kim2025`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 40; recordings: 240; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000127](https://openneuro.org/datasets/nm000127) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000127](https://nemar.org/dataexplorer/detail?dataset_id=nm000127) ### Examples ```pycon >>> from eegdash.dataset import NM000127 >>> dataset = NM000127(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Kim2025']* ### *class* eegdash.dataset.NM000128(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dong2023 – 59-subject 40-class SSVEP dataset * **Study:** `nm000128` (NeMAR) * **Author (year):** `Dong2023` * **Canonical:** — Also importable as: `NM000128`, `Dong2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 59; recordings: 59; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000128](https://openneuro.org/datasets/nm000128) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000128](https://nemar.org/dataexplorer/detail?dataset_id=nm000128) ### Examples ```pycon >>> from eegdash.dataset import NM000128 >>> dataset = NM000128(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000129(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2020 – BETA SSVEP benchmark dataset * **Study:** `nm000129` (NeMAR) * **Author (year):** `Liu2020` * **Canonical:** `BetaSSVEP`, `BETA_SSVEP`, `BETA` Also importable as: `NM000129`, `Liu2020`, `BetaSSVEP`, `BETA_SSVEP`, `BETA`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 70; recordings: 70; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000129](https://openneuro.org/datasets/nm000129) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000129](https://nemar.org/dataexplorer/detail?dataset_id=nm000129) ### Examples ```pycon >>> from eegdash.dataset import NM000129 >>> dataset = NM000129(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BetaSSVEP', 'BETA_SSVEP', 'BETA']* ### *class* eegdash.dataset.NM000130(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2022 – eldBETA SSVEP benchmark dataset for elderly population * **Study:** `nm000130` (NeMAR) * **Author (year):** `Liu2022` * **Canonical:** `EldBETA`, `eldBETA`, `Liu2022EldBETA` Also importable as: `NM000130`, `Liu2022`, `EldBETA`, `eldBETA`, `Liu2022EldBETA`. 
Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 100; recordings: 700; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000130](https://openneuro.org/datasets/nm000130) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000130](https://nemar.org/dataexplorer/detail?dataset_id=nm000130) ### Examples ```pycon >>> from eegdash.dataset import NM000130 >>> dataset = NM000130(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EldBETA', 'eldBETA', 'Liu2022EldBETA']* ### *class* eegdash.dataset.NM000131(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang2021 – Combined SSVEP dataset with single stimulus location for two inputs * **Study:** `nm000131` (NeMAR) * **Author (year):** `Wang2021` * **Canonical:** — Also importable as: `NM000131`, `Wang2021`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 22; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000131](https://openneuro.org/datasets/nm000131) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000131](https://nemar.org/dataexplorer/detail?dataset_id=nm000131) ### Examples ```pycon >>> from eegdash.dataset import NM000131 >>> dataset = NM000131(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000132(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP CORE * **Study:** `nm000132` (NeMAR) * **Author (year):** `Kappenman2021` * **Canonical:** `ERPCORE`, `ERP_CORE` Also importable as: `NM000132`, `Kappenman2021`, `ERPCORE`, `ERP_CORE`. Modality: `eeg`. Subjects: 40; recordings: 240; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
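Each entry lists several import names ("Also importable as") that all resolve to the same class. Conceptually this is just an alias table; a minimal sketch using the ERP CORE names from the entry above (the table here is a hypothetical re-creation — the real mapping lives inside `eegdash.dataset`, which exposes each name as a module-level attribute):

```python
# Hypothetical alias table for one entry; every alias points at the
# study-id class name.
ALIASES = {
    "Kappenman2021": "NM000132",
    "ERPCORE": "NM000132",
    "ERP_CORE": "NM000132",
}

def resolve(name: str) -> str:
    # Study-id names resolve to themselves; aliases are looked up.
    return ALIASES.get(name, name)

print(resolve("ERP_CORE"))  # NM000132
```

In practice this means `from eegdash.dataset import ERP_CORE` and `from eegdash.dataset import NM000132` yield the same class.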
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000132](https://openneuro.org/datasets/nm000132) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000132](https://nemar.org/dataexplorer/detail?dataset_id=nm000132) DOI: [https://doi.org/10.82901/nemar.nm000132](https://doi.org/10.82901/nemar.nm000132) ### Examples ```pycon >>> from eegdash.dataset import NM000132 >>> dataset = NM000132(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['ERPCORE', 'ERP_CORE']* ### *class* eegdash.dataset.NM000133(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined1 * **Study:** `nm000133` (NeMAR) * **Author (year):** `Xu2024` * **Canonical:** `Alljoined1`, `Alljoined` Also importable as: `NM000133`, `Xu2024`, `Alljoined1`, `Alljoined`. Modality: `eeg`. Subjects: 8; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000133](https://openneuro.org/datasets/nm000133) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000133](https://nemar.org/dataexplorer/detail?dataset_id=nm000133) DOI: [https://doi.org/10.82901/nemar.nm000133](https://doi.org/10.82901/nemar.nm000133) ### Examples ```pycon >>> from eegdash.dataset import NM000133 >>> dataset = NM000133(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alljoined1', 'Alljoined']* ### *class* eegdash.dataset.NM000134(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alljoined-1.6M * **Study:** `nm000134` (NeMAR) * **Author (year):** `Xu2025` * **Canonical:** `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M` Also importable as: `NM000134`, `Xu2025`, `Alljoined16M`, `Alljoined_16M`, `Alljoined1p6M`. Modality: `eeg`. Subjects: 20; recordings: 1525; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000134](https://openneuro.org/datasets/nm000134) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000134](https://nemar.org/dataexplorer/detail?dataset_id=nm000134) DOI: [https://doi.org/10.82901/nemar.nm000134](https://doi.org/10.82901/nemar.nm000134) ### Examples ```pycon >>> from eegdash.dataset import NM000134 >>> dataset = NM000134(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alljoined16M', 'Alljoined_16M', 'Alljoined1p6M']* ### *class* eegdash.dataset.NM000135(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-004 Motor Imagery dataset * **Study:** `nm000135` (NeMAR) * **Author (year):** `Leeb2014` * **Canonical:** `BNCI2014004` Also importable as: `NM000135`, `Leeb2014`, `BNCI2014004`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 1; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000135](https://openneuro.org/datasets/nm000135) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000135](https://nemar.org/dataexplorer/detail?dataset_id=nm000135) ### Examples ```pycon >>> from eegdash.dataset import NM000135 >>> dataset = NM000135(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014004']* ### *class* eegdash.dataset.NM000136(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-P300 * **Study:** `nm000136` (NeMAR) * **Author (year):** `GuttmannFlury2025` * **Canonical:** — Also importable as: `NM000136`, `GuttmannFlury2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000136](https://openneuro.org/datasets/nm000136) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000136](https://nemar.org/dataexplorer/detail?dataset_id=nm000136) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000136 >>> dataset = NM000136(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000137(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Classical motor imagery dataset with left hand, right hand, and rest * **Study:** `nm000137` (NeMAR) * **Author (year):** `Kaya2018` * **Canonical:** — Also importable as: `NM000137`, `Kaya2018`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 7; recordings: 17; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000137](https://openneuro.org/datasets/nm000137) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000137](https://nemar.org/dataexplorer/detail?dataset_id=nm000137) ### Examples ```pycon >>> from eegdash.dataset import NM000137 >>> dataset = NM000137(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000138(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alex Motor Imagery dataset * **Study:** `nm000138` (NeMAR) * **Author (year):** `Barachant2012` * **Canonical:** `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery` Also importable as: `NM000138`, `Barachant2012`, `AlexMI`, `AlexMotorImagery`, `AlexandreMotorImagery`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000138](https://openneuro.org/datasets/nm000138) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000138](https://nemar.org/dataexplorer/detail?dataset_id=nm000138) ### Examples ```pycon >>> from eegdash.dataset import NM000138 >>> dataset = NM000138(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['AlexMI', 'AlexMotorImagery', 'AlexandreMotorImagery']* ### *class* eegdash.dataset.NM000139(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-001 Motor Imagery dataset * **Study:** `nm000139` (NeMAR) * **Author (year):** `Tangermann2014` * **Canonical:** `BNCI2014001`, `BCICIV1`, `BCICompIV1` Also importable as: `NM000139`, `Tangermann2014`, `BNCI2014001`, `BCICIV1`, `BCICompIV1`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 9; recordings: 108; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000139](https://openneuro.org/datasets/nm000139) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000139](https://nemar.org/dataexplorer/detail?dataset_id=nm000139) ### Examples ```pycon >>> from eegdash.dataset import NM000139 >>> dataset = NM000139(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014001', 'BCICIV1', 'BCICompIV1']* ### *class* eegdash.dataset.NM000140(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-001 Motor Imagery dataset * **Study:** `nm000140` (NeMAR) * **Author (year):** `Faller2015` * **Canonical:** `BNCI2015`, `BNCI2015001` Also importable as: `NM000140`, `Faller2015`, `BNCI2015`, `BNCI2015001`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
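The `cache_dir` parameter above determines where downloaded files land: each dataset gets its own subdirectory named after its dataset ID. A minimal `pathlib` sketch of that layout (directory names are illustrative):

```python
from pathlib import Path

# Illustrative only: the dataset derives its data_dir as
# cache_dir / dataset_id, so multiple datasets share one cache root
# without colliding.
cache_dir = Path("./data")
dataset_id = "nm000140"
data_dir = cache_dir / dataset_id
print(data_dir.as_posix())  # data/nm000140
```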
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000140](https://openneuro.org/datasets/nm000140) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000140](https://nemar.org/dataexplorer/detail?dataset_id=nm000140) ### Examples ```pycon >>> from eegdash.dataset import NM000140 >>> dataset = NM000140(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015', 'BNCI2015001']* ### *class* eegdash.dataset.NM000141(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor execution dataset from Wairagkar et al 2018 * **Study:** `nm000141` (NeMAR) * **Author (year):** `Wairagkar2018` * **Canonical:** — Also importable as: `NM000141`, `Wairagkar2018`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 14; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000141](https://openneuro.org/datasets/nm000141) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000141](https://nemar.org/dataexplorer/detail?dataset_id=nm000141) ### Examples ```pycon >>> from eegdash.dataset import NM000141 >>> dataset = NM000141(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000142(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Ear-EEG motor execution dataset from Wu et al 2020 * **Study:** `nm000142` (NeMAR) * **Author (year):** `Wu2020` * **Canonical:** — Also importable as: `NM000142`, `Wu2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 6; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000142](https://openneuro.org/datasets/nm000142) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000142](https://nemar.org/dataexplorer/detail?dataset_id=nm000142) ### Examples ```pycon >>> from eegdash.dataset import NM000142 >>> dataset = NM000142(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000143(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI2003_IVa Motor Imagery dataset * **Study:** `nm000143` (NeMAR) * **Author (year):** `BNCI2003` * **Canonical:** `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa` Also importable as: `NM000143`, `BNCI2003`, `BCICIII_IVa`, `BCICompIII_IVa`, `BNCI2003_IVa`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000143](https://openneuro.org/datasets/nm000143) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000143](https://nemar.org/dataexplorer/detail?dataset_id=nm000143) ### Examples ```pycon >>> from eegdash.dataset import NM000143 >>> dataset = NM000143(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCICIII_IVa', 'BCICompIII_IVa', 'BNCI2003_IVa']* ### *class* eegdash.dataset.NM000144(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-004 Mental tasks dataset * **Study:** `nm000144` (NeMAR) * **Author (year):** `Scherer2015` * **Canonical:** — Also importable as: `NM000144`, `Scherer2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 9; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000144](https://openneuro.org/datasets/nm000144) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000144](https://nemar.org/dataexplorer/detail?dataset_id=nm000144) ### Examples ```pycon >>> from eegdash.dataset import NM000144 >>> dataset = NM000144(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000145(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Munich Motor Imagery dataset * **Study:** `nm000145` (NeMAR) * **Author (year):** `GrosseWentrup2009` * **Canonical:** — Also importable as: `NM000145`, `GrosseWentrup2009`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000145](https://openneuro.org/datasets/nm000145) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000145](https://nemar.org/dataexplorer/detail?dataset_id=nm000145) ### Examples ```pycon >>> from eegdash.dataset import NM000145 >>> dataset = NM000145(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000146(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Weibo et al 2014 * **Study:** `nm000146` (NeMAR) * **Author (year):** `Yi2014` * **Canonical:** `Weibo2014` Also importable as: `NM000146`, `Yi2014`, `Weibo2014`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 10; recordings: 10; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000146](https://openneuro.org/datasets/nm000146) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000146](https://nemar.org/dataexplorer/detail?dataset_id=nm000146) ### Examples ```pycon >>> from eegdash.dataset import NM000146 >>> dataset = NM000146(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Weibo2014']* ### *class* eegdash.dataset.NM000147(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RomaniBF2025ERP * **Study:** `nm000147` (NeMAR) * **Author (year):** `RomaniBF2025` * **Canonical:** `Romani2025` Also importable as: `NM000147`, `RomaniBF2025`, `Romani2025`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 22; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000147](https://openneuro.org/datasets/nm000147) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000147](https://nemar.org/dataexplorer/detail?dataset_id=nm000147) DOI: [https://doi.org/10.48550/arXiv.2510.10169](https://doi.org/10.48550/arXiv.2510.10169) ### Examples ```pycon >>> from eegdash.dataset import NM000147 >>> dataset = NM000147(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Romani2025']* ### *class* eegdash.dataset.NM000148(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery BCI dataset with pupillometry augmentation * **Study:** `nm000148` (NeMAR) * **Author (year):** `Rozado2015` * **Canonical:** — Also importable as: `NM000148`, `Rozado2015`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 30; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000148](https://openneuro.org/datasets/nm000148) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000148](https://nemar.org/dataexplorer/detail?dataset_id=nm000148) ### Examples ```pycon >>> from eegdash.dataset import NM000148 >>> dataset = NM000148(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000149(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2019-001 Motor Imagery dataset for Spinal Cord Injury patients * **Study:** `nm000149` (NeMAR) * **Author (year):** `Ofner2019` * **Canonical:** — Also importable as: `NM000149`, `Ofner2019`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000149](https://openneuro.org/datasets/nm000149) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000149](https://nemar.org/dataexplorer/detail?dataset_id=nm000149) ### Examples ```pycon >>> from eegdash.dataset import NM000149 >>> dataset = NM000149(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000150(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Liu2025 - NEMAR Dataset * **Study:** `nm000150` (NeMAR) * **Author (year):** `Liu2025_NEMAR` * **Canonical:** — Also importable as: `NM000150`, `Liu2025_NEMAR`. Modality: `eeg`. Subjects: 0; recordings: 0; tasks: 0. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000150](https://openneuro.org/datasets/nm000150) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000150](https://nemar.org/dataexplorer/detail?dataset_id=nm000150) ### Examples ```pycon >>> from eegdash.dataset import NM000150 >>> dataset = NM000150(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000151(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset for three imaginary states of the same upper extremity * **Study:** `nm000151` (NeMAR) * **Author (year):** `Tavakolan2017` * **Canonical:** — Also importable as: `NM000151`, `Tavakolan2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000151](https://openneuro.org/datasets/nm000151) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000151](https://nemar.org/dataexplorer/detail?dataset_id=nm000151) ### Examples ```pycon >>> from eegdash.dataset import NM000151 >>> dataset = NM000151(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000152(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Upper-limb elbow-centered motor imagery dataset (10 classes) * **Study:** `nm000152` (NeMAR) * **Author (year):** `Zhang2017` * **Canonical:** — Also importable as: `NM000152`, `Zhang2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 12; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000152](https://openneuro.org/datasets/nm000152) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000152](https://nemar.org/dataexplorer/detail?dataset_id=nm000152) ### Examples ```pycon >>> from eegdash.dataset import NM000152 >>> dataset = NM000152(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000155(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Caillet et al 2023 * **Study:** `nm000155` (NeMAR) * **Author (year):** `Caillet2023` * **Canonical:** — Also importable as: `NM000155`, `Caillet2023`. Modality: `emg`. Subjects: 6; recordings: 11; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000155](https://openneuro.org/datasets/nm000155) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000155](https://nemar.org/dataexplorer/detail?dataset_id=nm000155) DOI: [https://doi.org/10.7910/DVN/F9GWIW](https://doi.org/10.7910/DVN/F9GWIW) ### Examples ```pycon >>> from eegdash.dataset import NM000155 >>> dataset = NM000155(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000157(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-B * **Study:** `nm000157` (NeMAR) * **Author (year):** `Mainsah2025` * **Canonical:** — Also importable as: `NM000157`, `Mainsah2025`. Modality: `eeg`. Subjects: 19; recordings: 544; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000157](https://openneuro.org/datasets/nm000157) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000157](https://nemar.org/dataexplorer/detail?dataset_id=nm000157) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000157 >>> dataset = NM000157(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000158(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset from the study on motor imagery * **Study:** `nm000158` (NeMAR) * **Author (year):** `Liu2024` * **Canonical:** — Also importable as: `NM000158`, `Liu2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 50; recordings: 50; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000158](https://openneuro.org/datasets/nm000158) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000158](https://nemar.org/dataexplorer/detail?dataset_id=nm000158) ### Examples ```pycon >>> from eegdash.dataset import NM000158 >>> dataset = NM000158(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000159(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Avrillon et al 2024 * **Study:** `nm000159` (NeMAR) * **Author (year):** `Avrillon2024` * **Canonical:** — Also importable as: `NM000159`, `Avrillon2024`. Modality: `emg`. Subjects: 16; recordings: 124; tasks: 8. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000159](https://openneuro.org/datasets/nm000159) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000159](https://nemar.org/dataexplorer/detail?dataset_id=nm000159) DOI: [https://doi.org/10.7910/DVN/L9OQY7](https://doi.org/10.7910/DVN/L9OQY7) ### Examples ```pycon >>> from eegdash.dataset import NM000159 >>> dataset = NM000159(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000160(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-joint upper-limb MI dataset from Yi et al. 2025 * **Study:** `nm000160` (NeMAR) * **Author (year):** `Yi2025` * **Canonical:** — Also importable as: `NM000160`, `Yi2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 18; recordings: 141; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000160](https://openneuro.org/datasets/nm000160) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000160](https://nemar.org/dataexplorer/detail?dataset_id=nm000160) ### Examples ```pycon >>> from eegdash.dataset import NM000160 >>> dataset = NM000160(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000161(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2024-001 Handwritten Character Classification dataset * **Study:** `nm000161` (NeMAR) * **Author (year):** `Crell2024` * **Canonical:** — Also importable as: `NM000161`, `Crell2024`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 40; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000161](https://openneuro.org/datasets/nm000161) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000161](https://nemar.org/dataexplorer/detail?dataset_id=nm000161) ### Examples ```pycon >>> from eegdash.dataset import NM000161 >>> dataset = NM000161(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000162(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-001 Motor Kinematics Reaching dataset * **Study:** `nm000162` (NeMAR) * **Author (year):** `Srisrisawang2025` * **Canonical:** `BNCI2025` Also importable as: `NM000162`, `Srisrisawang2025`, `BNCI2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000162](https://openneuro.org/datasets/nm000162) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000162](https://nemar.org/dataexplorer/detail?dataset_id=nm000162) ### Examples ```pycon >>> from eegdash.dataset import NM000162 >>> dataset = NM000162(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2025']* ### *class* eegdash.dataset.NM000163(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP and Burst-VEP dataset from Castillos et al. (2023) * **Study:** `nm000163` (NeMAR) * **Author (year):** `Castillos2023_VEP` * **Canonical:** — Also importable as: `NM000163`, `Castillos2023_VEP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000163](https://openneuro.org/datasets/nm000163) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000163](https://nemar.org/dataexplorer/detail?dataset_id=nm000163) ### Examples ```pycon >>> from eegdash.dataset import NM000163 >>> dataset = NM000163(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000165(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) MUniverse Grison et al 2025 * **Study:** `nm000165` (NeMAR) * **Author (year):** `Grison2025` * **Canonical:** — Also importable as: `NM000165`, `Grison2025`. Modality: `emg`. Subjects: 1; recordings: 10; tasks: 10. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000165](https://openneuro.org/datasets/nm000165) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000165](https://nemar.org/dataexplorer/detail?dataset_id=nm000165) DOI: [https://doi.org/10.7910/DVN/ID1WNQ](https://doi.org/10.7910/DVN/ID1WNQ) ### Examples ```pycon >>> from eegdash.dataset import NM000165 >>> dataset = NM000165(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000166(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) M3CV: Multi-subject, Multi-session, Multi-task EEG Database * **Study:** `nm000166` (NeMAR) * **Author (year):** `Huang2018` * **Canonical:** — Also importable as: `NM000166`, `Huang2018`. Modality: `eeg`. Subjects: 95; recordings: 2469; tasks: 13. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000166](https://openneuro.org/datasets/nm000166) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000166](https://nemar.org/dataexplorer/detail?dataset_id=nm000166) DOI: [https://doi.org/10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666) ### Examples ```pycon >>> from eegdash.dataset import NM000166 >>> dataset = NM000166(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000167(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery dataset from Ma et al. 2020 * **Study:** `nm000167` (NeMAR) * **Author (year):** `Ma2020` * **Canonical:** — Also importable as: `NM000167`, `Ma2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 375; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000167](https://openneuro.org/datasets/nm000167) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000167](https://nemar.org/dataexplorer/detail?dataset_id=nm000167) ### Examples ```pycon >>> from eegdash.dataset import NM000167 >>> dataset = NM000167(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000168(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-013 Error-Related Potentials dataset * **Study:** `nm000168` (NeMAR) * **Author (year):** `Chavarriaga2015` * **Canonical:** `Chavarriaga2010` Also importable as: `NM000168`, `Chavarriaga2015`, `Chavarriaga2010`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 6; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000168](https://openneuro.org/datasets/nm000168) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000168](https://nemar.org/dataexplorer/detail?dataset_id=nm000168) ### Examples ```pycon >>> from eegdash.dataset import NM000168 >>> dataset = NM000168(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chavarriaga2010']* ### *class* eegdash.dataset.NM000169(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-008 P300 dataset (ALS patients) * **Study:** `nm000169` (NeMAR) * **Author (year):** `Riccio2014` * **Canonical:** `BNCI2014008` Also importable as: `NM000169`, `Riccio2014`, `BNCI2014008`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 8; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000169](https://openneuro.org/datasets/nm000169) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000169](https://nemar.org/dataexplorer/detail?dataset_id=nm000169) ### Examples ```pycon >>> from eegdash.dataset import NM000169 >>> dataset = NM000169(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014008']* ### *class* eegdash.dataset.NM000170(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2025-002 Continuous 2D Trajectory Decoding dataset * **Study:** `nm000170` (NeMAR) * **Author (year):** `Pulferer2025` * **Canonical:** — Also importable as: `NM000170`, `Pulferer2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 10; recordings: 90; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000170](https://openneuro.org/datasets/nm000170) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000170](https://nemar.org/dataexplorer/detail?dataset_id=nm000170) ### Examples ```pycon >>> from eegdash.dataset import NM000170 >>> dataset = NM000170(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000171(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-002 Motor Imagery dataset * **Study:** `nm000171` (NeMAR) * **Author (year):** `Steyrl2014` * **Canonical:** `BNCI2014002` Also importable as: `NM000171`, `Steyrl2014`, `BNCI2014002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
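The Notes sections above say `query` accepts MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS`. A local sketch of those filter semantics, shown with the standard MongoDB `$in` operator as an example (which operators EEGDash itself accepts is an assumption here, and the matching is performed by the metadata service, not client-side like this):

```python
# Sketch (assumption): semantics of MongoDB-style filters, as a local matcher.
# Illustrates the query style only; EEGDash evaluates filters server-side.
def matches(record: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict):
            # Operator form, e.g. {"subject": {"$in": ["sub-01", "sub-02"]}}
            if "$in" in cond and value not in cond["$in"]:
                return False
        elif value != cond:
            # Plain form is an exact-equality match.
            return False
    return True

records = [
    {"subject": "sub-01", "task": "motor"},
    {"subject": "sub-03", "task": "rest"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["sub-01", "sub-02"]}})]
print(len(hits))
# → 1
```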
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000171](https://openneuro.org/datasets/nm000171) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000171](https://nemar.org/dataexplorer/detail?dataset_id=nm000171) ### Examples ```pycon >>> from eegdash.dataset import NM000171 >>> dataset = NM000171(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014002']* ### *class* eegdash.dataset.NM000172(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) High-gamma dataset described in Schirrmeister et al. 2017 * **Study:** `nm000172` (NeMAR) * **Author (year):** `Schirrmeister2017` * **Canonical:** — Also importable as: `NM000172`, `Schirrmeister2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 14; recordings: 28; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000172](https://openneuro.org/datasets/nm000172) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000172](https://nemar.org/dataexplorer/detail?dataset_id=nm000172) ### Examples ```pycon >>> from eegdash.dataset import NM000172 >>> dataset = NM000172(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000173(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Ofner et al 2017 * **Study:** `nm000173` (NeMAR) * **Author (year):** `Ofner2017` * **Canonical:** — Also importable as: `NM000173`, `Ofner2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 300; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000173](https://openneuro.org/datasets/nm000173) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000173](https://nemar.org/dataexplorer/detail?dataset_id=nm000173) ### Examples ```pycon >>> from eegdash.dataset import NM000173 >>> dataset = NM000173(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000175(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) fNIRS Finger Tapping * **Study:** `nm000175` (NeMAR) * **Author (year):** `Luke2024` * **Canonical:** — Also importable as: `NM000175`, `Luke2024`. Modality: `fnirs`. Subjects: 5; recordings: 5; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000175](https://openneuro.org/datasets/nm000175) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000175](https://nemar.org/dataexplorer/detail?dataset_id=nm000175) ### Examples ```pycon >>> from eegdash.dataset import NM000175 >>> dataset = NM000175(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000176(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study K — 9x8 adaptive/checkerboard, 2 sessions (5 healthy subjects) * **Study:** `nm000176` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI` * **Canonical:** `BigP3BCI_StudyK`, `BigP3BCI_K` Also importable as: `NM000176`, `Mainsah2025_BigP3BCI`, `BigP3BCI_StudyK`, `BigP3BCI_K`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 5; recordings: 128; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000176](https://openneuro.org/datasets/nm000176) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000176](https://nemar.org/dataexplorer/detail?dataset_id=nm000176) ### Examples ```pycon >>> from eegdash.dataset import NM000176 >>> dataset = NM000176(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyK', 'BigP3BCI_K']* ### *class* eegdash.dataset.NM000179(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) LEMON: MPI Leipzig Mind-Brain-Body EEG (Resting State) * **Study:** `nm000179` (NeMAR) * **Author (year):** `Babayan2018` * **Canonical:** `LEMON` Also importable as: `NM000179`, `Babayan2018`, `LEMON`. Modality: `eeg`. Subjects: 215; recordings: 215; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000179](https://openneuro.org/datasets/nm000179) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000179](https://nemar.org/dataexplorer/detail?dataset_id=nm000179) DOI: [https://doi.org/10.1038/sdata.2018.308](https://doi.org/10.1038/sdata.2018.308) ### Examples ```pycon >>> from eegdash.dataset import NM000179 >>> dataset = NM000179(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['LEMON']* ### *class* eegdash.dataset.NM000180(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brennan2019: EEG during Alice in Wonderland Listening * **Study:** `nm000180` (NeMAR) * **Author (year):** `Brennan2019` * **Canonical:** — Also importable as: `NM000180`, `Brennan2019`. Modality: `eeg`. Subjects: 45; recordings: 45; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000180](https://openneuro.org/datasets/nm000180) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000180](https://nemar.org/dataexplorer/detail?dataset_id=nm000180) DOI: [https://doi.org/10.1371/journal.pone.0207741](https://doi.org/10.1371/journal.pone.0207741) ### Examples ```pycon >>> from eegdash.dataset import NM000180 >>> dataset = NM000180(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000181(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) NMT: Neurodiagnostic Montage Template Scalp EEG * **Study:** `nm000181` (NeMAR) * **Author (year):** `Khan2019` * **Canonical:** — Also importable as: `NM000181`, `Khan2019`. Modality: `eeg`. Subjects: 2417; recordings: 2417; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000181](https://openneuro.org/datasets/nm000181) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000181](https://nemar.org/dataexplorer/detail?dataset_id=nm000181) DOI: [https://doi.org/10.5281/zenodo.10909103](https://doi.org/10.5281/zenodo.10909103) ### Examples ```pycon >>> from eegdash.dataset import NM000181 >>> dataset = NM000181(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000185(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sleep-EDF Expanded: Whole-Night PSG Recordings * **Study:** `nm000185` (NeMAR) * **Author (year):** `Kemp2000` * **Canonical:** `SleepEDF`, `SleepEDFExpanded` Also importable as: `NM000185`, `Kemp2000`, `SleepEDF`, `SleepEDFExpanded`. Modality: `eeg`. Subjects: 100; recordings: 197; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000185](https://openneuro.org/datasets/nm000185) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000185](https://nemar.org/dataexplorer/detail?dataset_id=nm000185) DOI: [https://doi.org/10.13026/C2X676](https://doi.org/10.13026/C2X676) ### Examples ```pycon >>> from eegdash.dataset import NM000185 >>> dataset = NM000185(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['SleepEDF', 'SleepEDFExpanded']* ### *class* eegdash.dataset.NM000186(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study E — 6x6 checkerboard (8 healthy subjects) * **Study:** `nm000186` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_E` * **Canonical:** `BigP3BCI_StudyE`, `BigP3BCI_E` Also importable as: `NM000186`, `Mainsah2025_BigP3BCI_E`, `BigP3BCI_StudyE`, `BigP3BCI_E`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 8; recordings: 88; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000186](https://openneuro.org/datasets/nm000186) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000186](https://nemar.org/dataexplorer/detail?dataset_id=nm000186) ### Examples ```pycon >>> from eegdash.dataset import NM000186 >>> dataset = NM000186(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyE', 'BigP3BCI_E']* ### *class* eegdash.dataset.NM000187(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study N — 9x8 dry/wet electrode comparison (8 ALS subjects) * **Study:** `nm000187` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_N` * **Canonical:** `BigP3BCI_StudyN` Also importable as: `NM000187`, `Mainsah2025_BigP3BCI_N`, `BigP3BCI_StudyN`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 160; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000187](https://openneuro.org/datasets/nm000187) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000187](https://nemar.org/dataexplorer/detail?dataset_id=nm000187) ### Examples ```pycon >>> from eegdash.dataset import NM000187 >>> dataset = NM000187(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyN']* ### *class* eegdash.dataset.NM000188(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2014-009 P300 dataset * **Study:** `nm000188` (NeMAR) * **Author (year):** `Arico2014` * **Canonical:** `BNCI2014_009_P300` Also importable as: `NM000188`, `Arico2014`, `BNCI2014_009_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000188](https://openneuro.org/datasets/nm000188) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000188](https://nemar.org/dataexplorer/detail?dataset_id=nm000188) ### Examples ```pycon >>> from eegdash.dataset import NM000188 >>> dataset = NM000188(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2014_009_P300']* ### *class* eegdash.dataset.NM000189(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-003 P300 dataset * **Study:** `nm000189` (NeMAR) * **Author (year):** `Schreuder2015_P300` * **Canonical:** `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE` Also importable as: `NM000189`, `Schreuder2015_P300`, `BNCI2015_P300`, `BNCI2015_003_P300`, `BNCI2015_003_AMUSE`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000189](https://openneuro.org/datasets/nm000189) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000189](https://nemar.org/dataexplorer/detail?dataset_id=nm000189) ### Examples ```pycon >>> from eegdash.dataset import NM000189 >>> dataset = NM000189(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_P300', 'BNCI2015_003_P300', 'BNCI2015_003_AMUSE']* ### *class* eegdash.dataset.NM000190(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-012 PASS2D P300 dataset * **Study:** `nm000190` (NeMAR) * **Author (year):** `Hohne2015` * **Canonical:** — Also importable as: `NM000190`, `Hohne2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 20; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000190](https://openneuro.org/datasets/nm000190) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000190](https://nemar.org/dataexplorer/detail?dataset_id=nm000190) ### Examples ```pycon >>> from eegdash.dataset import NM000190 >>> dataset = NM000190(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000191(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study F — 6x6 multi-paradigm, 3 sessions (10 healthy subjects) * **Study:** `nm000191` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_F` * **Canonical:** `BigP3BCI_StudyF`, `BigP3BCI_F` Also importable as: `NM000191`, `Mainsah2025_BigP3BCI_F`, `BigP3BCI_StudyF`, `BigP3BCI_F`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 10; recordings: 270; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000191](https://openneuro.org/datasets/nm000191) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000191](https://nemar.org/dataexplorer/detail?dataset_id=nm000191) ### Examples ```pycon >>> from eegdash.dataset import NM000191 >>> dataset = NM000191(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyF', 'BigP3BCI_F']* ### *class* eegdash.dataset.NM000192(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-006 Music BCI dataset * **Study:** `nm000192` (NeMAR) * **Author (year):** `Treder2015_BNCI_006_Music` * **Canonical:** `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI` Also importable as: `NM000192`, `Treder2015_BNCI_006_Music`, `BNCI2015_BNCI_006_Music`, `BNCI_2015_006_Music`, `BNCI2015_006_MusicBCI`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 11; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000192](https://openneuro.org/datasets/nm000192) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000192](https://nemar.org/dataexplorer/detail?dataset_id=nm000192) ### Examples ```pycon >>> from eegdash.dataset import NM000192 >>> dataset = NM000192(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_BNCI_006_Music', 'BNCI_2015_006_Music', 'BNCI2015_006_MusicBCI']* ### *class* eegdash.dataset.NM000193(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kojima2024A P300 dataset * **Study:** `nm000193` (NeMAR) * **Author (year):** `Kojima2024A_P300` * **Canonical:** — Also importable as: `NM000193`, `Kojima2024A_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 11; recordings: 66; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000193](https://openneuro.org/datasets/nm000193) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000193](https://nemar.org/dataexplorer/detail?dataset_id=nm000193) ### Examples ```pycon >>> from eegdash.dataset import NM000193 >>> dataset = NM000193(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000194(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-010 RSVP P300 dataset * **Study:** `nm000194` (NeMAR) * **Author (year):** `Acqualagna2015` * **Canonical:** — Also importable as: `NM000194`, `Acqualagna2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000194](https://openneuro.org/datasets/nm000194) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000194](https://nemar.org/dataexplorer/detail?dataset_id=nm000194) ### Examples ```pycon >>> from eegdash.dataset import NM000194 >>> dataset = NM000194(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000195(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mixture of LLP and EM for a visual matrix speller (ERP) dataset from Hübner et al. (2018) * **Study:** `nm000195` (NeMAR) * **Author (year):** `Hubner2018` * **Canonical:** `Huebner2018` Also importable as: `NM000195`, `Hubner2018`, `Huebner2018`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000195](https://openneuro.org/datasets/nm000195) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000195](https://nemar.org/dataexplorer/detail?dataset_id=nm000195) ### Examples ```pycon >>> from eegdash.dataset import NM000195 >>> dataset = NM000195(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huebner2018']* ### *class* eegdash.dataset.NM000196(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2015) * **Study:** `nm000196` (NeMAR) * **Author (year):** `Thielen2015` * **Canonical:** — Also importable as: `NM000196`, `Thielen2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 36; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000196](https://openneuro.org/datasets/nm000196) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000196](https://nemar.org/dataexplorer/detail?dataset_id=nm000196) ### Examples ```pycon >>> from eegdash.dataset import NM000196 >>> dataset = NM000196(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000197(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study M — 9x8 adaptive/checkerboard (21 ALS subjects) * **Study:** `nm000197` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_M` * **Canonical:** `BigP3BCI_StudyM`, `BigP3BCI_M` Also importable as: `NM000197`, `Mainsah2025_BigP3BCI_M`, `BigP3BCI_StudyM`, `BigP3BCI_M`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 21; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000197](https://openneuro.org/datasets/nm000197) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000197](https://nemar.org/dataexplorer/detail?dataset_id=nm000197) ### Examples ```pycon >>> from eegdash.dataset import NM000197 >>> dataset = NM000197(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyM', 'BigP3BCI_M']* ### *class* eegdash.dataset.NM000198(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-008 Center Speller P300 dataset * **Study:** `nm000198` (NeMAR) * **Author (year):** `Treder2015_P300` * **Canonical:** `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller` Also importable as: `NM000198`, `Treder2015_P300`, `BNCI2015_008_P300`, `BNCI2015_008_CenterSpeller`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000198](https://openneuro.org/datasets/nm000198) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000198](https://nemar.org/dataexplorer/detail?dataset_id=nm000198) ### Examples ```pycon >>> from eegdash.dataset import NM000198 >>> dataset = NM000198(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_008_P300', 'BNCI2015_008_CenterSpeller']* ### *class* eegdash.dataset.NM000199(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Learning from label proportions for a visual matrix speller (ERP) * **Study:** `nm000199` (NeMAR) * **Author (year):** `Hubner2017` * **Canonical:** `Huebner2017` Also importable as: `NM000199`, `Hubner2017`, `Huebner2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 342; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000199](https://openneuro.org/datasets/nm000199) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000199](https://nemar.org/dataexplorer/detail?dataset_id=nm000199) ### Examples ```pycon >>> from eegdash.dataset import NM000199 >>> dataset = NM000199(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Huebner2017']* ### *class* eegdash.dataset.NM000200(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study I — 9x8 checkerboard/performance-based (13 healthy subjects) * **Study:** `nm000200` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_I` * **Canonical:** `BigP3BCI_StudyI`, `BigP3BCI_I` Also importable as: `NM000200`, `Mainsah2025_BigP3BCI_I`, `BigP3BCI_StudyI`, `BigP3BCI_I`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 265; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000200](https://openneuro.org/datasets/nm000200) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000200](https://nemar.org/dataexplorer/detail?dataset_id=nm000200) ### Examples ```pycon >>> from eegdash.dataset import NM000200 >>> dataset = NM000200(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyI', 'BigP3BCI_I']* ### *class* eegdash.dataset.NM000201(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) ERP paradigm of the Mobile BCI dataset * **Study:** `nm000201` (NeMAR) * **Author (year):** `Lee2021_ERP` * **Canonical:** — Also importable as: `NM000201`, `Lee2021_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 113; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000201](https://openneuro.org/datasets/nm000201) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000201](https://nemar.org/dataexplorer/detail?dataset_id=nm000201) ### Examples ```pycon >>> from eegdash.dataset import NM000201 >>> dataset = NM000201(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000204(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Bluetooth speaker experiment (14 subjects, 6 classes, 31 EEG ch) * **Study:** `nm000204` (NeMAR) * **Author (year):** `Lee2024_Bluetooth_speaker_14` * **Canonical:** — Also importable as: `NM000204`, `Lee2024_Bluetooth_speaker_14`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 420; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000204](https://openneuro.org/datasets/nm000204) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000204](https://nemar.org/dataexplorer/detail?dataset_id=nm000204) ### Examples ```pycon >>> from eegdash.dataset import NM000204 >>> dataset = NM000204(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000205(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP collaborative BCI dataset from Zheng et al. (2020) * **Study:** `nm000205` (NeMAR) * **Author (year):** `Zheng2020` * **Canonical:** — Also importable as: `NM000205`, `Zheng2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 84; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000205](https://openneuro.org/datasets/nm000205) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000205](https://nemar.org/dataexplorer/detail?dataset_id=nm000205) ### Examples ```pycon >>> from eegdash.dataset import NM000205 >>> dataset = NM000205(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000206(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Neuroergonomic 2021 dataset * **Study:** `nm000206` (NeMAR) * **Author (year):** `Hinss2021_Neuroergonomic` * **Canonical:** `Hinss2021` Also importable as: `NM000206`, `Hinss2021_Neuroergonomic`, `Hinss2021`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000206](https://openneuro.org/datasets/nm000206) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000206](https://nemar.org/dataexplorer/detail?dataset_id=nm000206) ### Examples ```pycon >>> from eegdash.dataset import NM000206 >>> dataset = NM000206(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hinss2021']* ### *class* eegdash.dataset.NM000207(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Kojima2024B P300 dataset * **Study:** `nm000207` (NeMAR) * **Author (year):** `Kojima2024B_P300` * **Canonical:** — Also importable as: `NM000207`, `Kojima2024B_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 180; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000207](https://openneuro.org/datasets/nm000207) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000207](https://nemar.org/dataexplorer/detail?dataset_id=nm000207) ### Examples ```pycon >>> from eegdash.dataset import NM000207 >>> dataset = NM000207(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000208(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Door lock control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000208` (NeMAR) * **Author (year):** `Lee2024_Door_lock_control` * **Canonical:** — Also importable as: `NM000208`, `Lee2024_Door_lock_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 14; recordings: 434; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000208](https://openneuro.org/datasets/nm000208) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000208](https://nemar.org/dataexplorer/detail?dataset_id=nm000208) ### Examples ```pycon >>> from eegdash.dataset import NM000208 >>> dataset = NM000208(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000209(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery + spatial attention dataset from Forenzo & He (2023) * **Study:** `nm000209` (NeMAR) * **Author (year):** `Forenzo2023` * **Canonical:** — Also importable as: `NM000209`, `Forenzo2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 150; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data.
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000209](https://openneuro.org/datasets/nm000209) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000209](https://nemar.org/dataexplorer/detail?dataset_id=nm000209) ### Examples ```pycon >>> from eegdash.dataset import NM000209 >>> dataset = NM000209(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000210(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BCIAUT-P300 dataset for autism from Simoes et al. (2020) * **Study:** `nm000210` (NeMAR) * **Author (year):** `Simoes2020` * **Canonical:** `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT` Also importable as: `NM000210`, `Simoes2020`, `BCIAUTP300`, `BCIAUT_P300`, `BCIAUT`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Development`. Subjects: 15; recordings: 210; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`.
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000210](https://openneuro.org/datasets/nm000210) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000210](https://nemar.org/dataexplorer/detail?dataset_id=nm000210) ### Examples ```pycon >>> from eegdash.dataset import NM000210 >>> dataset = NM000210(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BCIAUTP300', 'BCIAUT_P300', 'BCIAUT']* ### *class* eegdash.dataset.NM000211(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) RSVP ERP dataset for authentication from Zhang et al. (2025) * **Study:** `nm000211` (NeMAR) * **Author (year):** `Zhang2025_RSVP` * **Canonical:** `Zhang2025` Also importable as: `NM000211`, `Zhang2025_RSVP`, `Zhang2025`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 240; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection.
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000211](https://openneuro.org/datasets/nm000211) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000211](https://nemar.org/dataexplorer/detail?dataset_id=nm000211) ### Examples ```pycon >>> from eegdash.dataset import NM000211 >>> dataset = NM000211(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhang2025']* ### *class* eegdash.dataset.NM000212(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-007 Motion VEP (mVEP) Speller dataset * **Study:** `nm000212` (NeMAR) * **Author (year):** `Schaeff2015` * **Canonical:** — Also importable as: `NM000212`, `Schaeff2015`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 32; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000212](https://openneuro.org/datasets/nm000212) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000212](https://nemar.org/dataexplorer/detail?dataset_id=nm000212) ### Examples ```pycon >>> from eegdash.dataset import NM000212 >>> dataset = NM000212(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000213(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Television control experiment (30 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000213` (NeMAR) * **Author (year):** `Lee2024_Television_control_30` * **Canonical:** — Also importable as: `NM000213`, `Lee2024_Television_control_30`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 30; recordings: 2300; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000213](https://openneuro.org/datasets/nm000213) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000213](https://nemar.org/dataexplorer/detail?dataset_id=nm000213) ### Examples ```pycon >>> from eegdash.dataset import NM000213 >>> dataset = NM000213(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000214(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) c-VEP dataset from Thielen et al. (2021) * **Study:** `nm000214` (NeMAR) * **Author (year):** `Thielen2021` * **Canonical:** — Also importable as: `NM000214`, `Thielen2021`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 30; recordings: 150; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000214](https://openneuro.org/datasets/nm000214) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000214](https://nemar.org/dataexplorer/detail?dataset_id=nm000214) ### Examples ```pycon >>> from eegdash.dataset import NM000214 >>> dataset = NM000214(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000215(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014b from a “Brain Invaders” experiment * **Study:** `nm000215` (NeMAR) * **Author (year):** `Korczowski2014_P300` * **Canonical:** `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b` Also importable as: `NM000215`, `Korczowski2014_P300`, `BrainInvaders2014b`, `BI2014b`, `BrainInvadersBI2014b`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 38; recordings: 38; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000215](https://openneuro.org/datasets/nm000215) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000215](https://nemar.org/dataexplorer/detail?dataset_id=nm000215) ### Examples ```pycon >>> from eegdash.dataset import NM000215 >>> dataset = NM000215(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2014b', 'BI2014b', 'BrainInvadersBI2014b']* ### *class* eegdash.dataset.NM000216(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015a from a “Brain Invaders” experiment * **Study:** `nm000216` (NeMAR) * **Author (year):** `Korczowski2015_P300` * **Canonical:** `BrainInvaders2015a`, `BI2015a` Also importable as: `NM000216`, `Korczowski2015_P300`, `BrainInvaders2015a`, `BI2015a`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 43; recordings: 129; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000216](https://openneuro.org/datasets/nm000216) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000216](https://nemar.org/dataexplorer/detail?dataset_id=nm000216) ### Examples ```pycon >>> from eegdash.dataset import NM000216 >>> dataset = NM000216(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2015a', 'BI2015a']* ### *class* eegdash.dataset.NM000217(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2015b from a “Brain Invaders” experiment * **Study:** `nm000217` (NeMAR) * **Author (year):** `Korczowski2015_P300_BI2015b` * **Canonical:** `BrainInvaders2015b`, `BI2015b` Also importable as: `NM000217`, `Korczowski2015_P300_BI2015b`, `BrainInvaders2015b`, `BI2015b`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 44; recordings: 176; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000217](https://openneuro.org/datasets/nm000217) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000217](https://nemar.org/dataexplorer/detail?dataset_id=nm000217) ### Examples ```pycon >>> from eegdash.dataset import NM000217 >>> dataset = NM000217(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2015b', 'BI2015b']* ### *class* eegdash.dataset.NM000218(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study H — 9x8 checkerboard with gaze conditions (16 healthy subjects) * **Study:** `nm000218` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_H` * **Canonical:** `BigP3BCI_StudyH`, `BigP3BCI_H` Also importable as: `NM000218`, `Mainsah2025_BigP3BCI_H`, `BigP3BCI_StudyH`, `BigP3BCI_H`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 16; recordings: 372; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000218](https://openneuro.org/datasets/nm000218) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000218](https://nemar.org/dataexplorer/detail?dataset_id=nm000218) ### Examples ```pycon >>> from eegdash.dataset import NM000218 >>> dataset = NM000218(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyH', 'BigP3BCI_H']* ### *class* eegdash.dataset.NM000219(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2020-002 Attention Shift (Covert Spatial Attention) dataset * **Study:** `nm000219` (NeMAR) * **Author (year):** `Reichert2020` * **Canonical:** `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention` Also importable as: `NM000219`, `Reichert2020`, `BNCI2020`, `BNCI2020_002_AttentionShift`, `BNCI2020_002_CovertSpatialAttention`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 18; recordings: 18; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000219](https://openneuro.org/datasets/nm000219) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000219](https://nemar.org/dataexplorer/detail?dataset_id=nm000219) ### Examples ```pycon >>> from eegdash.dataset import NM000219 >>> dataset = NM000219(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2020', 'BNCI2020_002_AttentionShift', 'BNCI2020_002_CovertSpatialAttention']* ### *class* eegdash.dataset.NM000221(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Alphawaves dataset * **Study:** `nm000221` (NeMAR) * **Author (year):** `Cattan2017` * **Canonical:** `Alphawaves`, `Rodrigues2017`, `AlphaWaves` Also importable as: `NM000221`, `Cattan2017`, `Alphawaves`, `Rodrigues2017`, `AlphaWaves`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 19; recordings: 19; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000221](https://openneuro.org/datasets/nm000221) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000221](https://nemar.org/dataexplorer/detail?dataset_id=nm000221) ### Examples ```pycon >>> from eegdash.dataset import NM000221 >>> dataset = NM000221(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Alphawaves', 'Rodrigues2017', 'AlphaWaves']* ### *class* eegdash.dataset.NM000222(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Air conditioner control experiment (10 subjects, 4 classes, 25 EEG ch) * **Study:** `nm000222` (NeMAR) * **Author (year):** `Lee2024_Air_conditioner_control` * **Canonical:** — Also importable as: `NM000222`, `Lee2024_Air_conditioner_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 305; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000222](https://openneuro.org/datasets/nm000222) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000222](https://nemar.org/dataexplorer/detail?dataset_id=nm000222) ### Examples ```pycon >>> from eegdash.dataset import NM000222 >>> dataset = NM000222(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000223(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Electric light control experiment (15 subjects, 4 classes, 31 EEG ch) * **Study:** `nm000223` (NeMAR) * **Author (year):** `Lee2024_Electric_light_control` * **Canonical:** — Also importable as: `NM000223`, `Lee2024_Electric_light_control`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 465; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000223](https://openneuro.org/datasets/nm000223) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000223](https://nemar.org/dataexplorer/detail?dataset_id=nm000223) ### Examples ```pycon >>> from eegdash.dataset import NM000223 >>> dataset = NM000223(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000225(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) PhysioNet 2018 Challenge: Sleep Arousal Detection PSG (Training) * **Study:** `nm000225` (NeMAR) * **Author (year):** `Ghassemi2018` * **Canonical:** — Also importable as: `NM000225`, `Ghassemi2018`. Modality: `eeg`. Subjects: 1983; recordings: 1983; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000225](https://openneuro.org/datasets/nm000225) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000225](https://nemar.org/dataexplorer/detail?dataset_id=nm000225) DOI: [https://doi.org/10.13026/6phb-r450](https://doi.org/10.13026/6phb-r450) ### Examples ```pycon >>> from eegdash.dataset import NM000225 >>> dataset = NM000225(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000226(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Zhou2016 * **Study:** `nm000226` (NeMAR) * **Author (year):** `Zhou2016_226` * **Canonical:** `Zhou2016_NEMAR` Also importable as: `NM000226`, `Zhou2016_226`, `Zhou2016_NEMAR`. Modality: `eeg`. Subjects: 4; recordings: 24; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000226](https://openneuro.org/datasets/nm000226) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000226](https://nemar.org/dataexplorer/detail?dataset_id=nm000226) DOI: [https://doi.org/10.82901/nemar.nm000115](https://doi.org/10.82901/nemar.nm000115) ### Examples ```pycon >>> from eegdash.dataset import NM000226 >>> dataset = NM000226(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Zhou2016_NEMAR']* ### *class* eegdash.dataset.NM000227(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI Motor Execution dataset from Guttmann-Flury et al 2025 * **Study:** `nm000227` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye` * **Canonical:** `GuttmannFlury2025_ME` Also importable as: `NM000227`, `GuttmannFlury2025_Eye`, `GuttmannFlury2025_ME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000227](https://openneuro.org/datasets/nm000227) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000227](https://nemar.org/dataexplorer/detail?dataset_id=nm000227) ### Examples ```pycon >>> from eegdash.dataset import NM000227 >>> dataset = NM000227(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['GuttmannFlury2025_ME']* ### *class* eegdash.dataset.NM000228(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Nieuwland et al. 2018: Multi-site N400 Replication Study * **Study:** `nm000228` (NeMAR) * **Author (year):** `Nieuwland2018` * **Canonical:** — Also importable as: `NM000228`, `Nieuwland2018`. Modality: `eeg`. Subjects: 356; recordings: 397; tasks: 2. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000228](https://openneuro.org/datasets/nm000228) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000228](https://nemar.org/dataexplorer/detail?dataset_id=nm000228) DOI: [https://doi.org/10.7554/eLife.33468](https://doi.org/10.7554/eLife.33468) ### Examples ```pycon >>> from eegdash.dataset import NM000228 >>> dataset = NM000228(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000229(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Gwilliams et al. 2023 — Introducing MEG-MASC: a high-quality magneto-encephalography dataset for evaluating natural speech processing * **Study:** `nm000229` (NeMAR) * **Author (year):** `Gwilliams2023` * **Canonical:** `MASC_MEG`, `MEG_MASC` Also importable as: `NM000229`, `Gwilliams2023`, `MASC_MEG`, `MEG_MASC`. Modality: `eeg`. 
Subjects: 29; recordings: 1360; tasks: 79. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000229](https://openneuro.org/datasets/nm000229) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000229](https://nemar.org/dataexplorer/detail?dataset_id=nm000229) DOI: [https://doi.org/10.1038/s41597-023-02752-5](https://doi.org/10.1038/s41597-023-02752-5) ### Examples ```pycon >>> from eegdash.dataset import NM000229 >>> dataset = NM000229(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['MASC_MEG', 'MEG_MASC']* ### *class* eegdash.dataset.NM000230(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lower-limb MI dataset for knee pain patients from Zuo et al. 2025 * **Study:** `nm000230` (NeMAR) * **Author (year):** `Zuo2025` * **Canonical:** — Also importable as: `NM000230`, `Zuo2025`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 30; recordings: 118; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
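"MongoDB-style filters" in the Notes means plain values match by equality while operator documents such as `{"$in": [...]}` match by membership. The toy matcher below illustrates those semantics against in-memory records; it is an assumption-laden sketch, not the library's backend.

```python
def matches(record: dict, query: dict) -> bool:
    """Toy evaluator for a small subset of MongoDB query semantics:
    plain values mean equality, {"$in": [...]} means membership."""
    for field, cond in query.items():
        value = record.get(field)
        if isinstance(cond, dict) and "$in" in cond:
            if value not in cond["$in"]:
                return False
        elif value != cond:
            return False
    return True

records = [
    {"dataset": "nm000230", "subject": "001"},
    {"dataset": "nm000230", "subject": "002"},
]
hits = [r for r in records if matches(r, {"subject": {"$in": ["001"]}})]
print(len(hits))
# 1
```

In practice the same filter shape would be passed as the `query` argument, e.g. `NM000230(cache_dir="./data", query={"subject": {"$in": ["001"]}})`, provided `subject` is among `ALLOWED_QUERY_FIELDS`.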
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000230](https://openneuro.org/datasets/nm000230) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000230](https://nemar.org/dataexplorer/detail?dataset_id=nm000230) ### Examples ```pycon >>> from eegdash.dataset import NM000230 >>> dataset = NM000230(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000231(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset from Hoffmann et al 2008 * **Study:** `nm000231` (NeMAR) * **Author (year):** `Hoffmann2008` * **Canonical:** `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset` Also importable as: `NM000231`, `Hoffmann2008`, `EPFLP300`, `EPFL_P300`, `EPFLP300Dataset`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 8; recordings: 192; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000231](https://openneuro.org/datasets/nm000231) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000231](https://nemar.org/dataexplorer/detail?dataset_id=nm000231) ### Examples ```pycon >>> from eegdash.dataset import NM000231 >>> dataset = NM000231(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['EPFLP300', 'EPFL_P300', 'EPFLP300Dataset']* ### *class* eegdash.dataset.NM000232(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) THINGS-EEG2: A large and rich EEG dataset for modeling human visual object recognition * **Study:** `nm000232` (NeMAR) * **Author (year):** `Gifford2019` * **Canonical:** — Also importable as: `NM000232`, `Gifford2019`. Modality: `eeg`. Subjects: 10; recordings: 638; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000232](https://openneuro.org/datasets/nm000232) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000232](https://nemar.org/dataexplorer/detail?dataset_id=nm000232) DOI: [https://doi.org/10.17605/OSF.IO/3JK45](https://doi.org/10.17605/OSF.IO/3JK45) ### Examples ```pycon >>> from eegdash.dataset import NM000232 >>> dataset = NM000232(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000234(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2015-009 AMUSE (Auditory Multi-class Spatial ERP) dataset * **Study:** `nm000234` (NeMAR) * **Author (year):** `Schreuder2015_ERP` * **Canonical:** `BNCI2015_ERP` Also importable as: `NM000234`, `Schreuder2015_ERP`, `BNCI2015_ERP`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 42; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000234](https://openneuro.org/datasets/nm000234) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000234](https://nemar.org/dataexplorer/detail?dataset_id=nm000234) ### Examples ```pycon >>> from eegdash.dataset import NM000234 >>> dataset = NM000234(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2015_ERP']* ### *class* eegdash.dataset.NM000235(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Eye-BCI multimodal MI/ME dataset from Guttmann-Flury et al 2025 * **Study:** `nm000235` (NeMAR) * **Author (year):** `GuttmannFlury2025_Eye_BCI` * **Canonical:** `GuttmannFlury2025_MIME` Also importable as: `NM000235`, `GuttmannFlury2025_Eye_BCI`, `GuttmannFlury2025_MIME`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 31; recordings: 63; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000235](https://openneuro.org/datasets/nm000235) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000235](https://nemar.org/dataexplorer/detail?dataset_id=nm000235) ### Examples ```pycon >>> from eegdash.dataset import NM000235 >>> dataset = NM000235(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['GuttmannFlury2025_MIME']* ### *class* eegdash.dataset.NM000236(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Dataset of an EEG-based BCI experiment in Virtual Reality using P300 * **Study:** `nm000236` (NeMAR) * **Author (year):** `Cattan2019_P300` * **Canonical:** — Also importable as: `NM000236`, `Cattan2019_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 21; recordings: 2520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000236](https://openneuro.org/datasets/nm000236) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000236](https://nemar.org/dataexplorer/detail?dataset_id=nm000236) ### Examples ```pycon >>> from eegdash.dataset import NM000236 >>> dataset = NM000236(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000237(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) 7-day motor imagery BCI EEG dataset from Zhou et al 2021 * **Study:** `nm000237` (NeMAR) * **Author (year):** `Zhou2021` * **Canonical:** — Also importable as: `NM000237`, `Zhou2021`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 20; recordings: 833; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000237](https://openneuro.org/datasets/nm000237) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000237](https://nemar.org/dataexplorer/detail?dataset_id=nm000237) ### Examples ```pycon >>> from eegdash.dataset import NM000237 >>> dataset = NM000237(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000238(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) SparrKULee: A Speech-Evoked Auditory Response Repository from KU Leuven, Containing the EEG of 85 Participants * **Study:** `nm000238` (NeMAR) * **Author (year):** `Accou2024` * **Canonical:** — Also importable as: `NM000238`, `Accou2024`. Modality: `eeg`. Subjects: 87; recordings: 4088; tasks: 366. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000238](https://openneuro.org/datasets/nm000238) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000238](https://nemar.org/dataexplorer/detail?dataset_id=nm000238) DOI: [https://doi.org/10.48804/K3VSND](https://doi.org/10.48804/K3VSND) ### Examples ```pycon >>> from eegdash.dataset import NM000238 >>> dataset = NM000238(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000239(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P-ary m-sequence-based c-VEP dataset from Martínez-Cagigal et al. (2023) * **Study:** `nm000239` (NeMAR) * **Author (year):** `MartinezCagigal2023` * **Canonical:** — Also importable as: `NM000239`, `MartinezCagigal2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 640; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000239](https://openneuro.org/datasets/nm000239) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000239](https://nemar.org/dataexplorer/detail?dataset_id=nm000239) ### Examples ```pycon >>> from eegdash.dataset import NM000239 >>> dataset = NM000239(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000240(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Checkerboard m-sequence-based c-VEP dataset from Fernández-Rodríguez et al 2025 * **Study:** `nm000240` (NeMAR) * **Author (year):** `FernandezRodriguez2025` * **Canonical:** `FernandezRodriguez2023` Also importable as: `NM000240`, `FernandezRodriguez2025`, `FernandezRodriguez2023`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 16; recordings: 383; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000240](https://openneuro.org/datasets/nm000240) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000240](https://nemar.org/dataexplorer/detail?dataset_id=nm000240) ### Examples ```pycon >>> from eegdash.dataset import NM000240 >>> dataset = NM000240(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['FernandezRodriguez2023']* ### *class* eegdash.dataset.NM000241(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CerebroVoice: Bilingual sEEG Speech Dataset * **Study:** `nm000241` (NeMAR) * **Author (year):** `Zhang2019` * **Canonical:** — Also importable as: `NM000241`, `Zhang2019`. Modality: `ieeg`. Subjects: 2; recordings: 18; tasks: 9. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000241](https://openneuro.org/datasets/nm000241) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000241](https://nemar.org/dataexplorer/detail?dataset_id=nm000241) DOI: [https://doi.org/10.5281/zenodo.13332808](https://doi.org/10.5281/zenodo.13332808) ### Examples ```pycon >>> from eegdash.dataset import NM000241 >>> dataset = NM000241(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000242(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Visual imagery EEG dataset from Gao et al 2026 * **Study:** `nm000242` (NeMAR) * **Author (year):** `Gao2026_Visual_imagery_et` * **Canonical:** `Gao2026` Also importable as: `NM000242`, `Gao2026_Visual_imagery_et`, `Gao2026`. Modality: `eeg`; Experiment type: `Other`; Subject type: `Healthy`. Subjects: 22; recordings: 125; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000242](https://openneuro.org/datasets/nm000242) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000242](https://nemar.org/dataexplorer/detail?dataset_id=nm000242) ### Examples ```pycon >>> from eegdash.dataset import NM000242 >>> dataset = NM000242(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Gao2026']* ### *class* eegdash.dataset.NM000243(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2016-002 Emergency Braking during Simulated Driving dataset * **Study:** `nm000243` (NeMAR) * **Author (year):** `Haufe2016` * **Canonical:** `BNCI2016`, `BNCI2016002` Also importable as: `NM000243`, `Haufe2016`, `BNCI2016`, `BNCI2016002`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 15; recordings: 15; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000243](https://openneuro.org/datasets/nm000243) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000243](https://nemar.org/dataexplorer/detail?dataset_id=nm000243) ### Examples ```pycon >>> from eegdash.dataset import NM000243 >>> dataset = NM000243(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BNCI2016', 'BNCI2016002']* ### *class* eegdash.dataset.NM000244(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) P300 dataset BI2014a from a “Brain Invaders” experiment * **Study:** `nm000244` (NeMAR) * **Author (year):** `Korczowski2014_P300_BI2014a` * **Canonical:** `BrainInvaders2014a`, `BI2014a` Also importable as: `NM000244`, `Korczowski2014_P300_BI2014a`, `BrainInvaders2014a`, `BI2014a`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 64; recordings: 64; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000244](https://openneuro.org/datasets/nm000244) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000244](https://nemar.org/dataexplorer/detail?dataset_id=nm000244) ### Examples ```pycon >>> from eegdash.dataset import NM000244 >>> dataset = NM000244(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2014a', 'BI2014a']* ### *class* eegdash.dataset.NM000245(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor Imagery dataset from Cho et al 2017 * **Study:** `nm000245` (NeMAR) * **Author (year):** `Cho2017` * **Canonical:** — Also importable as: `NM000245`, `Cho2017`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 52; recordings: 52; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000245](https://openneuro.org/datasets/nm000245) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000245](https://nemar.org/dataexplorer/detail?dataset_id=nm000245) ### Examples ```pycon >>> from eegdash.dataset import NM000245 >>> dataset = NM000245(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000246(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multi-day MI-BCI dataset (WBCIC-SHU) from Yang et al 2025 * **Study:** `nm000246` (NeMAR) * **Author (year):** `Yang2025_Multi` * **Canonical:** `WBCIC_SHU`, `WBCICSHU` Also importable as: `NM000246`, `Yang2025_Multi`, `WBCIC_SHU`, `WBCICSHU`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000246](https://openneuro.org/datasets/nm000246) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000246](https://nemar.org/dataexplorer/detail?dataset_id=nm000246) ### Examples ```pycon >>> from eegdash.dataset import NM000246 >>> dataset = NM000246(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['WBCIC_SHU', 'WBCICSHU']* ### *class* eegdash.dataset.NM000247(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study S1 — 9x8 face/house paradigm (10 healthy subjects) * **Study:** `nm000247` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_S1` * **Canonical:** `BigP3BCI_StudyS1`, `BigP3BCI_S1` Also importable as: `NM000247`, `Mainsah2025_BigP3BCI_S1`, `BigP3BCI_StudyS1`, `BigP3BCI_S1`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 120; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000247](https://openneuro.org/datasets/nm000247) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000247](https://nemar.org/dataexplorer/detail?dataset_id=nm000247) ### Examples ```pycon >>> from eegdash.dataset import NM000247 >>> dataset = NM000247(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_StudyS1', 'BigP3BCI_S1']* ### *class* eegdash.dataset.NM000248(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BigP3BCI Study L — 6x6 multi-paradigm (11 ALS subjects) * **Study:** `nm000248` (NeMAR) * **Author (year):** `Mainsah2025_BigP3BCI_L` * **Canonical:** — Also importable as: `NM000248`, `Mainsah2025_BigP3BCI_L`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 11; recordings: 330; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000248](https://openneuro.org/datasets/nm000248) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000248](https://nemar.org/dataexplorer/detail?dataset_id=nm000248) ### Examples ```pycon >>> from eegdash.dataset import NM000248 >>> dataset = NM000248(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000249(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BNCI 2022-001 EEG Correlates of Difficulty Level dataset * **Study:** `nm000249` (NeMAR) * **Author (year):** `Jao2022` * **Canonical:** `Jao2020` Also importable as: `NM000249`, `Jao2022`, `Jao2020`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 13; recordings: 13; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000249](https://openneuro.org/datasets/nm000249) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000249](https://nemar.org/dataexplorer/detail?dataset_id=nm000249) ### Examples ```pycon >>> from eegdash.dataset import NM000249 >>> dataset = NM000249(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Jao2020']* ### *class* eegdash.dataset.NM000250(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Motor imagery (MI) dataset from Dreyer et al 2023 * **Study:** `nm000250` (NeMAR) * **Author (year):** `Dreyer2023` * **Canonical:** — Also importable as: `NM000250`, `Dreyer2023`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 87; recordings: 520; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000250](https://openneuro.org/datasets/nm000250) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000250](https://nemar.org/dataexplorer/detail?dataset_id=nm000250) ### Examples ```pycon >>> from eegdash.dataset import NM000250 >>> dataset = NM000250(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000251(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) He et al. 2025 — VocalMind: A Stereotactic EEG Dataset for Vocalized, Mimed, and Imagined Speech in Tonal Language * **Study:** `nm000251` (NeMAR) * **Author (year):** `He2025` * **Canonical:** — Also importable as: `NM000251`, `He2025`. Modality: `ieeg`. Subjects: 1; recordings: 6; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000251](https://openneuro.org/datasets/nm000251) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000251](https://nemar.org/dataexplorer/detail?dataset_id=nm000251) DOI: [https://doi.org/10.1038/s41597-025-04741-2](https://doi.org/10.1038/s41597-025-04741-2) ### Examples ```pycon >>> from eegdash.dataset import NM000251 >>> dataset = NM000251(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000253(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Wang et al. 2024 — Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli * **Study:** `nm000253` (NeMAR) * **Author (year):** `Wang2024_et_al_Brain` * **Canonical:** `BrainTreeBank` Also importable as: `NM000253`, `Wang2024_et_al_Brain`, `BrainTreeBank`. Modality: `ieeg`. Subjects: 10; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000253](https://openneuro.org/datasets/nm000253) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000253](https://nemar.org/dataexplorer/detail?dataset_id=nm000253) DOI: [https://doi.org/10.48550/arXiv.2411.08343](https://doi.org/10.48550/arXiv.2411.08343) ### Examples ```pycon >>> from eegdash.dataset import NM000253 >>> dataset = NM000253(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainTreeBank']* ### *class* eegdash.dataset.NM000254(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Naturalistic viewing: An open-access dataset using simultaneous EEG-fMRI * **Study:** `nm000254` (NeMAR) * **Author (year):** `Telesford2024` * **Canonical:** — Also importable as: `NM000254`, `Telesford2024`. Modality: `eeg`. Subjects: 22; recordings: 942; tasks: 12. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000254](https://openneuro.org/datasets/nm000254) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000254](https://nemar.org/dataexplorer/detail?dataset_id=nm000254) ### Examples ```pycon >>> from eegdash.dataset import NM000254 >>> dataset = NM000254(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000255(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 2 * **Study:** `nm000255` (NeMAR) * **Author (year):** `Madsen2024_E2` * **Canonical:** — Also importable as: `NM000255`, `Madsen2024_E2`. Modality: `eeg`. Subjects: 30; recordings: 291; tasks: 5. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000255](https://openneuro.org/datasets/nm000255) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000255](https://nemar.org/dataexplorer/detail?dataset_id=nm000255) ### Examples ```pycon >>> from eegdash.dataset import NM000255 >>> dataset = NM000255(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000256(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) The Brain, Body, and Behaviour Dataset (1.0.0) - Experiment 3 * **Study:** `nm000256` (NeMAR) * **Author (year):** `Madsen2024_E3` * **Canonical:** — Also importable as: `NM000256`, `Madsen2024_E3`. Modality: `eeg`. Subjects: 29; recordings: 332; tasks: 6. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000256](https://openneuro.org/datasets/nm000256) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000256](https://nemar.org/dataexplorer/detail?dataset_id=nm000256) ### Examples ```pycon >>> from eegdash.dataset import NM000256 >>> dataset = NM000256(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000259(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Speier2017 * **Study:** `nm000259` (NeMAR) * **Author (year):** `Speier2017` * **Canonical:** — Also importable as: `NM000259`, `Speier2017`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 10; recordings: 60; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. 
Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000259](https://openneuro.org/datasets/nm000259) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000259](https://nemar.org/dataexplorer/detail?dataset_id=nm000259) DOI: [https://doi.org/10.1371/journal.pone.0175382](https://doi.org/10.1371/journal.pone.0175382) ### Examples ```pycon >>> from eegdash.dataset import NM000259 >>> dataset = NM000259(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000260(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2012 * **Study:** `nm000260` (NeMAR) * **Author (year):** `BrainInvaders2012` * **Canonical:** `BI2012`, `BrainInvaders` Also importable as: `NM000260`, `BrainInvaders2012`, `BI2012`, `BrainInvaders`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 23; recordings: 46; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000260](https://openneuro.org/datasets/nm000260) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000260](https://nemar.org/dataexplorer/detail?dataset_id=nm000260) DOI: [https://doi.org/10.5281/zenodo.2649006](https://doi.org/10.5281/zenodo.2649006) ### Examples ```pycon >>> from eegdash.dataset import NM000260 >>> dataset = NM000260(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BI2012', 'BrainInvaders']* ### *class* eegdash.dataset.NM000264(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) BrainInvaders2013a * **Study:** `nm000264` (NeMAR) * **Author (year):** `BrainInvaders2013` * **Canonical:** `BrainInvaders2013a`, `BI2013a` Also importable as: `NM000264`, `BrainInvaders2013`, `BrainInvaders2013a`, `BI2013a`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 24; recordings: 292; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000264](https://openneuro.org/datasets/nm000264) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000264](https://nemar.org/dataexplorer/detail?dataset_id=nm000264) DOI: [https://doi.org/10.5281/zenodo.1494163](https://doi.org/10.5281/zenodo.1494163) ### Examples ```pycon >>> from eegdash.dataset import NM000264 >>> dataset = NM000264(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BrainInvaders2013a', 'BI2013a']* ### *class* eegdash.dataset.NM000265(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-MI * **Study:** `nm000265` (NeMAR) * **Author (year):** `GuttmannFlury2025_MI` * **Canonical:** — Also importable as: `NM000265`, `GuttmannFlury2025_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. 
Subjects: 31; recordings: 126; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000265](https://openneuro.org/datasets/nm000265) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000265](https://nemar.org/dataexplorer/detail?dataset_id=nm000265) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000265 >>> dataset = NM000265(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000266(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Sosulski2019 * **Study:** `nm000266` (NeMAR) * **Author (year):** `Sosulski2019` * **Canonical:** — Also importable as: `NM000266`, `Sosulski2019`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. 
Subjects: 13; recordings: 1060; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000266](https://openneuro.org/datasets/nm000266) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000266](https://nemar.org/dataexplorer/detail?dataset_id=nm000266) DOI: [https://doi.org/10.48550/arXiv.2109.06011](https://doi.org/10.48550/arXiv.2109.06011) ### Examples ```pycon >>> from eegdash.dataset import NM000266 >>> dataset = NM000266(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000267(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017A * **Study:** `nm000267` (NeMAR) * **Author (year):** `Shin2017_Shin2017A` * **Canonical:** `Shin2017A` Also importable as: `NM000267`, `Shin2017_Shin2017A`, `Shin2017A`. 
Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000267](https://openneuro.org/datasets/nm000267) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000267](https://nemar.org/dataexplorer/detail?dataset_id=nm000267) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000267 >>> dataset = NM000267(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shin2017A']* ### *class* eegdash.dataset.NM000268(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Shin2017B * **Study:** `nm000268` (NeMAR) * **Author (year):** `Shin2017_Shin2017B` * **Canonical:** `Shin2017B` Also importable as: `NM000268`, `Shin2017_Shin2017B`, `Shin2017B`. Modality: `eeg`; Experiment type: `Memory`; Subject type: `Healthy`. Subjects: 29; recordings: 174; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000268](https://openneuro.org/datasets/nm000268) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000268](https://nemar.org/dataexplorer/detail?dataset_id=nm000268) DOI: [https://doi.org/10.1109/TNSRE.2016.2628057](https://doi.org/10.1109/TNSRE.2016.2628057) ### Examples ```pycon >>> from eegdash.dataset import NM000268 >>> dataset = NM000268(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Shin2017B']* ### *class* eegdash.dataset.NM000270(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) liu2025 - NEMAR Dataset * **Study:** `nm000270` (NeMAR) * **Author (year):** `Liu2025` * **Canonical:** — Also importable as: `NM000270`, `Liu2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 27; recordings: 797; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000270](https://openneuro.org/datasets/nm000270) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000270](https://nemar.org/dataexplorer/detail?dataset_id=nm000270) ### Examples ```pycon >>> from eegdash.dataset import NM000270 >>> dataset = NM000270(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000271(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) chang2025 - NEMAR Dataset * **Study:** `nm000271` (NeMAR) * **Author (year):** `Chang2025_2` * **Canonical:** `Chang2025` Also importable as: `NM000271`, `Chang2025_2`, `Chang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Unknown`. Subjects: 28; recordings: 1245; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000271](https://openneuro.org/datasets/nm000271) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000271](https://nemar.org/dataexplorer/detail?dataset_id=nm000271) ### Examples ```pycon >>> from eegdash.dataset import NM000271 >>> dataset = NM000271(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Chang2025']* ### *class* eegdash.dataset.NM000272(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) romani-bf2025-erp - NEMAR Dataset * **Study:** `nm000272` (NeMAR) * **Author (year):** `Romani2025_BF_ERP` * **Canonical:** `Romani2025_erp` Also importable as: `NM000272`, `Romani2025_BF_ERP`, `Romani2025_erp`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Unknown`. Subjects: 22; recordings: 1022; tasks: 3. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000272](https://openneuro.org/datasets/nm000272) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000272](https://nemar.org/dataexplorer/detail?dataset_id=nm000272) ### Examples ```pycon >>> from eegdash.dataset import NM000272 >>> dataset = NM000272(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Romani2025_erp']* ### *class* eegdash.dataset.NM000277(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-G * **Study:** `nm000277` (NeMAR) * **Author (year):** `Mainsah2025_G` * **Canonical:** `BigP3BCI_G`, `BigP3BCI_StudyG` Also importable as: `NM000277`, `Mainsah2025_G`, `BigP3BCI_G`, `BigP3BCI_StudyG`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 320; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000277](https://openneuro.org/datasets/nm000277) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000277](https://nemar.org/dataexplorer/detail?dataset_id=nm000277) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000277 >>> dataset = NM000277(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['BigP3BCI_G', 'BigP3BCI_StudyG']* ### *class* eegdash.dataset.NM000301(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-D * **Study:** `nm000301` (NeMAR) * **Author (year):** `Mainsah2025_D` * **Canonical:** — Also importable as: `NM000301`, `Mainsah2025_D`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 17; recordings: 307; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000301](https://openneuro.org/datasets/nm000301) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000301](https://nemar.org/dataexplorer/detail?dataset_id=nm000301) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000301 >>> dataset = NM000301(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000303(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-O * **Study:** `nm000303` (NeMAR) * **Author (year):** `Mainsah2025_O` * **Canonical:** — Also importable as: `NM000303`, `Mainsah2025_O`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Other`. Subjects: 18; recordings: 347; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000303](https://openneuro.org/datasets/nm000303) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000303](https://nemar.org/dataexplorer/detail?dataset_id=nm000303) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000303 >>> dataset = NM000303(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000310(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) GuttmannFlury2025-SSVEP * **Study:** `nm000310` (NeMAR) * **Author (year):** `GuttmannFlury2025_SSVEP` * **Canonical:** — Also importable as: `NM000310`, `GuttmannFlury2025_SSVEP`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 11; recordings: 26; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000310](https://openneuro.org/datasets/nm000310) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000310](https://nemar.org/dataexplorer/detail?dataset_id=nm000310) DOI: [https://doi.org/10.1038/s41597-025-04861-9](https://doi.org/10.1038/s41597-025-04861-9) ### Examples ```pycon >>> from eegdash.dataset import NM000310 >>> dataset = NM000310(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000311(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Multimodal upper-limb MI/ME EEG (Jeong et al. 2020) * **Study:** `nm000311` (NeMAR) * **Author (year):** `Jeong2020` * **Canonical:** — Also importable as: `NM000311`, `Jeong2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 25; recordings: 213; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000311](https://openneuro.org/datasets/nm000311) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000311](https://nemar.org/dataexplorer/detail?dataset_id=nm000311) DOI: [https://doi.org/10.82901/nemar.nm000311](https://doi.org/10.82901/nemar.nm000311) ### Examples ```pycon >>> from eegdash.dataset import NM000311 >>> dataset = NM000311(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000313(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-S2 * **Study:** `nm000313` (NeMAR) * **Author (year):** `Mainsah2025_S2` * **Canonical:** — Also importable as: `NM000313`, `Mainsah2025_S2`. Modality: `eeg`; Experiment type: `Perception`; Subject type: `Healthy`. Subjects: 24; recordings: 288; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000313](https://openneuro.org/datasets/nm000313) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000313](https://nemar.org/dataexplorer/detail?dataset_id=nm000313) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000313 >>> dataset = NM000313(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000321(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-Q * **Study:** `nm000321` (NeMAR) * **Author (year):** `Mainsah2025_Q` * **Canonical:** — Also importable as: `NM000321`, `Mainsah2025_Q`. Modality: `eeg`; Experiment type: `Clinical/Intervention`; Subject type: `Other`. Subjects: 36; recordings: 360; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000321](https://openneuro.org/datasets/nm000321) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000321](https://nemar.org/dataexplorer/detail?dataset_id=nm000321) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000321 >>> dataset = NM000321(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000323(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-ERP * **Study:** `nm000323` (NeMAR) * **Author (year):** `Lee2019_ERP` * **Canonical:** `OpenBMI_ERP`, `OpenBMI_P300` Also importable as: `NM000323`, `Lee2019_ERP`, `OpenBMI_ERP`, `OpenBMI_P300`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). 
#### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000323](https://openneuro.org/datasets/nm000323) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000323](https://nemar.org/dataexplorer/detail?dataset_id=nm000323) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000323 >>> dataset = NM000323(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OpenBMI_ERP', 'OpenBMI_P300']* ### *class* eegdash.dataset.NM000326(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-C * **Study:** `nm000326` (NeMAR) * **Author (year):** `Mainsah2025_C` * **Canonical:** — Also importable as: `NM000326`, `Mainsah2025_C`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 19; recordings: 341; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000326](https://openneuro.org/datasets/nm000326) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000326](https://nemar.org/dataexplorer/detail?dataset_id=nm000326) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000326 >>> dataset = NM000326(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000329(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Brandl2020 * **Study:** `nm000329` (NeMAR) * **Author (year):** `Brandl2020` * **Canonical:** — Also importable as: `NM000329`, `Brandl2020`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 16; recordings: 112; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000329](https://openneuro.org/datasets/nm000329) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000329](https://nemar.org/dataexplorer/detail?dataset_id=nm000329) DOI: [https://doi.org/10.3389/fnins.2020.566147](https://doi.org/10.3389/fnins.2020.566147) ### Examples ```pycon >>> from eegdash.dataset import NM000329 >>> dataset = NM000329(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000336(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-R * **Study:** `nm000336` (NeMAR) * **Author (year):** `Mainsah2025_R` * **Canonical:** — Also importable as: `NM000336`, `Mainsah2025_R`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 20; recordings: 480; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000336](https://openneuro.org/datasets/nm000336) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000336](https://nemar.org/dataexplorer/detail?dataset_id=nm000336) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000336 >>> dataset = NM000336(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000338(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Lee2019-MI * **Study:** `nm000338` (NeMAR) * **Author (year):** `Lee2019_MI` * **Canonical:** `OpenBMI_MI` Also importable as: `NM000338`, `Lee2019_MI`, `OpenBMI_MI`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 54; recordings: 216; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. 
* **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000338](https://openneuro.org/datasets/nm000338) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000338](https://nemar.org/dataexplorer/detail?dataset_id=nm000338) DOI: [https://doi.org/10.1093/gigascience/giz002](https://doi.org/10.1093/gigascience/giz002) ### Examples ```pycon >>> from eegdash.dataset import NM000338 >>> dataset = NM000338(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['OpenBMI_MI']* ### *class* eegdash.dataset.NM000339(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Stieger2021 * **Study:** `nm000339` (NeMAR) * **Author (year):** `Stieger2021` * **Canonical:** — Also importable as: `NM000339`, `Stieger2021`. Modality: `eeg`; Experiment type: `Learning`; Subject type: `Healthy`. Subjects: 62; recordings: 598; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000339](https://openneuro.org/datasets/nm000339) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000339](https://nemar.org/dataexplorer/detail?dataset_id=nm000339) DOI: [https://doi.org/10.1038/s41597-021-00883-1](https://doi.org/10.1038/s41597-021-00883-1) ### Examples ```pycon >>> from eegdash.dataset import NM000339 >>> dataset = NM000339(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000340(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-J * **Study:** `nm000340` (NeMAR) * **Author (year):** `Mainsah2025_J` * **Canonical:** — Also importable as: `NM000340`, `Mainsah2025_J`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 20; recordings: 502; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000340](https://openneuro.org/datasets/nm000340) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000340](https://nemar.org/dataexplorer/detail?dataset_id=nm000340) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000340 >>> dataset = NM000340(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000341(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Cattan2019-PHMD * **Study:** `nm000341` (NeMAR) * **Author (year):** `Cattan2019_PHMD` * **Canonical:** — Also importable as: `NM000341`, `Cattan2019_PHMD`. Modality: `eeg`; Experiment type: `Resting-state`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. 
* **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000341](https://openneuro.org/datasets/nm000341) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000341](https://nemar.org/dataexplorer/detail?dataset_id=nm000341) DOI: [https://doi.org/10.5281/zenodo.2617084](https://doi.org/10.5281/zenodo.2617084) ### Examples ```pycon >>> from eegdash.dataset import NM000341 >>> dataset = NM000341(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000342(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP40 * **Study:** `nm000342` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP40` * **Canonical:** `CastillosCVEP40` Also importable as: `NM000342`, `Castillos2023_CastillosCVEP40`, `CastillosCVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. 
* **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000342](https://openneuro.org/datasets/nm000342) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000342](https://nemar.org/dataexplorer/detail?dataset_id=nm000342) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000342 >>> dataset = NM000342(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['CastillosCVEP40']* ### *class* eegdash.dataset.NM000343(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Hinss2021 * **Study:** `nm000343` (NeMAR) * **Author (year):** `Hinss2021` * **Canonical:** `Hinss2021_v2` Also importable as: `NM000343`, `Hinss2021`, `Hinss2021_v2`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 15; recordings: 30; tasks: 1. 
* **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000343](https://openneuro.org/datasets/nm000343) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000343](https://nemar.org/dataexplorer/detail?dataset_id=nm000343) DOI: [https://doi.org/10.1038/s41597-022-01898-y](https://doi.org/10.1038/s41597-022-01898-y) ### Examples ```pycon >>> from eegdash.dataset import NM000343 >>> dataset = NM000343(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Hinss2021_v2']* ### *class* eegdash.dataset.NM000344(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP100 * **Study:** `nm000344` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP100` * **Canonical:** — Also importable as: `NM000344`, `Castillos2023_CastillosBurstVEP100`. 
Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. 
### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000344](https://openneuro.org/datasets/nm000344) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000344](https://nemar.org/dataexplorer/detail?dataset_id=nm000344) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000344 >>> dataset = NM000344(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000345(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosBurstVEP40 * **Study:** `nm000345` (NeMAR) * **Author (year):** `Castillos2023_CastillosBurstVEP40` * **Canonical:** — Also importable as: `NM000345`, `Castillos2023_CastillosBurstVEP40`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. 
Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000345](https://openneuro.org/datasets/nm000345) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000345](https://nemar.org/dataexplorer/detail?dataset_id=nm000345) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000345 >>> dataset = NM000345(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000346(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) CastillosCVEP100 * **Study:** `nm000346` (NeMAR) * **Author (year):** `Castillos2023_CastillosCVEP100` * **Canonical:** — Also importable as: `NM000346`, `Castillos2023_CastillosCVEP100`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Healthy`. Subjects: 12; recordings: 12; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. 
`query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000346](https://openneuro.org/datasets/nm000346) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000346](https://nemar.org/dataexplorer/detail?dataset_id=nm000346) DOI: [https://doi.org/10.1016/j.neuroimage.2023.120446](https://doi.org/10.1016/j.neuroimage.2023.120446) ### Examples ```pycon >>> from eegdash.dataset import NM000346 >>> dataset = NM000346(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### *class* eegdash.dataset.NM000347(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) HefmiIch2025 * **Study:** `nm000347` (NeMAR) * **Author (year):** `HefmiIch2025` * **Canonical:** `HEFMI_ICH`, `HEFMIICH` Also importable as: `NM000347`, `HefmiIch2025`, `HEFMI_ICH`, `HEFMIICH`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Other`. Subjects: 37; recordings: 98; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. 
* **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000347](https://openneuro.org/datasets/nm000347) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000347](https://nemar.org/dataexplorer/detail?dataset_id=nm000347) DOI: [https://doi.org/10.1038/s41597-025-06100-7](https://doi.org/10.1038/s41597-025-06100-7) ### Examples ```pycon >>> from eegdash.dataset import NM000347 >>> dataset = NM000347(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['HEFMI_ICH', 'HEFMIICH']* ### *class* eegdash.dataset.NM000348(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Yang2025 * **Study:** `nm000348` (NeMAR) * **Author (year):** `Yang2025` * **Canonical:** — Also importable as: `NM000348`, `Yang2025`. Modality: `eeg`; Experiment type: `Motor`; Subject type: `Healthy`. Subjects: 51; recordings: 153; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). * **Type:** Path #### query Merged query with the dataset filter applied. 
* **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000348](https://openneuro.org/datasets/nm000348) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000348](https://nemar.org/dataexplorer/detail?dataset_id=nm000348) DOI: [https://doi.org/10.1038/s41597-025-04826-y](https://doi.org/10.1038/s41597-025-04826-y) ### Examples ```pycon >>> from eegdash.dataset import NM000348 >>> dataset = NM000348(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= ['Yang2025']* ### *class* eegdash.dataset.NM000351(cache_dir: str, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) Mainsah2025-P * **Study:** `nm000351` (NeMAR) * **Author (year):** `Mainsah2025_P` * **Canonical:** — Also importable as: `NM000351`, `Mainsah2025_P`. Modality: `eeg`; Experiment type: `Attention`; Subject type: `Other`. Subjects: 19; recordings: 228; tasks: 1. * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Additional MongoDB-style filters to AND with the dataset selection. Must not contain the key `dataset`. * **s3_bucket** (*str* *|* *None*) – Base S3 bucket used to locate the data. * **\*\*kwargs** (*dict*) – Additional keyword arguments forwarded to [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). #### data_dir Local dataset cache directory (`cache_dir / dataset_id`). 
* **Type:** Path #### query Merged query with the dataset filter applied. * **Type:** dict #### records Metadata records used to build the dataset, if pre-fetched. * **Type:** list[dict] | None ### Notes Each item is a recording; recording-level metadata are available via `dataset.description`. `query` supports MongoDB-style filters on fields in `ALLOWED_QUERY_FIELDS` and is combined with the dataset filter. Dataset-specific caveats are not provided in the summary metadata. ### References OpenNeuro dataset: [https://openneuro.org/datasets/nm000351](https://openneuro.org/datasets/nm000351) NeMAR dataset: [https://nemar.org/dataexplorer/detail?dataset_id=nm000351](https://nemar.org/dataexplorer/detail?dataset_id=nm000351) DOI: [https://doi.org/10.13026/0byy-ry86](https://doi.org/10.13026/0byy-ry86) ### Examples ```pycon >>> from eegdash.dataset import NM000351 >>> dataset = NM000351(cache_dir="./data") >>> recording = dataset[0] >>> raw = recording.load() ``` #### canonical_name *= []* ### eegdash.dataset.NOD_EEG alias of [`DS005811`](eegdash.dataset.DS005811.md#eegdash.dataset.DS005811) ### eegdash.dataset.NOD_MEG alias of [`DS005810`](eegdash.dataset.DS005810.md#eegdash.dataset.DS005810) ### eegdash.dataset.NenckiSymfonia alias of [`DS004621`](eegdash.dataset.DS004621.md#eegdash.dataset.DS004621) ### eegdash.dataset.Neuma alias of [`DS004588`](eegdash.dataset.DS004588.md#eegdash.dataset.DS004588) ### eegdash.dataset.NeuroMorph alias of [`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241) ### eegdash.dataset.Nierula2019 alias of [`DS005307`](eegdash.dataset.DS005307.md#eegdash.dataset.DS005307) ### eegdash.dataset.Ning2024 alias of [`DS004830`](eegdash.dataset.DS004830.md#eegdash.dataset.DS004830) ### eegdash.dataset.Normannseth2026 alias of [`DS007615`](eegdash.dataset.DS007615.md#eegdash.dataset.DS007615) ### eegdash.dataset.OMEGA alias of [`DS000247`](eegdash.dataset.DS000247.md#eegdash.dataset.DS000247) ### eegdash.dataset.ORHA alias of 
[`DS005363`](eegdash.dataset.DS005363.md#eegdash.dataset.DS005363) ### eegdash.dataset.OcularLDT alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312) ### eegdash.dataset.Oikonomou2016 alias of [`NM000119`](eegdash.dataset.NM000119.md#eegdash.dataset.NM000119) ### eegdash.dataset.Omelyusik2026 alias of [`DS006136`](eegdash.dataset.DS006136.md#eegdash.dataset.DS006136) ### eegdash.dataset.Onton2024 alias of [`DS006695`](eegdash.dataset.DS006695.md#eegdash.dataset.DS006695) ### eegdash.dataset.OpenBMI_ERP alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ### eegdash.dataset.OpenBMI_MI alias of [`NM000338`](eegdash.dataset.NM000338.md#eegdash.dataset.NM000338) ### eegdash.dataset.OpenBMI_P300 alias of [`NM000323`](eegdash.dataset.NM000323.md#eegdash.dataset.NM000323) ### eegdash.dataset.PAL alias of [`DS005059`](eegdash.dataset.DS005059.md#eegdash.dataset.DS005059) ### eegdash.dataset.PDEEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ### eegdash.dataset.PD_EEG alias of [`DS007526`](eegdash.dataset.DS007526.md#eegdash.dataset.DS007526) ### eegdash.dataset.PEARLNeuro alias of [`DS004796`](eegdash.dataset.DS004796.md#eegdash.dataset.DS004796) ### eegdash.dataset.PEERS alias of [`DS004395`](eegdash.dataset.DS004395.md#eegdash.dataset.DS004395) ### eegdash.dataset.PRIOS alias of [`DS004370`](eegdash.dataset.DS004370.md#eegdash.dataset.DS004370) ### eegdash.dataset.PROMENADE alias of [`DS005946`](eegdash.dataset.DS005946.md#eegdash.dataset.DS005946) ### eegdash.dataset.PWIe alias of [`DS005932`](eegdash.dataset.DS005932.md#eegdash.dataset.DS005932) ### eegdash.dataset.Penalver2024 alias of [`DS004502`](eegdash.dataset.DS004502.md#eegdash.dataset.DS004502) ### eegdash.dataset.Peng2018 alias of [`DS005777`](eegdash.dataset.DS005777.md#eegdash.dataset.DS005777) ### eegdash.dataset.PerceiveImagine alias of [`DS005697`](eegdash.dataset.DS005697.md#eegdash.dataset.DS005697) ### 
eegdash.dataset.PhysionetMI alias of [`DS004362`](eegdash.dataset.DS004362.md#eegdash.dataset.DS004362) ### eegdash.dataset.Podcast alias of [`DS005574`](eegdash.dataset.DS005574.md#eegdash.dataset.DS005574) ### eegdash.dataset.Pohle2019 alias of [`DS006374`](eegdash.dataset.DS006374.md#eegdash.dataset.DS006374) ### eegdash.dataset.RAM_catFR alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.RESPect_CCEP alias of [`DS004080`](eegdash.dataset.DS004080.md#eegdash.dataset.DS004080) ### eegdash.dataset.RESPect_intraop alias of [`DS003844`](eegdash.dataset.DS003844.md#eegdash.dataset.DS003844) ### eegdash.dataset.RESPect_longterm alias of [`DS003848`](eegdash.dataset.DS003848.md#eegdash.dataset.DS003848) ### eegdash.dataset.Ramzaoui2024 alias of [`DS006979`](eegdash.dataset.DS006979.md#eegdash.dataset.DS006979) ### eegdash.dataset.Rani2019 alias of [`DS004012`](eegdash.dataset.DS004012.md#eegdash.dataset.DS004012) ### eegdash.dataset.Rockhill2022 alias of [`DS004473`](eegdash.dataset.DS004473.md#eegdash.dataset.DS004473) ### eegdash.dataset.Rodrigues2017 alias of [`NM000221`](eegdash.dataset.NM000221.md#eegdash.dataset.NM000221) ### eegdash.dataset.Romani2025 alias of [`NM000147`](eegdash.dataset.NM000147.md#eegdash.dataset.NM000147) ### eegdash.dataset.Romani2025_erp alias of [`NM000272`](eegdash.dataset.NM000272.md#eegdash.dataset.NM000272) ### eegdash.dataset.Runabout alias of [`DS003620`](eegdash.dataset.DS003620.md#eegdash.dataset.DS003620) ### eegdash.dataset.SINGSING alias of [`DS006629`](eegdash.dataset.DS006629.md#eegdash.dataset.DS006629) ### eegdash.dataset.SSVEPMAMEM2 alias of [`NM000120`](eegdash.dataset.NM000120.md#eegdash.dataset.NM000120) ### eegdash.dataset.SSVEP_MAMEM3 alias of [`NM000121`](eegdash.dataset.NM000121.md#eegdash.dataset.NM000121) ### eegdash.dataset.STRONG alias of [`DS004849`](eegdash.dataset.DS004849.md#eegdash.dataset.DS004849) ### eegdash.dataset.STReEF alias of 
[`DS005448`](eegdash.dataset.DS005448.md#eegdash.dataset.DS005448) ### eegdash.dataset.Sakakura2024 alias of [`DS004859`](eegdash.dataset.DS004859.md#eegdash.dataset.DS004859) ### eegdash.dataset.Sakakura2025 alias of [`DS004551`](eegdash.dataset.DS004551.md#eegdash.dataset.DS004551) ### eegdash.dataset.Sato2024 alias of [`DS007602`](eegdash.dataset.DS007602.md#eegdash.dataset.DS007602) ### eegdash.dataset.Sato2025 alias of [`DS007591`](eegdash.dataset.DS007591.md#eegdash.dataset.DS007591) ### eegdash.dataset.SeizeIT2 alias of [`DS005873`](eegdash.dataset.DS005873.md#eegdash.dataset.DS005873) ### eegdash.dataset.Shalamberidze2025 alias of [`DS007609`](eegdash.dataset.DS007609.md#eegdash.dataset.DS007609) ### eegdash.dataset.Shin2017A alias of [`NM000267`](eegdash.dataset.NM000267.md#eegdash.dataset.NM000267) ### eegdash.dataset.Shin2017B alias of [`NM000268`](eegdash.dataset.NM000268.md#eegdash.dataset.NM000268) ### eegdash.dataset.SleepEDF alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ### eegdash.dataset.SleepEDFExpanded alias of [`NM000185`](eegdash.dataset.NM000185.md#eegdash.dataset.NM000185) ### eegdash.dataset.Somato alias of [`DS003104`](eegdash.dataset.DS003104.md#eegdash.dataset.DS003104) ### eegdash.dataset.Surrey_cEEGrid_sleep alias of [`DS005207`](eegdash.dataset.DS005207.md#eegdash.dataset.DS005207) ### eegdash.dataset.THINGS alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ### eegdash.dataset.THINGSMEG alias of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ### eegdash.dataset.THINGS_EEG alias of [`DS003825`](eegdash.dataset.DS003825.md#eegdash.dataset.DS003825) ### eegdash.dataset.THINGS_MEG alias of [`DS004212`](eegdash.dataset.DS004212.md#eegdash.dataset.DS004212) ### eegdash.dataset.TMNRED alias of [`DS005383`](eegdash.dataset.DS005383.md#eegdash.dataset.DS005383) ### eegdash.dataset.TNO alias of [`DS004660`](eegdash.dataset.DS004660.md#eegdash.dataset.DS004660) ### 
eegdash.dataset.TX14 alias of [`DS004841`](eegdash.dataset.DS004841.md#eegdash.dataset.DS004841) ### eegdash.dataset.TX15 alias of [`DS004842`](eegdash.dataset.DS004842.md#eegdash.dataset.DS004842) ### eegdash.dataset.TX18 alias of [`DS004854`](eegdash.dataset.DS004854.md#eegdash.dataset.DS004854) ### eegdash.dataset.TX20 alias of [`DS004657`](eegdash.dataset.DS004657.md#eegdash.dataset.DS004657) ### eegdash.dataset.Todorovic2023 alias of [`DS005261`](eegdash.dataset.DS005261.md#eegdash.dataset.DS005261) ### eegdash.dataset.ToonFaces alias of [`DS004324`](eegdash.dataset.DS004324.md#eegdash.dataset.DS004324) ### eegdash.dataset.Touryan1999 alias of [`DS004118`](eegdash.dataset.DS004118.md#eegdash.dataset.DS004118) ### eegdash.dataset.Tripathy2024 alias of [`DS007473`](eegdash.dataset.DS007473.md#eegdash.dataset.DS007473) ### eegdash.dataset.VEPCON alias of [`DS003505`](eegdash.dataset.DS003505.md#eegdash.dataset.DS003505) ### eegdash.dataset.Veillette2019 alias of [`DS005403`](eegdash.dataset.DS005403.md#eegdash.dataset.DS005403) ### eegdash.dataset.Vianney2025 alias of [`DS007358`](eegdash.dataset.DS007358.md#eegdash.dataset.DS007358) ### eegdash.dataset.VisualContextTrajectory alias of [`DS004603`](eegdash.dataset.DS004603.md#eegdash.dataset.DS004603) ### eegdash.dataset.VisualContextTrajectory_v2 alias of [`DS006817`](eegdash.dataset.DS006817.md#eegdash.dataset.DS006817) ### eegdash.dataset.WBCICSHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ### eegdash.dataset.WBCIC_SHU alias of [`NM000246`](eegdash.dataset.NM000246.md#eegdash.dataset.NM000246) ### eegdash.dataset.WIRED_ICM alias of [`DS004993`](eegdash.dataset.DS004993.md#eegdash.dataset.DS004993) ### eegdash.dataset.Wakeman2015 alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ### eegdash.dataset.WakemanHenson alias of [`DS000117`](eegdash.dataset.DS000117.md#eegdash.dataset.DS000117) ### eegdash.dataset.WakemanHenson_EEG_MEG alias of 
[`DS002718`](eegdash.dataset.DS002718.md#eegdash.dataset.DS002718) ### eegdash.dataset.Weibo2014 alias of [`NM000146`](eegdash.dataset.NM000146.md#eegdash.dataset.NM000146) ### eegdash.dataset.Weisend2007 alias of [`DS004107`](eegdash.dataset.DS004107.md#eegdash.dataset.DS004107) ### eegdash.dataset.Wimmer2024 alias of [`DS004398`](eegdash.dataset.DS004398.md#eegdash.dataset.DS004398) ### eegdash.dataset.Yang2025 alias of [`NM000348`](eegdash.dataset.NM000348.md#eegdash.dataset.NM000348) ### eegdash.dataset.Yu2019 alias of [`DS006386`](eegdash.dataset.DS006386.md#eegdash.dataset.DS006386) ### eegdash.dataset.Yucel2014 alias of [`DS005929`](eegdash.dataset.DS005929.md#eegdash.dataset.DS005929) ### eegdash.dataset.Yucel2015 alias of [`DS005776`](eegdash.dataset.DS005776.md#eegdash.dataset.DS005776) ### eegdash.dataset.Zhang2025 alias of [`NM000211`](eegdash.dataset.NM000211.md#eegdash.dataset.NM000211) ### eegdash.dataset.Zhao2024 alias of [`DS005473`](eegdash.dataset.DS005473.md#eegdash.dataset.DS005473) ### eegdash.dataset.Zhou2016_NEMAR alias of [`NM000226`](eegdash.dataset.NM000226.md#eegdash.dataset.NM000226) ### eegdash.dataset.Zhou2024 alias of [`DS007471`](eegdash.dataset.DS007471.md#eegdash.dataset.DS007471) ### eegdash.dataset.catFR_Categorized_Free_Recall alias of [`DS004809`](eegdash.dataset.DS004809.md#eegdash.dataset.DS004809) ### eegdash.dataset.catFR_closed_loop alias of [`DS005558`](eegdash.dataset.DS005558.md#eegdash.dataset.DS005558) ### eegdash.dataset.catFR_open_loop alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.catFR_stim alias of [`DS005491`](eegdash.dataset.DS005491.md#eegdash.dataset.DS005491) ### eegdash.dataset.eldBETA alias of [`NM000130`](eegdash.dataset.NM000130.md#eegdash.dataset.NM000130) ### eegdash.dataset.emg2qwerty alias of [`NM000104`](eegdash.dataset.NM000104.md#eegdash.dataset.NM000104) ### eegdash.dataset.neuromorph alias of 
[`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241)

### eegdash.dataset.ocular_ldt

alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312)

### eegdash.dataset.pyFR

alias of [`DS004865`](eegdash.dataset.DS004865.md#eegdash.dataset.DS004865)

### eegdash.dataset.register_openneuro_datasets(summary_file: str | Path | None = None, \*, base_class=None, namespace: Dict[str, Any] | None = None, add_to_all: bool = True, from_api: bool = False, api_url: str = 'https://data.eegdash.org/api', database: str = 'eegdash') → Dict[str, type]

Dynamically create and register dataset classes from a summary file or the API.

This function reads a CSV file (or queries the API) containing dataset summaries and dynamically creates a Python class for each dataset. These classes inherit from a specified base class and are pre-configured with the dataset’s ID.

* **Parameters:**
  * **summary_file** (*str* *or* *pathlib.Path*) – The path to the CSV file containing the dataset summaries.
  * **base_class** (*type* *,* *optional*) – The base class from which the new dataset classes will inherit. If not provided, `eegdash.api.EEGDashDataset` is used.
  * **namespace** (*dict* *,* *optional*) – The namespace (e.g., `globals()`) into which the newly created classes will be injected. Defaults to the local `globals()` of this module.
  * **add_to_all** (*bool* *,* *default True*) – If True, the names of the newly created classes are added to the `__all__` list of the target namespace, making them importable with `from … import *`.
  * **from_api** (*bool* *,* *default False*) – If True, fetch dataset summaries from the API instead of reading `summary_file`.
  * **api_url** (*str*) – Base API URL used when `from_api` is True.
  * **database** (*str*) – Database name used when `from_api` is True.
* **Returns:** A dictionary mapping the names of the registered classes to the class types themselves.
* **Return type:** dict[str, type]

# neuromorph: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import neuromorph

dataset = neuromorph(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = neuromorph(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = neuromorph(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{neuromorph,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `NEUROMORPH` |
|----------------|--------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `neuromorph` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/neuromorph) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [neuromorph](https://openneuro.org/datasets/neuromorph)
- NeMAR: [neuromorph](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph)

## API Reference

Use the `neuromorph` class to access this dataset programmatically.
### eegdash.dataset.neuromorph

alias of [`DS005241`](eegdash.dataset.DS005241.md#eegdash.dataset.DS005241)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/neuromorph)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=neuromorph)

# ocular_ldt: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).

Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import ocular_ldt

dataset = ocular_ldt(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = ocular_ldt(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = ocular_ldt(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{ocular_ldt,
}
```

## About This Dataset

No README content is available for this dataset.
## Dataset Information

| Dataset ID | `OCULAR_LDT` |
|----------------|--------------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `ocular_ldt` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/ocular_ldt) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=ocular_ldt) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [ocular_ldt](https://openneuro.org/datasets/ocular_ldt)
- NeMAR: [ocular_ldt](https://nemar.org/dataexplorer/detail?dataset_id=ocular_ldt)

## API Reference

Use the `ocular_ldt` class to access this dataset programmatically.

### eegdash.dataset.ocular_ldt

alias of [`DS002312`](eegdash.dataset.DS002312.md#eegdash.dataset.DS002312)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/ocular_ldt)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=ocular_ldt)

# pyFR: EEG dataset

Dataset from OpenNeuro. Access recordings and metadata through EEGDash.

**Citation:** Unknown (—).
Modality: — Subjects: — Recordings: — License: — Source: OpenNeuro Metadata: Limited (0%)

## Quickstart

### Get Started

**Install**

```bash
pip install eegdash
```

**Access the data**

```python
from eegdash.dataset import pyFR

dataset = pyFR(cache_dir="./data")
# Get the raw object of the first recording
raw = dataset.datasets[0].raw
print(raw.info)
```

### Query & Filter

**Filter by subject**

```python
dataset = pyFR(cache_dir="./data", subject="01")
```

**Advanced query**

```python
dataset = pyFR(
    cache_dir="./data",
    query={"subject": {"$in": ["01", "02"]}},
)
```

**Iterate recordings**

```python
for rec in dataset:
    print(rec.subject, rec.raw.info['sfreq'])
```

### Cite This Dataset

If you use this dataset in your research, please cite the original authors.

**BibTeX**

```bibtex
@dataset{pyfr,
}
```

## About This Dataset

No README content is available for this dataset.

## Dataset Information

| Dataset ID | `PYFR` |
|----------------|--------|
| Title | — |
| Author (year) | — |
| Canonical | — |
| Importable as | `pyFR` |
| Year | — |
| Authors | Unknown |
| License | — |
| Citation / DOI | Unknown |
| Source links | [OpenNeuro](https://openneuro.org/datasets/pyfr) · [NeMAR](https://nemar.org/dataexplorer/detail?dataset_id=pyfr) |

## Technical Details

- Subjects: —
- Recordings: —
- Tasks: —
- Channels: Varies
- Sampling rate (Hz): Varies
- Duration (hours): Not calculated
- Pathology: Not specified
- Modality: —
- Type: —
- Size on disk: —
- File count: —
- Format: BIDS
- License: See source
- DOI: —
- Source: OpenNeuro
- OpenNeuro: [pyfr](https://openneuro.org/datasets/pyfr)
- NeMAR: [pyfr](https://nemar.org/dataexplorer/detail?dataset_id=pyfr)

## API Reference

Use the `pyFR` class to access this dataset programmatically.
### eegdash.dataset.pyFR

alias of [`DS004865`](eegdash.dataset.DS004865.md#eegdash.dataset.DS004865)

## See Also

* `eegdash.dataset.EEGDashDataset`
* `eegdash.dataset`
* [OpenNeuro dataset page](https://openneuro.org/datasets/pyfr)
* [NeMAR dataset page](https://nemar.org/dataexplorer/detail?dataset_id=pyfr)

# eegdash.dataset.registry module

### eegdash.dataset.registry.fetch_chart_data_from_api(api_url: str = 'https://data.eegdash.org/api', database: str = 'eegdash', limit: int = 1000) → tuple[DataFrame, dict[str, Any]]

Fetch pre-aggregated chart data from the API.

This uses the optimized `/datasets/chart-data` endpoint, which returns only chart-relevant fields and pre-computed aggregations. Falls back to `/datasets/summary` if the chart-data endpoint is unavailable.

* **Parameters:**
  * **api_url** (*str*) – Base API URL.
  * **database** (*str*) – Database name.
  * **limit** (*int*) – Maximum number of datasets to fetch.
* **Returns:** A DataFrame with dataset records and a dict with pre-computed aggregations.
* **Return type:** tuple[pd.DataFrame, dict]

### eegdash.dataset.registry.fetch_datasets_from_api(api_url: str = 'https://data.eegdash.org/api', database: str = 'eegdash', force_refresh: bool = False) → DataFrame

Fetch dataset summaries from the API and return them as a DataFrame matching the CSV structure.

Note: This function makes a single API call to `/datasets/summary`. Stats (`nchans_counts`, `sfreq_counts`) are already embedded in the dataset documents via the compute-stats endpoint, so no separate stats call is needed.

* **Parameters:**
  * **api_url** (*str*) – Base API URL.
  * **database** (*str*) – Database name.
  * **force_refresh** (*bool*) – If True, bypass the local cache and always fetch from the API.
* **Returns:** A DataFrame of dataset summaries matching the CSV structure.
* **Return type:** pd.DataFrame
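The endpoint fallback described for `fetch_chart_data_from_api` above follows a simple try-then-fall-back pattern. A minimal, self-contained sketch of that pattern (the fetcher functions below are hypothetical stand-ins, not eegdash internals):

```python
def fetch_with_fallback(primary, fallback):
    # Try the optimized endpoint first; on any failure, use the general one.
    try:
        return primary()
    except Exception:
        return fallback()

def chart_data():
    # Stand-in for the /datasets/chart-data call when the endpoint is down.
    raise ConnectionError("chart-data endpoint unavailable")

def summary():
    # Stand-in for the /datasets/summary call.
    return [{"dataset": "ds002718"}]

records = fetch_with_fallback(chart_data, summary)
print(records[0]["dataset"])  # served by the summary fallback
```

The same shape applies whichever endpoint pair is involved: callers always receive records, and only the source differs.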
### eegdash.dataset.registry.register_openneuro_datasets(summary_file: str | Path | None = None, *, base_class=None, namespace: Dict[str, Any] | None = None, add_to_all: bool = True, from_api: bool = False, api_url: str = 'https://data.eegdash.org/api', database: str = 'eegdash') → Dict[str, type] Dynamically create and register dataset classes from a summary file or API. This function reads a CSV file of dataset summaries (or queries the API for them) and dynamically creates a Python class for each dataset. These classes inherit from a specified base class and are pre-configured with the dataset’s ID. * **Parameters:** * **summary_file** (*str* *or* *pathlib.Path*) – The path to the CSV file containing the dataset summaries. * **base_class** (*type* *,* *optional*) – The base class from which the new dataset classes will inherit. If not provided, `eegdash.api.EEGDashDataset` is used. * **namespace** (*dict* *,* *optional*) – The namespace (e.g., globals()) into which the newly created classes will be injected. Defaults to the local globals() of this module. * **add_to_all** (*bool* *,* *default True*) – If True, the names of the newly created classes are added to the `__all__` list of the target namespace, making them importable with `from <module> import *`. * **Returns:** A dictionary mapping the names of the registered classes to the class types themselves. * **Return type:** dict[str, type] # eegdash.downloader module File downloading utilities for EEG data from cloud storage. This module provides functions for downloading EEG data files and BIDS dependencies from AWS S3 storage, with support for caching and progress tracking. It handles the communication between the EEGDash metadata database and the actual EEG data stored in the cloud.
### eegdash.downloader.download_files(files: Sequence[tuple[str, Path]] | Iterable[tuple[str, Path]], *, filesystem: S3FileSystem | None = None, skip_existing: bool = True, skip_missing: bool = False) → list[Path] Download multiple S3 URIs to local destinations. * **Parameters:** * **files** (*iterable* *of* *(**str* *,* *Path* *)*) – Pairs of (S3 URI, local destination path). * **filesystem** (*s3fs.S3FileSystem* *|* *None*) – Optional pre-created filesystem to reuse across multiple downloads. * **skip_existing** (*bool*) – If True, do not download files that already exist locally. * **skip_missing** (*bool*) – If True, skip files that do not exist on S3 instead of raising. ### eegdash.downloader.download_s3_file(s3_path: str, local_path: Path, *, filesystem: S3FileSystem | None = None) → Path Download a single file from S3 to a local path. Handles the download of a raw EEG data file from an S3 bucket, caching it at the specified local path. Creates parent directories if they do not exist. * **Parameters:** * **s3_path** (*str*) – The full S3 URI of the file to download. * **local_path** (*pathlib.Path*) – The local file path where the downloaded file will be saved. * **filesystem** (*s3fs.S3FileSystem* *|* *None*) – Optional pre-created filesystem to reuse across multiple downloads. * **Returns:** The local path to the downloaded file. * **Return type:** pathlib.Path ### eegdash.downloader.get_s3_filesystem() → S3FileSystem Get an anonymous S3 filesystem object. Initializes and returns an `s3fs.S3FileSystem` for anonymous access to public S3 buckets, configured for the ‘us-east-2’ region. * **Returns:** An S3 filesystem object. * **Return type:** s3fs.S3FileSystem ### eegdash.downloader.get_s3path(s3_bucket: str, filepath: str) → str Construct an S3 URI from a bucket and file path. * **Parameters:** * **s3_bucket** (*str*) – The S3 bucket name (e.g., “s3://my-bucket”). * **filepath** (*str*) – The path to the file within the bucket.
* **Returns:** The full S3 URI (e.g., “s3://my-bucket/path/to/file”). * **Return type:** str # eegdash.hbn package ## Submodules * [eegdash.hbn.preprocessing module](eegdash.hbn.preprocessing.md) * [eegdash.hbn.windows module](eegdash.hbn.windows.md) ## Module contents Healthy Brain Network (HBN) specific utilities and preprocessing. This module provides specialized functions for working with the Healthy Brain Network dataset, including preprocessing pipelines, annotation handling, and windowing utilities tailored for HBN EEG data analysis. ### eegdash.hbn.add_aux_anchors(raw: Raw, stim_desc: str = 'stimulus_anchor', resp_desc: str = 'response_anchor') → Raw Add auxiliary annotations for stimulus and response onsets. This function inspects existing “contrast_trial_start” annotations and adds new, zero-duration “anchor” annotations at the precise onsets of stimuli and responses for each trial. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object with “contrast_trial_start” annotations. * **stim_desc** (*str* *,* *default "stimulus_anchor"*) – The description for the new stimulus annotations. * **resp_desc** (*str* *,* *default "response_anchor"*) – The description for the new response annotations. * **Returns:** The raw object with the auxiliary annotations added. * **Return type:** mne.io.Raw ### eegdash.hbn.add_extras_columns(windows_concat_ds: BaseConcatDataset, original_concat_ds: BaseConcatDataset, desc: str = 'contrast_trial_start', keys: tuple = ('target', 'rt_from_stimulus', 'rt_from_trialstart', 'stimulus_onset', 'response_onset', 'correct', 'response_type')) → BaseConcatDataset Add columns from annotation extras to a windowed dataset’s metadata. This function propagates trial-level information stored in the extras of annotations to the metadata DataFrame of a WindowsDataset. * **Parameters:** * **windows_concat_ds** (*BaseConcatDataset*) – The windowed dataset whose metadata will be updated. 
* **original_concat_ds** (*BaseConcatDataset*) – The original (non-windowed) dataset containing the raw data and annotations with the extras to be added. * **desc** (*str* *,* *default "contrast_trial_start"*) – The description of the annotations to source the extras from. * **keys** (*tuple* *,* *default* *(* *...* *)*) – The keys to extract from each annotation’s extras dictionary and add as columns to the metadata. * **Returns:** The windows_concat_ds with updated metadata. * **Return type:** BaseConcatDataset ### eegdash.hbn.annotate_trials_with_target(raw: Raw, target_field: str = 'rt_from_stimulus', epoch_length: float = 2.0, require_stimulus: bool = True, require_response: bool = True) → Raw Create trial annotations with a specified target value. This function reads the BIDS events file associated with the raw object, builds a trial table, and creates new MNE annotations for each trial. The annotations are labeled “contrast_trial_start” and their extras dictionary is populated with trial metrics, including a “target” key. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object. Must have a single associated file name from which the BIDS path can be derived. * **target_field** (*str* *,* *default "rt_from_stimulus"*) – The column from the trial table to use as the “target” value in the annotation extras. * **epoch_length** (*float* *,* *default 2.0*) – The duration to set for each new annotation. * **require_stimulus** (*bool* *,* *default True*) – If True, only include trials that have a recorded stimulus event. * **require_response** (*bool* *,* *default True*) – If True, only include trials that have a recorded response event. * **Returns:** The raw object with the new annotations set. * **Return type:** mne.io.Raw * **Raises:** **KeyError** – If target_field is not a valid column in the built trial table. ### eegdash.hbn.build_trial_table(events_df: DataFrame) → DataFrame Build a table of contrast trials from an events DataFrame. 
This function processes a DataFrame of events (typically from a BIDS events.tsv file) to identify contrast trials and extract relevant metrics like stimulus onset, response onset, and reaction times. * **Parameters:** **events_df** (*pandas.DataFrame*) – A DataFrame containing event information, with at least “onset” and “value” columns. * **Returns:** A DataFrame where each row represents a single contrast trial, with columns for onsets, reaction times, and response correctness. * **Return type:** pandas.DataFrame ### *class* eegdash.hbn.hbn_ec_ec_reannotation Bases: `Preprocessor` Preprocessor to reannotate HBN data for eyes-open/eyes-closed events. This preprocessor is specifically designed for Healthy Brain Network (HBN) datasets. It identifies existing annotations for “instructed_toCloseEyes” and “instructed_toOpenEyes” and creates new, regularly spaced annotations for “eyes_closed” and “eyes_open” segments, respectively. This is useful for creating windowed datasets based on these new, more precise event markers. ### Notes This class inherits from `braindecode.preprocessing.Preprocessor` and is intended to be used within a braindecode preprocessing pipeline. #### transform(raw: Raw) → Raw Create new annotations for eyes-open and eyes-closed periods. This function finds the original “instructed_to…” annotations and generates new annotations every 2 seconds within specific time ranges relative to the original markers: - “eyes_closed”: 15s to 29s after “instructed_toCloseEyes” - “eyes_open”: 5s to 19s after “instructed_toOpenEyes” The original annotations in the mne.io.Raw object are replaced by this new set of annotations. * **Parameters:** **raw** (*mne.io.Raw*) – The raw MNE object containing the HBN data and original annotations. * **Returns:** The raw MNE object with the modified annotations. 
* **Return type:** mne.io.Raw ### eegdash.hbn.keep_only_recordings_with(desc: str, concat_ds: BaseConcatDataset) → BaseConcatDataset Filter a concatenated dataset to keep only recordings with a specific annotation. * **Parameters:** * **desc** (*str*) – The description of the annotation that must be present in a recording for it to be kept. * **concat_ds** (*BaseConcatDataset*) – The concatenated dataset to filter. * **Returns:** A new concatenated dataset containing only the filtered recordings. * **Return type:** BaseConcatDataset # eegdash.hbn.preprocessing module Preprocessing utilities specific to the Healthy Brain Network dataset. This module contains preprocessing classes and functions designed specifically for HBN EEG data, including specialized annotation handling for eyes-open/eyes-closed paradigms and other HBN-specific preprocessing steps. ### *class* eegdash.hbn.preprocessing.hbn_ec_ec_reannotation Bases: `Preprocessor` Preprocessor to reannotate HBN data for eyes-open/eyes-closed events. This preprocessor is specifically designed for Healthy Brain Network (HBN) datasets. It identifies existing annotations for “instructed_toCloseEyes” and “instructed_toOpenEyes” and creates new, regularly spaced annotations for “eyes_closed” and “eyes_open” segments, respectively. This is useful for creating windowed datasets based on these new, more precise event markers. ### Notes This class inherits from `braindecode.preprocessing.Preprocessor` and is intended to be used within a braindecode preprocessing pipeline. #### transform(raw: Raw) → Raw Create new annotations for eyes-open and eyes-closed periods. 
This function finds the original “instructed_to…” annotations and generates new annotations every 2 seconds within specific time ranges relative to the original markers: - “eyes_closed”: 15s to 29s after “instructed_toCloseEyes” - “eyes_open”: 5s to 19s after “instructed_toOpenEyes” The original annotations in the mne.io.Raw object are replaced by this new set of annotations. * **Parameters:** **raw** (*mne.io.Raw*) – The raw MNE object containing the HBN data and original annotations. * **Returns:** The raw MNE object with the modified annotations. * **Return type:** mne.io.Raw # eegdash.hbn.windows module Windowing and trial processing utilities for HBN datasets. This module provides functions for building trial tables, adding auxiliary anchors, annotating trials with targets, and filtering recordings based on various criteria. These utilities are specifically designed for working with HBN EEG data structures and experimental paradigms. ### eegdash.hbn.windows.add_aux_anchors(raw: Raw, stim_desc: str = 'stimulus_anchor', resp_desc: str = 'response_anchor') → Raw Add auxiliary annotations for stimulus and response onsets. This function inspects existing “contrast_trial_start” annotations and adds new, zero-duration “anchor” annotations at the precise onsets of stimuli and responses for each trial. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object with “contrast_trial_start” annotations. * **stim_desc** (*str* *,* *default "stimulus_anchor"*) – The description for the new stimulus annotations. * **resp_desc** (*str* *,* *default "response_anchor"*) – The description for the new response annotations. * **Returns:** The raw object with the auxiliary annotations added. 
* **Return type:** mne.io.Raw ### eegdash.hbn.windows.add_extras_columns(windows_concat_ds: BaseConcatDataset, original_concat_ds: BaseConcatDataset, desc: str = 'contrast_trial_start', keys: tuple = ('target', 'rt_from_stimulus', 'rt_from_trialstart', 'stimulus_onset', 'response_onset', 'correct', 'response_type')) → BaseConcatDataset Add columns from annotation extras to a windowed dataset’s metadata. This function propagates trial-level information stored in the extras of annotations to the metadata DataFrame of a WindowsDataset. * **Parameters:** * **windows_concat_ds** (*BaseConcatDataset*) – The windowed dataset whose metadata will be updated. * **original_concat_ds** (*BaseConcatDataset*) – The original (non-windowed) dataset containing the raw data and annotations with the extras to be added. * **desc** (*str* *,* *default "contrast_trial_start"*) – The description of the annotations to source the extras from. * **keys** (*tuple* *,* *default* *(* *...* *)*) – The keys to extract from each annotation’s extras dictionary and add as columns to the metadata. * **Returns:** The windows_concat_ds with updated metadata. * **Return type:** BaseConcatDataset ### eegdash.hbn.windows.annotate_trials_with_target(raw: Raw, target_field: str = 'rt_from_stimulus', epoch_length: float = 2.0, require_stimulus: bool = True, require_response: bool = True) → Raw Create trial annotations with a specified target value. This function reads the BIDS events file associated with the raw object, builds a trial table, and creates new MNE annotations for each trial. The annotations are labeled “contrast_trial_start” and their extras dictionary is populated with trial metrics, including a “target” key. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object. Must have a single associated file name from which the BIDS path can be derived. 
* **target_field** (*str* *,* *default "rt_from_stimulus"*) – The column from the trial table to use as the “target” value in the annotation extras. * **epoch_length** (*float* *,* *default 2.0*) – The duration to set for each new annotation. * **require_stimulus** (*bool* *,* *default True*) – If True, only include trials that have a recorded stimulus event. * **require_response** (*bool* *,* *default True*) – If True, only include trials that have a recorded response event. * **Returns:** The raw object with the new annotations set. * **Return type:** mne.io.Raw * **Raises:** **KeyError** – If target_field is not a valid column in the built trial table. ### eegdash.hbn.windows.build_trial_table(events_df: DataFrame) → DataFrame Build a table of contrast trials from an events DataFrame. This function processes a DataFrame of events (typically from a BIDS events.tsv file) to identify contrast trials and extract relevant metrics like stimulus onset, response onset, and reaction times. * **Parameters:** **events_df** (*pandas.DataFrame*) – A DataFrame containing event information, with at least “onset” and “value” columns. * **Returns:** A DataFrame where each row represents a single contrast trial, with columns for onsets, reaction times, and response correctness. * **Return type:** pandas.DataFrame ### eegdash.hbn.windows.keep_only_recordings_with(desc: str, concat_ds: BaseConcatDataset) → BaseConcatDataset Filter a concatenated dataset to keep only recordings with a specific annotation. * **Parameters:** * **desc** (*str*) – The description of the annotation that must be present in a recording for it to be kept. * **concat_ds** (*BaseConcatDataset*) – The concatenated dataset to filter. * **Returns:** A new concatenated dataset containing only the filtered recordings. * **Return type:** BaseConcatDataset # eegdash.http_api_client module HTTP API client for EEGDash REST API. 
### *class* eegdash.http_api_client.EEGDashAPIClient(api_url: str | None = None, database: str = 'eegdash', auth_token: str | None = None) Bases: `object` HTTP client for EEGDash API. * **Parameters:** * **api_url** (*str* *,* *optional*) – Base API URL. Default: [https://data.eegdash.org](https://data.eegdash.org) * **database** (*str* *,* *default "eegdash"*) – Database name (“eegdash”, “eegdash_staging”, or “eegdash_v1”). * **auth_token** (*str* *,* *optional*) – Auth token for admin write operations. #### count_documents(query: dict[str, Any] | None = None, \*\*kwargs) → int Count documents matching query. #### find(query: dict[str, Any] | None = None, limit: int | None = None, skip: int | None = None, \*\*kwargs) → list[dict[str, Any]] Query records. Auto-paginates if no limit specified. #### find_datasets(query: dict[str, Any] | None = None, limit: int = 1000) → list[dict[str, Any]] Find datasets matching query. #### find_one(query: dict[str, Any] | None = None, \*\*kwargs) → dict[str, Any] | None Find a single record. #### get_dataset(dataset_id: str) → dict[str, Any] | None Fetch a dataset document by ID. #### insert_many(records: list[dict[str, Any]]) → int Insert multiple records (requires auth). #### insert_one(record: dict[str, Any]) → str Insert single record (requires auth). #### update_dataset(dataset_id: str, update: dict[str, Any]) → int Update dataset metadata (requires auth). * **Parameters:** * **dataset_id** (*str*) – The dataset identifier. * **update** (*dict*) – Fields to update (will be wrapped in $set automatically). * **Returns:** Modified count (1 or 0). * **Return type:** int #### update_many(query: dict[str, Any], update: dict[str, Any]) → tuple[int, int] Update records matching query (requires auth). * **Parameters:** * **query** (*dict*) – Filter query to match records. * **update** (*dict*) – Fields to set (wrapped in $set automatically). 
* **Return type:** tuple of (matched_count, modified_count) #### upsert_many(records: list[dict[str, Any]]) → dict[str, int] Upsert multiple records (requires auth). New endpoint that uses bulk upsert based on dataset+bidspath. ### eegdash.http_api_client.get_client(api_url: str | None = None, database: str = 'eegdash', auth_token: str | None = None) → EEGDashAPIClient Get an API client instance. # eegdash.local_bids module Local BIDS discovery helpers. These utilities support offline workflows (no DB/S3) by discovering BIDS recordings on the filesystem and returning EEGDash v2 records. ### eegdash.local_bids.discover_local_bids_records(dataset_root: str | Path, filters: dict[str, Any]) → list[dict[str, Any]] Discover local BIDS recordings and build EEGDash v2 records. * **Parameters:** * **dataset_root** (*str* *|* *Path*) – Local dataset directory (e.g., `/path/to/ds005509`). * **filters** (*dict*) – Filters dict. Must include `'dataset'` and may include BIDS entities like `'subject'`, `'session'`, `'task'`, `'run'`, plus `'modality'` (default: `'eeg'`). * **Returns:** A list of v2 records, one for each matched recording file. * **Return type:** list[dict[str, Any]] ### Notes Matching is performed via `mne_bids.find_matching_paths()` using datatypes/suffixes derived from the `'modality'` filter. The returned records use `storage.backend='local'` and point `storage.base` at `dataset_root`. # eegdash.logging module Logging configuration for EEGDash. This module sets up centralized logging for the EEGDash package using Rich for enhanced console output formatting. It provides a consistent logging interface across all modules. 
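Because EEGDash routes its messages through the standard `logging` machinery, verbosity can be tuned from user code like for any other library. A minimal sketch, assuming the package logger is named `"eegdash"` (inferred from the package name, not confirmed by this page):

```python
import logging

# Turn on verbose output from the (assumed) "eegdash" logger.
logging.getLogger("eegdash").setLevel(logging.DEBUG)

# Restore default behaviour: delegate the level back to the root logger.
logging.getLogger("eegdash").setLevel(logging.NOTSET)
```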
# eegdash package ## Subpackages * [eegdash.dataset package](eegdash.dataset.md) * [Submodules](eegdash.dataset.md#submodules) * [eegdash.dataset.base module](eegdash.dataset.base.md) * [eegdash.dataset.bids_dataset module](eegdash.dataset.bids_dataset.md) * [eegdash.dataset.dataset module](eegdash.dataset.dataset.md) * [eegdash.dataset.exceptions module](eegdash.dataset.exceptions.md) * [eegdash.dataset.io module](eegdash.dataset.io.md) * [eegdash.dataset.registry module](eegdash.dataset.registry.md) * [Module contents](eegdash.dataset.md#module-contents) * [eegdash.hbn package](eegdash.hbn.md) * [Submodules](eegdash.hbn.md#submodules) * [eegdash.hbn.preprocessing module](eegdash.hbn.preprocessing.md) * [eegdash.hbn.windows module](eegdash.hbn.windows.md) * [Module contents](eegdash.hbn.md#module-contents) ## Submodules * [eegdash.api module](eegdash.api.md) * [eegdash.bids_metadata module](eegdash.bids_metadata.md) * [eegdash.const module](eegdash.const.md) * [eegdash.downloader module](eegdash.downloader.md) * [eegdash.http_api_client module](eegdash.http_api_client.md) * [eegdash.local_bids module](eegdash.local_bids.md) * [eegdash.logging module](eegdash.logging.md) * [eegdash.paths module](eegdash.paths.md) * [eegdash.schemas module](eegdash.schemas.md) * [EEGDash Data Schemas](eegdash.schemas.md#eegdash-data-schemas) * [Core Concepts](eegdash.schemas.md#core-concepts) * [Usage](eegdash.schemas.md#usage) ## Module contents EEGDash: A comprehensive platform for EEG data management and analysis. EEGDash provides a unified interface for accessing, querying, and analyzing large-scale EEG datasets. It integrates with cloud storage and REST APIs to streamline EEG research workflows. 
### *exception* eegdash.DataIntegrityError(message: str, record: dict[str, Any] | None = None, issues: list[str] | None = None, authors: list[str] | None = None, contact_info: list[str] | None = None, source_url: str | None = None) Bases: `EEGDashError` Raised when a dataset record has known data integrity issues. This exception is raised when attempting to load a record that has been flagged during ingestion as having missing companion files or other integrity problems. #### record The problematic record metadata. * **Type:** dict #### issues List of specific integrity issues found. * **Type:** list[str] #### authors Dataset authors who can be contacted about the issue. * **Type:** list[str] #### contact_info Contact information for reporting the issue. * **Type:** list[str] | None #### source_url URL to the dataset source for reporting issues. * **Type:** str | None ### Examples ```pycon >>> try: ... dataset.raw # Attempt to load data ... except DataIntegrityError as e: ... print(f"Cannot load: {e.issues}") ... print(f"Contact authors: {e.authors}") ``` #### *classmethod* from_record(record: dict[str, Any]) → DataIntegrityError Create a DataIntegrityError from a record with integrity issues. * **Parameters:** **record** (*dict*) – Record containing `_data_integrity_issues` and optionally `_dataset_authors`, `_dataset_contact`, `_source_url`. * **Returns:** Exception with all relevant context. * **Return type:** DataIntegrityError #### log_error() → None Log the error using the EEGDash logger with rich formatting. #### log_warning() → None Log the integrity issues as warnings (non-blocking). #### print_rich(console: Console | None = None) → None Print a rich formatted version of the error to the console. * **Parameters:** **console** (*Console* *,* *optional*) – Rich console to print to. If None, creates a new one. #### *classmethod* warn_from_record(record: dict[str, Any]) → None Log a warning about data integrity issues without raising an exception. 
Use this when you want to warn about issues but still allow loading. * **Parameters:** **record** (*dict*) – Record containing `_data_integrity_issues` and optionally `_dataset_authors`, `_dataset_contact`, `_source_url`. ### *class* eegdash.EEGChallengeDataset(release: str, cache_dir: str, mini: bool = True, query: dict | None = None, s3_bucket: str | None = None, \*\*kwargs) Bases: [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) A dataset helper for the EEG 2025 Challenge. This class simplifies access to the EEG 2025 Challenge datasets. It is a specialized version of `EEGDashDataset` that is pre-configured for the challenge’s data releases. It automatically maps a release name (e.g., “R1”) to the corresponding OpenNeuro dataset and handles the selection of subject subsets (e.g., “mini” release). * **Parameters:** * **release** (*str*) – The name of the challenge release to load. Must be one of the keys in `RELEASE_TO_OPENNEURO_DATASET_MAP` (e.g., “R1”, “R2”, …, “R11”). * **cache_dir** (*str*) – The local directory where the dataset will be downloaded and cached. * **mini** (*bool* *,* *default True*) – If True, the dataset is restricted to the official “mini” subset of subjects for the specified release. If False, all subjects for the release are included. * **query** (*dict* *,* *optional*) – An additional MongoDB-style query to apply as a filter. This query is combined with the release and subject filters using a logical AND. The query must not contain the `dataset` key, as this is determined by the `release` parameter. * **s3_bucket** (*str* *,* *optional*) – The base S3 bucket URI where the challenge data is stored. Defaults to the official challenge bucket. * **\*\*kwargs** – Additional keyword arguments that are passed directly to the `EEGDashDataset` constructor. * **Raises:** **ValueError** – If the specified `release` is unknown, or if the `query` argument contains a `dataset` key. 
Also raised if `mini` is True and a requested subject is not part of the official mini-release subset. #### SEE ALSO [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset) : The base class for creating datasets from queries. ### *class* eegdash.EEGDash(*, database: str = 'eegdash', api_url: str | None = None, auth_token: str | None = None) Bases: `object` High-level interface to the EEGDash metadata database. Provides methods to query, insert, and update metadata records stored in the EEGDash database via a REST API gateway. For working with collections of recordings as PyTorch datasets, prefer [`EEGDashDataset`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset). Create a new EEGDash client. * **Parameters:** * **database** (*str* *,* *default "eegdash"*) – Name of the MongoDB database to connect to. Common values: `"eegdash"` (production), `"eegdash_staging"` (staging), `"eegdash_v1"` (legacy archive). * **api_url** (*str* *,* *optional*) – Override the default API URL. If not provided, uses the default public endpoint or the `EEGDASH_API_URL` environment variable. * **auth_token** (*str* *,* *optional*) – Authentication token for admin write operations. Not required for public read operations. ### Examples ```pycon >>> eegdash = EEGDash() # production >>> eegdash = EEGDash(database="eegdash_staging") # staging >>> records = eegdash.find({"dataset": "ds002718"}) ``` #### count(query: dict[str, Any] = None, /, \*\*kwargs) → int Count documents matching the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of matching documents.
* **Return type:** int ### Examples ```pycon >>> eeg = EEGDash() >>> count = eeg.count({}) # count all >>> count = eeg.count(dataset="ds002718") # count by dataset ``` #### exists(query: dict[str, Any] = None, /, \*\*kwargs) → bool Check if at least one record matches the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** True if at least one matching record exists; False otherwise. * **Return type:** bool ### Examples ```pycon >>> eeg = EEGDash() >>> eeg.exists(dataset="ds002718") # check by dataset >>> eeg.exists({"data_name": "ds002718_sub-001_eeg.set"}) # check by data_name ``` #### find(query: dict[str, Any] = None, /, \*\*kwargs) → list[Mapping[str, Any]] Find records in the collection. ### Examples ```pycon >>> from eegdash import EEGDash >>> eegdash = EEGDash() >>> eegdash.find({"dataset": "ds002718", "subject": {"$in": ["012", "013"]}}) # pre-built query >>> eegdash.find(dataset="ds002718", subject="012") # keyword filters >>> eegdash.find(dataset="ds002718", subject=["012", "013"]) # sequence -> $in >>> eegdash.find({}) # fetch all (use with care) >>> eegdash.find({"dataset": "ds002718"}, subject=["012", "013"]) # combine query + kwargs (AND) ``` * **Parameters:** * **query** (*dict* *,* *optional*) – Complete MongoDB query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters that are converted to a MongoDB query. Values can be scalars (e.g., `"sub-01"`) or sequences (translated to `$in` queries). Special parameters: `limit` (int) and `skip` (int) for pagination. * **Returns:** DB records that match the query. * **Return type:** list of dict #### find_datasets(query: dict[str, Any] | None = None, limit: int = 1000) → list[Mapping[str, Any]] Find datasets matching query. * **Parameters:** * **query** (*dict*) – Filter query.
* **limit** (*int*) – Max number of datasets to return. * **Returns:** List of dataset metadata documents. * **Return type:** list of dict #### find_one(query: dict[str, Any] = None, /, \*\*kwargs) → Mapping[str, Any] | None Find a single record matching the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** The first matching record, or None if no match. * **Return type:** dict or None ### Examples ```pycon >>> eeg = EEGDash() >>> record = eeg.find_one(data_name="ds002718_sub-001_eeg.set") ``` #### get_dataset(dataset_id: str) → Mapping[str, Any] | None Fetch metadata for a specific dataset. * **Parameters:** **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **Returns:** The dataset metadata document, or None if not found. * **Return type:** dict or None #### insert(records: dict[str, Any] | list[dict[str, Any]]) → int Insert one or more records (requires auth_token). * **Parameters:** **records** (*dict* *or* *list* *of* *dict*) – A single record or list of records to insert. * **Returns:** Number of records inserted. * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.insert({"dataset": "ds001", "subject": "01", ...}) # single >>> eeg.insert([record1, record2, record3]) # batch ``` #### update_dataset(dataset_id: str, update: dict[str, Any]) → int Update metadata for a specific dataset (requires auth_token). * **Parameters:** * **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **update** (*dict*) – Dictionary of fields to update. * **Returns:** Number of documents modified (0 or 1).
* **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.update_dataset("ds002718", {"clinical.is_clinical": True}) ``` #### update_field(query: dict[str, Any] = None, /, *, update: dict[str, Any], \*\*kwargs) → tuple[int, int] Update fields on records matching the query (requires auth_token). Use this to add or modify fields across matching records, e.g., after re-extracting entities with an improved algorithm. * **Parameters:** * **query** (*dict* *,* *optional*) – Filter query to match records. This is a positional-only argument. * **update** (*dict*) – Fields to update. Keys are field names, values are new values. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of records matched and actually modified. * **Return type:** tuple of (matched_count, modified_count) ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> # Update entities for all records in a dataset >>> eeg.update_field({"dataset": "ds002718"}, update={"entities": {"subject": "01"}}) >>> # Using kwargs for filter >>> eeg.update_field(dataset="ds002718", update={"entities": new_entities}) >>> # Combine query + kwargs >>> eeg.update_field({"dataset": "ds002718"}, subject="01", update={"entities": new_entities}) ``` ### *class* eegdash.EEGDashDataset(cache_dir: str | Path, query: dict[str, Any] = None, description_fields: list[str] | None = None, s3_bucket: str | None = None, records: list[dict] | None = None, download: bool = True, n_jobs: int = -1, eeg_dash_instance: Any = None, database: str | None = None, auth_token: str | None = None, on_error: str = 'raise', \*\*kwargs) Bases: `BaseConcatDataset` Create a new EEGDashDataset from a given query or local BIDS dataset directory and dataset name. An EEGDashDataset is a pooled collection of EEGDashBaseDataset instances (individual recordings) and is a subclass of braindecode’s BaseConcatDataset.
### Examples Basic usage with dataset and subject filtering: ```pycon >>> from eegdash import EEGDashDataset >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject="012" ... ) >>> print(f"Number of recordings: {len(dataset)}") ``` Filter by multiple subjects and specific task: ```pycon >>> subjects = ["012", "013", "014"] >>> dataset = EEGDashDataset( ... cache_dir="./data", ... dataset="ds002718", ... subject=subjects, ... task="RestingState" ... ) ``` Load and inspect EEG data from recordings: ```pycon >>> if len(dataset) > 0: ... recording = dataset[0] ... raw = recording.load() ... print(f"Sampling rate: {raw.info['sfreq']} Hz") ... print(f"Number of channels: {len(raw.ch_names)}") ... print(f"Duration: {raw.times[-1]:.1f} seconds") ``` Advanced filtering with raw MongoDB queries: ```pycon >>> from eegdash import EEGDashDataset >>> query = { ... "dataset": "ds002718", ... "subject": {"$in": ["012", "013"]}, ... "task": "RestingState" ... } >>> dataset = EEGDashDataset(cache_dir="./data", query=query) ``` Working with dataset collections and braindecode integration: ```pycon >>> # EEGDashDataset is a braindecode BaseConcatDataset >>> for i, recording in enumerate(dataset): ... if i >= 2: # limit output ... break ... print(f"Recording {i}: {recording.description}") ... raw = recording.load() ... print(f" Channels: {len(raw.ch_names)}, Duration: {raw.times[-1]:.1f}s") ``` * **Parameters:** * **cache_dir** (*str* *|* *Path*) – Directory where data are cached locally. * **query** (*dict* *|* *None*) – Raw MongoDB query to filter records. If provided, it is merged with keyword filtering arguments (see `**kwargs`) using logical AND. You must provide at least a `dataset` (either in `query` or as a keyword argument). Only fields in `ALLOWED_QUERY_FIELDS` are considered for filtering. * **dataset** (*str*) – Dataset identifier (e.g., `"ds002718"`). Required if `query` does not already specify a dataset. 
* **task** (*str* *|* *list* *[**str* *]*) – Task name(s) to filter by (e.g., `"RestingState"`). * **subject** (*str* *|* *list* *[**str* *]*) – Subject identifier(s) to filter by (e.g., `"NDARCA153NKE"`). * **session** (*str* *|* *list* *[**str* *]*) – Session identifier(s) to filter by (e.g., `"1"`). * **run** (*str* *|* *list* *[**str* *]*) – Run identifier(s) to filter by (e.g., `"1"`). * **description_fields** (*list* *[**str* *]*) – Fields to extract from each record and include in dataset descriptions (e.g., “subject”, “session”, “run”, “task”). * **s3_bucket** (*str* *|* *None*) – Optional S3 bucket URI (e.g., “s3://mybucket”) to use instead of the default OpenNeuro bucket when downloading data files. * **records** (*list* *[**dict* *]* *|* *None*) – Pre-fetched metadata records. If provided, the dataset is constructed directly from these records and no MongoDB query is performed. * **download** (*bool* *,* *default True*) – If False, load from local BIDS files only. Local data are expected under `cache_dir / dataset`; no DB or S3 access is attempted. * **n_jobs** (*int*) – Number of parallel jobs to use where applicable (-1 uses all cores). * **eeg_dash_instance** (*EEGDash* *|* *None*) – Optional existing EEGDash client to reuse for DB queries. If None, a new client is created on demand; no client is used when `download=False`. * **database** (*str* *|* *None*) – Database name to use (e.g., “eegdash”, “eegdash_staging”). If None, uses the default database. * **auth_token** (*str* *|* *None*) – Authentication token for accessing protected databases. Required for staging or admin operations. * **on_error** (*str* *,* *default "raise"*) – How to handle `DataIntegrityError` when accessing `.raw` on individual recordings: - `"raise"` (default): propagate the exception. - `"warn"`: log the error as a warning and set `.raw` to `None`. - `"skip"`: silently set `.raw` to `None`.
Use [`drop_bad()`](eegdash.EEGDashDataset.md#eegdash.EEGDashDataset.drop_bad) after iteration to remove skipped recordings. * **\*\*kwargs** (*dict*) – Additional keyword arguments serving two purposes: - Filtering: any keys present in `ALLOWED_QUERY_FIELDS` are treated as query filters (e.g., `dataset`, `subject`, `task`, …). - Dataset options: remaining keys are forwarded to `EEGDashRaw`. #### *property* cumulative_sizes *: list[int]* Recompute cumulative sizes from current dataset lengths. Overrides the cached version from BaseConcatDataset because individual dataset lengths can change after lazy raw loading (estimated ntimes from JSON metadata may differ from actual n_times in the raw file). #### download_all(n_jobs: int | None = None) → None Download missing remote files in parallel. * **Parameters:** **n_jobs** (*int* *|* *None*) – Number of parallel workers to use. If None, defaults to `self.n_jobs`. #### drop_bad() → list[dict] Remove skipped datasets and return their records. Call after accessing `.raw` on all datasets (e.g. after iteration or preprocessing) to clean up the dataset list. * **Returns:** Records that were removed because loading failed. * **Return type:** list of dict #### drop_short(min_samples: int) → list[dict] Remove recordings shorter than *min_samples* and return their records. This is useful when downstream processing (e.g., fixed-length windowing) requires a minimum number of samples per recording. Recordings whose `.raw` is `None` (failed to load) are also dropped. * **Parameters:** **min_samples** (*int*) – Minimum number of time-domain samples a recording must have to be kept. * **Returns:** Records that were removed. * **Return type:** list of dict #### save(path, overwrite=False) Save the dataset to disk. * **Parameters:** * **path** (*str* *or* *Path*) – Destination file path. * **overwrite** (*bool* *,* *default False*) – If True, overwrite existing file. 
* **Return type:** None # eegdash.paths module Path utilities and cache directory management. This module provides functions for resolving consistent cache directories and path management throughout the EEGDash package, with integration to MNE-Python’s configuration system. ### eegdash.paths.get_default_cache_dir() → Path Resolve the default cache directory for EEGDash data. The function determines the cache directory based on the following priority order: 1. The path specified by the `EEGDASH_CACHE_DIR` environment variable. 2. A hidden directory named `.eegdash_cache` in the current working directory. 3. The path specified by the `MNE_DATA` configuration in the MNE-Python config file (fallback). * **Returns:** The resolved, absolute path to the default cache directory. * **Return type:** pathlib.Path # eegdash.schemas module ## EEGDash Data Schemas This module defines the core data structures used throughout EEGDash to represent neuroimaging datasets and individual recording files. It provides two types of schemas for each core object: 1. **Pydantic Models** (`*Model`): Used for strict data validation, serialization, and schema generation (e.g., for APIs). 2. **TypedDict Definitions**: Used for high-performance internal usage, static type checking, and efficient loading of large metadata collections. ### Core Concepts The data model is organized into a two-level hierarchy: * **Dataset**: Represents a collection of data (e.g., “ds001785”). It contains study-level metadata such as: \* Identity (ID, name, source) \* Demographics (subject ages, sex distribution) \* Clinical (diagnosis, purpose) \* Experiment Paradigm (tasks, stimuli) \* Provenance (timestamps, authors) * **Record**: Represents a single data file within a dataset (e.g., a specific .vhdr or .edf file).
It is optimized for fast access and contains: \* File location (storage backend, path) \* BIDS Entities (subject, session, task, run) \* Basic signal properties (sampling rate, channel names) ### Usage Creating a Dataset: ```python from eegdash.schemas import create_dataset ds = create_dataset( dataset_id="ds001", name="My Study", subjects_count=20, ages=[20, 25, 30], recording_modality=["eeg"], ) ``` Creating a Record: ```python from eegdash.schemas import create_record rec = create_record( dataset="ds001", storage_base="https://my.storage.com", bids_relpath="sub-01/eeg/sub-01_task-rest_eeg.edf", subject="01", task="rest", ) ``` ### *class* eegdash.schemas.Clinical Bases: `TypedDict` Clinical classification metadata (dataset-level). #### Deprecated Deprecated: use the `tags` field with the `pathology` key instead. #### is_clinical True if the dataset contains clinical population data. * **Type:** bool #### purpose The clinical condition or purpose (e.g., “epilepsy”, “depression”). * **Type:** str | None #### is_clinical *: bool* #### purpose *: str | None* ### *class* eegdash.schemas.Dataset Bases: `TypedDict` TypedDict schema for a full Dataset document. This dictionary represents all metadata available for a study/dataset. #### dataset_id Unique identifier (e.g., “ds001785”). * **Type:** str #### name Descriptive title of the dataset. * **Type:** str #### canonical_name Canonical / community-recognised name(s) for the dataset, each a valid Python identifier (e.g. `["BrainTreeBank"]`, `["SleepEDF", "SleepEDFPlus"]`). Used to register importable class aliases alongside the `DS…`-style ID. Empty list or `None` means no alias is registered. * **Type:** list[str] | None #### source Origin source (e.g., “openneuro”, “nemar”). * **Type:** str #### readme Content of the dataset’s README file. * **Type:** str | None #### recording_modality List of recording modalities (e.g., [“eeg”, “meg”]).
* **Type:** list[str] #### datatypes BIDS datatypes present (e.g., [“eeg”, “anat”]). * **Type:** list[str] #### experimental_modalities Stimulus types used (e.g., [“visual”, “auditory”]). * **Type:** list[str] | None #### bids_version Version of the BIDS standard used. * **Type:** str | None #### license License string (e.g., “CC0”). * **Type:** str | None #### authors List of author names. * **Type:** list[str] #### funding List of funding sources. * **Type:** list[str] #### dataset_doi Digital Object Identifier for the dataset. * **Type:** str | None #### associated_paper_doi DOI of the paper associated with the dataset. * **Type:** str | None #### tasks List of task names found in the dataset. * **Type:** list[str] #### sessions List of session names. * **Type:** list[str] #### total_files Total file count. * **Type:** int | None #### size_bytes Total dataset size in bytes. * **Type:** int | None #### data_processed Indicates if the data has been pre-processed. * **Type:** bool | None #### study_domain General domain of the study. * **Type:** str | None #### study_design Description of the study design. * **Type:** str | None #### contributing_labs List of labs contributing to the dataset. * **Type:** list[str] | None #### n_contributing_labs Count of contributing labs. * **Type:** int | None #### demographics Summary of subject demographics. * **Type:** Demographics #### tags Classification tags (pathology, modality, type). * **Type:** Tags #### clinical Clinical classification details (deprecated, use tags instead). * **Type:** Clinical #### external_links Links to external resources. * **Type:** ExternalLinks #### repository_stats Stats for the source repository (if applicable). * **Type:** RepositoryStats | None #### senior_author Name of the senior author. * **Type:** str | None #### contact_info Contact emails or names. * **Type:** list[str] | None #### timestamps Timestamps for data processing and creation. 
* **Type:** Timestamps #### nemar_citation_count Number of papers citing this dataset (from NEMAR citations repository). * **Type:** int | None #### associated_paper_doi *: str | None* #### authors *: list[str]* #### bids_version *: str | None* #### canonical_name *: list[str] | None* #### clinical *: Clinical* #### contact_info *: list[str] | None* #### contributing_labs *: list[str] | None* #### data_processed *: bool | None* #### dataset_doi *: str | None* #### dataset_id *: str* #### datatypes *: list[str]* #### demographics *: Demographics* #### experimental_modalities *: list[str] | None* #### external_links *: ExternalLinks* #### funding *: list[str]* #### ingestion_fingerprint *: str | None* #### license *: str | None* #### n_contributing_labs *: int | None* #### name *: str* #### nemar_citation_count *: int | None* #### readme *: str | None* #### recording_modality *: list[str]* #### repository_stats *: RepositoryStats | None* #### senior_author *: str | None* #### sessions *: list[str]* #### size_bytes *: int | None* #### source *: str* #### storage *: Storage | None* #### study_design *: str | None* #### study_domain *: str | None* #### tags *: Tags* #### tasks *: list[str]* #### timestamps *: Timestamps* #### total_files *: int | None* ### *class* eegdash.schemas.DatasetModel(\*, dataset_id: Annotated[str, MinLen(min_length=1)], source: Annotated[str, MinLen(min_length=1)], recording_modality: Annotated[list[str], MinLen(min_length=1)], ingestion_fingerprint: str | None = None, senior_author: str | None = None, contact_info: list[str] | None = None, timestamps: dict[str, Any] | None = None, storage: StorageModel | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for dataset-level metadata. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name. #### contact_info *: list[str] | None* #### dataset_id *: str* #### ingestion_fingerprint *: str | None* #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### recording_modality *: list[str]* #### senior_author *: str | None* #### source *: str* #### storage *: StorageModel | None* #### timestamps *: dict[str, Any] | None* ### *class* eegdash.schemas.Demographics Bases: `TypedDict` Subject demographics summary for a dataset. #### subjects_count Total number of subjects. * **Type:** int #### ages List of all subject ages (if available). * **Type:** list[int] #### age_min Minimum age in the cohort. * **Type:** int | None #### age_max Maximum age in the cohort. * **Type:** int | None #### age_mean Mean age of subjects. * **Type:** float | None #### species Species of subjects (e.g., “Human”, “Mouse”). * **Type:** str | None #### sex_distribution Count of subjects by sex (e.g., {“m”: 50, “f”: 45}). * **Type:** dict[str, int] | None #### handedness_distribution Count of subjects by handedness (e.g., {“r”: 80, “l”: 15}). * **Type:** dict[str, int] | None #### age_max *: int | None* #### age_mean *: float | None* #### age_min *: int | None* #### ages *: list[int]* #### handedness_distribution *: dict[str, int] | None* #### sex_distribution *: dict[str, int] | None* #### species *: str | None* #### subjects_count *: int* ### *class* eegdash.schemas.Entities Bases: `TypedDict` BIDS entities parsed from the file path. #### subject Subject label (e.g., “01”). * **Type:** str | None #### session Session label (e.g., “pre”). * **Type:** str | None #### task Task label (e.g., “rest”). * **Type:** str | None #### run Run label (e.g., “1” or “01”). * **Type:** str | None #### acquisition Acquisition label (e.g., “bipolar”, “PSG”). 
* **Type:** str | None #### acquisition *: str | None* #### run *: str | None* #### session *: str | None* #### subject *: str | None* #### task *: str | None* ### *class* eegdash.schemas.EntitiesModel(\*, subject: str | None = None, session: str | None = None, task: str | None = None, run: str | None = None, acquisition: str | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for BIDS entities. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### acquisition *: str | None* #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### run *: str | None* #### session *: str | None* #### subject *: str | None* #### task *: str | None* ### *class* eegdash.schemas.ExternalLinks Bases: `TypedDict` Relevant external hyperlinks for the dataset. #### source_url URL to the primary data source (e.g. OpenNeuro page). * **Type:** str | None #### osf_url URL to the Open Science Framework project. * **Type:** str | None #### github_url URL to the associated GitHub repository. * **Type:** str | None #### paper_url URL to the primary publication. * **Type:** str | None #### github_url *: str | None* #### osf_url *: str | None* #### paper_url *: str | None* #### source_url *: str | None* ### *class* eegdash.schemas.ManifestFileModel(\*, path: str | None = None, name: str | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a file entry in a manifest. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name.
#### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### name *: str | None* #### path *: str | None* #### path_or_name() → str Return the path or name of the file. ### *class* eegdash.schemas.ManifestModel(\*, source: str | None = None, files: list[str | ManifestFileModel], \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a dataset file manifest. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### files *: list[str | ManifestFileModel]* #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### source *: str | None* ### *class* eegdash.schemas.Record Bases: `TypedDict` TypedDict schema for a Record document. Represents a single data file and its metadata. This structure is kept flat and minimal to ensure fast loading times when querying millions of records. #### dataset Foreign key matching `Dataset.dataset_id`. * **Type:** str #### data_name Unique name for the data item (e.g., “ds001_sub-01_task-rest”). * **Type:** str #### bidspath Legacy path identifier (e.g., “ds001/sub-01/eeg/…”). * **Type:** str #### bids_relpath Standard BIDS relative path (e.g., “sub-01/eeg/…”). * **Type:** str #### datatype BIDS datatype (e.g., “eeg”). * **Type:** str #### suffix Filename suffix (e.g., “eeg”). * **Type:** str #### extension File extension (e.g., “.vhdr”). * **Type:** str #### recording_modality Modality of the recording. * **Type:** list[str] | None #### entities BIDS entities dict (subject, session, etc.). * **Type:** Entities #### entities_mne BIDS entities sanitized for compatibility with MNE-Python (e.g. numeric runs).
* **Type:** Entities #### storage Storage location details. * **Type:** Storage #### ch_names List of channel names. * **Type:** list[str] | None #### sampling_frequency Sampling rate in Hz. * **Type:** float | None #### nchans Channel count. * **Type:** int | None #### ntimes Number of time points. * **Type:** int | None #### digested_at Timestamp of when this record was processed. * **Type:** str #### bids_relpath *: str* #### bidspath *: str* #### ch_names *: list[str] | None* #### data_name *: str* #### dataset *: str* #### datatype *: str* #### digested_at *: str* #### entities *: Entities* #### entities_mne *: Entities* #### extension *: str* #### nchans *: int | None* #### ntimes *: int | None* #### recording_modality *: list[str] | None* #### sampling_frequency *: float | None* #### storage *: Storage* #### suffix *: str* ### *class* eegdash.schemas.RecordModel(\*, dataset: Annotated[str, MinLen(min_length=1)], bids_relpath: Annotated[str, MinLen(min_length=1)], storage: StorageModel, recording_modality: Annotated[list[str], MinLen(min_length=1)], datatype: str | None = None, suffix: str | None = None, extension: str | None = None, entities: EntitiesModel | dict[str, Any] | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a single recording file. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### bids_relpath *: str* #### dataset *: str* #### datatype *: str | None* #### entities *: EntitiesModel | dict[str, Any] | None* #### extension *: str | None* #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
#### recording_modality *: list[str]* #### storage *: StorageModel* #### suffix *: str | None* ### *class* eegdash.schemas.RepositoryStats Bases: `TypedDict` Statistics for git-based repositories (e.g. GIN). #### stars Number of stars. * **Type:** int #### forks Number of forks. * **Type:** int #### watchers Number of watchers. * **Type:** int #### forks *: int* #### stars *: int* #### watchers *: int* ### *class* eegdash.schemas.Storage Bases: `TypedDict` Remote storage location details. #### backend Storage backend protocol. * **Type:** {‘s3’, ‘https’, ‘local’} #### base Base URI (e.g., “s3://openneuro.org/ds000001”). * **Type:** str #### raw_key Path relative to base to reach the file. * **Type:** str #### dep_keys Paths relative to base for sidecar files (e.g., .json, .vhdr). * **Type:** list[str] #### backend *: Literal['s3', 'https', 'local']* #### base *: str* #### dep_keys *: list[str]* #### raw_key *: str* ### *class* eegdash.schemas.StorageModel(\*, backend: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], base: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], raw_key: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], dep_keys: list[str] = \<factory>, \*\*extra_data: ~typing.Any) Bases: `BaseModel` Pydantic model for storage location details. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### backend *: str* #### base *: str* #### dep_keys *: list[str]* #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### raw_key *: str* ### *class* eegdash.schemas.Timestamps Bases: `TypedDict` Processing and lifecycle timestamps. #### digested_at ISO 8601 timestamp of when the data was processed by EEGDash.
* **Type:** str #### dataset_created_at ISO 8601 timestamp of when the dataset was originally created. * **Type:** str | None #### dataset_modified_at ISO 8601 timestamp of when the dataset was last updated. * **Type:** str | None #### dataset_created_at *: str | None* #### dataset_modified_at *: str | None* #### digested_at *: str* ### eegdash.schemas.create_dataset(\*, dataset_id: str, name: str | None = None, canonical_name: list[str] | None = None, source: str = 'openneuro', readme: str | None = None, recording_modality: list[str] | None = None, datatypes: list[str] | None = None, modalities: list[str] | None = None, experimental_modalities: list[str] | None = None, bids_version: str | None = None, license: str | None = None, authors: list[str] | None = None, funding: list[str] | None = None, dataset_doi: str | None = None, associated_paper_doi: str | None = None, tasks: list[str] | None = None, sessions: list[str] | None = None, total_files: int | None = None, size_bytes: int | None = None, data_processed: bool | None = None, study_domain: str | None = None, study_design: str | None = None, subjects_count: int | None = None, ages: list[int] | None = None, age_mean: float | None = None, species: str | None = None, sex_distribution: dict[str, int] | None = None, handedness_distribution: dict[str, int] | None = None, contributing_labs: list[str] | None = None, tags_pathology: list[str] | None = None, tags_modality: list[str] | None = None, tags_type: list[str] | None = None, is_clinical: bool | None = None, clinical_purpose: str | None = None, source_url: str | None = None, osf_url: str | None = None, github_url: str | None = None, paper_url: str | None = None, stars: int | None = None, forks: int | None = None, watchers: int | None = None, senior_author: str | None = None, contact_info: list[str] | None = None, digested_at: str | None = None, dataset_created_at: str | None = None, dataset_modified_at: str | None = None, storage: Storage | None = None) → Dataset
Create a Dataset document. This helper function constructs a `Dataset` TypedDict with default values and logic to handle nested structures like demographics, clinical info, and external links. * **Parameters:** * **dataset_id** (*str*) – Dataset identifier (e.g., “ds001785”). * **name** (*str* *,* *optional*) – Dataset title/name. * **canonical_name** (*list* *[**str* *]* *,* *optional*) – Canonical / community-recognised name(s) for the dataset (each a valid Python identifier, e.g. `["BrainTreeBank"]` or `["SleepEDF", "SleepEDFPlus"]`). Used by the dataset class registry to expose importable aliases. Empty list or `None` registers no aliases. * **source** (*str* *,* *default "openneuro"*) – Data source (“openneuro”, “nemar”, “gin”). * **recording_modality** (*list* *[**str* *]* *,* *optional*) – Recording types (e.g., [“eeg”, “meg”, “ieeg”]). * **datatypes** (*list* *[**str* *]* *,* *optional*) – BIDS datatypes present in the dataset (e.g., [“eeg”, “anat”, “beh”]). * **experimental_modalities** (*list* *[**str* *]* *,* *optional*) – Stimulus/experimental modalities (e.g., [“visual”, “auditory”, “tactile”]). * **bids_version** (*str* *,* *optional*) – BIDS version of the dataset. * **license** (*str* *,* *optional*) – Dataset license (e.g., “CC0”, “CC-BY-4.0”). * **authors** (*list* *[**str* *]* *,* *optional*) – Dataset authors. * **funding** (*list* *[**str* *]* *,* *optional*) – Funding sources. * **dataset_doi** (*str* *,* *optional*) – Dataset DOI. * **associated_paper_doi** (*str* *,* *optional*) – DOI of associated publication. * **tasks** (*list* *[**str* *]* *,* *optional*) – Tasks in the dataset. * **sessions** (*list* *[**str* *]* *,* *optional*) – Sessions in the dataset. * **total_files** (*int* *,* *optional*) – Total number of files. * **size_bytes** (*int* *,* *optional*) – Total size in bytes. * **data_processed** (*bool* *,* *optional*) – Whether data is processed. * **study_domain** (*str* *,* *optional*) – Study domain/topic. 
* **study_design** (*str* *,* *optional*) – Study design description. * **subjects_count** (*int* *,* *optional*) – Number of subjects. * **ages** (*list* *[**int* *]* *,* *optional*) – Subject ages. * **age_mean** (*float* *,* *optional*) – Mean age of subjects. * **species** (*str* *,* *optional*) – Species (e.g., “Human”). * **sex_distribution** (*dict* *[**str* *,* *int* *]* *,* *optional*) – Sex distribution (e.g., {“m”: 50, “f”: 45}). * **handedness_distribution** (*dict* *[**str* *,* *int* *]* *,* *optional*) – Handedness distribution (e.g., {“r”: 80, “l”: 15}). * **contributing_labs** (*list* *[**str* *]* *,* *optional*) – Labs that contributed data (for multi-site studies). * **is_clinical** (*bool* *,* *optional*) – Whether this is clinical data. * **clinical_purpose** (*str* *,* *optional*) – Clinical purpose (e.g., “epilepsy”, “depression”). * **paradigm_modality** (*str* *,* *optional*) – Experimental modality (e.g., “visual”, “auditory”, “text”, “multisensory”, “resting_state”). * **cognitive_domain** (*str* *,* *optional*) – Cognitive domain (e.g., “attention”, “memory”, “motor”). * **is_10_20_system** (*bool* *,* *optional*) – Whether electrodes follow the 10-20 system. * **source_url** (*str* *,* *optional*) – Primary URL to the dataset source. * **osf_url** (*str* *,* *optional*) – Open Science Framework URL. * **github_url** (*str* *,* *optional*) – GitHub repository URL. * **paper_url** (*str* *,* *optional*) – URL to associated paper. * **stars** (*int* *,* *optional*) – Repository stars count (for git-based sources). * **forks** (*int* *,* *optional*) – Repository forks count. * **watchers** (*int* *,* *optional*) – Repository watchers count. * **digested_at** (*str* *,* *optional*) – ISO 8601 timestamp. If not provided, no timestamp is set (for deterministic output). * **dataset_modified_at** (*str* *,* *optional*) – Last modification timestamp. * **Returns:** A fully populated Dataset document. 
* **Return type:** Dataset ### eegdash.schemas.create_record(\*, dataset: str, storage_base: str, bids_relpath: str, subject: str | None = None, session: str | None = None, task: str | None = None, run: str | None = None, acquisition: str | None = None, dep_keys: list[str] | None = None, datatype: str = 'eeg', suffix: str = 'eeg', storage_backend: Literal['s3', 'https', 'local'] = 's3', recording_modality: list[str] | None = None, ch_names: list[str] | None = None, sampling_frequency: float | None = None, nchans: int | None = None, ntimes: int | None = None, digested_at: str | None = None) → Record Create an EEGDash record. Helper to construct a valid `Record` TypedDict. * **Parameters:** * **dataset** (*str*) – Dataset identifier (e.g., “ds000001”). * **storage_base** (*str*) – Remote storage base URI (e.g., “s3://openneuro.org/ds000001”). * **bids_relpath** (*str*) – BIDS-relative path to the raw file (e.g., “sub-01/eeg/sub-01_task-rest_eeg.vhdr”). * **subject** (*str* *,* *optional*) – BIDS entities. * **session** (*str* *,* *optional*) – BIDS entities. * **task** (*str* *,* *optional*) – BIDS entities. * **run** (*str* *,* *optional*) – BIDS entities. * **acquisition** (*str* *,* *optional*) – BIDS entities. * **dep_keys** (*list* *[**str* *]* *,* *optional*) – Dependency paths relative to storage_base. * **datatype** (*str* *,* *default "eeg"*) – BIDS datatype. * **suffix** (*str* *,* *default "eeg"*) – BIDS suffix. * **storage_backend** ( *{"s3"* *,* *"https"* *,* *"local"}* *,* *default "s3"*) – Storage backend type. * **recording_modality** (*list* *[**str* *]* *,* *optional*) – Recording modalities (e.g., [“eeg”, “meg”, “ieeg”]). * **digested_at** (*str* *,* *optional*) – ISO 8601 timestamp. Defaults to current time. * **Returns:** A slim EEGDash record optimized for loading. * **Return type:** Record ### Notes Clinical and paradigm info is stored at the Dataset level, not per-file. ### Examples ```pycon >>> record = create_record( ... 
dataset="ds000001", ... storage_base="s3://openneuro.org/ds000001", ... bids_relpath="sub-01/eeg/sub-01_task-rest_eeg.vhdr", ... subject="01", ... task="rest", ... ) ``` ### eegdash.schemas.validate_dataset(dataset: dict[str, Any]) → list[str] Validate a dataset has required fields. Returns list of errors. ### eegdash.schemas.validate_record(record: dict[str, Any]) → list[str] Validate a record has required fields. Returns list of errors. ### Notes - bids_relpath is the canonical unique identifier for records - bidspath is a computed field (dataset + “/” + bids_relpath) and not strictly required - storage.raw_key always equals bids_relpath when created via create_record # Nemar Datasets * [EEG2025R1: eeg dataset, 136 subjects](eegdash.dataset.EEG2025R1.md) * [EEG2025R10: eeg dataset, 533 subjects](eegdash.dataset.EEG2025R10.md) * [EEG2025R10MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R10MINI.md) * [EEG2025R11: eeg dataset, 430 subjects](eegdash.dataset.EEG2025R11.md) * [EEG2025R11MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R11MINI.md) * [EEG2025R1MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R1MINI.md) * [EEG2025R2: eeg dataset, 150 subjects](eegdash.dataset.EEG2025R2.md) * [EEG2025R2MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R2MINI.md) * [EEG2025R3: eeg dataset, 184 subjects](eegdash.dataset.EEG2025R3.md) * [EEG2025R3MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R3MINI.md) * [EEG2025R4: eeg dataset, 324 subjects](eegdash.dataset.EEG2025R4.md) * [EEG2025R4MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R4MINI.md) * [EEG2025R5: eeg dataset, 330 subjects](eegdash.dataset.EEG2025R5.md) * [EEG2025R5MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R5MINI.md) * [EEG2025R6: eeg dataset, 135 subjects](eegdash.dataset.EEG2025R6.md) * [EEG2025R6MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R6MINI.md) * [EEG2025R7: eeg dataset, 381 subjects](eegdash.dataset.EEG2025R7.md) * [EEG2025R7MINI: eeg dataset, 
20 subjects](eegdash.dataset.EEG2025R7MINI.md) * [EEG2025R8: eeg dataset, 257 subjects](eegdash.dataset.EEG2025R8.md) * [EEG2025R8MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R8MINI.md) * [EEG2025R9: eeg dataset, 295 subjects](eegdash.dataset.EEG2025R9.md) * [EEG2025R9MINI: eeg dataset, 20 subjects](eegdash.dataset.EEG2025R9MINI.md) * [NM000103: eeg dataset, 447 subjects](eegdash.dataset.NM000103.md) * [NM000104: emg dataset, 108 subjects](eegdash.dataset.NM000104.md) * [NM000105: emg dataset, 100 subjects](eegdash.dataset.NM000105.md) * [NM000106: emg dataset, 100 subjects](eegdash.dataset.NM000106.md) * [NM000107: emg dataset, 100 subjects](eegdash.dataset.NM000107.md) * [NM000108: emg dataset, 20 subjects](eegdash.dataset.NM000108.md) * [NM000109: eeg dataset, 36 subjects](eegdash.dataset.NM000109.md) * [NM000110: eeg dataset, 24 subjects](eegdash.dataset.NM000110.md) * [NM000112: eeg dataset, 123 subjects](eegdash.dataset.NM000112.md) * [NM000113: eeg dataset, 15 subjects](eegdash.dataset.NM000113.md) * [NM000114: eeg dataset, 64 subjects](eegdash.dataset.NM000114.md) * [NM000115: eeg dataset, 4 subjects](eegdash.dataset.NM000115.md) * [NM000118: eeg dataset, 9 subjects](eegdash.dataset.NM000118.md) * [NM000119: eeg dataset, 11 subjects](eegdash.dataset.NM000119.md) * [NM000120: eeg dataset, 11 subjects](eegdash.dataset.NM000120.md) * [NM000121: eeg dataset, 11 subjects](eegdash.dataset.NM000121.md) * [NM000122: eeg dataset, 12 subjects](eegdash.dataset.NM000122.md) * [NM000123: eeg dataset, 12 subjects](eegdash.dataset.NM000123.md) * [NM000124: eeg dataset, 24 subjects](eegdash.dataset.NM000124.md) * [NM000125: eeg dataset, 23 subjects](eegdash.dataset.NM000125.md) * [NM000126: eeg dataset, 34 subjects](eegdash.dataset.NM000126.md) * [NM000127: eeg dataset, 40 subjects](eegdash.dataset.NM000127.md) * [NM000128: eeg dataset, 59 subjects](eegdash.dataset.NM000128.md) * [NM000129: eeg dataset, 70 subjects](eegdash.dataset.NM000129.md) * [NM000130: eeg 
dataset, 100 subjects](eegdash.dataset.NM000130.md) * [NM000131: eeg dataset, 8 subjects](eegdash.dataset.NM000131.md) * [NM000132: eeg dataset, 40 subjects](eegdash.dataset.NM000132.md) * [NM000133: eeg dataset, 8 subjects](eegdash.dataset.NM000133.md) * [NM000134: eeg dataset, 20 subjects](eegdash.dataset.NM000134.md) * [NM000135: eeg dataset, 1 subjects](eegdash.dataset.NM000135.md) * [NM000136: eeg dataset, 31 subjects](eegdash.dataset.NM000136.md) * [NM000137: eeg dataset, 7 subjects](eegdash.dataset.NM000137.md) * [NM000138: eeg dataset, 8 subjects](eegdash.dataset.NM000138.md) * [NM000139: eeg dataset, 9 subjects](eegdash.dataset.NM000139.md) * [NM000140: eeg dataset, 12 subjects](eegdash.dataset.NM000140.md) * [NM000141: eeg dataset, 14 subjects](eegdash.dataset.NM000141.md) * [NM000142: eeg dataset, 6 subjects](eegdash.dataset.NM000142.md) * [NM000143: eeg dataset, 5 subjects](eegdash.dataset.NM000143.md) * [NM000144: eeg dataset, 9 subjects](eegdash.dataset.NM000144.md) * [NM000145: eeg dataset, 10 subjects](eegdash.dataset.NM000145.md) * [NM000146: eeg dataset, 10 subjects](eegdash.dataset.NM000146.md) * [NM000147: eeg dataset, 22 subjects](eegdash.dataset.NM000147.md) * [NM000148: eeg dataset, 30 subjects](eegdash.dataset.NM000148.md) * [NM000149: eeg dataset, 10 subjects](eegdash.dataset.NM000149.md) * [NM000150: eeg dataset](eegdash.dataset.NM000150.md) * [NM000151: eeg dataset, 12 subjects](eegdash.dataset.NM000151.md) * [NM000152: eeg dataset, 12 subjects](eegdash.dataset.NM000152.md) * [NM000155: emg dataset, 6 subjects](eegdash.dataset.NM000155.md) * [NM000157: eeg dataset, 19 subjects](eegdash.dataset.NM000157.md) * [NM000158: eeg dataset, 50 subjects](eegdash.dataset.NM000158.md) * [NM000159: emg dataset, 16 subjects](eegdash.dataset.NM000159.md) * [NM000160: eeg dataset, 18 subjects](eegdash.dataset.NM000160.md) * [NM000161: eeg dataset, 20 subjects](eegdash.dataset.NM000161.md) * [NM000162: eeg dataset, 20 
subjects](eegdash.dataset.NM000162.md) * [NM000163: eeg dataset, 12 subjects](eegdash.dataset.NM000163.md) * [NM000165: emg dataset, 1 subjects](eegdash.dataset.NM000165.md) * [NM000166: eeg dataset, 95 subjects](eegdash.dataset.NM000166.md) * [NM000167: eeg dataset, 25 subjects](eegdash.dataset.NM000167.md) * [NM000168: eeg dataset, 6 subjects](eegdash.dataset.NM000168.md) * [NM000169: eeg dataset, 8 subjects](eegdash.dataset.NM000169.md) * [NM000170: eeg dataset, 10 subjects](eegdash.dataset.NM000170.md) * [NM000171: eeg dataset, 14 subjects](eegdash.dataset.NM000171.md) * [NM000172: eeg dataset, 14 subjects](eegdash.dataset.NM000172.md) * [NM000173: eeg dataset, 15 subjects](eegdash.dataset.NM000173.md) * [NM000175: fnirs dataset, 5 subjects](eegdash.dataset.NM000175.md) * [NM000176: eeg dataset, 5 subjects](eegdash.dataset.NM000176.md) * [NM000179: eeg dataset, 215 subjects](eegdash.dataset.NM000179.md) * [NM000180: eeg dataset, 45 subjects](eegdash.dataset.NM000180.md) * [NM000181: eeg dataset, 2417 subjects](eegdash.dataset.NM000181.md) * [NM000185: eeg dataset, 100 subjects](eegdash.dataset.NM000185.md) * [NM000186: eeg dataset, 8 subjects](eegdash.dataset.NM000186.md) * [NM000187: eeg dataset, 8 subjects](eegdash.dataset.NM000187.md) * [NM000188: eeg dataset, 10 subjects](eegdash.dataset.NM000188.md) * [NM000189: eeg dataset, 10 subjects](eegdash.dataset.NM000189.md) * [NM000190: eeg dataset, 10 subjects](eegdash.dataset.NM000190.md) * [NM000191: eeg dataset, 10 subjects](eegdash.dataset.NM000191.md) * [NM000192: eeg dataset, 11 subjects](eegdash.dataset.NM000192.md) * [NM000193: eeg dataset, 11 subjects](eegdash.dataset.NM000193.md) * [NM000194: eeg dataset, 12 subjects](eegdash.dataset.NM000194.md) * [NM000195: eeg dataset, 12 subjects](eegdash.dataset.NM000195.md) * [NM000196: eeg dataset, 12 subjects](eegdash.dataset.NM000196.md) * [NM000197: eeg dataset, 21 subjects](eegdash.dataset.NM000197.md) * [NM000198: eeg dataset, 13 
subjects](eegdash.dataset.NM000198.md) * [NM000199: eeg dataset, 13 subjects](eegdash.dataset.NM000199.md) * [NM000200: eeg dataset, 13 subjects](eegdash.dataset.NM000200.md) * [NM000201: eeg dataset, 24 subjects](eegdash.dataset.NM000201.md) * [NM000204: eeg dataset, 14 subjects](eegdash.dataset.NM000204.md) * [NM000205: eeg dataset, 14 subjects](eegdash.dataset.NM000205.md) * [NM000206: eeg dataset, 15 subjects](eegdash.dataset.NM000206.md) * [NM000207: eeg dataset, 15 subjects](eegdash.dataset.NM000207.md) * [NM000208: eeg dataset, 14 subjects](eegdash.dataset.NM000208.md) * [NM000209: eeg dataset, 25 subjects](eegdash.dataset.NM000209.md) * [NM000210: eeg dataset, 15 subjects](eegdash.dataset.NM000210.md) * [NM000211: eeg dataset, 15 subjects](eegdash.dataset.NM000211.md) * [NM000212: eeg dataset, 16 subjects](eegdash.dataset.NM000212.md) * [NM000213: eeg dataset, 30 subjects](eegdash.dataset.NM000213.md) * [NM000214: eeg dataset, 30 subjects](eegdash.dataset.NM000214.md) * [NM000215: eeg dataset, 38 subjects](eegdash.dataset.NM000215.md) * [NM000216: eeg dataset, 43 subjects](eegdash.dataset.NM000216.md) * [NM000217: eeg dataset, 44 subjects](eegdash.dataset.NM000217.md) * [NM000218: eeg dataset, 16 subjects](eegdash.dataset.NM000218.md) * [NM000219: eeg dataset, 18 subjects](eegdash.dataset.NM000219.md) * [NM000221: eeg dataset, 19 subjects](eegdash.dataset.NM000221.md) * [NM000222: eeg dataset, 10 subjects](eegdash.dataset.NM000222.md) * [NM000223: eeg dataset, 15 subjects](eegdash.dataset.NM000223.md) * [NM000225: eeg dataset, 1983 subjects](eegdash.dataset.NM000225.md) * [NM000226: eeg dataset, 4 subjects](eegdash.dataset.NM000226.md) * [NM000227: eeg dataset, 31 subjects](eegdash.dataset.NM000227.md) * [NM000228: eeg dataset, 356 subjects](eegdash.dataset.NM000228.md) * [NM000229: eeg dataset, 29 subjects](eegdash.dataset.NM000229.md) * [NM000230: eeg dataset, 30 subjects](eegdash.dataset.NM000230.md) * [NM000231: eeg dataset, 8 
subjects](eegdash.dataset.NM000231.md) * [NM000232: eeg dataset, 10 subjects](eegdash.dataset.NM000232.md) * [NM000234: eeg dataset, 21 subjects](eegdash.dataset.NM000234.md) * [NM000235: eeg dataset, 31 subjects](eegdash.dataset.NM000235.md) * [NM000236: eeg dataset, 21 subjects](eegdash.dataset.NM000236.md) * [NM000237: eeg dataset, 20 subjects](eegdash.dataset.NM000237.md) * [NM000238: eeg dataset, 87 subjects](eegdash.dataset.NM000238.md) * [NM000239: eeg dataset, 16 subjects](eegdash.dataset.NM000239.md) * [NM000240: eeg dataset, 16 subjects](eegdash.dataset.NM000240.md) * [NM000241: ieeg dataset, 2 subjects](eegdash.dataset.NM000241.md) * [NM000242: eeg dataset, 22 subjects](eegdash.dataset.NM000242.md) * [NM000243: eeg dataset, 15 subjects](eegdash.dataset.NM000243.md) * [NM000244: eeg dataset, 64 subjects](eegdash.dataset.NM000244.md) * [NM000245: eeg dataset, 52 subjects](eegdash.dataset.NM000245.md) * [NM000246: eeg dataset, 51 subjects](eegdash.dataset.NM000246.md) * [NM000247: eeg dataset, 10 subjects](eegdash.dataset.NM000247.md) * [NM000248: eeg dataset, 11 subjects](eegdash.dataset.NM000248.md) * [NM000249: eeg dataset, 13 subjects](eegdash.dataset.NM000249.md) * [NM000250: eeg dataset, 87 subjects](eegdash.dataset.NM000250.md) * [NM000251: ieeg dataset, 1 subjects](eegdash.dataset.NM000251.md) * [NM000253: ieeg dataset, 10 subjects](eegdash.dataset.NM000253.md) * [NM000254: eeg dataset, 22 subjects](eegdash.dataset.NM000254.md) * [NM000255: eeg dataset, 30 subjects](eegdash.dataset.NM000255.md) * [NM000256: eeg dataset, 29 subjects](eegdash.dataset.NM000256.md) * [NM000259: eeg dataset, 10 subjects](eegdash.dataset.NM000259.md) * [NM000260: eeg dataset, 23 subjects](eegdash.dataset.NM000260.md) * [NM000264: eeg dataset, 24 subjects](eegdash.dataset.NM000264.md) * [NM000265: eeg dataset, 31 subjects](eegdash.dataset.NM000265.md) * [NM000266: eeg dataset, 13 subjects](eegdash.dataset.NM000266.md) * [NM000267: eeg dataset, 29 
subjects](eegdash.dataset.NM000267.md) * [NM000268: eeg dataset, 29 subjects](eegdash.dataset.NM000268.md) * [NM000270: eeg dataset, 27 subjects](eegdash.dataset.NM000270.md) * [NM000271: eeg dataset, 28 subjects](eegdash.dataset.NM000271.md) * [NM000272: eeg dataset, 22 subjects](eegdash.dataset.NM000272.md) * [NM000277: eeg dataset, 20 subjects](eegdash.dataset.NM000277.md) * [NM000301: eeg dataset, 17 subjects](eegdash.dataset.NM000301.md) * [NM000303: eeg dataset, 18 subjects](eegdash.dataset.NM000303.md) * [NM000310: eeg dataset, 11 subjects](eegdash.dataset.NM000310.md) * [NM000311: eeg dataset, 25 subjects](eegdash.dataset.NM000311.md) * [NM000313: eeg dataset, 24 subjects](eegdash.dataset.NM000313.md) * [NM000321: eeg dataset, 36 subjects](eegdash.dataset.NM000321.md) * [NM000323: eeg dataset, 54 subjects](eegdash.dataset.NM000323.md) * [NM000326: eeg dataset, 19 subjects](eegdash.dataset.NM000326.md) * [NM000329: eeg dataset, 16 subjects](eegdash.dataset.NM000329.md) * [NM000336: eeg dataset, 20 subjects](eegdash.dataset.NM000336.md) * [NM000338: eeg dataset, 54 subjects](eegdash.dataset.NM000338.md) * [NM000339: eeg dataset, 62 subjects](eegdash.dataset.NM000339.md) * [NM000340: eeg dataset, 20 subjects](eegdash.dataset.NM000340.md) * [NM000341: eeg dataset, 12 subjects](eegdash.dataset.NM000341.md) * [NM000342: eeg dataset, 12 subjects](eegdash.dataset.NM000342.md) * [NM000343: eeg dataset, 15 subjects](eegdash.dataset.NM000343.md) * [NM000344: eeg dataset, 12 subjects](eegdash.dataset.NM000344.md) * [NM000345: eeg dataset, 12 subjects](eegdash.dataset.NM000345.md) * [NM000346: eeg dataset, 12 subjects](eegdash.dataset.NM000346.md) * [NM000347: eeg dataset, 37 subjects](eegdash.dataset.NM000347.md) * [NM000348: eeg dataset, 51 subjects](eegdash.dataset.NM000348.md) * [NM000351: eeg dataset, 19 subjects](eegdash.dataset.NM000351.md) # Openneuro Datasets * [DS000117: meg dataset, 17 subjects](eegdash.dataset.DS000117.md) * [DS000246: meg dataset, 2 
subjects](eegdash.dataset.DS000246.md) * [DS000247: meg dataset, 6 subjects](eegdash.dataset.DS000247.md) * [DS000248: meg dataset, 2 subjects](eegdash.dataset.DS000248.md) * [DS001785: eeg dataset, 18 subjects](eegdash.dataset.DS001785.md) * [DS001787: eeg dataset, 24 subjects](eegdash.dataset.DS001787.md) * [DS001810: eeg dataset, 47 subjects](eegdash.dataset.DS001810.md) * [DS001849: eeg dataset, 20 subjects](eegdash.dataset.DS001849.md) * [DS001971: eeg dataset, 20 subjects](eegdash.dataset.DS001971.md) * [DS002001: meg dataset, 11 subjects](eegdash.dataset.DS002001.md) * [DS002034: eeg dataset, 14 subjects](eegdash.dataset.DS002034.md) * [DS002094: eeg dataset, 20 subjects](eegdash.dataset.DS002094.md) * [DS002158: eeg dataset, 20 subjects](eegdash.dataset.DS002158.md) * [DS002181: eeg dataset, 226 subjects](eegdash.dataset.DS002181.md) * [DS002218: eeg dataset, 18 subjects](eegdash.dataset.DS002218.md) * [DS002312: meg dataset, 19 subjects](eegdash.dataset.DS002312.md) * [DS002336: eeg dataset, 10 subjects](eegdash.dataset.DS002336.md) * [DS002338: eeg dataset, 17 subjects](eegdash.dataset.DS002338.md) * [DS002550: meg dataset, 22 subjects](eegdash.dataset.DS002550.md) * [DS002578: eeg dataset, 2 subjects](eegdash.dataset.DS002578.md) * [DS002680: eeg dataset, 14 subjects](eegdash.dataset.DS002680.md) * [DS002691: eeg dataset, 20 subjects](eegdash.dataset.DS002691.md) * [DS002712: meg dataset, 25 subjects](eegdash.dataset.DS002712.md) * [DS002718: eeg dataset, 18 subjects](eegdash.dataset.DS002718.md) * [DS002720: eeg dataset, 18 subjects](eegdash.dataset.DS002720.md) * [DS002721: eeg dataset, 31 subjects](eegdash.dataset.DS002721.md) * [DS002722: eeg dataset, 19 subjects](eegdash.dataset.DS002722.md) * [DS002723: eeg dataset, 8 subjects](eegdash.dataset.DS002723.md) * [DS002724: eeg dataset, 10 subjects](eegdash.dataset.DS002724.md) * [DS002725: eeg dataset, 21 subjects](eegdash.dataset.DS002725.md) * [DS002761: meg dataset, 25 
subjects](eegdash.dataset.DS002761.md) * [DS002778: eeg dataset, 31 subjects](eegdash.dataset.DS002778.md) * [DS002791: eeg dataset, 23 subjects](eegdash.dataset.DS002791.md) * [DS002799: ieeg dataset, 27 subjects](eegdash.dataset.DS002799.md) * [DS002814: eeg dataset, 21 subjects](eegdash.dataset.DS002814.md) * [DS002833: eeg dataset, 20 subjects](eegdash.dataset.DS002833.md) * [DS002885: meg dataset, 2 subjects](eegdash.dataset.DS002885.md) * [DS002893: eeg dataset, 49 subjects](eegdash.dataset.DS002893.md) * [DS002908: meg dataset, 13 subjects](eegdash.dataset.DS002908.md) * [DS003004: eeg dataset, 34 subjects](eegdash.dataset.DS003004.md) * [DS003029: ieeg dataset, 35 subjects](eegdash.dataset.DS003029.md) * [DS003039: eeg dataset, 19 subjects](eegdash.dataset.DS003039.md) * [DS003061: eeg dataset, 13 subjects](eegdash.dataset.DS003061.md) * [DS003078: ieeg dataset, 6 subjects](eegdash.dataset.DS003078.md) * [DS003082: meg dataset, 2 subjects](eegdash.dataset.DS003082.md) * [DS003104: meg dataset, 1 subjects](eegdash.dataset.DS003104.md) * [DS003190: eeg dataset, 19 subjects](eegdash.dataset.DS003190.md) * [DS003194: eeg dataset, 15 subjects](eegdash.dataset.DS003194.md) * [DS003195: eeg dataset, 10 subjects](eegdash.dataset.DS003195.md) * [DS003343: eeg dataset, 20 subjects](eegdash.dataset.DS003343.md) * [DS003352: meg dataset, 18 subjects](eegdash.dataset.DS003352.md) * [DS003374: ieeg dataset, 9 subjects](eegdash.dataset.DS003374.md) * [DS003380: eeg dataset, 1 subjects](eegdash.dataset.DS003380.md) * [DS003392: meg dataset, 12 subjects](eegdash.dataset.DS003392.md) * [DS003420: eeg dataset, 23 subjects](eegdash.dataset.DS003420.md) * [DS003421: eeg dataset, 20 subjects](eegdash.dataset.DS003421.md) * [DS003458: eeg dataset, 23 subjects](eegdash.dataset.DS003458.md) * [DS003474: eeg dataset, 122 subjects](eegdash.dataset.DS003474.md) * [DS003478: eeg dataset, 122 subjects](eegdash.dataset.DS003478.md) * [DS003483: meg dataset, 21 
subjects](eegdash.dataset.DS003483.md) * [DS003490: eeg dataset, 50 subjects](eegdash.dataset.DS003490.md) * [DS003498: ieeg dataset, 20 subjects](eegdash.dataset.DS003498.md) * [DS003505: eeg dataset, 19 subjects](eegdash.dataset.DS003505.md) * [DS003506: eeg dataset, 56 subjects](eegdash.dataset.DS003506.md) * [DS003509: eeg dataset, 56 subjects](eegdash.dataset.DS003509.md) * [DS003516: eeg dataset, 25 subjects](eegdash.dataset.DS003516.md) * [DS003517: eeg dataset, 17 subjects](eegdash.dataset.DS003517.md) * [DS003518: eeg dataset, 110 subjects](eegdash.dataset.DS003518.md) * [DS003519: eeg dataset, 27 subjects](eegdash.dataset.DS003519.md) * [DS003522: eeg dataset, 96 subjects](eegdash.dataset.DS003522.md) * [DS003523: eeg dataset, 91 subjects](eegdash.dataset.DS003523.md) * [DS003555: eeg dataset, 30 subjects](eegdash.dataset.DS003555.md) * [DS003568: meg dataset, 51 subjects](eegdash.dataset.DS003568.md) * [DS003570: eeg dataset, 40 subjects](eegdash.dataset.DS003570.md) * [DS003574: eeg dataset, 18 subjects](eegdash.dataset.DS003574.md) * [DS003602: eeg dataset, 118 subjects](eegdash.dataset.DS003602.md) * [DS003620: eeg dataset, 44 subjects](eegdash.dataset.DS003620.md) * [DS003626: eeg dataset, 10 subjects](eegdash.dataset.DS003626.md) * [DS003633: meg dataset, 12 subjects](eegdash.dataset.DS003633.md) * [DS003638: eeg dataset, 57 subjects](eegdash.dataset.DS003638.md) * [DS003645: eeg, meg dataset, 19 subjects](eegdash.dataset.DS003645.md) * [DS003655: eeg dataset, 156 subjects](eegdash.dataset.DS003655.md) * [DS003670: eeg dataset, 25 subjects](eegdash.dataset.DS003670.md) * [DS003682: meg dataset, 28 subjects](eegdash.dataset.DS003682.md) * [DS003688: ieeg dataset, 51 subjects](eegdash.dataset.DS003688.md) * [DS003690: eeg dataset, 75 subjects](eegdash.dataset.DS003690.md) * [DS003694: meg dataset, 28 subjects](eegdash.dataset.DS003694.md) * [DS003702: eeg dataset, 47 subjects](eegdash.dataset.DS003702.md) * [DS003703: meg dataset, 34 
subjects](eegdash.dataset.DS003703.md) * [DS003708: ieeg dataset, 1 subjects](eegdash.dataset.DS003708.md) * [DS003710: eeg dataset, 13 subjects](eegdash.dataset.DS003710.md) * [DS003739: eeg dataset, 30 subjects](eegdash.dataset.DS003739.md) * [DS003751: eeg dataset, 38 subjects](eegdash.dataset.DS003751.md) * [DS003753: eeg dataset, 25 subjects](eegdash.dataset.DS003753.md) * [DS003766: eeg dataset, 31 subjects](eegdash.dataset.DS003766.md) * [DS003768: eeg dataset, 33 subjects](eegdash.dataset.DS003768.md) * [DS003774: eeg dataset, 20 subjects](eegdash.dataset.DS003774.md) * [DS003775: eeg dataset, 111 subjects](eegdash.dataset.DS003775.md) * [DS003800: eeg dataset, 13 subjects](eegdash.dataset.DS003800.md) * [DS003801: eeg dataset, 20 subjects](eegdash.dataset.DS003801.md) * [DS003805: eeg dataset, 1 subjects](eegdash.dataset.DS003805.md) * [DS003810: eeg dataset, 10 subjects](eegdash.dataset.DS003810.md) * [DS003816: eeg dataset, 48 subjects](eegdash.dataset.DS003816.md) * [DS003822: eeg dataset, 25 subjects](eegdash.dataset.DS003822.md) * [DS003825: eeg dataset, 50 subjects](eegdash.dataset.DS003825.md) * [DS003838: eeg dataset, 65 subjects](eegdash.dataset.DS003838.md) * [DS003844: ieeg dataset, 6 subjects](eegdash.dataset.DS003844.md) * [DS003846: eeg dataset, 19 subjects](eegdash.dataset.DS003846.md) * [DS003848: ieeg dataset, 6 subjects](eegdash.dataset.DS003848.md) * [DS003876: ieeg dataset, 39 subjects](eegdash.dataset.DS003876.md) * [DS003885: eeg dataset, 24 subjects](eegdash.dataset.DS003885.md) * [DS003887: eeg dataset, 24 subjects](eegdash.dataset.DS003887.md) * [DS003922: meg dataset, 14 subjects](eegdash.dataset.DS003922.md) * [DS003944: eeg dataset, 82 subjects](eegdash.dataset.DS003944.md) * [DS003947: eeg dataset, 61 subjects](eegdash.dataset.DS003947.md) * [DS003969: eeg dataset, 98 subjects](eegdash.dataset.DS003969.md) * [DS003987: eeg dataset, 23 subjects](eegdash.dataset.DS003987.md) * [DS004000: eeg dataset, 43 
subjects](eegdash.dataset.DS004000.md) * [DS004010: eeg dataset, 24 subjects](eegdash.dataset.DS004010.md) * [DS004011: meg dataset, 22 subjects](eegdash.dataset.DS004011.md) * [DS004012: meg dataset, 30 subjects](eegdash.dataset.DS004012.md) * [DS004015: eeg dataset, 36 subjects](eegdash.dataset.DS004015.md) * [DS004017: eeg dataset, 21 subjects](eegdash.dataset.DS004017.md) * [DS004018: eeg dataset, 16 subjects](eegdash.dataset.DS004018.md) * [DS004019: eeg dataset, 62 subjects](eegdash.dataset.DS004019.md) * [DS004022: eeg dataset, 7 subjects](eegdash.dataset.DS004022.md) * [DS004024: eeg dataset, 13 subjects](eegdash.dataset.DS004024.md) * [DS004033: eeg dataset, 18 subjects](eegdash.dataset.DS004033.md) * [DS004040: eeg dataset, 13 subjects](eegdash.dataset.DS004040.md) * [DS004043: eeg dataset, 20 subjects](eegdash.dataset.DS004043.md) * [DS004067: eeg dataset, 80 subjects](eegdash.dataset.DS004067.md) * [DS004075: eeg dataset, 29 subjects](eegdash.dataset.DS004075.md) * [DS004078: meg dataset, 12 subjects](eegdash.dataset.DS004078.md) * [DS004080: ieeg dataset, 74 subjects](eegdash.dataset.DS004080.md) * [DS004100: ieeg dataset, 57 subjects](eegdash.dataset.DS004100.md) * [DS004105: eeg dataset, 17 subjects](eegdash.dataset.DS004105.md) * [DS004106: eeg dataset, 27 subjects](eegdash.dataset.DS004106.md) * [DS004107: meg dataset, 9 subjects](eegdash.dataset.DS004107.md) * [DS004117: eeg dataset, 23 subjects](eegdash.dataset.DS004117.md) * [DS004118: eeg dataset, 156 subjects](eegdash.dataset.DS004118.md) * [DS004119: eeg dataset, 21 subjects](eegdash.dataset.DS004119.md) * [DS004120: eeg dataset, 109 subjects](eegdash.dataset.DS004120.md) * [DS004121: eeg dataset, 21 subjects](eegdash.dataset.DS004121.md) * [DS004122: eeg dataset, 32 subjects](eegdash.dataset.DS004122.md) * [DS004123: eeg dataset, 29 subjects](eegdash.dataset.DS004123.md) * [DS004127: ieeg dataset, 8 subjects](eegdash.dataset.DS004127.md) * [DS004147: eeg dataset, 12 
subjects](eegdash.dataset.DS004147.md) * [DS004148: eeg dataset, 60 subjects](eegdash.dataset.DS004148.md) * [DS004151: eeg dataset, 57 subjects](eegdash.dataset.DS004151.md) * [DS004152: eeg dataset, 21 subjects](eegdash.dataset.DS004152.md) * [DS004166: eeg dataset, 71 subjects](eegdash.dataset.DS004166.md) * [DS004194: ieeg dataset, 14 subjects](eegdash.dataset.DS004194.md) * [DS004196: eeg dataset, 4 subjects](eegdash.dataset.DS004196.md) * [DS004200: eeg dataset, 20 subjects](eegdash.dataset.DS004200.md) * [DS004212: meg dataset, 5 subjects](eegdash.dataset.DS004212.md) * [DS004229: meg dataset, 2 subjects](eegdash.dataset.DS004229.md) * [DS004252: eeg dataset, 1 subjects](eegdash.dataset.DS004252.md) * [DS004256: eeg dataset, 53 subjects](eegdash.dataset.DS004256.md) * [DS004262: eeg dataset, 21 subjects](eegdash.dataset.DS004262.md) * [DS004264: eeg dataset, 21 subjects](eegdash.dataset.DS004264.md) * [DS004276: meg dataset, 19 subjects](eegdash.dataset.DS004276.md) * [DS004278: meg dataset, 30 subjects](eegdash.dataset.DS004278.md) * [DS004279: eeg dataset, 56 subjects](eegdash.dataset.DS004279.md) * [DS004284: eeg dataset, 18 subjects](eegdash.dataset.DS004284.md) * [DS004295: eeg dataset, 26 subjects](eegdash.dataset.DS004295.md) * [DS004306: eeg dataset, 12 subjects](eegdash.dataset.DS004306.md) * [DS004315: eeg dataset, 50 subjects](eegdash.dataset.DS004315.md) * [DS004317: eeg dataset, 50 subjects](eegdash.dataset.DS004317.md) * [DS004324: eeg dataset, 26 subjects](eegdash.dataset.DS004324.md) * [DS004330: meg dataset, 30 subjects](eegdash.dataset.DS004330.md) * [DS004346: meg dataset, 1 subjects](eegdash.dataset.DS004346.md) * [DS004347: eeg dataset, 24 subjects](eegdash.dataset.DS004347.md) * [DS004348: eeg dataset, 9 subjects](eegdash.dataset.DS004348.md) * [DS004350: eeg dataset, 24 subjects](eegdash.dataset.DS004350.md) * [DS004356: eeg dataset, 22 subjects](eegdash.dataset.DS004356.md) * [DS004357: eeg dataset, 16 
subjects](eegdash.dataset.DS004357.md) * [DS004362: eeg dataset, 109 subjects](eegdash.dataset.DS004362.md) * [DS004367: eeg dataset, 40 subjects](eegdash.dataset.DS004367.md) * [DS004368: eeg dataset, 39 subjects](eegdash.dataset.DS004368.md) * [DS004369: eeg dataset, 41 subjects](eegdash.dataset.DS004369.md) * [DS004370: ieeg dataset, 7 subjects](eegdash.dataset.DS004370.md) * [DS004381: eeg dataset, 18 subjects](eegdash.dataset.DS004381.md) * [DS004388: eeg dataset, 40 subjects](eegdash.dataset.DS004388.md) * [DS004389: eeg dataset, 26 subjects](eegdash.dataset.DS004389.md) * [DS004395: eeg dataset, 364 subjects](eegdash.dataset.DS004395.md) * [DS004398: meg dataset, 1 subjects](eegdash.dataset.DS004398.md) * [DS004408: eeg dataset, 19 subjects](eegdash.dataset.DS004408.md) * [DS004444: eeg dataset, 30 subjects](eegdash.dataset.DS004444.md) * [DS004446: eeg dataset, 30 subjects](eegdash.dataset.DS004446.md) * [DS004447: eeg dataset, 22 subjects](eegdash.dataset.DS004447.md) * [DS004448: eeg dataset, 56 subjects](eegdash.dataset.DS004448.md) * [DS004457: ieeg dataset, 5 subjects](eegdash.dataset.DS004457.md) * [DS004460: eeg dataset, 20 subjects](eegdash.dataset.DS004460.md) * [DS004473: ieeg dataset, 8 subjects](eegdash.dataset.DS004473.md) * [DS004475: eeg dataset, 30 subjects](eegdash.dataset.DS004475.md) * [DS004477: eeg dataset, 9 subjects](eegdash.dataset.DS004477.md) * [DS004483: meg dataset, 19 subjects](eegdash.dataset.DS004483.md) * [DS004502: eeg dataset, 48 subjects](eegdash.dataset.DS004502.md) * [DS004504: eeg dataset, 88 subjects](eegdash.dataset.DS004504.md) * [DS004505: eeg dataset, 25 subjects](eegdash.dataset.DS004505.md) * [DS004511: eeg dataset, 45 subjects](eegdash.dataset.DS004511.md) * [DS004514: eeg, fnirs dataset, 12 subjects](eegdash.dataset.DS004514.md) * [DS004515: eeg dataset, 54 subjects](eegdash.dataset.DS004515.md) * [DS004517: eeg dataset, 7 subjects](eegdash.dataset.DS004517.md) * [DS004519: eeg dataset, 40 
subjects](eegdash.dataset.DS004519.md) * [DS004520: eeg dataset, 33 subjects](eegdash.dataset.DS004520.md) * [DS004521: eeg dataset, 34 subjects](eegdash.dataset.DS004521.md) * [DS004532: eeg dataset, 110 subjects](eegdash.dataset.DS004532.md) * [DS004541: eeg, fnirs dataset, 8 subjects](eegdash.dataset.DS004541.md) * [DS004551: ieeg dataset, 114 subjects](eegdash.dataset.DS004551.md) * [DS004554: eeg dataset, 16 subjects](eegdash.dataset.DS004554.md) * [DS004561: eeg dataset, 23 subjects](eegdash.dataset.DS004561.md) * [DS004563: eeg dataset, 40 subjects](eegdash.dataset.DS004563.md) * [DS004572: eeg dataset, 52 subjects](eegdash.dataset.DS004572.md) * [DS004574: eeg dataset, 146 subjects](eegdash.dataset.DS004574.md) * [DS004577: eeg dataset, 103 subjects](eegdash.dataset.DS004577.md) * [DS004579: eeg dataset, 139 subjects](eegdash.dataset.DS004579.md) * [DS004580: eeg dataset, 147 subjects](eegdash.dataset.DS004580.md) * [DS004582: eeg dataset, 73 subjects](eegdash.dataset.DS004582.md) * [DS004584: eeg dataset, 149 subjects](eegdash.dataset.DS004584.md) * [DS004587: eeg dataset, 103 subjects](eegdash.dataset.DS004587.md) * [DS004588: eeg dataset, 42 subjects](eegdash.dataset.DS004588.md) * [DS004595: eeg dataset, 53 subjects](eegdash.dataset.DS004595.md) * [DS004598: eeg dataset, 9 subjects](eegdash.dataset.DS004598.md) * [DS004602: eeg dataset, 182 subjects](eegdash.dataset.DS004602.md) * [DS004603: eeg dataset, 37 subjects](eegdash.dataset.DS004603.md) * [DS004621: eeg dataset, 42 subjects](eegdash.dataset.DS004621.md) * [DS004624: ieeg dataset, 3 subjects](eegdash.dataset.DS004624.md) * [DS004625: eeg dataset, 32 subjects](eegdash.dataset.DS004625.md) * [DS004626: eeg dataset, 52 subjects](eegdash.dataset.DS004626.md) * [DS004635: eeg dataset, 48 subjects](eegdash.dataset.DS004635.md) * [DS004642: ieeg dataset, 10 subjects](eegdash.dataset.DS004642.md) * [DS004657: eeg dataset, 24 subjects](eegdash.dataset.DS004657.md) * [DS004660: eeg dataset, 21 
subjects](eegdash.dataset.DS004660.md) * [DS004661: eeg dataset, 17 subjects](eegdash.dataset.DS004661.md) * [DS004696: ieeg dataset, 8 subjects](eegdash.dataset.DS004696.md) * [DS004703: ieeg dataset, 10 subjects](eegdash.dataset.DS004703.md) * [DS004706: eeg dataset, 34 subjects](eegdash.dataset.DS004706.md) * [DS004718: eeg dataset, 51 subjects](eegdash.dataset.DS004718.md) * [DS004738: meg dataset, 4 subjects](eegdash.dataset.DS004738.md) * [DS004745: eeg dataset, 6 subjects](eegdash.dataset.DS004745.md) * [DS004752: eeg, ieeg dataset, 15 subjects](eegdash.dataset.DS004752.md) * [DS004770: ieeg dataset, 10 subjects](eegdash.dataset.DS004770.md) * [DS004771: eeg dataset, 61 subjects](eegdash.dataset.DS004771.md) * [DS004774: ieeg dataset, 14 subjects](eegdash.dataset.DS004774.md) * [DS004784: eeg dataset, 1 subjects](eegdash.dataset.DS004784.md) * [DS004785: eeg dataset, 17 subjects](eegdash.dataset.DS004785.md) * [DS004789: ieeg dataset, 273 subjects](eegdash.dataset.DS004789.md) * [DS004796: eeg dataset, 79 subjects](eegdash.dataset.DS004796.md) * [DS004802: eeg dataset, 39 subjects](eegdash.dataset.DS004802.md) * [DS004809: ieeg dataset, 252 subjects](eegdash.dataset.DS004809.md) * [DS004816: eeg dataset, 20 subjects](eegdash.dataset.DS004816.md) * [DS004817: eeg dataset, 20 subjects](eegdash.dataset.DS004817.md) * [DS004819: ieeg dataset, 1 subjects](eegdash.dataset.DS004819.md) * [DS004830: fnirs dataset, 12 subjects](eegdash.dataset.DS004830.md) * [DS004837: meg dataset, 60 subjects](eegdash.dataset.DS004837.md) * [DS004840: eeg dataset, 9 subjects](eegdash.dataset.DS004840.md) * [DS004841: eeg dataset, 20 subjects](eegdash.dataset.DS004841.md) * [DS004842: eeg dataset, 14 subjects](eegdash.dataset.DS004842.md) * [DS004843: eeg dataset, 14 subjects](eegdash.dataset.DS004843.md) * [DS004844: eeg dataset, 17 subjects](eegdash.dataset.DS004844.md) * [DS004849: eeg dataset, 1 subjects](eegdash.dataset.DS004849.md) * [DS004850: eeg dataset, 1 
subjects](eegdash.dataset.DS004850.md) * [DS004851: eeg dataset, 66 subjects](eegdash.dataset.DS004851.md) * [DS004852: eeg dataset, 1 subjects](eegdash.dataset.DS004852.md) * [DS004853: eeg dataset, 1 subjects](eegdash.dataset.DS004853.md) * [DS004854: eeg dataset, 1 subjects](eegdash.dataset.DS004854.md) * [DS004855: eeg dataset, 1 subjects](eegdash.dataset.DS004855.md) * [DS004859: ieeg dataset, 7 subjects](eegdash.dataset.DS004859.md) * [DS004860: eeg dataset, 31 subjects](eegdash.dataset.DS004860.md) * [DS004865: ieeg dataset, 42 subjects](eegdash.dataset.DS004865.md) * [DS004883: eeg dataset, 172 subjects](eegdash.dataset.DS004883.md) * [DS004902: eeg dataset, 71 subjects](eegdash.dataset.DS004902.md) * [DS004917: eeg dataset, 24 subjects](eegdash.dataset.DS004917.md) * [DS004929: fnirs dataset, 12 subjects](eegdash.dataset.DS004929.md) * [DS004940: eeg dataset, 22 subjects](eegdash.dataset.DS004940.md) * [DS004942: eeg dataset, 62 subjects](eegdash.dataset.DS004942.md) * [DS004944: ieeg dataset, 22 subjects](eegdash.dataset.DS004944.md) * [DS004951: eeg dataset, 11 subjects](eegdash.dataset.DS004951.md) * [DS004952: eeg dataset, 10 subjects](eegdash.dataset.DS004952.md) * [DS004973: fnirs dataset, 20 subjects](eegdash.dataset.DS004973.md) * [DS004977: ieeg dataset, 4 subjects](eegdash.dataset.DS004977.md) * [DS004980: eeg dataset, 17 subjects](eegdash.dataset.DS004980.md) * [DS004993: ieeg dataset, 3 subjects](eegdash.dataset.DS004993.md) * [DS004995: eeg dataset, 20 subjects](eegdash.dataset.DS004995.md) * [DS004998: meg dataset, 20 subjects](eegdash.dataset.DS004998.md) * [DS005007: ieeg dataset, 40 subjects](eegdash.dataset.DS005007.md) * [DS005021: eeg dataset, 36 subjects](eegdash.dataset.DS005021.md) * [DS005028: eeg dataset, 11 subjects](eegdash.dataset.DS005028.md) * [DS005034: eeg dataset, 25 subjects](eegdash.dataset.DS005034.md) * [DS005048: eeg dataset, 35 subjects](eegdash.dataset.DS005048.md) * [DS005059: ieeg dataset, 69 
subjects](eegdash.dataset.DS005059.md) * [DS005065: meg dataset, 21 subjects](eegdash.dataset.DS005065.md) * [DS005079: eeg dataset, 1 subjects](eegdash.dataset.DS005079.md) * [DS005083: ieeg dataset, 61 subjects](eegdash.dataset.DS005083.md) * [DS005087: eeg dataset, 20 subjects](eegdash.dataset.DS005087.md) * [DS005089: eeg dataset, 36 subjects](eegdash.dataset.DS005089.md) * [DS005095: eeg dataset, 48 subjects](eegdash.dataset.DS005095.md) * [DS005106: eeg dataset, 42 subjects](eegdash.dataset.DS005106.md) * [DS005107: meg dataset, 21 subjects](eegdash.dataset.DS005107.md) * [DS005114: eeg dataset, 91 subjects](eegdash.dataset.DS005114.md) * [DS005121: eeg dataset, 34 subjects](eegdash.dataset.DS005121.md) * [DS005131: eeg dataset, 58 subjects](eegdash.dataset.DS005131.md) * [DS005169: ieeg dataset, 20 subjects](eegdash.dataset.DS005169.md) * [DS005170: eeg dataset, 5 subjects](eegdash.dataset.DS005170.md) * [DS005178: eeg dataset, 10 subjects](eegdash.dataset.DS005178.md) * [DS005185: eeg dataset, 20 subjects](eegdash.dataset.DS005185.md) * [DS005189: eeg dataset, 30 subjects](eegdash.dataset.DS005189.md) * [DS005207: eeg dataset, 20 subjects](eegdash.dataset.DS005207.md) * [DS005241: meg dataset, 24 subjects](eegdash.dataset.DS005241.md) * [DS005261: meg dataset, 17 subjects](eegdash.dataset.DS005261.md) * [DS005262: eeg dataset, 12 subjects](eegdash.dataset.DS005262.md) * [DS005273: eeg dataset, 33 subjects](eegdash.dataset.DS005273.md) * [DS005274: eeg dataset, 22 subjects](eegdash.dataset.DS005274.md) * [DS005279: meg dataset, 30 subjects](eegdash.dataset.DS005279.md) * [DS005280: eeg dataset, 223 subjects](eegdash.dataset.DS005280.md) * [DS005284: eeg dataset, 26 subjects](eegdash.dataset.DS005284.md) * [DS005285: eeg dataset, 29 subjects](eegdash.dataset.DS005285.md) * [DS005286: eeg dataset, 30 subjects](eegdash.dataset.DS005286.md) * [DS005289: eeg dataset, 39 subjects](eegdash.dataset.DS005289.md) * [DS005291: eeg dataset, 65 
subjects](eegdash.dataset.DS005291.md) * [DS005292: eeg dataset, 142 subjects](eegdash.dataset.DS005292.md) * [DS005293: eeg dataset, 95 subjects](eegdash.dataset.DS005293.md) * [DS005296: eeg dataset, 62 subjects](eegdash.dataset.DS005296.md) * [DS005305: eeg dataset, 165 subjects](eegdash.dataset.DS005305.md) * [DS005307: eeg dataset, 7 subjects](eegdash.dataset.DS005307.md) * [DS005340: eeg dataset, 15 subjects](eegdash.dataset.DS005340.md) * [DS005342: eeg dataset, 32 subjects](eegdash.dataset.DS005342.md) * [DS005343: eeg dataset, 43 subjects](eegdash.dataset.DS005343.md) * [DS005345: eeg dataset, 26 subjects](eegdash.dataset.DS005345.md) * [DS005346: meg dataset, 30 subjects](eegdash.dataset.DS005346.md) * [DS005356: meg dataset, 85 subjects](eegdash.dataset.DS005356.md) * [DS005363: eeg dataset, 43 subjects](eegdash.dataset.DS005363.md) * [DS005383: eeg dataset, 30 subjects](eegdash.dataset.DS005383.md) * [DS005385: eeg dataset, 608 subjects](eegdash.dataset.DS005385.md) * [DS005397: eeg dataset, 26 subjects](eegdash.dataset.DS005397.md) * [DS005398: ieeg dataset, 185 subjects](eegdash.dataset.DS005398.md) * [DS005403: eeg dataset, 32 subjects](eegdash.dataset.DS005403.md) * [DS005406: eeg dataset, 29 subjects](eegdash.dataset.DS005406.md) * [DS005407: eeg dataset, 25 subjects](eegdash.dataset.DS005407.md) * [DS005408: eeg dataset, 25 subjects](eegdash.dataset.DS005408.md) * [DS005410: eeg dataset, 81 subjects](eegdash.dataset.DS005410.md) * [DS005411: ieeg dataset, 47 subjects](eegdash.dataset.DS005411.md) * [DS005415: ieeg dataset, 13 subjects](eegdash.dataset.DS005415.md) * [DS005416: eeg dataset, 23 subjects](eegdash.dataset.DS005416.md) * [DS005420: eeg dataset, 37 subjects](eegdash.dataset.DS005420.md) * [DS005429: eeg dataset, 15 subjects](eegdash.dataset.DS005429.md) * [DS005448: ieeg dataset, 13 subjects](eegdash.dataset.DS005448.md) * [DS005473: eeg dataset, 29 subjects](eegdash.dataset.DS005473.md) * [DS005486: eeg dataset, 159 
subjects](eegdash.dataset.DS005486.md) * [DS005489: ieeg dataset, 37 subjects](eegdash.dataset.DS005489.md) * [DS005491: ieeg dataset, 19 subjects](eegdash.dataset.DS005491.md) * [DS005494: ieeg dataset, 20 subjects](eegdash.dataset.DS005494.md) * [DS005505: eeg dataset, 136 subjects](eegdash.dataset.DS005505.md) * [DS005506: eeg dataset, 150 subjects](eegdash.dataset.DS005506.md) * [DS005507: eeg dataset, 184 subjects](eegdash.dataset.DS005507.md) * [DS005508: eeg dataset, 324 subjects](eegdash.dataset.DS005508.md) * [DS005509: eeg dataset, 330 subjects](eegdash.dataset.DS005509.md) * [DS005510: eeg dataset, 135 subjects](eegdash.dataset.DS005510.md) * [DS005512: eeg dataset, 257 subjects](eegdash.dataset.DS005512.md) * [DS005514: eeg dataset, 295 subjects](eegdash.dataset.DS005514.md) * [DS005515: eeg dataset, 533 subjects](eegdash.dataset.DS005515.md) * [DS005516: eeg dataset, 430 subjects](eegdash.dataset.DS005516.md) * [DS005520: eeg dataset, 23 subjects](eegdash.dataset.DS005520.md) * [DS005522: ieeg dataset, 55 subjects](eegdash.dataset.DS005522.md) * [DS005523: ieeg dataset, 21 subjects](eegdash.dataset.DS005523.md) * [DS005530: eeg dataset, 17 subjects](eegdash.dataset.DS005530.md) * [DS005540: eeg dataset, 59 subjects](eegdash.dataset.DS005540.md) * [DS005545: ieeg dataset, 106 subjects](eegdash.dataset.DS005545.md) * [DS005555: eeg dataset, 128 subjects](eegdash.dataset.DS005555.md) * [DS005557: ieeg dataset, 16 subjects](eegdash.dataset.DS005557.md) * [DS005558: ieeg dataset, 7 subjects](eegdash.dataset.DS005558.md) * [DS005565: eeg dataset, 24 subjects](eegdash.dataset.DS005565.md) * [DS005571: eeg dataset, 24 subjects](eegdash.dataset.DS005571.md) * [DS005574: ieeg dataset, 9 subjects](eegdash.dataset.DS005574.md) * [DS005586: eeg dataset, 23 subjects](eegdash.dataset.DS005586.md) * [DS005594: eeg dataset, 16 subjects](eegdash.dataset.DS005594.md) * [DS005620: eeg dataset, 21 subjects](eegdash.dataset.DS005620.md) * [DS005624: ieeg dataset, 24 
subjects](eegdash.dataset.DS005624.md) * [DS005628: eeg dataset, 102 subjects](eegdash.dataset.DS005628.md) * [DS005642: eeg dataset, 21 subjects](eegdash.dataset.DS005642.md) * [DS005648: eeg dataset, 21 subjects](eegdash.dataset.DS005648.md) * [DS005662: eeg dataset, 80 subjects](eegdash.dataset.DS005662.md) * [DS005670: ieeg dataset, 2 subjects](eegdash.dataset.DS005670.md) * [DS005672: eeg dataset, 3 subjects](eegdash.dataset.DS005672.md) * [DS005688: eeg dataset, 20 subjects](eegdash.dataset.DS005688.md) * [DS005691: ieeg dataset, 8 subjects](eegdash.dataset.DS005691.md) * [DS005692: eeg dataset, 30 subjects](eegdash.dataset.DS005692.md) * [DS005697: eeg dataset, 51 subjects](eegdash.dataset.DS005697.md) * [DS005752: meg dataset, 123 subjects](eegdash.dataset.DS005752.md) * [DS005776: fnirs dataset, 11 subjects](eegdash.dataset.DS005776.md) * [DS005777: fnirs dataset, 14 subjects](eegdash.dataset.DS005777.md) * [DS005779: eeg dataset, 19 subjects](eegdash.dataset.DS005779.md) * [DS005795: eeg dataset, 34 subjects](eegdash.dataset.DS005795.md) * [DS005810: meg dataset, 31 subjects](eegdash.dataset.DS005810.md) * [DS005811: eeg dataset, 19 subjects](eegdash.dataset.DS005811.md) * [DS005815: eeg dataset, 20 subjects](eegdash.dataset.DS005815.md) * [DS005841: eeg dataset, 48 subjects](eegdash.dataset.DS005841.md) * [DS005857: eeg dataset, 29 subjects](eegdash.dataset.DS005857.md) * [DS005863: eeg dataset, 127 subjects](eegdash.dataset.DS005863.md) * [DS005866: eeg dataset, 60 subjects](eegdash.dataset.DS005866.md) * [DS005868: eeg dataset, 48 subjects](eegdash.dataset.DS005868.md) * [DS005872: eeg dataset, 1 subject](eegdash.dataset.DS005872.md) * [DS005873: eeg, emg dataset, 125 subjects](eegdash.dataset.DS005873.md) * [DS005876: eeg dataset, 29 subjects](eegdash.dataset.DS005876.md) * [DS005907: eeg dataset, 53 subjects](eegdash.dataset.DS005907.md) * [DS005929: fnirs dataset, 7 subjects](eegdash.dataset.DS005929.md) * [DS005930: fnirs dataset, 12 
subjects](eegdash.dataset.DS005930.md) * [DS005931: ieeg dataset, 8 subjects](eegdash.dataset.DS005931.md) * [DS005932: eeg dataset, 29 subjects](eegdash.dataset.DS005932.md) * [DS005935: fnirs dataset, 21 subjects](eegdash.dataset.DS005935.md) * [DS005946: eeg dataset, 39 subjects](eegdash.dataset.DS005946.md) * [DS005953: ieeg dataset, 2 subjects](eegdash.dataset.DS005953.md) * [DS005960: eeg dataset, 41 subjects](eegdash.dataset.DS005960.md) * [DS005963: fnirs dataset, 10 subjects](eegdash.dataset.DS005963.md) * [DS005964: fnirs dataset, 17 subjects](eegdash.dataset.DS005964.md) * [DS006012: meg dataset, 21 subjects](eegdash.dataset.DS006012.md) * [DS006018: eeg dataset, 127 subjects](eegdash.dataset.DS006018.md) * [DS006033: eeg dataset, 3 subjects](eegdash.dataset.DS006033.md) * [DS006035: meg dataset, 5 subjects](eegdash.dataset.DS006035.md) * [DS006036: eeg dataset, 88 subjects](eegdash.dataset.DS006036.md) * [DS006040: eeg dataset, 28 subjects](eegdash.dataset.DS006040.md) * [DS006065: ieeg dataset, 7 subjects](eegdash.dataset.DS006065.md) * [DS006095: eeg dataset, 71 subjects](eegdash.dataset.DS006095.md) * [DS006104: eeg dataset, 24 subjects](eegdash.dataset.DS006104.md) * [DS006107: ieeg dataset, 166 subjects](eegdash.dataset.DS006107.md) * [DS006126: eeg dataset, 5 subjects](eegdash.dataset.DS006126.md) * [DS006136: ieeg dataset, 13 subjects](eegdash.dataset.DS006136.md) * [DS006142: eeg dataset, 27 subjects](eegdash.dataset.DS006142.md) * [DS006159: eeg dataset, 61 subjects](eegdash.dataset.DS006159.md) * [DS006171: eeg dataset, 36 subjects](eegdash.dataset.DS006171.md) * [DS006222: eeg dataset, 69 subjects](eegdash.dataset.DS006222.md) * [DS006233: ieeg dataset, 108 subjects](eegdash.dataset.DS006233.md) * [DS006234: ieeg dataset, 119 subjects](eegdash.dataset.DS006234.md) * [DS006253: ieeg dataset, 23 subjects](eegdash.dataset.DS006253.md) * [DS006260: eeg dataset, 76 subjects](eegdash.dataset.DS006260.md) * [DS006269: eeg dataset, 24 
subjects](eegdash.dataset.DS006269.md) * [DS006317: eeg dataset, 2 subjects](eegdash.dataset.DS006317.md) * [DS006334: meg dataset, 30 subjects](eegdash.dataset.DS006334.md) * [DS006366: eeg dataset, 92 subjects](eegdash.dataset.DS006366.md) * [DS006367: eeg dataset, 52 subjects](eegdash.dataset.DS006367.md) * [DS006370: eeg dataset, 56 subjects](eegdash.dataset.DS006370.md) * [DS006374: eeg dataset, 36 subjects](eegdash.dataset.DS006374.md) * [DS006377: fnirs dataset, 115 subjects](eegdash.dataset.DS006377.md) * [DS006386: eeg dataset, 30 subjects](eegdash.dataset.DS006386.md) * [DS006392: ieeg dataset, 1 subject](eegdash.dataset.DS006392.md) * [DS006394: eeg dataset, 33 subjects](eegdash.dataset.DS006394.md) * [DS006434: eeg dataset, 66 subjects](eegdash.dataset.DS006434.md) * [DS006437: eeg dataset, 9 subjects](eegdash.dataset.DS006437.md) * [DS006446: eeg dataset, 29 subjects](eegdash.dataset.DS006446.md) * [DS006459: fnirs dataset, 17 subjects](eegdash.dataset.DS006459.md) * [DS006460: fnirs dataset, 17 subjects](eegdash.dataset.DS006460.md) * [DS006465: eeg dataset, 20 subjects](eegdash.dataset.DS006465.md) * [DS006466: eeg dataset, 66 subjects](eegdash.dataset.DS006466.md) * [DS006468: meg dataset, 24 subjects](eegdash.dataset.DS006468.md) * [DS006480: eeg dataset, 68 subjects](eegdash.dataset.DS006480.md) * [DS006502: meg dataset, 31 subjects](eegdash.dataset.DS006502.md) * [DS006519: ieeg dataset, 21 subjects](eegdash.dataset.DS006519.md) * [DS006525: eeg dataset, 34 subjects](eegdash.dataset.DS006525.md) * [DS006545: fnirs dataset, 49 subjects](eegdash.dataset.DS006545.md) * [DS006547: eeg dataset, 31 subjects](eegdash.dataset.DS006547.md) * [DS006554: eeg dataset, 47 subjects](eegdash.dataset.DS006554.md) * [DS006563: eeg dataset, 12 subjects](eegdash.dataset.DS006563.md) * [DS006576: eeg dataset, 57 subjects](eegdash.dataset.DS006576.md) * [DS006593: eeg dataset, 21 subjects](eegdash.dataset.DS006593.md) * [DS006629: meg dataset, 19 
subjects](eegdash.dataset.DS006629.md) * [DS006647: eeg dataset, 4 subjects](eegdash.dataset.DS006647.md) * [DS006648: eeg dataset, 47 subjects](eegdash.dataset.DS006648.md) * [DS006673: fnirs dataset, 17 subjects](eegdash.dataset.DS006673.md) * [DS006695: eeg dataset, 19 subjects](eegdash.dataset.DS006695.md) * [DS006720: meg dataset, 24 subjects](eegdash.dataset.DS006720.md) * [DS006735: eeg dataset, 27 subjects](eegdash.dataset.DS006735.md) * [DS006761: eeg dataset, 31 subjects](eegdash.dataset.DS006761.md) * [DS006768: eeg dataset, 30 subjects](eegdash.dataset.DS006768.md) * [DS006801: eeg dataset, 21 subjects](eegdash.dataset.DS006801.md) * [DS006802: eeg dataset, 24 subjects](eegdash.dataset.DS006802.md) * [DS006803: eeg dataset, 63 subjects](eegdash.dataset.DS006803.md) * [DS006817: eeg dataset, 34 subjects](eegdash.dataset.DS006817.md) * [DS006839: eeg dataset, 36 subjects](eegdash.dataset.DS006839.md) * [DS006840: eeg dataset, 15 subjects](eegdash.dataset.DS006840.md) * [DS006848: eeg dataset, 30 subjects](eegdash.dataset.DS006848.md) * [DS006850: eeg dataset, 63 subjects](eegdash.dataset.DS006850.md) * [DS006861: eeg dataset, 120 subjects](eegdash.dataset.DS006861.md) * [DS006866: eeg dataset, 148 subjects](eegdash.dataset.DS006866.md) * [DS006890: ieeg dataset, 2 subjects](eegdash.dataset.DS006890.md) * [DS006902: fnirs dataset, 42 subjects](eegdash.dataset.DS006902.md) * [DS006903: fnirs dataset, 17 subjects](eegdash.dataset.DS006903.md) * [DS006910: ieeg dataset, 121 subjects](eegdash.dataset.DS006910.md) * [DS006914: ieeg dataset, 110 subjects](eegdash.dataset.DS006914.md) * [DS006921: eeg dataset, 38 subjects](eegdash.dataset.DS006921.md) * [DS006923: eeg dataset, 140 subjects](eegdash.dataset.DS006923.md) * [DS006940: eeg dataset, 7 subjects](eegdash.dataset.DS006940.md) * [DS006945: eeg dataset, 5 subjects](eegdash.dataset.DS006945.md) * [DS006963: eeg dataset, 32 subjects](eegdash.dataset.DS006963.md) * [DS006979: eeg dataset, 53 
subjects](eegdash.dataset.DS006979.md) * [DS007006: eeg dataset, 10 subjects](eegdash.dataset.DS007006.md) * [DS007020: eeg dataset, 94 subjects](eegdash.dataset.DS007020.md) * [DS007028: eeg dataset, 3 subjects](eegdash.dataset.DS007028.md) * [DS007052: eeg dataset, 288 subjects](eegdash.dataset.DS007052.md) * [DS007056: eeg dataset, 286 subjects](eegdash.dataset.DS007056.md) * [DS007069: eeg dataset, 281 subjects](eegdash.dataset.DS007069.md) * [DS007081: eeg dataset, 41 subjects](eegdash.dataset.DS007081.md) * [DS007095: ieeg dataset, 8 subjects](eegdash.dataset.DS007095.md) * [DS007096: eeg dataset, 292 subjects](eegdash.dataset.DS007096.md) * [DS007118: ieeg dataset, 65 subjects](eegdash.dataset.DS007118.md) * [DS007119: ieeg dataset, 103 subjects](eegdash.dataset.DS007119.md) * [DS007120: ieeg dataset, 65 subjects](eegdash.dataset.DS007120.md) * [DS007137: eeg dataset, 294 subjects](eegdash.dataset.DS007137.md) * [DS007139: eeg dataset, 292 subjects](eegdash.dataset.DS007139.md) * [DS007162: eeg dataset, 34 subjects](eegdash.dataset.DS007162.md) * [DS007169: eeg dataset, 18 subjects](eegdash.dataset.DS007169.md) * [DS007172: eeg dataset, 100 subjects](eegdash.dataset.DS007172.md) * [DS007175: eeg dataset, 41 subjects](eegdash.dataset.DS007175.md) * [DS007176: eeg dataset, 45 subjects](eegdash.dataset.DS007176.md) * [DS007180: eeg dataset, 25 subjects](eegdash.dataset.DS007180.md) * [DS007181: eeg dataset, 59 subjects](eegdash.dataset.DS007181.md) * [DS007216: eeg dataset, 24 subjects](eegdash.dataset.DS007216.md) * [DS007221: eeg dataset, 84 subjects](eegdash.dataset.DS007221.md) * [DS007262: eeg dataset, 18 subjects](eegdash.dataset.DS007262.md) * [DS007314: eeg dataset, 2 subjects](eegdash.dataset.DS007314.md) * [DS007315: eeg dataset, 2 subjects](eegdash.dataset.DS007315.md) * [DS007322: eeg dataset, 57 subjects](eegdash.dataset.DS007322.md) * [DS007338: eeg dataset, 1 subject](eegdash.dataset.DS007338.md) * [DS007347: eeg dataset, 5 
subjects](eegdash.dataset.DS007347.md) * [DS007353: eeg, meg dataset, 32 subjects](eegdash.dataset.DS007353.md) * [DS007358: eeg dataset, 2000 subjects](eegdash.dataset.DS007358.md) * [DS007406: eeg dataset, 10 subjects](eegdash.dataset.DS007406.md) * [DS007420: fnirs dataset, 12 subjects](eegdash.dataset.DS007420.md) * [DS007427: eeg dataset, 44 subjects](eegdash.dataset.DS007427.md) * [DS007431: eeg dataset, 47 subjects](eegdash.dataset.DS007431.md) * [DS007445: ieeg dataset, 19 subjects](eegdash.dataset.DS007445.md) * [DS007454: eeg dataset, 42 subjects](eegdash.dataset.DS007454.md) * [DS007463: fnirs dataset, 8 subjects](eegdash.dataset.DS007463.md) * [DS007471: eeg dataset, 31 subjects](eegdash.dataset.DS007471.md) * [DS007473: fnirs dataset, 5 subjects](eegdash.dataset.DS007473.md) * [DS007477: fnirs dataset, 18 subjects](eegdash.dataset.DS007477.md) * [DS007521: eeg dataset, 23 subjects](eegdash.dataset.DS007521.md) * [DS007523: meg dataset, 58 subjects](eegdash.dataset.DS007523.md) * [DS007524: meg dataset, 50 subjects](eegdash.dataset.DS007524.md) * [DS007526: eeg dataset, 144 subjects](eegdash.dataset.DS007526.md) * [DS007554: eeg, fnirs dataset, 30 subjects](eegdash.dataset.DS007554.md) * [DS007558: eeg dataset, 67 subjects](eegdash.dataset.DS007558.md) * [DS007591: eeg dataset, 3 subjects](eegdash.dataset.DS007591.md) * [DS007602: eeg dataset, 3 subjects](eegdash.dataset.DS007602.md) * [DS007609: eeg dataset, 51 subjects](eegdash.dataset.DS007609.md) * [DS007615: eeg dataset, 69 subjects](eegdash.dataset.DS007615.md) # Other Datasets * [ABSeqMEG: EEG dataset](eegdash.dataset.ABSeqMEG.md) * [ANDI: EEG dataset](eegdash.dataset.ANDI.md) * [APPLESEED: EEG dataset](eegdash.dataset.APPLESEED.md) * [AlexMI: EEG dataset](eegdash.dataset.AlexMI.md) * [AlexMotorImagery: EEG dataset](eegdash.dataset.AlexMotorImagery.md) * [AlexandreMotorImagery: EEG dataset](eegdash.dataset.AlexandreMotorImagery.md) * [Alljoined: EEG dataset](eegdash.dataset.Alljoined.md) * 
[Alljoined1: EEG dataset](eegdash.dataset.Alljoined1.md) * [Alljoined16M: EEG dataset](eegdash.dataset.Alljoined16M.md) * [Alljoined1p6M: EEG dataset](eegdash.dataset.Alljoined1p6M.md) * [Alljoined_16M: EEG dataset](eegdash.dataset.Alljoined_16M.md) * [AlphaWaves: EEG dataset](eegdash.dataset.AlphaWaves.md) * [Alphawaves: EEG dataset](eegdash.dataset.Alphawaves.md) * [ArEEG: EEG dataset](eegdash.dataset.ArEEG.md) * [Ataseven2024: EEG dataset](eegdash.dataset.Ataseven2024.md) * [BCI2000_Intracranial: EEG dataset](eegdash.dataset.BCI2000_Intracranial.md) * [BCI2000_intraop: EEG dataset](eegdash.dataset.BCI2000_intraop.md) * [BCIAUT: EEG dataset](eegdash.dataset.BCIAUT.md) * [BCIAUTP300: EEG dataset](eegdash.dataset.BCIAUTP300.md) * [BCIAUT_P300: EEG dataset](eegdash.dataset.BCIAUT_P300.md) * [BCICIII_IVa: EEG dataset](eegdash.dataset.BCICIII_IVa.md) * [BCICIV1: EEG dataset](eegdash.dataset.BCICIV1.md) * [BCICompIII_IVa: EEG dataset](eegdash.dataset.BCICompIII_IVa.md) * [BCICompIV1: EEG dataset](eegdash.dataset.BCICompIV1.md) * [BCIT: EEG dataset](eegdash.dataset.BCIT.md) * [BCITAdvancedGuardDuty: EEG dataset](eegdash.dataset.BCITAdvancedGuardDuty.md) * [BCITBaselineDriving: EEG dataset](eegdash.dataset.BCITBaselineDriving.md) * [BCITMindWandering: EEG dataset](eegdash.dataset.BCITMindWandering.md) * [BCIT_Auditory_Cueing: EEG dataset](eegdash.dataset.BCIT_Auditory_Cueing.md) * [BCIT_Traffic_Complexity: EEG dataset](eegdash.dataset.BCIT_Traffic_Complexity.md) * [BETA: EEG dataset](eegdash.dataset.BETA.md) * [BETA_SSVEP: EEG dataset](eegdash.dataset.BETA_SSVEP.md) * [BI2012: EEG dataset](eegdash.dataset.BI2012.md) * [BI2013a: EEG dataset](eegdash.dataset.BI2013a.md) * [BI2014a: EEG dataset](eegdash.dataset.BI2014a.md) * [BI2014b: EEG dataset](eegdash.dataset.BI2014b.md) * [BI2015a: EEG dataset](eegdash.dataset.BI2015a.md) * [BI2015b: EEG dataset](eegdash.dataset.BI2015b.md) * [BMI_HDEEG_D1: EEG dataset](eegdash.dataset.BMI_HDEEG_D1.md) * [BMI_HDEEG_D2: EEG 
dataset](eegdash.dataset.BMI_HDEEG_D2.md) * [BMI_HDEEG_D3: EEG dataset](eegdash.dataset.BMI_HDEEG_D3.md) * [BMI_HDEEG_D4: EEG dataset](eegdash.dataset.BMI_HDEEG_D4.md) * [BNCI2003_IVa: EEG dataset](eegdash.dataset.BNCI2003_IVa.md) * [BNCI2014001: EEG dataset](eegdash.dataset.BNCI2014001.md) * [BNCI2014002: EEG dataset](eegdash.dataset.BNCI2014002.md) * [BNCI2014004: EEG dataset](eegdash.dataset.BNCI2014004.md) * [BNCI2014008: EEG dataset](eegdash.dataset.BNCI2014008.md) * [BNCI2014_009_P300: EEG dataset](eegdash.dataset.BNCI2014_009_P300.md) * [BNCI2015: EEG dataset](eegdash.dataset.BNCI2015.md) * [BNCI2015001: EEG dataset](eegdash.dataset.BNCI2015001.md) * [BNCI2015_003_AMUSE: EEG dataset](eegdash.dataset.BNCI2015_003_AMUSE.md) * [BNCI2015_003_P300: EEG dataset](eegdash.dataset.BNCI2015_003_P300.md) * [BNCI2015_006_MusicBCI: EEG dataset](eegdash.dataset.BNCI2015_006_MusicBCI.md) * [BNCI2015_008_CenterSpeller: EEG dataset](eegdash.dataset.BNCI2015_008_CenterSpeller.md) * [BNCI2015_008_P300: EEG dataset](eegdash.dataset.BNCI2015_008_P300.md) * [BNCI2015_BNCI_006_Music: EEG dataset](eegdash.dataset.BNCI2015_BNCI_006_Music.md) * [BNCI2015_ERP: EEG dataset](eegdash.dataset.BNCI2015_ERP.md) * [BNCI2015_P300: EEG dataset](eegdash.dataset.BNCI2015_P300.md) * [BNCI2016: EEG dataset](eegdash.dataset.BNCI2016.md) * [BNCI2016002: EEG dataset](eegdash.dataset.BNCI2016002.md) * [BNCI2020: EEG dataset](eegdash.dataset.BNCI2020.md) * [BNCI2020_002_AttentionShift: EEG dataset](eegdash.dataset.BNCI2020_002_AttentionShift.md) * [BNCI2020_002_CovertSpatialAttention: EEG dataset](eegdash.dataset.BNCI2020_002_CovertSpatialAttention.md) * [BNCI2025: EEG dataset](eegdash.dataset.BNCI2025.md) * [BNCI_2015_006_Music: EEG dataset](eegdash.dataset.BNCI_2015_006_Music.md) * [BOAS: EEG dataset](eegdash.dataset.BOAS.md) * [Barras2021: EEG dataset](eegdash.dataset.Barras2021.md) * [Barras2025: EEG dataset](eegdash.dataset.Barras2025.md) * [BetaSSVEP: EEG dataset](eegdash.dataset.BetaSSVEP.md) * 
[BigP3BCI_E: EEG dataset](eegdash.dataset.BigP3BCI_E.md) * [BigP3BCI_F: EEG dataset](eegdash.dataset.BigP3BCI_F.md) * [BigP3BCI_G: EEG dataset](eegdash.dataset.BigP3BCI_G.md) * [BigP3BCI_H: EEG dataset](eegdash.dataset.BigP3BCI_H.md) * [BigP3BCI_I: EEG dataset](eegdash.dataset.BigP3BCI_I.md) * [BigP3BCI_K: EEG dataset](eegdash.dataset.BigP3BCI_K.md) * [BigP3BCI_M: EEG dataset](eegdash.dataset.BigP3BCI_M.md) * [BigP3BCI_S1: EEG dataset](eegdash.dataset.BigP3BCI_S1.md) * [BigP3BCI_StudyE: EEG dataset](eegdash.dataset.BigP3BCI_StudyE.md) * [BigP3BCI_StudyF: EEG dataset](eegdash.dataset.BigP3BCI_StudyF.md) * [BigP3BCI_StudyG: EEG dataset](eegdash.dataset.BigP3BCI_StudyG.md) * [BigP3BCI_StudyH: EEG dataset](eegdash.dataset.BigP3BCI_StudyH.md) * [BigP3BCI_StudyI: EEG dataset](eegdash.dataset.BigP3BCI_StudyI.md) * [BigP3BCI_StudyK: EEG dataset](eegdash.dataset.BigP3BCI_StudyK.md) * [BigP3BCI_StudyM: EEG dataset](eegdash.dataset.BigP3BCI_StudyM.md) * [BigP3BCI_StudyN: EEG dataset](eegdash.dataset.BigP3BCI_StudyN.md) * [BigP3BCI_StudyS1: EEG dataset](eegdash.dataset.BigP3BCI_StudyS1.md) * [Bogacz2024: EEG dataset](eegdash.dataset.Bogacz2024.md) * [BrainInvaders: EEG dataset](eegdash.dataset.BrainInvaders.md) * [BrainInvaders2013a: EEG dataset](eegdash.dataset.BrainInvaders2013a.md) * [BrainInvaders2014a: EEG dataset](eegdash.dataset.BrainInvaders2014a.md) * [BrainInvaders2014b: EEG dataset](eegdash.dataset.BrainInvaders2014b.md) * [BrainInvaders2015a: EEG dataset](eegdash.dataset.BrainInvaders2015a.md) * [BrainInvaders2015b: EEG dataset](eegdash.dataset.BrainInvaders2015b.md) * [BrainInvadersBI2014b: EEG dataset](eegdash.dataset.BrainInvadersBI2014b.md) * [BrainTreeBank: EEG dataset](eegdash.dataset.BrainTreeBank.md) * [Broitman2019: EEG dataset](eegdash.dataset.Broitman2019.md) * [CARLA: EEG dataset](eegdash.dataset.CARLA.md) * [CHBMIT: EEG dataset](eegdash.dataset.CHBMIT.md) * [CHB_MIT: EEG dataset](eegdash.dataset.CHB_MIT.md) * [CHISCO20: EEG 
dataset](eegdash.dataset.CHISCO20.md) * [CPSEED: EEG dataset](eegdash.dataset.CPSEED.md) * [CPSEED_3M: EEG dataset](eegdash.dataset.CPSEED_3M.md) * [CastillosCVEP40: EEG dataset](eegdash.dataset.CastillosCVEP40.md) * [CatFR: EEG dataset](eegdash.dataset.CatFR.md) * [Chandravadia2022: EEG dataset](eegdash.dataset.Chandravadia2022.md) * [Chang2025: EEG dataset](eegdash.dataset.Chang2025.md) * [Chavarriaga2010: EEG dataset](eegdash.dataset.Chavarriaga2010.md) * [Chisco: EEG dataset](eegdash.dataset.Chisco.md) * [Chisco20: EEG dataset](eegdash.dataset.Chisco20.md) * [Chisco2_0: EEG dataset](eegdash.dataset.Chisco2_0.md) * [Cote2015: EEG dataset](eegdash.dataset.Cote2015.md) * [Couperus2017: EEG dataset](eegdash.dataset.Couperus2017.md) * [Couperus2021_LRP: EEG dataset](eegdash.dataset.Couperus2021_LRP.md) * [Couperus2021_MMN: EEG dataset](eegdash.dataset.Couperus2021_MMN.md) * [Couperus2021_N2pc: EEG dataset](eegdash.dataset.Couperus2021_N2pc.md) * [Couperus2021_N400: EEG dataset](eegdash.dataset.Couperus2021_N400.md) * [Couperus2021_P300: EEG dataset](eegdash.dataset.Couperus2021_P300.md) * [DENS: EEG dataset](eegdash.dataset.DENS.md) * [Dascoli2025: EEG dataset](eegdash.dataset.Dascoli2025.md) * [Delorme: EEG dataset](eegdash.dataset.Delorme.md) * [Dubois2024: EEG dataset](eegdash.dataset.Dubois2024.md) * [EEGAsymmetries: EEG dataset](eegdash.dataset.EEGAsymmetries.md) * [EEGEYENET: EEG dataset](eegdash.dataset.EEGEYENET.md) * [EEGEyeNet: EEG dataset](eegdash.dataset.EEGEyeNet.md) * [EEGEyeNet_v2: EEG dataset](eegdash.dataset.EEGEyeNet_v2.md) * [EEGMotorMovementImagery: EEG dataset](eegdash.dataset.EEGMotorMovementImagery.md) * [EESM17: EEG dataset](eegdash.dataset.EESM17.md) * [EESM19: EEG dataset](eegdash.dataset.EESM19.md) * [EESM23: EEG dataset](eegdash.dataset.EESM23.md) * [EPFLP300: EEG dataset](eegdash.dataset.EPFLP300.md) * [EPFLP300Dataset: EEG dataset](eegdash.dataset.EPFLP300Dataset.md) * [EPFL_P300: EEG dataset](eegdash.dataset.EPFL_P300.md) * [ERDetect: 
EEG dataset](eegdash.dataset.ERDetect.md) * [ERPCORE: EEG dataset](eegdash.dataset.ERPCORE.md) * [ERP_CORE: EEG dataset](eegdash.dataset.ERP_CORE.md) * [ER_Detect: EEG dataset](eegdash.dataset.ER_Detect.md) * [Edit2024: EEG dataset](eegdash.dataset.Edit2024.md) * [EldBETA: EEG dataset](eegdash.dataset.EldBETA.md) * [Ester2022: EEG dataset](eegdash.dataset.Ester2022.md) * [Ester2024_E1: EEG dataset](eegdash.dataset.Ester2024_E1.md) * [Ester2024_E2: EEG dataset](eegdash.dataset.Ester2024_E2.md) * [FACED: EEG dataset](eegdash.dataset.FACED.md) * [FLUX: EEG dataset](eegdash.dataset.FLUX.md) * [FRL_DiscreteGestures: EEG dataset](eegdash.dataset.FRL_DiscreteGestures.md) * [FRL_Handwriting: EEG dataset](eegdash.dataset.FRL_Handwriting.md) * [FRL_WristControl: EEG dataset](eegdash.dataset.FRL_WristControl.md) * [FernandezRodriguez2023: EEG dataset](eegdash.dataset.FernandezRodriguez2023.md) * [Ferron2019: EEG dataset](eegdash.dataset.Ferron2019.md) * [Flankers_FAR: EEG dataset](eegdash.dataset.Flankers_FAR.md) * [Flankers_NEAR: EEG dataset](eegdash.dataset.Flankers_NEAR.md) * [Fogarty2025: EEG dataset](eegdash.dataset.Fogarty2025.md) * [Formica2025: EEG dataset](eegdash.dataset.Formica2025.md) * [ForrestGump_MEG: EEG dataset](eegdash.dataset.ForrestGump_MEG.md) * [FuentesGuerra2024: EEG dataset](eegdash.dataset.FuentesGuerra2024.md) * [Gama2019: EEG dataset](eegdash.dataset.Gama2019.md) * [Gao2024: EEG dataset](eegdash.dataset.Gao2024.md) * [Gao2026: EEG dataset](eegdash.dataset.Gao2026.md) * [Ghaffari2024: EEG dataset](eegdash.dataset.Ghaffari2024.md) * [GuttmannFlury2025_ME: EEG dataset](eegdash.dataset.GuttmannFlury2025_ME.md) * [GuttmannFlury2025_MIME: EEG dataset](eegdash.dataset.GuttmannFlury2025_MIME.md) * [HADMEEG: EEG dataset](eegdash.dataset.HADMEEG.md) * [HAD_MEEG: EEG dataset](eegdash.dataset.HAD_MEEG.md) * [HBN_EEG_NC: EEG dataset](eegdash.dataset.HBN_EEG_NC.md) * [HBN_NoCommercial: EEG dataset](eegdash.dataset.HBN_NoCommercial.md) * [HBN_r1: EEG 
dataset](eegdash.dataset.HBN_r1.md) * [HBN_r10: EEG dataset](eegdash.dataset.HBN_r10.md) * [HBN_r10_bdf: EEG dataset](eegdash.dataset.HBN_r10_bdf.md) * [HBN_r10_bdf_mini: EEG dataset](eegdash.dataset.HBN_r10_bdf_mini.md) * [HBN_r11: EEG dataset](eegdash.dataset.HBN_r11.md) * [HBN_r11_bdf: EEG dataset](eegdash.dataset.HBN_r11_bdf.md) * [HBN_r11_bdf_mini: EEG dataset](eegdash.dataset.HBN_r11_bdf_mini.md) * [HBN_r1_bdf: EEG dataset](eegdash.dataset.HBN_r1_bdf.md) * [HBN_r1_bdf_mini: EEG dataset](eegdash.dataset.HBN_r1_bdf_mini.md) * [HBN_r2: EEG dataset](eegdash.dataset.HBN_r2.md) * [HBN_r2_bdf: EEG dataset](eegdash.dataset.HBN_r2_bdf.md) * [HBN_r2_bdf_mini: EEG dataset](eegdash.dataset.HBN_r2_bdf_mini.md) * [HBN_r3: EEG dataset](eegdash.dataset.HBN_r3.md) * [HBN_r3_bdf: EEG dataset](eegdash.dataset.HBN_r3_bdf.md) * [HBN_r3_bdf_mini: EEG dataset](eegdash.dataset.HBN_r3_bdf_mini.md) * [HBN_r4: EEG dataset](eegdash.dataset.HBN_r4.md) * [HBN_r4_bdf: EEG dataset](eegdash.dataset.HBN_r4_bdf.md) * [HBN_r4_bdf_mini: EEG dataset](eegdash.dataset.HBN_r4_bdf_mini.md) * [HBN_r5: EEG dataset](eegdash.dataset.HBN_r5.md) * [HBN_r5_bdf: EEG dataset](eegdash.dataset.HBN_r5_bdf.md) * [HBN_r5_bdf_mini: EEG dataset](eegdash.dataset.HBN_r5_bdf_mini.md) * [HBN_r6: EEG dataset](eegdash.dataset.HBN_r6.md) * [HBN_r6_bdf: EEG dataset](eegdash.dataset.HBN_r6_bdf.md) * [HBN_r6_bdf_mini: EEG dataset](eegdash.dataset.HBN_r6_bdf_mini.md) * [HBN_r7_bdf: EEG dataset](eegdash.dataset.HBN_r7_bdf.md) * [HBN_r7_bdf_mini: EEG dataset](eegdash.dataset.HBN_r7_bdf_mini.md) * [HBN_r8: EEG dataset](eegdash.dataset.HBN_r8.md) * [HBN_r8_bdf: EEG dataset](eegdash.dataset.HBN_r8_bdf.md) * [HBN_r8_bdf_mini: EEG dataset](eegdash.dataset.HBN_r8_bdf_mini.md) * [HBN_r9: EEG dataset](eegdash.dataset.HBN_r9.md) * [HBN_r9_bdf: EEG dataset](eegdash.dataset.HBN_r9_bdf.md) * [HBN_r9_bdf_mini: EEG dataset](eegdash.dataset.HBN_r9_bdf_mini.md) * [HEFMIICH: EEG dataset](eegdash.dataset.HEFMIICH.md) * [HEFMI_ICH: EEG 
dataset](eegdash.dataset.HEFMI_ICH.md) * [HID: EEG dataset](eegdash.dataset.HID.md) * [HUPiEEG: EEG dataset](eegdash.dataset.HUPiEEG.md) * [Hatano: EEG dataset](eegdash.dataset.Hatano.md) * [Haupt2025: EEG dataset](eegdash.dataset.Haupt2025.md) * [HealthyBrainNetwork: EEG dataset](eegdash.dataset.HealthyBrainNetwork.md) * [HeartBEAM: EEG dataset](eegdash.dataset.HeartBEAM.md) * [HenaoIsaza2026: EEG dataset](eegdash.dataset.HenaoIsaza2026.md) * [Hermann2021: EEG dataset](eegdash.dataset.Hermann2021.md) * [Hermes2024: EEG dataset](eegdash.dataset.Hermes2024.md) * [Herrema2024: EEG dataset](eegdash.dataset.Herrema2024.md) * [Hinss2021: EEG dataset](eegdash.dataset.Hinss2021.md) * [Hinss2021_v2: EEG dataset](eegdash.dataset.Hinss2021_v2.md) * [Huang2022: EEG dataset](eegdash.dataset.Huang2022.md) * [Huebner2017: EEG dataset](eegdash.dataset.Huebner2017.md) * [Huebner2018: EEG dataset](eegdash.dataset.Huebner2018.md) * [HySER: EEG dataset](eegdash.dataset.HySER.md) * [Hyser: EEG dataset](eegdash.dataset.Hyser.md) * [IACKD: EEG dataset](eegdash.dataset.IACKD.md) * [Jao2020: EEG dataset](eegdash.dataset.Jao2020.md) * [Johnson2024: EEG dataset](eegdash.dataset.Johnson2024.md) * [Johnson2025: EEG dataset](eegdash.dataset.Johnson2025.md) * [Kajikawa2000: EEG dataset](eegdash.dataset.Kajikawa2000.md) * [Kalenkovich2019: EEG dataset](eegdash.dataset.Kalenkovich2019.md) * [Kanno2025: EEG dataset](eegdash.dataset.Kanno2025.md) * [Kekecs2024: EEG dataset](eegdash.dataset.Kekecs2024.md) * [Kidder2024: EEG dataset](eegdash.dataset.Kidder2024.md) * [Kim2025: EEG dataset](eegdash.dataset.Kim2025.md) * [Kinley2019: EEG dataset](eegdash.dataset.Kinley2019.md) * [Kitazawa2025: EEG dataset](eegdash.dataset.Kitazawa2025.md) * [Kucyi2024: EEG dataset](eegdash.dataset.Kucyi2024.md) * [Kuroda2024: EEG dataset](eegdash.dataset.Kuroda2024.md) * [LEMON: EEG dataset](eegdash.dataset.LEMON.md) * [LPP: EEG dataset](eegdash.dataset.LPP.md) * [LeganesFonteneau2024: EEG 
dataset](eegdash.dataset.LeganesFonteneau2024.md) * [Lin2019: EEG dataset](eegdash.dataset.Lin2019.md) * [LittlePrince: EEG dataset](eegdash.dataset.LittlePrince.md) * [Liu2022EldBETA: EEG dataset](eegdash.dataset.Liu2022EldBETA.md) * [Lowe2025: EEG dataset](eegdash.dataset.Lowe2025.md) * [Luke2019: EEG dataset](eegdash.dataset.Luke2019.md) * [MAMEM2: EEG dataset](eegdash.dataset.MAMEM2.md) * [MAMEM2_SSVEP: EEG dataset](eegdash.dataset.MAMEM2_SSVEP.md) * [MAMEM3: EEG dataset](eegdash.dataset.MAMEM3.md) * [MASC_MEG: EEG dataset](eegdash.dataset.MASC_MEG.md) * [MAVIS: EEG dataset](eegdash.dataset.MAVIS.md) * [MEGMEM: EEG dataset](eegdash.dataset.MEGMEM.md) * [MEG_MASC: EEG dataset](eegdash.dataset.MEG_MASC.md) * [MEG_SCANS: EEG dataset](eegdash.dataset.MEG_SCANS.md) * [MNESomato: EEG dataset](eegdash.dataset.MNESomato.md) * [MNESomatoData: EEG dataset](eegdash.dataset.MNESomatoData.md) * [MNE_Sample_Data: EEG dataset](eegdash.dataset.MNE_Sample_Data.md) * [MSSV: EEG dataset](eegdash.dataset.MSSV.md) * [MUSING: EEG dataset](eegdash.dataset.MUSING.md) * [Maestu2021: EEG dataset](eegdash.dataset.Maestu2021.md) * [Martzoukou2024_Post: EEG dataset](eegdash.dataset.Martzoukou2024_Post.md) * [Martzoukou2024_Post_A: EEG dataset](eegdash.dataset.Martzoukou2024_Post_A.md) * [Melcon2024: EEG dataset](eegdash.dataset.Melcon2024.md) * [Mendola2020: EEG dataset](eegdash.dataset.Mendola2020.md) * [Mesquita2019: EEG dataset](eegdash.dataset.Mesquita2019.md) * [MetaRDK: EEG dataset](eegdash.dataset.MetaRDK.md) * [Mheich2020: EEG dataset](eegdash.dataset.Mheich2020.md) * [Mheich2024: EEG dataset](eegdash.dataset.Mheich2024.md) * [Miller2021: EEG dataset](eegdash.dataset.Miller2021.md) * [Mishra2024: EEG dataset](eegdash.dataset.Mishra2024.md) * [Mivalt2024: EEG dataset](eegdash.dataset.Mivalt2024.md) * [Moerel2023: EEG dataset](eegdash.dataset.Moerel2023.md) * [Moerel2025: EEG dataset](eegdash.dataset.Moerel2025.md) * [Moradi2024: EEG dataset](eegdash.dataset.Moradi2024.md) * 
[Motion_Yucel2014: EEG dataset](eegdash.dataset.Motion_Yucel2014.md) * [NOD_EEG: EEG dataset](eegdash.dataset.NOD_EEG.md) * [NOD_MEG: EEG dataset](eegdash.dataset.NOD_MEG.md) * [NenckiSymfonia: EEG dataset](eegdash.dataset.NenckiSymfonia.md) * [Neuma: EEG dataset](eegdash.dataset.Neuma.md) * [NeuroMorph: EEG dataset](eegdash.dataset.NeuroMorph.md) * [Nierula2019: EEG dataset](eegdash.dataset.Nierula2019.md) * [Ning2024: EEG dataset](eegdash.dataset.Ning2024.md) * [Normannseth2026: EEG dataset](eegdash.dataset.Normannseth2026.md) * [OMEGA: EEG dataset](eegdash.dataset.OMEGA.md) * [ORHA: EEG dataset](eegdash.dataset.ORHA.md) * [OcularLDT: EEG dataset](eegdash.dataset.OcularLDT.md) * [Oikonomou2016: EEG dataset](eegdash.dataset.Oikonomou2016.md) * [Omelyusik2026: EEG dataset](eegdash.dataset.Omelyusik2026.md) * [Onton2024: EEG dataset](eegdash.dataset.Onton2024.md) * [OpenBMI_ERP: EEG dataset](eegdash.dataset.OpenBMI_ERP.md) * [OpenBMI_MI: EEG dataset](eegdash.dataset.OpenBMI_MI.md) * [OpenBMI_P300: EEG dataset](eegdash.dataset.OpenBMI_P300.md) * [PAL: EEG dataset](eegdash.dataset.PAL.md) * [PDEEG: EEG dataset](eegdash.dataset.PDEEG.md) * [PD_EEG: EEG dataset](eegdash.dataset.PD_EEG.md) * [PEARLNeuro: EEG dataset](eegdash.dataset.PEARLNeuro.md) * [PEERS: EEG dataset](eegdash.dataset.PEERS.md) * [PRIOS: EEG dataset](eegdash.dataset.PRIOS.md) * [PROMENADE: EEG dataset](eegdash.dataset.PROMENADE.md) * [PWIe: EEG dataset](eegdash.dataset.PWIe.md) * [Penalver2024: EEG dataset](eegdash.dataset.Penalver2024.md) * [Peng2018: EEG dataset](eegdash.dataset.Peng2018.md) * [PerceiveImagine: EEG dataset](eegdash.dataset.PerceiveImagine.md) * [PhysionetMI: EEG dataset](eegdash.dataset.PhysionetMI.md) * [Podcast: EEG dataset](eegdash.dataset.Podcast.md) * [Pohle2019: EEG dataset](eegdash.dataset.Pohle2019.md) * [RAM_catFR: EEG dataset](eegdash.dataset.RAM_catFR.md) * [RESPect_CCEP: EEG dataset](eegdash.dataset.RESPect_CCEP.md) * [RESPect_intraop: EEG 
dataset](eegdash.dataset.RESPect_intraop.md) * [RESPect_longterm: EEG dataset](eegdash.dataset.RESPect_longterm.md) * [Ramzaoui2024: EEG dataset](eegdash.dataset.Ramzaoui2024.md) * [Rani2019: EEG dataset](eegdash.dataset.Rani2019.md) * [Rockhill2022: EEG dataset](eegdash.dataset.Rockhill2022.md) * [Rodrigues2017: EEG dataset](eegdash.dataset.Rodrigues2017.md) * [Romani2025: EEG dataset](eegdash.dataset.Romani2025.md) * [Romani2025_erp: EEG dataset](eegdash.dataset.Romani2025_erp.md) * [Runabout: EEG dataset](eegdash.dataset.Runabout.md) * [SINGSING: EEG dataset](eegdash.dataset.SINGSING.md) * [SSVEPMAMEM2: EEG dataset](eegdash.dataset.SSVEPMAMEM2.md) * [SSVEP_MAMEM3: EEG dataset](eegdash.dataset.SSVEP_MAMEM3.md) * [STRONG: EEG dataset](eegdash.dataset.STRONG.md) * [STReEF: EEG dataset](eegdash.dataset.STReEF.md) * [Sakakura2024: EEG dataset](eegdash.dataset.Sakakura2024.md) * [Sakakura2025: EEG dataset](eegdash.dataset.Sakakura2025.md) * [Sato2024: EEG dataset](eegdash.dataset.Sato2024.md) * [Sato2025: EEG dataset](eegdash.dataset.Sato2025.md) * [SeizeIT2: EEG dataset](eegdash.dataset.SeizeIT2.md) * [Shalamberidze2025: EEG dataset](eegdash.dataset.Shalamberidze2025.md) * [Shin2017A: EEG dataset](eegdash.dataset.Shin2017A.md) * [Shin2017B: EEG dataset](eegdash.dataset.Shin2017B.md) * [SleepEDF: EEG dataset](eegdash.dataset.SleepEDF.md) * [SleepEDFExpanded: EEG dataset](eegdash.dataset.SleepEDFExpanded.md) * [Somato: EEG dataset](eegdash.dataset.Somato.md) * [Surrey_cEEGrid_sleep: EEG dataset](eegdash.dataset.Surrey_cEEGrid_sleep.md) * [THINGS: EEG dataset](eegdash.dataset.THINGS.md) * [THINGSMEG: EEG dataset](eegdash.dataset.THINGSMEG.md) * [THINGS_EEG: EEG dataset](eegdash.dataset.THINGS_EEG.md) * [THINGS_MEG: EEG dataset](eegdash.dataset.THINGS_MEG.md) * [TMNRED: EEG dataset](eegdash.dataset.TMNRED.md) * [TNO: EEG dataset](eegdash.dataset.TNO.md) * [TX14: EEG dataset](eegdash.dataset.TX14.md) * [TX15: EEG dataset](eegdash.dataset.TX15.md) * [TX18: EEG 
dataset](eegdash.dataset.TX18.md) * [TX20: EEG dataset](eegdash.dataset.TX20.md) * [Todorovic2023: EEG dataset](eegdash.dataset.Todorovic2023.md) * [ToonFaces: EEG dataset](eegdash.dataset.ToonFaces.md) * [Touryan1999: EEG dataset](eegdash.dataset.Touryan1999.md) * [Tripathy2024: EEG dataset](eegdash.dataset.Tripathy2024.md) * [VEPCON: EEG dataset](eegdash.dataset.VEPCON.md) * [Veillette2019: EEG dataset](eegdash.dataset.Veillette2019.md) * [Vianney2025: EEG dataset](eegdash.dataset.Vianney2025.md) * [VisualContextTrajectory: EEG dataset](eegdash.dataset.VisualContextTrajectory.md) * [VisualContextTrajectory_v2: EEG dataset](eegdash.dataset.VisualContextTrajectory_v2.md) * [WBCICSHU: EEG dataset](eegdash.dataset.WBCICSHU.md) * [WBCIC_SHU: EEG dataset](eegdash.dataset.WBCIC_SHU.md) * [WIRED_ICM: EEG dataset](eegdash.dataset.WIRED_ICM.md) * [Wakeman2015: EEG dataset](eegdash.dataset.Wakeman2015.md) * [WakemanHenson: EEG dataset](eegdash.dataset.WakemanHenson.md) * [WakemanHenson_EEG_MEG: EEG dataset](eegdash.dataset.WakemanHenson_EEG_MEG.md) * [Weibo2014: EEG dataset](eegdash.dataset.Weibo2014.md) * [Weisend2007: EEG dataset](eegdash.dataset.Weisend2007.md) * [Wimmer2024: EEG dataset](eegdash.dataset.Wimmer2024.md) * [Yang2025: EEG dataset](eegdash.dataset.Yang2025.md) * [Yu2019: EEG dataset](eegdash.dataset.Yu2019.md) * [Yucel2014: EEG dataset](eegdash.dataset.Yucel2014.md) * [Yucel2015: EEG dataset](eegdash.dataset.Yucel2015.md) * [Zhang2025: EEG dataset](eegdash.dataset.Zhang2025.md) * [Zhao2024: EEG dataset](eegdash.dataset.Zhao2024.md) * [Zhou2016_NEMAR: EEG dataset](eegdash.dataset.Zhou2016_NEMAR.md) * [Zhou2024: EEG dataset](eegdash.dataset.Zhou2024.md) * [catFR_Categorized_Free_Recall: EEG dataset](eegdash.dataset.catFR_Categorized_Free_Recall.md) * [catFR_closed_loop: EEG dataset](eegdash.dataset.catFR_closed_loop.md) * [catFR_open_loop: EEG dataset](eegdash.dataset.catFR_open_loop.md) * [catFR_stim: EEG dataset](eegdash.dataset.catFR_stim.md) * [eldBETA: 
EEG dataset](eegdash.dataset.eldBETA.md) * [emg2qwerty: EEG dataset](eegdash.dataset.emg2qwerty.md) * [neuromorph: EEG dataset](eegdash.dataset.neuromorph.md) * [ocular_ldt: EEG dataset](eegdash.dataset.ocular_ldt.md) * [pyFR: EEG dataset](eegdash.dataset.pyFR.md) # Feature Package Overview The `eegdash.features` namespace re-exports feature extractors, decorators, and dataset utilities from the underlying submodules so callers can import the most common helpers from a single place. To avoid duplicated documentation in the API reference, the classes themselves are documented in their defining modules (see the links below). This page focuses on the high-level orchestration helpers that only live in the package `__init__`. ## High-level discovery helpers ### eegdash.features.get_all_features() → list[tuple[str, Callable]] Get a list of all available feature functions. Scans the `feature_bank` module for functions that have been decorated with a feature_kind. * **Returns:** A list of (name, function) tuples for all discovered feature functions. * **Return type:** list of tuple ### eegdash.features.get_feature_kind(feature: Callable) → eegdash.features.extractors.MultivariateFeature Get the ‘kind’ of a feature function. Identifies whether a feature is univariate, bivariate, or multivariate using decorators. * **Parameters:** **feature** (*callable*) – The feature function to inspect. * **Returns:** An instance of the feature kind. * **Return type:** `MultivariateFeature` ### eegdash.features.get_feature_predecessors(feature_or_extractor: Callable | None) → list Get the dependency hierarchy for a feature or feature extractor. This function recursively traverses the parent_extractor_type attribute of a feature or extractor to build a list representing its dependency lineage. * **Parameters:** **feature_or_extractor** (*callable*) – The feature function or `FeatureExtractor` instance to inspect. * **Returns:** A nested list representing the dependency tree. 
For a simple linear chain, this will be a flat list from the specific feature up to the base signal input. For multiple dependencies, it contains tuples of sub-dependencies. * **Return type:** list ### Notes The traversal stops when it reaches a predecessor of `None`, which typically represents the raw signal. ### Examples ```pycon >>> # Example: Linear dependency with a branching dependency (reprs shown as generic placeholders) >>> print(get_feature_predecessors(feature_bank.spectral_entropy)) [<feature>, <extractor>, <extractor>, (None, [<extractor>, None])] ``` ### eegdash.features.get_all_feature_kinds() → list[tuple[str, type[MultivariateFeature]]] Get a list of all available feature ‘kind’ classes. Scans the `kinds` module for all classes that subclass `MultivariateFeature`. * **Returns:** A list of (name, class) tuples for all discovered feature kinds. * **Return type:** list of tuple ### eegdash.features.get_all_preprocessor_output_types() → list[tuple[str, type[BasePreprocessorOutputType]]] Get a list of all available preprocessor output type classes. Scans the `feature_bank` module for all classes that subclass `BasePreprocessorOutputType`. * **Returns:** A list of (name, class) tuples for all discovered preprocessor output types. * **Return type:** list of tuple ## Dataset and extraction utilities ### eegdash.features.extract_features(concat_dataset: BaseConcatDataset, features: FeatureExtractor | Dict[str, Callable] | List[Callable], \*, batch_size: int = 512, n_jobs: int = 1) → FeaturesConcatDataset Extract features from a collection of windowed recordings. This function applies a feature extraction pipeline to every individual recording in a `BaseConcatDataset`. * **Parameters:** * **concat_dataset** (*BaseConcatDataset*) – A concatenated dataset of `WindowsDataset` or `EEGWindowsDataset` instances. * **features** (*FeatureExtractor* *or* *dict* *or* *list*) – The feature extractor(s) to apply.
Can be a `FeatureExtractor` instance, a dictionary of named feature functions, or a list of feature functions. * **batch_size** (*int* *,* *default 512*) – The size of batches used for feature extraction within each recording. * **n_jobs** (*int* *,* *default 1*) – The number of parallel jobs to use for processing different recordings simultaneously. * **Returns:** A unified collection of feature datasets corresponding to the input recordings. * **Return type:** *FeaturesConcatDataset* ### eegdash.features.fit_feature_extractors(concat_dataset: BaseConcatDataset, features: FeatureExtractor | Dict[str, Callable] | List[Callable], batch_size: int = 8192) → FeatureExtractor Fit trainable feature extractors on a concatenated dataset. Scans the provided feature pipeline for components that require training (subclasses of `TrainableFeature`). If found, the function iterates through the dataset in batches to perform partial fitting before finalization. * **Parameters:** * **concat_dataset** (*BaseConcatDataset*) – The dataset used to train the feature extractors. * **features** (*FeatureExtractor* *or* *dict* *or* *list*) – The feature extractor pipeline(s) to fit. * **batch_size** (*int* *,* *default 8192*) – The batch size to use when streaming data through the `partial_fit()` phase. * **Returns:** The fitted feature extractor instance, ready for feature extraction. * **Return type:** *FeatureExtractor* ### Notes If the provided extractors are not trainable, the function returns the original input without modification. ### eegdash.features.load_features_concat_dataset(path: str | Path, ids_to_load: list[int] | None = None, n_jobs: int = 1) → eegdash.features.datasets.FeaturesConcatDataset Load a stored `FeaturesConcatDataset` from a directory. This function reconstructs a concatenated dataset by loading individual `FeaturesDataset` instances from numbered subdirectories. 
* **Parameters:** * **path** (*str* *or* *pathlib.Path*) – The root directory where the dataset was previously saved. This directory should contain numbered subdirectories. * **ids_to_load** (*list* *of* *int* *,* *optional*) – A list of specific recording IDs (subdirectory names) to load. If **None**, all numbered subdirectories found in the path are loaded in ascending numerical order. * **n_jobs** (*int* *,* *default=1*) – The number of CPU cores to use for parallel loading. Set to -1 to use all available processors. * **Returns:** A unified concatenated dataset containing the loaded recordings. * **Return type:** FeaturesConcatDataset #### SEE ALSO `braindecode.datautil.load_concat_dataset` ### Notes The function expects the directory structure generated by `FeaturesConcatDataset.save()`. It automatically reconstructs the feature DataFrames (safetensors), metadata (Pickle), recording info (FIF), and preprocessing keyword arguments (JSON). ## See also - `eegdash.features.extractors` for the feature-extraction base classes such as `FeatureExtractor`. - `eegdash.features.datasets` for dataset wrappers like `FeaturesConcatDataset`. - `eegdash.features.feature_bank.*` for the concrete feature families (complexity, connectivity, spectral, and more). # eegdash.api High-level interface to the EEGDash metadata database. This module provides the main EEGDash class which serves as the primary entry point for interacting with the EEGDash ecosystem. It offers methods to query, insert, and update metadata records stored in the EEGDash database via REST API. ### Classes | `EEGDash`(\*[, database, api_url, auth_token]) | High-level interface to the EEGDash metadata database. | |--------------------------------------------------|----------------------------------------------------------| ### *class* eegdash.api.EEGDash(\*, database: str = 'eegdash', api_url: str | None = None, auth_token: str | None = None) Bases: `object` High-level interface to the EEGDash metadata database.
Provides methods to query, insert, and update metadata records stored in the EEGDash database via REST API gateway. For working with collections of recordings as PyTorch datasets, prefer `EEGDashDataset`. Create a new EEGDash client. * **Parameters:** * **database** (*str* *,* *default "eegdash"*) – Name of the MongoDB database to connect to. Common values: `"eegdash"` (production), `"eegdash_staging"` (staging), `"eegdash_v1"` (legacy archive). * **api_url** (*str* *,* *optional*) – Override the default API URL. If not provided, uses the default public endpoint or the `EEGDASH_API_URL` environment variable. * **auth_token** (*str* *,* *optional*) – Authentication token for admin write operations. Not required for public read operations. ### Examples ```pycon >>> eegdash = EEGDash() # production >>> eegdash = EEGDash(database="eegdash_staging") # staging >>> records = eegdash.find({"dataset": "ds002718"}) ``` #### find_datasets(query: dict[str, Any] | None = None, limit: int = 1000) → list[Mapping[str, Any]] Find datasets matching query. * **Parameters:** * **query** (*dict*) – Filter query. * **limit** (*int*) – Max number of datasets to return. * **Returns:** List of dataset metadata documents. * **Return type:** list of dict #### find(query: dict[str, Any] = None, /, \*\*kwargs) → list[Mapping[str, Any]] Find records in the collection. ### Examples ```pycon >>> from eegdash import EEGDash >>> eegdash = EEGDash() >>> eegdash.find({"dataset": "ds002718", "subject": {"$in": ["012", "013"]}}) # pre-built query >>> eegdash.find(dataset="ds002718", subject="012") # keyword filters >>> eegdash.find(dataset="ds002718", subject=["012", "013"]) # sequence -> $in >>> eegdash.find({}) # fetch all (use with care) >>> eegdash.find({"dataset": "ds002718"}, subject=["012", "013"]) # combine query + kwargs (AND) ``` * **Parameters:** * **query** (*dict* *,* *optional*) – Complete MongoDB query dictionary. This is a positional-only argument.
* **\*\*kwargs** – User-friendly field filters that are converted to a MongoDB query. Values can be scalars (e.g., `"sub-01"`) or sequences (translated to `$in` queries). Special parameters: `limit` (int) and `skip` (int) for pagination. * **Returns:** DB records that match the query. * **Return type:** list of dict #### exists(query: dict[str, Any] = None, /, \*\*kwargs) → bool Check if at least one record matches the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** True if at least one matching record exists; False otherwise. * **Return type:** bool ### Examples ```pycon >>> eeg = EEGDash() >>> eeg.exists(dataset="ds002718") # check by dataset >>> eeg.exists({"data_name": "ds002718_sub-001_eeg.set"}) # check by data_name ``` #### count(query: dict[str, Any] = None, /, \*\*kwargs) → int Count documents matching the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of matching documents. * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash() >>> count = eeg.count({}) # count all >>> count = eeg.count(dataset="ds002718") # count by dataset ``` #### find_one(query: dict[str, Any] = None, /, \*\*kwargs) → Mapping[str, Any] | None Find a single record matching the query. * **Parameters:** * **query** (*dict* *,* *optional*) – Complete query dictionary. This is a positional-only argument. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** The first matching record, or None if no match.
* **Return type:** dict or None ### Examples ```pycon >>> eeg = EEGDash() >>> record = eeg.find_one(data_name="ds002718_sub-001_eeg.set") ``` #### get_dataset(dataset_id: str) → Mapping[str, Any] | None Fetch metadata for a specific dataset. * **Parameters:** **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **Returns:** The dataset metadata document, or None if not found. * **Return type:** dict or None #### insert(records: dict[str, Any] | list[dict[str, Any]]) → int Insert one or more records (requires auth_token). * **Parameters:** **records** (*dict* *or* *list* *of* *dict*) – A single record or list of records to insert. * **Returns:** Number of records inserted. * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.insert({"dataset": "ds001", "subject": "01", ...}) # single >>> eeg.insert([record1, record2, record3]) # batch ``` #### update_field(query: dict[str, Any] = None, /, \*, update: dict[str, Any], \*\*kwargs) → tuple[int, int] Update fields on records matching the query (requires auth_token). Use this to add or modify fields across matching records, e.g., after re-extracting entities with an improved algorithm. * **Parameters:** * **query** (*dict* *,* *optional*) – Filter query to match records. This is a positional-only argument. * **update** (*dict*) – Fields to update. Keys are field names, values are new values. * **\*\*kwargs** – User-friendly field filters (same as find()). * **Returns:** Number of records matched and actually modified.
* **Return type:** tuple of (matched_count, modified_count) ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> # Update entities for all records in a dataset >>> eeg.update_field({"dataset": "ds002718"}, update={"entities": {"subject": "01"}}) >>> # Using kwargs for filter >>> eeg.update_field(dataset="ds002718", update={"entities": new_entities}) >>> # Combine query + kwargs >>> eeg.update_field({"dataset": "ds002718"}, subject="01", update={"entities": new_entities}) ``` #### update_dataset(dataset_id: str, update: dict[str, Any]) → int Update metadata for a specific dataset (requires auth_token). * **Parameters:** * **dataset_id** (*str*) – The unique identifier of the dataset (e.g., ‘ds002718’). * **update** (*dict*) – Dictionary of fields to update. * **Returns:** Number of documents modified (0 or 1). * **Return type:** int ### Examples ```pycon >>> eeg = EEGDash(auth_token="...") >>> eeg.update_dataset("ds002718", {"clinical.is_clinical": True}) ``` # eegdash.bids_metadata BIDS metadata processing and query building utilities. This module provides functions for building database queries from user parameters and enriching metadata records with participant information from BIDS datasets. ### Functions | `build_query_from_kwargs`(\*\*kwargs) | Build and validate a MongoDB query from keyword arguments. | |-----------------------------------------------------|---------------------------------------------------------------------------------------| | `merge_query`([query, require_query]) | Merge a raw query dict with keyword arguments into a final query. | | `normalize_key`(key) | Normalize a string key for robust matching. | | `merge_participants_fields`(description, ...) | Merge fields from a participants.tsv row into a description dict. | | `participants_row_for_subject`(bids_root, subject) | Load participants.tsv and return the row for a specific subject. | | `participants_extras_from_tsv`(bids_root, ...) 
| Extract additional participant information from participants.tsv. | | `attach_participants_extras`(raw, description, ...) | Attach extra participant data to a raw object and its description. | | `enrich_from_participants`(bids_root, ...) | Read participants.tsv and attach extra info for the subject. | | `get_entity_from_record`(record, entity) | Get an entity value from a record, supporting both v1 (flat) and v2 (nested) formats. | | `get_entities_from_record`(record[, entities]) | Get multiple entity values from a record. | ### eegdash.bids_metadata.build_query_from_kwargs(\*\*kwargs) → dict[str, Any] Build and validate a MongoDB query from keyword arguments. Converts user-friendly keyword arguments into a valid MongoDB query dictionary. Scalar values become exact matches; list-like values become `$in` queries. Entity fields (subject, task, session, run) are queried at the top level since the inject script flattens these from nested entities. * **Parameters:** **\*\*kwargs** – Query filters. Allowed keys are in `eegdash.const.ALLOWED_QUERY_FIELDS`. * **Returns:** A MongoDB query dictionary. * **Return type:** dict * **Raises:** **ValueError** – If an unsupported field is provided, or if a value is None/empty. ### eegdash.bids_metadata.merge_query(query: dict[str, Any] | None = None, require_query: bool = True, \*\*kwargs) → dict[str, Any] Merge a raw query dict with keyword arguments into a final query. * **Parameters:** * **query** (*dict* *or* *None*) – Raw MongoDB query dictionary. Pass `{}` to match all documents. * **require_query** (*bool* *,* *default True*) – If True, raise ValueError when no query or kwargs provided. * **\*\*kwargs** – User-friendly field filters (converted via `build_query_from_kwargs`). * **Returns:** The merged MongoDB query. * **Return type:** dict * **Raises:** **ValueError** – If `require_query=True` and neither query nor kwargs provided, or if conflicting constraints are detected. 
### eegdash.bids_metadata.normalize_key(key: str) → str Normalize a string key for robust matching. Converts to lowercase, replaces non-alphanumeric chars with underscores. ### eegdash.bids_metadata.merge_participants_fields(description: dict[str, Any], participants_row: dict[str, Any] | None, description_fields: list[str] | None = None) → dict[str, Any] Merge fields from a participants.tsv row into a description dict. * **Parameters:** * **description** (*dict*) – The description dictionary to enrich. * **participants_row** (*dict* *or* *None*) – A row from participants.tsv. If None, returns description unchanged. * **description_fields** (*list* *of* *str* *,* *optional*) – Specific fields to include (matched using normalized keys). * **Returns:** The enriched description dictionary. * **Return type:** dict ### eegdash.bids_metadata.participants_row_for_subject(bids_root: str | Path, subject: str, id_columns: tuple[str, ...] = ('participant_id', 'participant', 'subject')) → Series | None Load participants.tsv and return the row for a specific subject. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **subject** (*str*) – Subject identifier (e.g., “01” or “sub-01”). * **id_columns** (*tuple* *of* *str*) – Column names to search for the subject identifier. * **Returns:** Subject’s data if found, otherwise None. * **Return type:** pandas.Series or None ### eegdash.bids_metadata.participants_extras_from_tsv(bids_root: str | Path, subject: str, \*, id_columns: tuple[str, ...] = ('participant_id', 'participant', 'subject'), na_like: tuple[str, ...] = ('', 'n/a', 'na', 'nan', 'unknown', 'none')) → dict[str, Any] Extract additional participant information from participants.tsv. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **subject** (*str*) – Subject identifier. * **id_columns** (*tuple* *of* *str*) – Column names treated as identifiers (excluded from output).
* **na_like** (*tuple* *of* *str*) – Values considered as “Not Available” (excluded). * **Returns:** Extra participant information. * **Return type:** dict ### eegdash.bids_metadata.attach_participants_extras(raw: Any, description: Any, extras: dict[str, Any]) → None Attach extra participant data to a raw object and its description. * **Parameters:** * **raw** (*mne.io.Raw*) – The MNE Raw object to be updated. * **description** (*dict* *or* *pandas.Series*) – The description object to be updated. * **extras** (*dict*) – Extra participant information to attach. ### eegdash.bids_metadata.enrich_from_participants(bids_root: str | Path, bidspath: Any, raw: Any, description: Any) → dict[str, Any] Read participants.tsv and attach extra info for the subject. * **Parameters:** * **bids_root** (*str* *or* *Path*) – Root directory of the BIDS dataset. * **bidspath** (*mne_bids.BIDSPath*) – BIDSPath object for the current data file. * **raw** (*mne.io.Raw*) – The MNE Raw object to be updated. * **description** (*dict* *or* *pandas.Series*) – The description object to be updated. * **Returns:** The extras that were attached. * **Return type:** dict ### eegdash.bids_metadata.get_entity_from_record(record: dict[str, Any], entity: str) → Any Get an entity value from a record, supporting both v1 (flat) and v2 (nested) formats. * **Parameters:** * **record** (*dict*) – A record dictionary. * **entity** (*str*) – Entity name (e.g., “subject”, “task”, “session”, “run”). * **Returns:** The entity value, or None if not found. * **Return type:** Any ### Examples ```pycon >>> # v2 record (nested) >>> rec = {"entities": {"subject": "01", "task": "rest"}} >>> get_entity_from_record(rec, "subject") '01' >>> # v1 record (flat) >>> rec = {"subject": "01", "task": "rest"} >>> get_entity_from_record(rec, "subject") '01' ``` ### eegdash.bids_metadata.get_entities_from_record(record: dict[str, Any], entities: tuple[str, ...] 
= ('subject', 'session', 'run', 'task')) → dict[str, Any] Get multiple entity values from a record. * **Parameters:** * **record** (*dict*) – A record dictionary. * **entities** (*tuple* *of* *str*) – Entity names to extract. * **Returns:** Dictionary of entity values (only non-None values included). * **Return type:** dict # eegdash.const Configuration constants and mappings for EEGDash. This module contains global configuration settings, allowed query fields, and mapping constants used throughout the EEGDash package. It defines the interface between EEGDash releases and OpenNeuro dataset identifiers, as well as validation rules for database queries. ### Module Attributes | `ALLOWED_QUERY_FIELDS` | A set of field names that are permitted in database queries constructed via `find()` with keyword arguments. | |------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------| | `RELEASE_TO_OPENNEURO_DATASET_MAP` | A mapping from Healthy Brain Network (HBN) release identifiers (e.g., "R11") to their corresponding OpenNeuro dataset identifiers (e.g., "ds005516"). | | `SUBJECT_MINI_RELEASE_MAP` | A mapping from HBN release identifiers to a list of subject IDs. | | `config` | A global configuration dictionary for the EEGDash package. 
| ### eegdash.const.config *= {'accepted_query_fields': ['data_name', 'dataset'], 'attributes': {'bidspath': 'str', 'data_name': 'str', 'dataset': 'str', 'modality': 'str', 'nchans': 'int', 'ntimes': 'int', 'run': 'str', 'sampling_frequency': 'float', 'session': 'str', 'subject': 'str', 'task': 'str'}, 'bids_dependencies_files': ['dataset_description.json', 'participants.tsv', 'events.tsv', 'events.json', 'eeg.json', 'electrodes.tsv', 'channels.tsv', 'coordsystem.json'], 'description_fields': ['subject', 'session', 'run', 'task', 'age', 'gender', 'sex'], 'required_fields': ['data_name']}* A global configuration dictionary for the EEGDash package. ### eegdash.const.ALLOWED_QUERY_FIELDS *= {'data_name', 'dataset', 'modality', 'nchans', 'ntimes', 'run', 'sampling_frequency', 'session', 'subject', 'task'}* A set of field names that are permitted in database queries constructed via `find()` with keyword arguments. * **Type:** set ### eegdash.const.RELEASE_TO_OPENNEURO_DATASET_MAP *= {'R1': 'ds005505', 'R10': 'ds005515', 'R11': 'ds005516', 'R2': 'ds005506', 'R3': 'ds005507', 'R4': 'ds005508', 'R5': 'ds005509', 'R6': 'ds005510', 'R7': 'ds005511', 'R8': 'ds005512', 'R9': 'ds005514'}* A mapping from Healthy Brain Network (HBN) release identifiers (e.g., “R11”) to their corresponding OpenNeuro dataset identifiers (e.g., “ds005516”). 
* **Type:** dict ### eegdash.const.SUBJECT_MINI_RELEASE_MAP *= {'R1': ['NDARAC904DMU', 'NDARAM704GKZ', 'NDARAP359UM6', 'NDARBD879MBX', 'NDARBH024NH2', 'NDARBK082PDD', 'NDARCA153NKE', 'NDARCE721YB5', 'NDARCJ594BWQ', 'NDARCN669XPR', 'NDARCW094JCG', 'NDARCZ947WU5', 'NDARDH670PXH', 'NDARDL511UND', 'NDARDU986RBM', 'NDAREM731BYM', 'NDAREN519BLJ', 'NDARFK610GY5', 'NDARFT581ZW5', 'NDARFW972KFQ'], 'R10': ['NDARAR935TGZ', 'NDARAV474ADJ', 'NDARCB869VM8', 'NDARCJ667UPL', 'NDARCM677TC1', 'NDARET671FTC', 'NDARKM061NHZ', 'NDARLD501HDK', 'NDARLL176DJR', 'NDARMT791WDH', 'NDARMW299ZAB', 'NDARNC405WJA', 'NDARNP962TJK', 'NDARPB967KU7', 'NDARRU560AGK', 'NDARTB173LY2', 'NDARUW377KAE', 'NDARVH565FX9', 'NDARVP799KGY', 'NDARVY962GB5'], 'R11': ['NDARAB678VYW', 'NDARAG788YV9', 'NDARAM946HJE', 'NDARAY977BZT', 'NDARAZ532KK0', 'NDARCE912ZXW', 'NDARCM214WFE', 'NDARDL033XRG', 'NDARDT889RT9', 'NDARDZ794ZVP', 'NDAREV869CPW', 'NDARFN221WW5', 'NDARFV289RKB', 'NDARFY623ZTE', 'NDARGA890MKA', 'NDARHN206XY3', 'NDARHP518FUR', 'NDARJL292RYV', 'NDARKM199DXW', 'NDARKW236TN7'], 'R2': ['NDARAB793GL3', 'NDARAM675UR8', 'NDARBM839WR5', 'NDARBU730PN8', 'NDARCT974NAJ', 'NDARCW933FD5', 'NDARCZ770BRG', 'NDARDW741HCF', 'NDARDZ058NZN', 'NDAREC377AU2', 'NDAREM500WWH', 'NDAREV527ZRF', 'NDAREV601CE7', 'NDARFF070XHV', 'NDARFR108JNB', 'NDARFT305CG1', 'NDARGA056TMW', 'NDARGH775KF5', 'NDARGJ878ZP4', 'NDARHA387FPM'], 'R3': ['NDARAA948VFH', 'NDARAD774HAZ', 'NDARAE828CML', 'NDARAG340ERT', 'NDARBA839HLG', 'NDARBE641DGZ', 'NDARBG574KF4', 'NDARBM642JFT', 'NDARCL016NHB', 'NDARCV944JA6', 'NDARCY178KJP', 'NDARDY150ZP9', 'NDAREC542MH3', 'NDAREK549XUQ', 'NDAREM887YY8', 'NDARFA815FXE', 'NDARFF644ZGD', 'NDARFV557XAA', 'NDARFV780ABD', 'NDARGB102NWJ'], 'R4': ['NDARAC350BZ0', 'NDARAD615WLJ', 'NDARAG584XLU', 'NDARAH503YG1', 'NDARAX272ZJL', 'NDARAY461TZZ', 'NDARBC734UVY', 'NDARBL444FBA', 'NDARBT640EBN', 'NDARBU098PJT', 'NDARBU928LV0', 'NDARBV059CGE', 'NDARCG037CX4', 'NDARCG947ZC0', 'NDARCH001CN2', 'NDARCU001ZN7', 'NDARCW497XW2', 
'NDARCX053GU5', 'NDARDF568GL5', 'NDARDJ092YKH'], 'R5': ['NDARAH793FBF', 'NDARAJ689BVN', 'NDARAP785CTE', 'NDARAU708TL8', 'NDARBE091BGD', 'NDARBE103DHM', 'NDARBF851NH6', 'NDARBH228RDW', 'NDARBJ674TVU', 'NDARBM433VER', 'NDARCA740UC8', 'NDARCU633GCZ', 'NDARCU736GZ1', 'NDARCU744XWL', 'NDARDC843HHM', 'NDARDH086ZKK', 'NDARDL305BT8', 'NDARDU853XZ6', 'NDARDV245WJG', 'NDAREC480KFA'], 'R6': ['NDARAD224CRB', 'NDARAE301XTM', 'NDARAT680GJA', 'NDARCA578CEB', 'NDARDZ147ETZ', 'NDARFL793LDE', 'NDARFX710UZA', 'NDARGE994BMX', 'NDARGP191YHN', 'NDARGV436PFT', 'NDARHF545HFW', 'NDARHP039DBU', 'NDARHT774ZK1', 'NDARJA830BYV', 'NDARKB614KGY', 'NDARKM250ET5', 'NDARKZ085UKQ', 'NDARLB581AXF', 'NDARNJ899HW7', 'NDARRZ606EDP'], 'R7': ['NDARAY475AKD', 'NDARBW026UGE', 'NDARCK162REX', 'NDARCK481KRH', 'NDARCV378MMX', 'NDARCX462NVA', 'NDARDJ970ELG', 'NDARDU617ZW1', 'NDAREM609ZXW', 'NDAREW074ZM2', 'NDARFE555KXB', 'NDARFT176NJP', 'NDARGK442YHH', 'NDARGM439FZD', 'NDARGT634DUJ', 'NDARHE283KZN', 'NDARHG260BM9', 'NDARHL684WYU', 'NDARHN224TPA', 'NDARHP841RMR'], 'R8': ['NDARAB514MAJ', 'NDARAD571FLB', 'NDARAF003VCL', 'NDARAG191AE8', 'NDARAJ977PRJ', 'NDARAP912JK3', 'NDARAV454VF0', 'NDARAY298THW', 'NDARBJ375VP4', 'NDARBT436PMT', 'NDARBV630BK6', 'NDARCB627KDN', 'NDARCC059WTH', 'NDARCM953HKD', 'NDARCN681CXW', 'NDARCT889DMB', 'NDARDJ204EPU', 'NDARDJ544BU5', 'NDARDP292DVC', 'NDARDW178AC6'], 'R9': ['NDARAC589YMB', 'NDARAC853CR6', 'NDARAH239PGG', 'NDARAL897CYV', 'NDARAN160GUF', 'NDARAP049KXJ', 'NDARAP457WB5', 'NDARAW216PM7', 'NDARBA004KBT', 'NDARBD328NUQ', 'NDARBF042LDM', 'NDARBH019KPD', 'NDARBH728DFK', 'NDARBM370JCB', 'NDARBU183TDJ', 'NDARBW971DCW', 'NDARBZ444ZHK', 'NDARCC620ZFT', 'NDARCD182XT1', 'NDARCK113CJM']}* A mapping from HBN release identifiers to a list of subject IDs. This is used to select a small, representative subset of subjects for creating “mini” datasets for testing and demonstration purposes. * **Type:** dict # eegdash.downloader File downloading utilities for EEG data from cloud storage. 
This module provides functions for downloading EEG data files and BIDS dependencies from AWS S3 storage, with support for caching and progress tracking. It handles the communication between the EEGDash metadata database and the actual EEG data stored in the cloud. ### Functions | `download_s3_file`(s3_path, local_path, \*[, ...]) | Download a single file from S3 to a local path. | |------------------------------------------------------|---------------------------------------------------| | `download_files`(files, \*[, filesystem, ...]) | Download multiple S3 URIs to local destinations. | | `get_s3path`(s3_bucket, filepath) | Construct an S3 URI from a bucket and file path. | | `get_s3_filesystem`() | Get an anonymous S3 filesystem object. | ### eegdash.downloader.download_s3_file(s3_path: str, local_path: Path, \*, filesystem: S3FileSystem | None = None) → Path Download a single file from S3 to a local path. Handles the download of a raw EEG data file from an S3 bucket, caching it at the specified local path. Creates parent directories if they do not exist. * **Parameters:** * **s3_path** (*str*) – The full S3 URI of the file to download. * **local_path** (*pathlib.Path*) – The local file path where the downloaded file will be saved. * **filesystem** (*s3fs.S3FileSystem* *|* *None*) – Optional pre-created filesystem to reuse across multiple downloads. * **Returns:** The local path to the downloaded file. * **Return type:** pathlib.Path ### eegdash.downloader.download_files(files: Sequence[tuple[str, Path]] | Iterable[tuple[str, Path]], \*, filesystem: S3FileSystem | None = None, skip_existing: bool = True, skip_missing: bool = False) → list[Path] Download multiple S3 URIs to local destinations. * **Parameters:** * **files** (*iterable* *of* *(**str* *,* *Path* *)*) – Pairs of (S3 URI, local destination path). * **filesystem** (*s3fs.S3FileSystem* *|* *None*) – Optional pre-created filesystem to reuse across multiple downloads.
* **skip_existing** (*bool*) – If True, do not download files that already exist locally. * **skip_missing** (*bool*) – If True, skip files that do not exist on S3 instead of raising. ### eegdash.downloader.get_s3path(s3_bucket: str, filepath: str) → str Construct an S3 URI from a bucket and file path. * **Parameters:** * **s3_bucket** (*str*) – The S3 bucket name (e.g., “s3://my-bucket”). * **filepath** (*str*) – The path to the file within the bucket. * **Returns:** The full S3 URI (e.g., “s3://my-bucket/path/to/file”). * **Return type:** str ### eegdash.downloader.get_s3_filesystem() → S3FileSystem Get an anonymous S3 filesystem object. Initializes and returns an `s3fs.S3FileSystem` for anonymous access to public S3 buckets, configured for the ‘us-east-2’ region. * **Returns:** An S3 filesystem object. * **Return type:** s3fs.S3FileSystem # eegdash.hbn Healthy Brain Network (HBN) specific utilities and preprocessing. This module provides specialized functions for working with the Healthy Brain Network dataset, including preprocessing pipelines, annotation handling, and windowing utilities tailored for HBN EEG data analysis. ### Functions | `build_trial_table`(events_df) | Build a table of contrast trials from an events DataFrame. | |-----------------------------------------------------|-----------------------------------------------------------------------------------| | `annotate_trials_with_target`(raw[, ...]) | Create trial annotations with a specified target value. | | `add_aux_anchors`(raw[, stim_desc, resp_desc]) | Add auxiliary annotations for stimulus and response onsets. | | `add_extras_columns`(windows_concat_ds, ...[, ...]) | Add columns from annotation extras to a windowed dataset's metadata. | | `keep_only_recordings_with`(desc, concat_ds) | Filter a concatenated dataset to keep only recordings with a specific annotation. | ### Classes | `hbn_ec_ec_reannotation`() | Preprocessor to reannotate HBN data for eyes-open/eyes-closed events. 
| |------------------------------|-------------------------------------------------------------------------| ### *class* eegdash.hbn.hbn_ec_ec_reannotation Bases: `Preprocessor` Preprocessor to reannotate HBN data for eyes-open/eyes-closed events. This preprocessor is specifically designed for Healthy Brain Network (HBN) datasets. It identifies existing annotations for “instructed_toCloseEyes” and “instructed_toOpenEyes” and creates new, regularly spaced annotations for “eyes_closed” and “eyes_open” segments, respectively. This is useful for creating windowed datasets based on these new, more precise event markers. ### Notes This class inherits from `braindecode.preprocessing.Preprocessor` and is intended to be used within a braindecode preprocessing pipeline. #### transform(raw: Raw) → Raw Create new annotations for eyes-open and eyes-closed periods. This function finds the original “instructed_to…” annotations and generates new annotations every 2 seconds within specific time ranges relative to the original markers: - “eyes_closed”: 15s to 29s after “instructed_toCloseEyes” - “eyes_open”: 5s to 19s after “instructed_toOpenEyes” The original annotations in the mne.io.Raw object are replaced by this new set of annotations. * **Parameters:** **raw** (*mne.io.Raw*) – The raw MNE object containing the HBN data and original annotations. * **Returns:** The raw MNE object with the modified annotations. * **Return type:** mne.io.Raw ### eegdash.hbn.build_trial_table(events_df: DataFrame) → DataFrame Build a table of contrast trials from an events DataFrame. This function processes a DataFrame of events (typically from a BIDS events.tsv file) to identify contrast trials and extract relevant metrics like stimulus onset, response onset, and reaction times. * **Parameters:** **events_df** (*pandas.DataFrame*) – A DataFrame containing event information, with at least “onset” and “value” columns. 
* **Returns:** A DataFrame where each row represents a single contrast trial, with columns for onsets, reaction times, and response correctness. * **Return type:** pandas.DataFrame ### eegdash.hbn.annotate_trials_with_target(raw: Raw, target_field: str = 'rt_from_stimulus', epoch_length: float = 2.0, require_stimulus: bool = True, require_response: bool = True) → Raw Create trial annotations with a specified target value. This function reads the BIDS events file associated with the raw object, builds a trial table, and creates new MNE annotations for each trial. The annotations are labeled “contrast_trial_start” and their extras dictionary is populated with trial metrics, including a “target” key. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object. Must have a single associated file name from which the BIDS path can be derived. * **target_field** (*str* *,* *default "rt_from_stimulus"*) – The column from the trial table to use as the “target” value in the annotation extras. * **epoch_length** (*float* *,* *default 2.0*) – The duration to set for each new annotation. * **require_stimulus** (*bool* *,* *default True*) – If True, only include trials that have a recorded stimulus event. * **require_response** (*bool* *,* *default True*) – If True, only include trials that have a recorded response event. * **Returns:** The raw object with the new annotations set. * **Return type:** mne.io.Raw * **Raises:** **KeyError** – If target_field is not a valid column in the built trial table. ### eegdash.hbn.add_aux_anchors(raw: Raw, stim_desc: str = 'stimulus_anchor', resp_desc: str = 'response_anchor') → Raw Add auxiliary annotations for stimulus and response onsets. This function inspects existing “contrast_trial_start” annotations and adds new, zero-duration “anchor” annotations at the precise onsets of stimuli and responses for each trial. * **Parameters:** * **raw** (*mne.io.Raw*) – The raw data object with “contrast_trial_start” annotations. 
* **stim_desc** (*str* *,* *default "stimulus_anchor"*) – The description for the new stimulus annotations. * **resp_desc** (*str* *,* *default "response_anchor"*) – The description for the new response annotations. * **Returns:** The raw object with the auxiliary annotations added. * **Return type:** mne.io.Raw ### eegdash.hbn.add_extras_columns(windows_concat_ds: BaseConcatDataset, original_concat_ds: BaseConcatDataset, desc: str = 'contrast_trial_start', keys: tuple = ('target', 'rt_from_stimulus', 'rt_from_trialstart', 'stimulus_onset', 'response_onset', 'correct', 'response_type')) → BaseConcatDataset Add columns from annotation extras to a windowed dataset’s metadata. This function propagates trial-level information stored in the extras of annotations to the metadata DataFrame of a WindowsDataset. * **Parameters:** * **windows_concat_ds** (*BaseConcatDataset*) – The windowed dataset whose metadata will be updated. * **original_concat_ds** (*BaseConcatDataset*) – The original (non-windowed) dataset containing the raw data and annotations with the extras to be added. * **desc** (*str* *,* *default "contrast_trial_start"*) – The description of the annotations to source the extras from. * **keys** (*tuple* *,* *default* *(* *...* *)*) – The keys to extract from each annotation’s extras dictionary and add as columns to the metadata. * **Returns:** The windows_concat_ds with updated metadata. * **Return type:** BaseConcatDataset ### eegdash.hbn.keep_only_recordings_with(desc: str, concat_ds: BaseConcatDataset) → BaseConcatDataset Filter a concatenated dataset to keep only recordings with a specific annotation. * **Parameters:** * **desc** (*str*) – The description of the annotation that must be present in a recording for it to be kept. * **concat_ds** (*BaseConcatDataset*) – The concatenated dataset to filter. * **Returns:** A new concatenated dataset containing only the filtered recordings. 
* **Return type:** BaseConcatDataset # eegdash.http_api_client HTTP API client for EEGDash REST API. ### Functions | `get_client`([api_url, database, auth_token]) | Get an API client instance. | |-------------------------------------------------|-------------------------------| ### Classes | `EEGDashAPIClient`([api_url, database, auth_token]) | HTTP client for EEGDash API. | |-------------------------------------------------------|--------------------------------| ### *class* eegdash.http_api_client.EEGDashAPIClient(api_url: str | None = None, database: str = 'eegdash', auth_token: str | None = None) Bases: `object` HTTP client for EEGDash API. * **Parameters:** * **api_url** (*str* *,* *optional*) – Base API URL. Default: [https://data.eegdash.org](https://data.eegdash.org) * **database** (*str* *,* *default "eegdash"*) – Database name (“eegdash”, “eegdash_staging”, or “eegdash_v1”). * **auth_token** (*str* *,* *optional*) – Auth token for admin write operations. #### find(query: dict[str, Any] | None = None, limit: int | None = None, skip: int | None = None, \*\*kwargs) → list[dict[str, Any]] Query records. Auto-paginates if no limit specified. #### find_one(query: dict[str, Any] | None = None, \*\*kwargs) → dict[str, Any] | None Find a single record. #### get_dataset(dataset_id: str) → dict[str, Any] | None Fetch a dataset document by ID. #### find_datasets(query: dict[str, Any] | None = None, limit: int = 1000) → list[dict[str, Any]] Find datasets matching query. #### count_documents(query: dict[str, Any] | None = None, \*\*kwargs) → int Count documents matching query. #### insert_one(record: dict[str, Any]) → str Insert single record (requires auth). #### insert_many(records: list[dict[str, Any]]) → int Insert multiple records (requires auth). #### update_many(query: dict[str, Any], update: dict[str, Any]) → tuple[int, int] Update records matching query (requires auth). * **Parameters:** * **query** (*dict*) – Filter query to match records. 
* **update** (*dict*) – Fields to set (wrapped in $set automatically). * **Return type:** tuple of (matched_count, modified_count) #### update_dataset(dataset_id: str, update: dict[str, Any]) → int Update dataset metadata (requires auth). * **Parameters:** * **dataset_id** (*str*) – The dataset identifier. * **update** (*dict*) – Fields to update (will be wrapped in $set automatically). * **Returns:** Modified count (1 or 0). * **Return type:** int #### upsert_many(records: list[dict[str, Any]]) → dict[str, int] Upsert multiple records (requires auth). New endpoint that uses bulk upsert based on dataset+bidspath. ### eegdash.http_api_client.get_client(api_url: str | None = None, database: str = 'eegdash', auth_token: str | None = None) → EEGDashAPIClient Get an API client instance. # eegdash.logging Logging configuration for EEGDash. This module sets up centralized logging for the EEGDash package using Rich for enhanced console output formatting. It provides a consistent logging interface across all modules. # eegdash.paths Path utilities and cache directory management. This module provides functions for resolving consistent cache directories and path management throughout the EEGDash package, with integration to MNE-Python’s configuration system. ### Functions | `get_default_cache_dir`() | Resolve the default cache directory for EEGDash data. | |-----------------------------|---------------------------------------------------------| ### eegdash.paths.get_default_cache_dir() → Path Resolve the default cache directory for EEGDash data. The function determines the cache directory based on the following priority order: > 1. The path specified by the `EEGDASH_CACHE_DIR` environment variable. > 2. A hidden directory named `.eegdash_cache` in the current working directory. > 3. The path specified by the `MNE_DATA` configuration in the MNE-Python config file (fallback). * **Returns:** The resolved, absolute path to the default cache directory. 
* **Return type:** pathlib.Path # eegdash.schemas ## EEGDash Data Schemas This module defines the core data structures used throughout EEGDash to represent neuroimaging datasets and individual recording files. It provides two types of schemas for each core object: 1. **Pydantic Models** (`*Model`): Used for strict data validation, serialization, and schema generation (e.g., for APIs). 2. **TypedDict Definitions**: Used for high-performance internal usage, static type checking, and efficient loading of large metadata collections. ### Core Concepts The data model is organized into a two-level hierarchy: * **Dataset**: Represents a collection of data (e.g., “ds001785”). It contains study-level metadata such as: \* Identity (ID, name, source) \* Demographics (subject ages, sex distribution) \* Clinical (diagnosis, purpose) \* Experiment Paradigm (tasks, stimuli) \* Provenance (timestamps, authors) * **Record**: Represents a single data file within a dataset (e.g., a specific .vhdr or .edf file). It is optimized for fast access and contains: \* File location (storage backend, path) \* BIDS Entities (subject, session, task, run) \* Basic signal properties (sampling rate, channel names) ### Usage Creating a Dataset: ```python from eegdash.schemas import create_dataset ds = create_dataset( dataset_id="ds001", name="My Study", subjects_count=20, ages=[20, 25, 30], recording_modality=["eeg"], ) ``` Creating a Record: ```python from eegdash.schemas import create_record rec = create_record( dataset="ds001", storage_base="https://my.storage.com", bids_relpath="sub-01/eeg/sub-01_task-rest_eeg.edf", subject="01", task="rest", ) ``` ### Functions | `create_dataset`(\*, dataset_id[, name, ...]) | Create a Dataset document. | |-------------------------------------------------|-----------------------------------------| | `create_record`(\*, dataset, storage_base, ...) | Create an EEGDash record. | | `validate_dataset`(dataset) | Validate a dataset has required fields. 
| | `validate_record`(record) | Validate a record has required fields. | ### Classes | `DatasetModel`(\*, dataset_id, source, ...[, ...]) | Pydantic model for dataset-level metadata. | |------------------------------------------------------|---------------------------------------------------| | `RecordModel`(\*, dataset, bids_relpath, ...[, ...]) | Pydantic model for a single recording file. | | `StorageModel`(\*, backend, ...) | Pydantic model for storage location details. | | `EntitiesModel`(\*[, subject, session, task, ...]) | Pydantic model for BIDS entities. | | `ManifestModel`(\*[, source]) | Pydantic model for a dataset file manifest. | | `ManifestFileModel`(\*[, path, name]) | Pydantic model for a file entry in a manifest. | | `Dataset` | TypedDict schema for a full Dataset document. | | `Record` | TypedDict schema for a Record document. | | `Storage` | Remote storage location details. | | `Entities` | BIDS entities parsed from the file path. | | `Demographics` | Subject demographics summary for a dataset. | | `Clinical` | Clinical classification metadata (dataset-level). | | `ExternalLinks` | Relevant external hyperlinks for the dataset. | | `RepositoryStats` | Statistics for git-based repositories (e.g. GIN). | | `Timestamps` | Processing and lifecycle timestamps. | ### *class* eegdash.schemas.DatasetModel(\*, dataset_id: Annotated[str, MinLen(min_length=1)], source: Annotated[str, MinLen(min_length=1)], recording_modality: Annotated[list[str], MinLen(min_length=1)], ingestion_fingerprint: str | None = None, senior_author: str | None = None, contact_info: list[str] | None = None, timestamps: dict[str, Any] | None = None, storage: StorageModel | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for dataset-level metadata. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. 
self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### dataset_id *: str* #### source *: str* #### recording_modality *: list[str]* #### ingestion_fingerprint *: str | None* #### senior_author *: str | None* #### contact_info *: list[str] | None* #### timestamps *: dict[str, Any] | None* #### storage *: StorageModel | None* ### *class* eegdash.schemas.RecordModel(\*, dataset: Annotated[str, MinLen(min_length=1)], bids_relpath: Annotated[str, MinLen(min_length=1)], storage: StorageModel, recording_modality: Annotated[list[str], MinLen(min_length=1)], datatype: str | None = None, suffix: str | None = None, extension: str | None = None, entities: EntitiesModel | dict[str, Any] | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a single recording file. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### dataset *: str* #### bids_relpath *: str* #### storage *: StorageModel* #### recording_modality *: list[str]* #### datatype *: str | None* #### suffix *: str | None* #### extension *: str | None* #### entities *: EntitiesModel | dict[str, Any] | None* ### *class* eegdash.schemas.StorageModel(\*, backend: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], base: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], raw_key: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], dep_keys: list[str] = `<factory>`, \*\*extra_data: ~typing.Any) Bases: `BaseModel` Pydantic model for storage location details. 
Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### backend *: str* #### base *: str* #### raw_key *: str* #### dep_keys *: list[str]* ### *class* eegdash.schemas.EntitiesModel(\*, subject: str | None = None, session: str | None = None, task: str | None = None, run: str | None = None, acquisition: str | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for BIDS entities. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### subject *: str | None* #### session *: str | None* #### task *: str | None* #### run *: str | None* #### acquisition *: str | None* ### *class* eegdash.schemas.ManifestModel(\*, source: str | None = None, files: list[str | ManifestFileModel], \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a dataset file manifest. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
#### source *: str | None* #### files *: list[str | ManifestFileModel]* ### *class* eegdash.schemas.ManifestFileModel(\*, path: str | None = None, name: str | None = None, \*\*extra_data: Any) Bases: `BaseModel` Pydantic model for a file entry in a manifest. Create a new model by parsing and validating input data from keyword arguments. Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. self is explicitly positional-only to allow self as a field name. #### model_config *= {'extra': 'allow'}* Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. #### path *: str | None* #### name *: str | None* #### path_or_name() → str Return the path or name of the file. ### *class* eegdash.schemas.Dataset Bases: `TypedDict` TypedDict schema for a full Dataset document. This dictionary represents all metadata available for a study/dataset. #### dataset_id Unique identifier (e.g., “ds001785”). * **Type:** str #### name Descriptive title of the dataset. * **Type:** str #### canonical_name Canonical / community-recognised name(s) for the dataset, each a valid Python identifier (e.g. `["BrainTreeBank"]`, `["SleepEDF", "SleepEDFPlus"]`). Used to register importable class aliases alongside the `DS…`-style ID. Empty list or `None` means no alias is registered. * **Type:** list[str] | None #### source Origin source (e.g., “openneuro”, “nemar”). * **Type:** str #### readme Content of the dataset’s README file. * **Type:** str | None #### recording_modality List of recording modalities (e.g., [“eeg”, “meg”]). * **Type:** list[str] #### datatypes BIDS datatypes present (e.g., [“eeg”, “anat”]). * **Type:** list[str] #### experimental_modalities Stimulus types used (e.g., [“visual”, “auditory”]). * **Type:** list[str] | None #### bids_version Version of the BIDS standard used. * **Type:** str | None #### license License string (e.g., “CC0”). 
* **Type:** str | None #### authors List of author names. * **Type:** list[str] #### funding List of funding sources. * **Type:** list[str] #### dataset_doi Digital Object Identifier for the dataset. * **Type:** str | None #### associated_paper_doi DOI of the paper associated with the dataset. * **Type:** str | None #### tasks List of task names found in the dataset. * **Type:** list[str] #### sessions List of session names. * **Type:** list[str] #### total_files Total file count. * **Type:** int | None #### size_bytes Total dataset size in bytes. * **Type:** int | None #### data_processed Indicates if the data has been pre-processed. * **Type:** bool | None #### study_domain General domain of the study. * **Type:** str | None #### study_design Description of the study design. * **Type:** str | None #### contributing_labs List of labs contributing to the dataset. * **Type:** list[str] | None #### n_contributing_labs Count of contributing labs. * **Type:** int | None #### demographics Summary of subject demographics. * **Type:** Demographics #### tags Classification tags (pathology, modality, type). * **Type:** Tags #### clinical Clinical classification details (deprecated, use tags instead). * **Type:** Clinical #### external_links Links to external resources. * **Type:** ExternalLinks #### repository_stats Stats for the source repository (if applicable). * **Type:** RepositoryStats | None #### senior_author Name of the senior author. * **Type:** str | None #### contact_info Contact emails or names. * **Type:** list[str] | None #### timestamps Timestamps for data processing and creation. * **Type:** Timestamps #### nemar_citation_count Number of papers citing this dataset (from NEMAR citations repository). 
* **Type:** int | None #### dataset_id *: str* #### name *: str* #### canonical_name *: list[str] | None* #### source *: str* #### readme *: str | None* #### ingestion_fingerprint *: str | None* #### recording_modality *: list[str]* #### datatypes *: list[str]* #### experimental_modalities *: list[str] | None* #### bids_version *: str | None* #### license *: str | None* #### authors *: list[str]* #### funding *: list[str]* #### dataset_doi *: str | None* #### associated_paper_doi *: str | None* #### tasks *: list[str]* #### sessions *: list[str]* #### total_files *: int | None* #### size_bytes *: int | None* #### data_processed *: bool | None* #### study_domain *: str | None* #### study_design *: str | None* #### contributing_labs *: list[str] | None* #### n_contributing_labs *: int | None* #### demographics *: Demographics* #### tags *: Tags* #### clinical *: Clinical* #### external_links *: ExternalLinks* #### repository_stats *: RepositoryStats | None* #### senior_author *: str | None* #### contact_info *: list[str] | None* #### timestamps *: Timestamps* #### storage *: Storage | None* #### nemar_citation_count *: int | None* ### *class* eegdash.schemas.Record Bases: `TypedDict` TypedDict schema for a Record document. Represents a single data file and its metadata. This structure is kept flat and minimal to ensure fast loading times when querying millions of records. #### dataset Foreign key matching `Dataset.dataset_id`. * **Type:** str #### data_name Unique name for the data item (e.g., “ds001_sub-01_task-rest”). * **Type:** str #### bidspath Legacy path identifier (e.g., “ds001/sub-01/eeg/…”). * **Type:** str #### bids_relpath Standard BIDS relative path (e.g., “sub-01/eeg/…”). * **Type:** str #### datatype BIDS datatype (e.g., “eeg”). * **Type:** str #### suffix Filename suffix (e.g., “eeg”). * **Type:** str #### extension File extension (e.g., “.vhdr”). * **Type:** str #### recording_modality Modality of the recording. 
* **Type:** list[str] | None #### entities BIDS entities dict (subject, session, etc.). * **Type:** Entities #### entities_mne BIDS entities sanitized for compatibility with MNE-Python (e.g., numeric runs). * **Type:** Entities #### storage Storage location details. * **Type:** Storage #### ch_names List of channel names. * **Type:** list[str] | None #### sampling_frequency Sampling rate in Hz. * **Type:** float | None #### nchans Channel count. * **Type:** int | None #### ntimes Number of time points. * **Type:** int | None #### digested_at Timestamp of when this record was processed. * **Type:** str #### dataset *: str* #### data_name *: str* #### bidspath *: str* #### bids_relpath *: str* #### datatype *: str* #### suffix *: str* #### extension *: str* #### recording_modality *: list[str] | None* #### entities *: Entities* #### entities_mne *: Entities* #### storage *: Storage* #### ch_names *: list[str] | None* #### sampling_frequency *: float | None* #### nchans *: int | None* #### ntimes *: int | None* #### digested_at *: str* ### *class* eegdash.schemas.Storage Bases: `TypedDict` Remote storage location details. #### backend Storage backend protocol. * **Type:** {‘s3’, ‘https’, ‘local’} #### base Base URI (e.g., “s3://openneuro.org/ds000001”). * **Type:** str #### raw_key Path relative to base to reach the file. * **Type:** str #### dep_keys Paths relative to base for sidecar files (e.g., .json, .vhdr). * **Type:** list[str] #### backend *: Literal['s3', 'https', 'local']* #### base *: str* #### raw_key *: str* #### dep_keys *: list[str]* ### *class* eegdash.schemas.Entities Bases: `TypedDict` BIDS entities parsed from the file path. #### subject Subject label (e.g., “01”). * **Type:** str | None #### session Session label (e.g., “pre”). * **Type:** str | None #### task Task label (e.g., “rest”). * **Type:** str | None #### run Run label (e.g., “1” or “01”). * **Type:** str | None #### acquisition Acquisition label (e.g., “bipolar”, “PSG”). 
* **Type:** str | None #### subject *: str | None* #### session *: str | None* #### task *: str | None* #### run *: str | None* #### acquisition *: str | None* ### *class* eegdash.schemas.Demographics Bases: `TypedDict` Subject demographics summary for a dataset. #### subjects_count Total number of subjects. * **Type:** int #### ages List of all subject ages (if available). * **Type:** list[int] #### age_min Minimum age in the cohort. * **Type:** int | None #### age_max Maximum age in the cohort. * **Type:** int | None #### age_mean Mean age of subjects. * **Type:** float | None #### species Species of subjects (e.g., “Human”, “Mouse”). * **Type:** str | None #### sex_distribution Count of subjects by sex (e.g., {“m”: 50, “f”: 45}). * **Type:** dict[str, int] | None #### handedness_distribution Count of subjects by handedness (e.g., {“r”: 80, “l”: 15}). * **Type:** dict[str, int] | None #### subjects_count *: int* #### ages *: list[int]* #### age_min *: int | None* #### age_max *: int | None* #### age_mean *: float | None* #### species *: str | None* #### sex_distribution *: dict[str, int] | None* #### handedness_distribution *: dict[str, int] | None* ### *class* eegdash.schemas.Clinical Bases: `TypedDict` Clinical classification metadata (dataset-level). #### Deprecated Deprecated: use the `tags` field with the `pathology` key instead. #### is_clinical True if the dataset contains clinical population data. * **Type:** bool #### purpose The clinical condition or purpose (e.g., “epilepsy”, “depression”). * **Type:** str | None #### is_clinical *: bool* #### purpose *: str | None* ### *class* eegdash.schemas.ExternalLinks Bases: `TypedDict` Relevant external hyperlinks for the dataset. #### source_url URL to the primary data source (e.g. OpenNeuro page). * **Type:** str | None #### osf_url URL to the Open Science Framework project. * **Type:** str | None #### github_url URL to the associated GitHub repository. 
* **Type:** str | None #### paper_url URL to the primary publication. * **Type:** str | None #### source_url *: str | None* #### osf_url *: str | None* #### github_url *: str | None* #### paper_url *: str | None* ### *class* eegdash.schemas.RepositoryStats Bases: `TypedDict` Statistics for git-based repositories (e.g. GIN). #### stars Number of stars. * **Type:** int #### forks Number of forks. * **Type:** int #### watchers Number of watchers. * **Type:** int #### stars *: int* #### forks *: int* #### watchers *: int* ### *class* eegdash.schemas.Timestamps Bases: `TypedDict` Processing and lifecycle timestamps. #### digested_at ISO 8601 timestamp of when the data was processed by EEGDash. * **Type:** str #### dataset_created_at ISO 8601 timestamp of when the dataset was originally created. * **Type:** str | None #### dataset_modified_at ISO 8601 timestamp of when the dataset was last updated. * **Type:** str | None #### digested_at *: str* #### dataset_created_at *: str | None* #### dataset_modified_at *: str | None* ### eegdash.schemas.create_dataset(\*, dataset_id: str, name: str | None = None, canonical_name: list[str] | None = None, source: str = 'openneuro', readme: str | None = None, recording_modality: list[str] | None = None, datatypes: list[str] | None = None, modalities: list[str] | None = None, experimental_modalities: list[str] | None = None, bids_version: str | None = None, license: str | None = None, authors: list[str] | None = None, funding: list[str] | None = None, dataset_doi: str | None = None, associated_paper_doi: str | None = None, tasks: list[str] | None = None, sessions: list[str] | None = None, total_files: int | None = None, size_bytes: int | None = None, data_processed: bool | None = None, study_domain: str | None = None, study_design: str | None = None, subjects_count: int | None = None, ages: list[int] | None = None, age_mean: float | None = None, species: str | None = None, sex_distribution: dict[str, int] | None = None, 
handedness_distribution: dict[str, int] | None = None, contributing_labs: list[str] | None = None, tags_pathology: list[str] | None = None, tags_modality: list[str] | None = None, tags_type: list[str] | None = None, is_clinical: bool | None = None, clinical_purpose: str | None = None, source_url: str | None = None, osf_url: str | None = None, github_url: str | None = None, paper_url: str | None = None, stars: int | None = None, forks: int | None = None, watchers: int | None = None, senior_author: str | None = None, contact_info: list[str] | None = None, digested_at: str | None = None, dataset_created_at: str | None = None, dataset_modified_at: str | None = None, storage: Storage | None = None) → Dataset Create a Dataset document. This helper function constructs a `Dataset` TypedDict with default values and logic to handle nested structures like demographics, clinical info, and external links. * **Parameters:** * **dataset_id** (*str*) – Dataset identifier (e.g., “ds001785”). * **name** (*str* *,* *optional*) – Dataset title/name. * **canonical_name** (*list* *[**str* *]* *,* *optional*) – Canonical / community-recognised name(s) for the dataset (each a valid Python identifier, e.g. `["BrainTreeBank"]` or `["SleepEDF", "SleepEDFPlus"]`). Used by the dataset class registry to expose importable aliases. Empty list or `None` registers no aliases. * **source** (*str* *,* *default "openneuro"*) – Data source (“openneuro”, “nemar”, “gin”). * **recording_modality** (*list* *[**str* *]* *,* *optional*) – Recording types (e.g., [“eeg”, “meg”, “ieeg”]). * **datatypes** (*list* *[**str* *]* *,* *optional*) – BIDS datatypes present in the dataset (e.g., [“eeg”, “anat”, “beh”]). * **experimental_modalities** (*list* *[**str* *]* *,* *optional*) – Stimulus/experimental modalities (e.g., [“visual”, “auditory”, “tactile”]). * **bids_version** (*str* *,* *optional*) – BIDS version of the dataset. * **license** (*str* *,* *optional*) – Dataset license (e.g., “CC0”, “CC-BY-4.0”). 
* **authors** (*list* *[**str* *]* *,* *optional*) – Dataset authors. * **funding** (*list* *[**str* *]* *,* *optional*) – Funding sources. * **dataset_doi** (*str* *,* *optional*) – Dataset DOI. * **associated_paper_doi** (*str* *,* *optional*) – DOI of associated publication. * **tasks** (*list* *[**str* *]* *,* *optional*) – Tasks in the dataset. * **sessions** (*list* *[**str* *]* *,* *optional*) – Sessions in the dataset. * **total_files** (*int* *,* *optional*) – Total number of files. * **size_bytes** (*int* *,* *optional*) – Total size in bytes. * **data_processed** (*bool* *,* *optional*) – Whether data is processed. * **study_domain** (*str* *,* *optional*) – Study domain/topic. * **study_design** (*str* *,* *optional*) – Study design description. * **subjects_count** (*int* *,* *optional*) – Number of subjects. * **ages** (*list* *[**int* *]* *,* *optional*) – Subject ages. * **age_mean** (*float* *,* *optional*) – Mean age of subjects. * **species** (*str* *,* *optional*) – Species (e.g., “Human”). * **sex_distribution** (*dict* *[**str* *,* *int* *]* *,* *optional*) – Sex distribution (e.g., {“m”: 50, “f”: 45}). * **handedness_distribution** (*dict* *[**str* *,* *int* *]* *,* *optional*) – Handedness distribution (e.g., {“r”: 80, “l”: 15}). * **contributing_labs** (*list* *[**str* *]* *,* *optional*) – Labs that contributed data (for multi-site studies). * **is_clinical** (*bool* *,* *optional*) – Whether this is clinical data. * **clinical_purpose** (*str* *,* *optional*) – Clinical purpose (e.g., “epilepsy”, “depression”). * **paradigm_modality** (*str* *,* *optional*) – Experimental modality (e.g., “visual”, “auditory”, “text”, “multisensory”, “resting_state”). * **cognitive_domain** (*str* *,* *optional*) – Cognitive domain (e.g., “attention”, “memory”, “motor”). * **is_10_20_system** (*bool* *,* *optional*) – Whether electrodes follow the 10-20 system. * **source_url** (*str* *,* *optional*) – Primary URL to the dataset source. 
* **osf_url** (*str* *,* *optional*) – Open Science Framework URL. * **github_url** (*str* *,* *optional*) – GitHub repository URL. * **paper_url** (*str* *,* *optional*) – URL to associated paper. * **stars** (*int* *,* *optional*) – Repository stars count (for git-based sources). * **forks** (*int* *,* *optional*) – Repository forks count. * **watchers** (*int* *,* *optional*) – Repository watchers count. * **digested_at** (*str* *,* *optional*) – ISO 8601 timestamp. If not provided, no timestamp is set (for deterministic output). * **dataset_modified_at** (*str* *,* *optional*) – Last modification timestamp. * **Returns:** A fully populated Dataset document. * **Return type:** Dataset ### eegdash.schemas.create_record(\*, dataset: str, storage_base: str, bids_relpath: str, subject: str | None = None, session: str | None = None, task: str | None = None, run: str | None = None, acquisition: str | None = None, dep_keys: list[str] | None = None, datatype: str = 'eeg', suffix: str = 'eeg', storage_backend: Literal['s3', 'https', 'local'] = 's3', recording_modality: list[str] | None = None, ch_names: list[str] | None = None, sampling_frequency: float | None = None, nchans: int | None = None, ntimes: int | None = None, digested_at: str | None = None) → Record Create an EEGDash record. Helper to construct a valid `Record` TypedDict. * **Parameters:** * **dataset** (*str*) – Dataset identifier (e.g., “ds000001”). * **storage_base** (*str*) – Remote storage base URI (e.g., “s3://openneuro.org/ds000001”). * **bids_relpath** (*str*) – BIDS-relative path to the raw file (e.g., “sub-01/eeg/sub-01_task-rest_eeg.vhdr”). * **subject** (*str* *,* *optional*) – BIDS subject label (e.g., “01”). * **session** (*str* *,* *optional*) – BIDS session label. * **task** (*str* *,* *optional*) – BIDS task label. * **run** (*str* *,* *optional*) – BIDS run index. * **acquisition** (*str* *,* *optional*) – BIDS acquisition label. * **dep_keys** (*list* *[**str* *]* *,* *optional*) – Dependency paths relative to storage_base. 
* **datatype** (*str* *,* *default "eeg"*) – BIDS datatype. * **suffix** (*str* *,* *default "eeg"*) – BIDS suffix. * **storage_backend** ( *{"s3"* *,* *"https"* *,* *"local"}* *,* *default "s3"*) – Storage backend type. * **recording_modality** (*list* *[**str* *]* *,* *optional*) – Recording modalities (e.g., [“eeg”, “meg”, “ieeg”]). * **digested_at** (*str* *,* *optional*) – ISO 8601 timestamp. Defaults to current time. * **Returns:** A slim EEGDash record optimized for loading. * **Return type:** Record ### Notes Clinical and paradigm info is stored at the Dataset level, not per-file. ### Examples ```pycon >>> record = create_record( ... dataset="ds000001", ... storage_base="s3://openneuro.org/ds000001", ... bids_relpath="sub-01/eeg/sub-01_task-rest_eeg.vhdr", ... subject="01", ... task="rest", ... ) ``` ### eegdash.schemas.validate_dataset(dataset: dict[str, Any]) → list[str] Validate a dataset has required fields. Returns list of errors. ### eegdash.schemas.validate_record(record: dict[str, Any]) → list[str] Validate a record has required fields. Returns list of errors. ### Notes - bids_relpath is the canonical unique identifier for records - bidspath is a computed field (dataset + “/” + bids_relpath) and not strictly required - storage.raw_key always equals bids_relpath when created via create_record # eegdash.features.base_utils Basic Feature Extraction Utilities This module defines basic utilities for feature extraction. ### Functions | `channel_names_to_indices`(channels, ch_names) | Converts a list of channel names to channel indices in another list. | |--------------------------------------------------|------------------------------------------------------------------------| | `get_underlying_func`(func) | Retrieve the original Python function from a potential wrapper. | ### Classes | `BivariateIterator`(pairs[, directed]) | Pairs iterator for iterating pairs of channels. 
| |------------------------------------------|---------------------------------------------------| ### *class* eegdash.features.base_utils.BivariateIterator(pairs: Iterable[Tuple[int, int]] | int, directed=False) Bases: `object` Pairs iterator for iterating pairs of channels. * **Parameters:** * **pairs** (*Iterable* *[**tuple* *[**int* *,* *int* *]* *]* *|* *int*) – If an iterable of tuples is given, it represents the channel index pairs to iterate over. If an integer `n` is given, iterate through all unique pairs out of `n` channels. * **directed** (*bool*) – If an integer was given in `pairs`, this parameter controls whether all directed pairs should be iterated. Otherwise this parameter is ignored. Default is False. #### get_pair_iterators() → tuple[ndarray, ndarray] Get indices for pairs of channels. Computes the upper triangle indices of an (n, n) matrix, excluding the diagonal. * **Returns:** The row and column indices for the unique pairs. * **Return type:** tuple of ndarray ### eegdash.features.base_utils.channel_names_to_indices(channels: List[str], ch_names: List[str]) → List[int] Converts a list of channel names to channel indices in another list. * **Parameters:** * **channels** (*List* *[**str* *]*) – A list of channel names. * **ch_names** (*List* *[**str* *]*) – A list of existing channel names to take indices from. * **Returns:** A list of channel indices. * **Return type:** List[int] * **Raises:** **ValueError** – If a channel name is not found in the existing channels list. ### eegdash.features.base_utils.get_underlying_func(func: Callable) → Callable Retrieve the original Python function from a potential wrapper. * **Parameters:** **func** (*callable*) – The function to unwrap. Typically a raw function, a `functools.partial` object, or a Numba `Dispatcher`. * **Returns:** The underlying Python function. * **Return type:** callable ### Notes This utility specifically handles: \* **functools.partial**: Returns the `.func` attribute. 
\* **numba.Dispatcher**: Returns the `.py_func` attribute. # eegdash.features.datasets Datasets for Feature Management. This module defines the core data structures for storing, manipulating, and serializing extracted features. Provides the base classes: - `FeaturesDataset` — Represents features from a single recording. - `FeaturesConcatDataset` — Manages multiple `FeaturesDataset` objects as a unified dataset. ### Classes | `FeaturesDataset`(features[, metadata, ...]) | A dataset of features extracted from a single recording. | |------------------------------------------------|------------------------------------------------------------------------| | `FeaturesConcatDataset`([list_of_ds, ...]) | A concatenated dataset composed of multiple `FeaturesDataset` objects. | ### *class* eegdash.features.datasets.FeaturesDataset(features: DataFrame, metadata: DataFrame | None = None, description: dict | Series | None = None, transform: Callable | None = None, raw_info: Dict | None = None, raw_preproc_kwargs: Dict | None = None, window_kwargs: Dict | None = None, window_preproc_kwargs: Dict | None = None, features_kwargs: Dict | None = None) Bases: `EEGWindowsDataset` A dataset of features extracted from a single recording. This class holds features in a `pandas.DataFrame` and provides an interface compatible with braindecode’s dataset structure. A single object corresponds to one recording. * **Parameters:** * **features** (*pandas.DataFrame*) – A DataFrame where each row is a sample (e.g., an EEG window) and each column is a feature. * **metadata** (*pandas.DataFrame* *,* *optional*) – A DataFrame containing metadata for each sample, indexed consistently with features. Must include columns ‘i_window_in_trial’, ‘i_start_in_trial’, ‘i_stop_in_trial’, and ‘target’. * **description** (*dict* *or* *pandas.Series* *,* *optional*) – Additional high-level information about the dataset. * **transform** (*callable* *,* *optional*) – A function or transform to apply to the feature data. 
* **raw_info** (*dict* *,* *optional*) – Information about the original raw recording (e.g., sampling rate, montage, channel names). * **raw_preproc_kwargs** (*dict* *,* *optional*) – Keyword arguments used for preprocessing the raw data. * **window_kwargs** (*dict* *,* *optional*) – Keyword arguments used for windowing the data. * **window_preproc_kwargs** (*dict* *,* *optional*) – Keyword arguments used for preprocessing the windowed data. * **features_kwargs** (*dict* *,* *optional*) – Keyword arguments used for feature extraction. #### features Table of extracted features. * **Type:** pandas.DataFrame #### n_features Number of feature columns in the dataset. * **Type:** int #### metadata Metadata describing each window. * **Type:** pandas.DataFrame #### transform The transform applied to each sample. * **Type:** callable or None #### raw_info Information about the raw recording. * **Type:** dict or None #### raw_preproc_kwargs Parameters used during raw data preprocessing. * **Type:** dict or None #### window_kwargs Parameters used during window segmentation. * **Type:** dict or None #### window_preproc_kwargs Parameters used during window-level preprocessing. * **Type:** dict or None #### features_kwargs Parameters used during feature extraction. * **Type:** dict or None #### crop_inds Indices specifying window position within each trial: (i_window_in_trial, i_start_in_trial, i_stop_in_trial). * **Type:** numpy.ndarray of shape (n_samples, 3) #### y Target labels corresponding to each window. * **Type:** list of int ### *class* eegdash.features.datasets.FeaturesConcatDataset(list_of_ds: list[TypeAliasForwardRef('eegdash.features.datasets.FeaturesDataset')] | None = None, target_transform: Callable | None = None) Bases: `BaseConcatDataset` A concatenated dataset composed of multiple `FeaturesDataset` objects. This class manages a collection of `FeaturesDataset` instances and provides an interface for treating them as a single, unified dataset. 
Supports concatenation, splitting, saving, and performing DataFrame-like operations across all contained datasets. * **Parameters:** * **list_of_ds** (*list* *of* *FeaturesDataset* *or* *None* *,* *optional*) – A list of `FeaturesDataset` objects to concatenate. If a list of `FeaturesConcatDataset` objects is provided, all contained datasets are automatically flattened into a single list. * **target_transform** (*callable* *or* *None* *,* *optional*) – A function to apply to target values before they are returned. #### datasets The list of individual datasets contained in this object. * **Type:** list of FeaturesDataset #### target_transform Optional transform applied to target labels. * **Type:** callable or None #### split(by: str | list[int] | list[list[int]] | dict[str, list[int]]) → dict[str, TypeAliasForwardRef('eegdash.features.datasets.FeaturesConcatDataset')] Split the concatenated dataset into multiple subsets. This method allows flexible splitting of the concatenated dataset into several `FeaturesConcatDataset` objects based on a metadata field, explicit indices, or custom grouping definitions. * **Parameters:** **by** (*str* *or* *list* *of* *int* *or* *list* *of* *list* *of* *int* *or* *dict* *of* *{str: list* *of* *int}*) – Defines how the dataset is split: * **str** — Name of a column in the dataset description. Each unique value in that column defines a separate split. * **list of int** — Indices of datasets to include in one split. * **list of list of int** — A list of groups of indices, where each sub-list defines one split. * **dict of {str: list of int}** — Explicit mapping of split names to lists of dataset indices. * **Returns:** A dictionary where each key is the split name (or index) and each value is a `FeaturesConcatDataset` containing the corresponding subset of datasets. 
* **Return type:** dict[str, FeaturesConcatDataset] ### Examples ```pycon >>> # Split by a metadata column (str) >>> splits = concat_ds.split(by='subject_id') >>> list(splits.keys()) ['subj_01', 'subj_02', 'subj_03'] >>> splits['subj_01'] ``` ```pycon >>> # Split by explicit indices (list of int) >>> splits = concat_ds.split(by=[0, 2, 4]) >>> splits["0"] ``` ```pycon >>> # Split by groups of indices (list of list of int) >>> splits = concat_ds.split(by=[[0, 1], [2, 3], [4, 5]]) >>> list(splits.keys()) ['0', '1', '2'] ``` ```pycon >>> # Split by custom mapping (dict) >>> splits = concat_ds.split(by={'train': [0, 1, 2], 'test': [3, 4]}) >>> splits["train"], splits["test"] ``` ### Notes The resulting splits inherit the same `target_transform` as the original dataset. Splitting by a string requires that `self.description` contains the specified column. #### get_metadata() → DataFrame Return a concatenated metadata DataFrame from all contained datasets. Collects the metadata of each `FeaturesDataset` contained in the `FeaturesConcatDataset` and concatenates them into a single pandas DataFrame, adding each dataset’s description entries as additional columns in the resulting DataFrame. * **Returns:** Combined metadata from all contained datasets. Each row corresponds to a single sample from one of the underlying `FeaturesDataset` objects. Columns include both window-level metadata (e.g., `target`, `i_window_in_trial`, `i_start_in_trial`, `i_stop_in_trial`) and dataset-level description fields (e.g., `subject_id`, `session`, etc.). * **Return type:** pandas.DataFrame * **Raises:** **TypeError** – If one or more contained datasets are not instances of `FeaturesDataset`. #### save(path: str, overwrite: bool = False, offset: int = 0) → None Save the concatenated dataset to a directory. Each contained `FeaturesDataset` is saved in its own numbered subdirectory within the specified `path`. 
The resulting structure is compatible with later reloading using `serialization.load_features_concat_dataset()`. **Directory structure example**: ```default path/ 0/ 0-feat.safetensors metadata_df.pkl description.json ... 1/ 1-feat.safetensors ... ``` * **Parameters:** * **path** (*str*) – Path to the parent directory where the dataset should be saved. The directory will be created if it does not exist. * **overwrite** (*bool* *,* *default=False*) – If True, existing subdirectories that conflict with the new ones are removed before saving. * **offset** (*int* *,* *default=0*) – Integer offset added to subdirectory names. Useful when saving datasets in chunks or continuing a previous save session. * **Raises:** * **ValueError** – If the concatenated dataset is empty. * **FileExistsError** – If a subdirectory already exists and `overwrite` is False. * **Warns:** **UserWarning** – If the number of saved subdirectories does not match the number of existing ones, or if unrelated files remain in the directory. ### Notes Each subdirectory contains: - `*-feat.safetensors` — feature DataFrame for that dataset. - `metadata_df.pkl` — corresponding metadata. - `description.json` — dataset-level metadata. - `raw_info.pkl` — recording information (optional). - `*_kwargs.json` — preprocessing parameters. #### to_dataframe(include_metadata: bool | str | List[str] = False, include_target: bool = False, include_crop_inds: bool = False) → DataFrame Convert the concatenated dataset into a single unified pandas DataFrame. This method flattens the collection of individual recording datasets into one table, allowing for the selective inclusion of metadata, target labels, and window-cropping indices alongside features. * **Parameters:** * **include_metadata** (*bool* *,* *str* *, or* *list* *of* *str* *,* *default=False*) – Controls the inclusion of window-level metadata: - If **True** — includes all metadata columns available in the underlying datasets. 
- If **str** or **list of str** — includes only the specified metadata column(s). - If **False** — excludes metadata (unless overridden by other flags). * **include_target** (*bool* *,* *default=False*) – If True, ensures the ‘target’ column is included in the resulting DataFrame. * **include_crop_inds** (*bool* *,* *default=False*) – If True, includes the internal windowing indices: ‘i_dataset’, ‘i_window_in_trial’, ‘i_start_in_trial’, and ‘i_stop_in_trial’. * **Returns:** A concatenated DataFrame where each row represents a sample (window) and columns contain features and requested metadata. * **Return type:** pd.DataFrame ### Notes When metadata columns and feature columns share the same name, the metadata columns are suffixed with `_metadata` to avoid name collisions. ### Examples ```pycon >>> # Get only features >>> df = concat_ds.to_dataframe() ``` ```pycon >>> # Get features with target labels and specific metadata >>> df = concat_ds.to_dataframe( ... include_metadata=['subject_id'], ... include_target=True ... ) ``` #### count(numeric_only: bool = False, n_jobs: int = 1) → Series Count non-NA cells for each feature column across all datasets. * **Parameters:** * **numeric_only** (*bool* *,* *default=False*) – If True, only includes columns with float, int, or boolean data types. * **n_jobs** (*int* *,* *default=1*) – The number of CPU cores to use for parallel processing of individual datasets. * **Returns:** A Series containing the total count of non-missing values for each feature column, indexed by feature names. * **Return type:** pd.Series #### mean(numeric_only: bool = False, n_jobs: int = 1) → Series Compute the mean for each feature column across all datasets. This method calculates the mean of each feature by aggregating the individual means of each dataset, weighted by their respective sample counts. * **Parameters:** * **numeric_only** (*bool* *,* *default=False*) – If True, only includes columns with float, int, or boolean data types. 
* **n_jobs** (*int* *,* *default=1*) – The number of CPU cores to use for parallel processing of individual datasets. * **Returns:** A Series containing the weighted mean of each feature column, indexed by feature names. * **Return type:** pd.Series #### var(ddof: int = 1, numeric_only: bool = False, n_jobs: int = 1) → Series Compute the variance for each feature column across all datasets. This method calculates the total variance by combining within-dataset variability and between-dataset mean differences. * **Parameters:** * **ddof** (*int* *,* *default=1*) – Delta Degrees of Freedom. * **numeric_only** (*bool* *,* *default=False*) – If True, only includes columns with float, int, or boolean data types. * **n_jobs** (*int* *,* *default=1*) – The number of CPU cores to use for parallel processing of individual datasets. * **Returns:** A Series containing the pooled variance of each feature column, indexed by feature names. * **Return type:** pd.Series #### std(ddof: int = 1, numeric_only: bool = False, eps: float = 0, n_jobs: int = 1) → Series Compute the standard deviation for each feature column across all datasets. * **Parameters:** * **ddof** (*int* *,* *default=1*) – Delta Degrees of Freedom for the variance calculation. * **numeric_only** (*bool* *,* *default=False*) – If True, only includes numeric data types. * **eps** (*float* *,* *default=0*) – Small constant added to variance for numerical stability. * **n_jobs** (*int* *,* *default=1*) – Number of CPU cores for parallel processing. * **Returns:** Standard deviation of each feature column. Indexed by feature names. * **Return type:** pd.Series #### zscore(ddof: int = 1, numeric_only: bool = False, eps: float = 0, n_jobs: int = 1) → None Apply z-score normalization to numeric columns in-place. This method scales features to a mean of 0 and a standard deviation of 1 based on statistics pooled across all contained datasets. 
* **Parameters:** * **ddof** (*int* *,* *default=1*) – Delta Degrees of Freedom for the pooled variance. * **numeric_only** (*bool* *,* *default=False*) – If True, only includes numeric data types. * **eps** (*float* *,* *default=0*) – Small constant added to variance for numerical stability. * **n_jobs** (*int* *,* *default=1*) – Number of CPU cores for parallel statistics computation. #### fillna(\*args, \*\*kwargs) → None Fill NA/NaN values in-place across all datasets. * **Parameters:** * **\*args** – Arguments passed to `pandas.DataFrame.fillna()`. * **\*\*kwargs** – Arguments passed to `pandas.DataFrame.fillna()`. ### Notes `inplace` is enforced as True. #### SEE ALSO `pandas.DataFrame.fillna` : The underlying pandas method. #### replace(\*args, \*\*kwargs) → None Replace values in-place across all datasets. * **Parameters:** * **\*args** – Arguments passed to `pandas.DataFrame.replace()`. * **\*\*kwargs** – Arguments passed to `pandas.DataFrame.replace()`. ### Notes `inplace` is enforced as True. #### SEE ALSO `pandas.DataFrame.replace` : The underlying pandas method. #### interpolate(\*args, \*\*kwargs) → None Interpolate values in-place across all datasets. * **Parameters:** * **\*args** – Arguments passed to `pandas.DataFrame.interpolate()`. * **\*\*kwargs** – Arguments passed to `pandas.DataFrame.interpolate()`. ### Notes `inplace` is enforced as True. #### SEE ALSO `pandas.DataFrame.interpolate` : The underlying pandas method. #### dropna(\*args, \*\*kwargs) → None Remove missing values in-place across all datasets. * **Parameters:** * **\*args** – Arguments passed to `pandas.DataFrame.dropna()`. * **\*\*kwargs** – Arguments passed to `pandas.DataFrame.dropna()`. ### Notes `inplace` is enforced as True. #### SEE ALSO `pandas.DataFrame.dropna` : The underlying pandas method. #### drop(\*args, \*\*kwargs) → None Drop specified labels from rows or columns in-place across all datasets. 
This method removes features (columns) or samples (rows) from every underlying dataset in the collection. * **Parameters:** * **\*args** – Arguments passed to `pandas.DataFrame.drop()`. * **\*\*kwargs** – Arguments passed to `pandas.DataFrame.drop()`. ### Notes `inplace` is enforced as True. #### SEE ALSO `pandas.DataFrame.drop` : The underlying pandas method. ### Examples ```pycon >>> # Remove specific feature columns by name from all datasets >>> concat_ds.drop(columns=['Alpha_Power', 'Beta_Power']) ``` ```pycon >>> # Remove the first and third window (rows) from every dataset >>> concat_ds.drop(index=[0, 2]) ``` #### join(concat_dataset: eegdash.features.datasets.FeaturesConcatDataset, \*\*kwargs) → None Join columns with another FeaturesConcatDataset in-place. This method merges the feature columns of another dataset into the current one. Both collections must contain the same number of individual datasets, and corresponding datasets must have matching lengths. * **Parameters:** * **concat_dataset** (*FeaturesConcatDataset*) – The dataset containing the new columns to be joined. * **\*\*kwargs** – Keyword arguments passed to `pandas.DataFrame.join()`. * **Raises:** **AssertionError** – If the number of datasets or the lengths of corresponding datasets do not match. ### Notes This operation is performed in-place. The `ds.features` attribute of each underlying dataset is updated with the new columns. # eegdash.features.decorators Feature Metadata Decorators. This module provides decorators used to annotate feature extraction functions with structural metadata. These annotations define the dependency graph (via predecessors) and the data format (via feature kinds). The module provides the following decorators: - `feature_predecessor()` — Specifies the required input transformation for a feature. - `feature_kind()` — Defines the dimensionality of the feature output. - `univariate_feature()` — Sugar for per-channel features. 
- `bivariate_feature()` — Sugar for per channel-pair features. - `multivariate_feature()` — Sugar for global/all-channel features. - `metadata_preprocessor()` — Specifies a preprocessor returning a modified metadata instance. - `channel_pairer()` — Specifies a preprocessor that creates channel pairs. - `channel_pairer_undirected()` — Sugar for undirected pairs. - `channel_pairer_directed()` — Sugar for directed pairs. ### Module Attributes | `univariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | |---------------------------------------------------|-------------------------------------------------------| | `bivariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | | `multivariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | | `channel_pairer_undirected`(func, \*[, directed]) | Apply the `channel_pairer()` decorator to a function. | | `channel_pairer_directed`(func, \*[, directed]) | Apply the `channel_pairer()` decorator to a function. | ### Functions | `bivariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | |---------------------------------------------------|---------------------------------------------------------------------| | `channel_pairer`([directed]) | Decorator to set a feature preprocessor as a channel pairer. | | `channel_pairer_directed`(func, \*[, directed]) | Apply the `channel_pairer()` decorator to a function. | | `channel_pairer_undirected`(func, \*[, directed]) | Apply the `channel_pairer()` decorator to a function. | | `feature_kind`(kind) | Decorator to specify the operational dimensionality of a feature. | | `feature_predecessor`(\*parent_extractor_type) | Decorator to specify parent extractors for a feature function. | | `metadata_preprocessor`(func) | Decorator to set a feature preprocessor as a metadata preprocessor. 
| | `multivariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | | `preprocessor_output_type`(output_type) | Decorator to specify the expected output type of a preprocessor. | | `univariate_feature`(func, \*, kind) | Apply the `feature_kind()` decorator to a function. | ### eegdash.features.decorators.bivariate_feature(func: Callable, \*, kind: MultivariateFeature = ...) → Callable Decorator to mark a feature as bivariate. Specifies that the feature operates on pairs of channels. The output will be formatted as a dictionary with keys matching the original channel name pairs. ### eegdash.features.decorators.channel_pairer(directed: bool = False) → Callable Decorator to set a feature preprocessor as a channel pairer. This decorator lets a feature preprocessor get an additional `pairs` keyword argument, and sets a metadata field named `'ch_pair_iterator'` containing a `BivariateIterator` accordingly before calling the underlying preprocessor. * **Parameters:** **directed** (*bool*) – Whether the preprocessor assumes *directed* or *undirected* bivariate iteration. ### eegdash.features.decorators.channel_pairer_directed(func: Callable, \*, directed: bool = True) → Callable Decorator to mark a feature preprocessor as a directed channel pairer. Specifies that the feature preprocessor operates on directed pairs of channels. ### eegdash.features.decorators.channel_pairer_undirected(func: Callable, \*, directed: bool = False) → Callable Decorator to mark a feature preprocessor as an undirected channel pairer. Specifies that the feature preprocessor operates on undirected pairs of channels. ### eegdash.features.decorators.feature_kind(kind: MultivariateFeature) → Callable Decorator to specify the operational dimensionality of a feature. This decorator attaches a “feature kind” instance to a function, determining how the `FeatureExtractor` should map the resulting numerical arrays to channel names. 
* **Parameters:** **kind** (*MultivariateFeature*) – An instance of a feature kind class, such as `UnivariateFeature` or `BivariateFeature`. ### eegdash.features.decorators.feature_predecessor(\*parent_extractor_type: List[Callable]) → Callable Decorator to specify parent extractors for a feature function. This decorator attaches a list of immediate parent preprocessing steps to a feature extraction function. This metadata is used by the `FeatureExtractor` to validate the execution tree. * **Parameters:** **\*parent_extractor_type** (*list* *of* *callable*) – A list of preprocessing functions that this feature immediately depends on. Default is [`SignalOutputType`]. ### Notes A feature can have multiple potential predecessors. ### eegdash.features.decorators.metadata_preprocessor(func: Callable) → Callable Decorator to set a feature preprocessor as a metadata preprocessor. A metadata preprocessor must accept a keyword argument named `"_metadata"` and return a copy of it as its last output argument. * **Parameters:** **func** (*callable*) – The feature preprocessor function to decorate. * **Returns:** The decorated function with the metadata_preprocessor attribute set. * **Return type:** callable ### eegdash.features.decorators.multivariate_feature(func: Callable, \*, kind: MultivariateFeature = ...) → Callable Decorator to mark a feature as multivariate. Indicates that the feature operates on all channels simultaneously. The output naming convention is determined by the feature’s internal logic rather than channel labels. ### eegdash.features.decorators.preprocessor_output_type(output_type: Type) → Callable Decorator to specify the expected output type of a preprocessor. * **Parameters:** **output_type** (*Type*) – The expected output type for the preprocessor. Must be a `BasePreprocessorOutputType`. * **Raises:** **ValueError** – If the provided output_type does not inherit from `BasePreprocessorOutputType`. 
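All of the decorators in this module share one lightweight mechanism: they attach metadata attributes to an otherwise plain function, which the `FeatureExtractor` later inspects to validate and build its execution tree. A minimal, self-contained sketch of that pattern follows; the attribute names (`_kind`, `_predecessors`) and the toy `spectrum`/`band_power` functions are illustrative assumptions, not eegdash's actual internals:

```python
# Illustrative sketch of the metadata-attaching decorator pattern described
# above. Attribute names ("_kind", "_predecessors") are hypothetical; the
# real eegdash decorators attach their own internal attributes.

def feature_kind(kind):
    def decorate(func):
        func._kind = kind  # record the feature's operational dimensionality
        return func
    return decorate

def feature_predecessor(*parents):
    def decorate(func):
        func._predecessors = list(parents)  # record required parent steps
        return func
    return decorate

def spectrum(x):
    """Stand-in for a shared preprocessing step."""
    return x

@feature_predecessor(spectrum)
@feature_kind("univariate")
def band_power(x):
    """A per-channel feature that depends on the `spectrum` output."""
    return sum(x)

# An orchestrator can read the annotations without calling the function:
assert band_power._kind == "univariate"
assert band_power._predecessors == [spectrum]
assert band_power([1.0, 2.0, 3.0]) == 6.0
```

Because the decorators only annotate the function and return it unchanged, the decorated feature remains directly callable, and dependency validation can happen entirely at pipeline-construction time.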
### eegdash.features.decorators.univariate_feature(func: Callable, \*, kind: MultivariateFeature = ...) → Callable Decorator to mark a feature as univariate. Indicates that the feature is computed for each channel independently. The output will be formatted as a dictionary with keys matching the original channel names. # eegdash.features.extractors Core Feature Extraction Orchestration. This module defines the fundamental building blocks for creating feature extraction pipelines. The module provides the base class: - `FeatureExtractor` - The central pipeline for execution trees. ### Classes | `FeatureExtractor`(feature_extractors[, ...]) | Pipeline for multi-stage feature extraction. | |-------------------------------------------------|------------------------------------------------| ### *class* eegdash.features.extractors.FeatureExtractor(feature_extractors: Dict[str, Callable], preprocessor: Callable | None = None) Bases: `TrainableFeature` Pipeline for multi-stage feature extraction. This class manages a collection of feature extraction functions or nested extractors. It handles the application of shared preprocessing, validates the dependency graph between components, and aggregates results into a named dictionary compatible with `FeaturesDataset`. * **Parameters:** * **feature_extractors** (*dict* *[**str* *,* *callable* *]*) – A dictionary where keys are the base names for the features and values are the extraction functions or other `FeatureExtractor` instances. * **preprocessor** (*callable* *,* *optional*) – A shared preprocessing function applied to the input data before it is passed to child extractors. #### preprocessor The shared preprocessing stage for this extractor. * **Type:** callable or None #### feature_extractors_dict The validated dictionary of child extractors. * **Type:** dict #### features_kwargs A collection of all keyword arguments used by the preprocessor and child functions, preserved for metadata tracking. 
* **Type:** dict ### Notes The extractor automatically detects if any child components are trainable and will require a `fit()` phase before extraction can occur. ### Examples ```pycon >>> # Create a simple extractor >>> fe = FeatureExtractor( ... feature_extractors={'mean': signal_mean, 'std': signal_std} ... ) ``` ```pycon >>> # Extract from a batch (2 windows, 3 channels, 100 samples) >>> X = np.random.randn(2, 3, 100) >>> results = fe(X, _batch_size=2, _ch_names=['O1', 'Oz', 'O2']) ``` #### preprocess(\*x, \_metadata: dict) Apply the shared preprocessor to the input data. * **Parameters:** * **\*x** (*tuple* *of* *ndarray*) – The input data batch. * **\_metadata** (*dict*) – A dictionary of record and batch metadata. * **Returns:** * *tuple* – The preprocessed data passed as a tuple to support multi-output preprocessors. * **\_metadata** (*dict*) – The preprocessed metadata. Only relevant for metadata preprocessors. #### clear() Clear the state of all trainable sub-features. #### partial_fit(\*x, y=None, \_metadata: dict) Propagate partial fitting to all trainable children. * **Parameters:** * **\*x** (*tuple* *of* *ndarray*) – The input data batch. * **y** (*ndarray* *,* *optional*) – Target labels for supervised training. * **\_metadata** (*dict*) – A dictionary of record and batch metadata. #### fit() Fit all trainable sub-features. #### to_dict() → dict Dumps the feature extractor to a dictionary. * **Returns:** A dictionary representing the feature extractor, with `"feature_extractors"` and `"preprocessor"` fields (if applicable). * **Return type:** dict #### SEE ALSO `feature_extractor_from_dict` ### Notes Feature extractors including non-function callables are not supported. #### to_json(path: str | Path) Dumps the feature extractor to a json file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the json file. 
#### SEE ALSO `load_feature_extractor_from_json`, `FeatureExtractor.to_dict` ### Notes Feature extractors including non-function callables are not supported. #### to_yaml(path: str | Path) Dumps the feature extractor to a yaml file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the yaml file. #### SEE ALSO `load_feature_extractor_from_yaml`, `FeatureExtractor.to_dict` ### Notes - Feature extractors including non-function callables are not supported. - Requires the pyyaml package. #### to_hocon(path: str | Path) Dumps the feature extractor to a HOCON conf file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the conf file. #### SEE ALSO `load_feature_extractor_from_hocon`, `FeatureExtractor.to_dict` ### Notes - Feature extractors including non-function callables are not supported. - Requires the pyhocon package. # eegdash.features.feature_bank.complexity ## Complexity Feature Extraction This module provides functions to compute various complexity features from signals. ### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Functions | `complexity_entropy_preprocessor`(x, /[, m, r, l]) | Precompute neighbor counts for Approximate and Sample Entropy. | |------------------------------------------------------|------------------------------------------------------------------| | `complexity_approx_entropy`(counts_m, ...) | Calculate Approximate Entropy (ApEn). | | `complexity_multiscale_entropy`(x, /[, m, r, ...]) | Calculate Multiscale Entropy (MSE). | | `complexity_sample_entropy`(counts_m, ...) | Calculate Sample Entropy (SampEn). | | `complexity_svd_entropy`(x, /[, m, tau]) | Calculate Singular Value Decomposition (SVD) Entropy. 
| | `complexity_lempel_ziv`(x, /[, threshold, ...]) | Calculate Lempel-Ziv Complexity (LZC). | ### eegdash.features.feature_bank.complexity.complexity_entropy_preprocessor(x, /, m=2, r=0.2, l=1) Precompute neighbor counts for Approximate and Sample Entropy. This function creates a delay-embedding of the signal and uses a KDTree to count how many vectors are within a distance ‘r’ of each other. It computes counts for both dimension ‘m’ and ‘m+1’. * **Parameters:** * **x** (*ndarray*) – The input signal of shape (…, n_times). * **m** (*int* *,* *optional*) – Embedding dimension (length of compared sequences). * **r** (*float* *,* *optional*) – Tolerance threshold, expressed as a fraction of the signal standard deviation. * **l** (*int* *,* *optional*) – The lag or delay between successive embedding vectors. * **Returns:** * **counts_m** (*ndarray*) – Neighbor counts for embedding dimension m. * **counts_mp1** (*ndarray*) – Neighbor counts for embedding dimension m + 1. ### eegdash.features.feature_bank.complexity.complexity_approx_entropy(counts_m, counts_mp1, /) Calculate Approximate Entropy (ApEn). Approximate Entropy quantifies the amount of regularity and the unpredictability of fluctuations over time-series data. Smaller values indicate more regular signals. * **Parameters:** * **counts_m** (*ndarray*) – Neighbor counts for embedding dimension m. * **counts_mp1** (*ndarray*) – Neighbor counts for embedding dimension m + 1. * **Returns:** Approximate Entropy values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.complexity.complexity_multiscale_entropy(x, /, m=2, r=0.2, l_max=16) Calculate Multiscale Entropy (MSE). Computes the sample entropy (SampEn) for multiple timescales (from 1 to `l_max`), then calculates the integral of the SampEn as a function of timescale. * **Parameters:** * **x** (*ndarray*) – The input signal of shape (…, n_times). * **m** (*int* *,* *optional*) – Embedding dimension (length of compared sequences). 
* **r** (*float* *,* *optional*) – Tolerance threshold, expressed as a fraction of the signal standard deviation. * **l_max** (*int* *,* *optional*) – The maximal lag or delay between successive embedding vectors. * **Returns:** MSE values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.complexity.complexity_sample_entropy(counts_m, counts_mp1, /) Calculate Sample Entropy (SampEn). A refinement of Approximate Entropy that is more consistent and less dependent on signal length. It measures the likelihood that similar patterns of data will remain similar when the window size increases. * **Parameters:** * **counts_m** (*ndarray*) – Neighbor counts for embedding dimension m. * **counts_mp1** (*ndarray*) – Neighbor counts for embedding dimension m + 1. * **Returns:** SampEn values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.complexity.complexity_svd_entropy(x, /, m=10, tau=1) Calculate Singular Value Decomposition (SVD) Entropy. SVD Entropy measures the complexity of the signal’s embedding space. It indicates the number of independent components required to reconstruct the signal. Higher values suggest a more complex signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **m** (*int* *,* *optional*) – The embedding dimension. * **tau** (*int* *,* *optional*) – The time delay for embedding. * **Returns:** SVD Entropy values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.complexity.complexity_lempel_ziv(x, /, threshold=None, normalize=True) Calculate Lempel-Ziv Complexity (LZC). LZC evaluates the randomness of a sequence by counting the number of distinct patterns it contains. * **Parameters:** * **x** (*ndarray*) – The input signal. * **threshold** (*float* *,* *optional*) – Value used to binarize the signal. If None, the median is used. * **normalize** (*bool* *,* *optional*) – If True, normalizes the result so that values are comparable across signal lengths. * **Returns:** LZC values. 
Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes - The implementation follows the constructive algorithm for production complexity as described by Kaspar and Schuster [[1]](#r60a22090cb8c-1). - Optimized with Numba. ### References # eegdash.features.feature_bank.connectivity ## Connectivity Feature Extraction This module computes bivariate connectivity features based on the complex coherency between pairs of channels. ### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Functions | `connectivity_coherency_preprocessor`(x, /, \*, ...) | Compute Complex Coherency for all unique channel pairs. | |--------------------------------------------------------|-----------------------------------------------------------| | `connectivity_magnitude_square_coherence`(f, c, /) | Calculate Magnitude Squared Coherence (MSC). | | `connectivity_imaginary_coherence`(f, c, /[, ...]) | Calculate Imaginary Coherence (iCOH). | | `connectivity_lagged_coherence`(f, c, /[, bands]) | Calculate Lagged Coherence. | ### eegdash.features.feature_bank.connectivity.connectivity_coherency_preprocessor(x, /, \*, \_metadata, f_min: float | None = None, f_max: float | None = None, fs: int | None = None, window_size_in_sec: float | None = 4, overlap_in_sec: float | None = None, pairs: Iterable[Tuple[str, str]] | None = None, \*\*kwargs) Compute Complex Coherency for all unique channel pairs. The Complex Coherency is calculated by estimating the Cross-Spectral Densities (CSD) between pairs of channels and normalizing them by the auto-spectral densities. * **Parameters:** * **x** (*ndarray*) – The input signal of shape (n_trials, n_channels, n_times). * **\*\*kwargs** (*dict*) – Supports any `scipy.signal.csd()` arguments like ‘nperseg’ and ‘noverlap’. 
* **fs** (*int* *|* *None*) – Sampling frequency. Defaults to sfreq in MNE’s info. Do not use unless you know what you are doing. * **f_min** (*float* *|* *None*) – The minimum frequency. Use None for half the window length. Defaults to the highpass frequency used in MNE’s `filter()`. * **f_max** (*float* *|* *None*) – The maximum frequency. Use None for Nyquist. Defaults to the lowpass frequency used in MNE’s `filter()`. * **window_size_in_sec** (*float* *|* *None*) – Window size in seconds, replacing nperseg. Only used if nperseg is not provided. Defaults to 4 seconds. * **overlap_in_sec** (*float* *|* *None*) – Window overlap in seconds, replacing noverlap. Only used if nperseg and noverlap are not provided. Defaults to half of window_size_in_sec. * **pairs** (*Optional* *[**Iterable* *[**Tuple* *[**str* *,* *str* *]* *]* *]*) – A list of channel pairs to pick. * **Returns:** * **f** (*ndarray*) – Frequency vector of shape (n_frequencies,). * **c** (*ndarray*) – Complex coherency array of shape (n_trials, n_pairs, n_frequencies). Values are complex numbers where: - Absolute value $|c|$ is the coherence magnitude (0 to 1). - Angle $\arg(c)$ is the phase lag. ### eegdash.features.feature_bank.connectivity.connectivity_magnitude_square_coherence(f, c, /, bands={'alpha': (8, 12), 'beta': (12, 30), 'delta': (1, 4.5), 'theta': (4.5, 8)}) Calculate Magnitude Squared Coherence (MSC). MSC measures the linear correlation between two signals in the frequency domain. It is defined as the squared magnitude of the complex coherency, $|c|^2$, where $c$ is the complex coherency. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **c** (*ndarray*) – Complex coherency array. * **bands** (*dict* *,* *optional*) – Frequency bands to aggregate (defaults to DEFAULT_FREQ_BANDS). * **Returns:** Mean MSC for each frequency band. 
* **Return type:** dict ### References [Brainstorm - Connectivity](https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity) ### eegdash.features.feature_bank.connectivity.connectivity_imaginary_coherence(f, c, /, bands={'alpha': (8, 12), 'beta': (12, 30), 'delta': (1, 4.5), 'theta': (4.5, 8)}) Calculate Imaginary Coherence (iCOH). Imaginary coherence captures only the non-zero phase-lagged synchronization. It is defined as $\operatorname{Im}(c)$, where $c$ is the complex coherency. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **c** (*ndarray*) – Complex coherency array. * **bands** (*dict* *,* *optional*) – Frequency bands to aggregate. * **Returns:** Mean Imaginary Coherence for each frequency band. * **Return type:** dict ### References [Brainstorm - Connectivity](https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity) ### eegdash.features.feature_bank.connectivity.connectivity_lagged_coherence(f, c, /, bands={'alpha': (8, 12), 'beta': (12, 30), 'delta': (1, 4.5), 'theta': (4.5, 8)}) Calculate Lagged Coherence. Lagged coherence further refines the synchronization measure by normalizing the imaginary part of the coherency, effectively removing all instantaneous (zero-lag) contributions. It is defined as $\operatorname{Im}(c)/\sqrt{1 - \left(\operatorname{Re}(c)\right)^2}$, where $c$ is the complex coherency. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **c** (*ndarray*) – Complex coherency array. * **bands** (*dict* *,* *optional*) – Frequency bands to aggregate. * **Returns:** Mean Lagged Coherence for each frequency band. * **Return type:** dict ### References [Brainstorm - Connectivity](https://neuroimage.usc.edu/brainstorm/Tutorials/Connectivity) # eegdash.features.feature_bank.csp ## Common Spatial Pattern Features Extraction This module provides the Common Spatial Pattern (CSP) feature extractor for signal classification. 
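The variance-ratio objective that CSP optimizes can be illustrated with plain NumPy. The following is a self-contained sketch of the standard CSP computation (class covariances, whitening, then an eigendecomposition), not the class's actual code; the two-class data and channel count are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 20, 4, 200
# Class 0 has strong variance on channel 0, class 1 on channel 3.
scale0 = np.array([3.0, 1.0, 1.0, 1.0])[:, None]
scale1 = np.array([1.0, 1.0, 1.0, 3.0])[:, None]
x0 = rng.standard_normal((n_epochs, n_channels, n_times)) * scale0
x1 = rng.standard_normal((n_epochs, n_channels, n_times)) * scale1


def class_cov(x):
    # Collapse epochs and time into one samples axis:
    # (n_epochs * n_times, n_channels), then estimate the covariance.
    x2d = x.transpose(0, 2, 1).reshape(-1, x.shape[1])
    x2d = x2d - x2d.mean(axis=0)
    return x2d.T @ x2d / len(x2d)


c0, c1 = class_cov(x0), class_cov(x1)

# Solve the generalized eigenvalue problem C0 w = lam * (C0 + C1) w
# by whitening the composite covariance first.
d, u = np.linalg.eigh(c0 + c1)
p = u / np.sqrt(d)                     # whitening: p.T @ (c0 + c1) @ p = I
lam, v = np.linalg.eigh(p.T @ c0 @ p)  # lam lies in [0, 1]
w = p @ v                              # spatial filters (columns)

# Sort filters by discriminative power: distance of lam from 0.5.
order = np.argsort(np.abs(lam - 0.5))[::-1]
w, lam = w[:, order], lam[order]
```

Each eigenvalue is the fraction of (whitened) variance captured for class 0, so filters with eigenvalues far from 0.5 are the most discriminative. The reshape inside `class_cov` mirrors the collapse of the temporal dimension into the samples dimension performed before covariance estimation.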
### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Classes | `CommonSpatialPattern`() | Common Spatial Pattern (CSP) for binary signal classification. | |----------------------------|------------------------------------------------------------------| ### *class* eegdash.features.feature_bank.csp.CommonSpatialPattern Bases: `TrainableFeature` Common Spatial Pattern (CSP) for binary signal classification. CSP finds spatial filters that maximize the variance for one class while minimizing it for the other. It transforms multi-channel signals into a subspace where the differences between two conditions are most prominent. #### \_weights The spatial filter matrix. * **Type:** ndarray #### \_eigvals The eigenvalues representing the variance ratio for class 0. * **Type:** ndarray #### \_means The class-wise means used for centering. * **Type:** ndarray #### \_covs The class-wise covariance matrices. * **Type:** ndarray ### Notes This implementation supports online learning through `partial_fit`, allowing the model to be updated with new batches. For a theoretical overview of Common Spatial Patterns, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Common_spatial_pattern). #### clear() Reset the internal state of the feature extractor. #### SEE ALSO `clear()` #### partial_fit(x, y=None) Incrementally update class-wise mean and covariance statistics. * **Parameters:** * **x** (*ndarray*) – Input array of shape (n_epochs, n_channels, n_times). * **y** (*ndarray*) – Class labels for each epoch (must contain exactly two classes). * **Raises:** **AssertionError** – If more than two unique labels are detected across all partial fits. #### *static* transform_input(x) Reshape and transpose epoch data for matrix operations. 
Converts 3D epoch data into a 2D format suitable for covariance estimation and spatial filtering. The temporal dimension is collapsed into the samples dimension. * **Parameters:** **x** (*ndarray*) – Input array of shape (n_epochs, n_channels, n_times). * **Returns:** Reshaped array of shape (n_epochs \* n_times, n_channels). * **Return type:** ndarray #### fit() Solve the generalized eigenvalue problem to find spatial filters. Calculates the filters $W$ such that the ratio of variances between the two classes is maximized. Filters are sorted by their discriminative power (distance from 0.5 eigenvalue). #### SEE ALSO `fit()` ### Notes For more details on the CSP algorithm, visit the [Wikipedia entry](https://en.wikipedia.org/wiki/Common_spatial_pattern). #### feature_kind *= * #### parent_extractor_type *= []* # eegdash.features.feature_bank.dimensionality ## Dimensionality Features Extraction This module provides functions to compute various dimensionality features from signals. ### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Functions | `dimensionality_higuchi_fractal_dim`(x, /[, ...]) | Calculate Higuchi's Fractal Dimension (HFD). | |-------------------------------------------------------|------------------------------------------------| | `dimensionality_petrosian_fractal_dim`(x, /) | Calculate Petrosian Fractal Dimension (PFD). | | `dimensionality_katz_fractal_dim`(x, /) | Calculate Katz Fractal Dimension (KFD). | | `dimensionality_hurst_exp`(x, /) | Estimate the Hurst Exponent. | | `dimensionality_detrended_fluctuation_analysis`(x, /) | Calculate the Scaling Exponent via DFA. | ### eegdash.features.feature_bank.dimensionality.dimensionality_higuchi_fractal_dim(x, /, k_max=10, eps=1e-07) Calculate Higuchi’s Fractal Dimension (HFD). 
Higuchi’s Fractal Dimension [[1]](#r81c8ba91077a-1) [[2]](#r81c8ba91077a-2) estimates the complexity of a time series by measuring the mean length of the curve at different time scales $k$. It is highly robust for non-stationary signals. * **Parameters:** * **x** (*ndarray*) – The input signal. * **k_max** (*int* *,* *optional*) – Maximum time interval (delay) used for calculating curve lengths. * **eps** (*float* *,* *optional*) – A small constant to avoid log of zero during regression. * **Returns:** The Higuchi’s Fractal Dimension values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes Optimized with Numba. For a theoretical overview of Higuchi’s Fractal Dimension, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Higuchi_dimension). ### References ### eegdash.features.feature_bank.dimensionality.dimensionality_petrosian_fractal_dim(x, /) Calculate Petrosian Fractal Dimension (PFD). Petrosian Fractal Dimension [[1]](#r3e02912f8ce5-1) [[2]](#r3e02912f8ce5-2) provides a fast estimate of fractal dimension by analyzing the number of sign changes in the signal’s first derivative. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The Petrosian Fractal Dimension values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### References ### eegdash.features.feature_bank.dimensionality.dimensionality_katz_fractal_dim(x, /) Calculate Katz Fractal Dimension (KFD). Katz Fractal Dimension [[1]](#re99721b5c64b-1) [[2]](#re99721b5c64b-2) is calculated as the ratio between the total path length and the maximum planar distance from the first point to any other point. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The Katz Fractal Dimension values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### References ### eegdash.features.feature_bank.dimensionality.dimensionality_hurst_exp(x, /) Estimate the Hurst Exponent. The Hurst exponent quantifies the long-term memory and predictability of a time series. 
It indicates whether a process is purely random, tends to trend in the same direction (persistent), or tends to reverse its direction (anti-persistent). * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The estimated Hurst Exponents. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes This function calculates the gamma function ratios and bias correction factors needed to apply the Anis-Lloyd correction for small sample sizes. For more details on the Hurst Exponent and R/S analysis, visit the [Wikipedia entry](https://en.wikipedia.org/wiki/Hurst_exponent#Rescaled_range_(R/S)_analysis). ### eegdash.features.feature_bank.dimensionality.dimensionality_detrended_fluctuation_analysis(x, /) Calculate the Scaling Exponent via DFA. Detrended Fluctuation Analysis (DFA) is a method used to detect long-range temporal correlations (LRTC) in non-stationary signals. It is a more robust way to estimate the Hurst exponent when the data is noisy or has shifting trends. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The DFA scaling exponents ($\alpha$). Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes Optimized with Numba. For a theoretical overview of Detrended Fluctuation Analysis, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Detrended_fluctuation_analysis). # eegdash.features.feature_bank.pick ## Channel-picking feature preprocessors This module provides the ability to pick specific channels or channel pairs for further processing. ### Data Shape Convention By default, this module follows a **Channel-penultimate** convention: * **Input:** `(..., channel, :)` * **Output:** same as input The choice of the channel dimension can be adjusted using the `axis` parameter. ### Functions | `pick_channel_pairs_preprocessor`(\*x, pairs, ...) | Pick a subset of channel pairs for further processing steps. 
| |------------------------------------------------------|----------------------------------------------------------------| | `pick_channels_preprocessor`(\*x, channels, ...) | Pick a subset of channels for further processing steps. | ### eegdash.features.feature_bank.pick.pick_channel_pairs_preprocessor(\*x, pairs: Iterable[Tuple[str, str]], \_metadata: dict, index: int | Iterable[int] | None = -1, c_index: int | Iterable[int] | None = None, x_index: int | Iterable[int] | None = None, y_index: int | Iterable[int] | None = None, axis: int = -2, c_axis: int = -2) Pick a subset of channel pairs for further processing steps. Must follow a preprocessor decorated with `channel_pairer` (or `channel_directed_pairer`). * **Parameters:** * **\*x** (*tuple* *[**ndarray* *]*) – Input batch. * **pairs** (*Iterable* *[**Tuple* *[**str* *,* *str* *]* *]*) – A list of channel pairs to pick. * **index** (*int* *|* *Iterable* *[**int* *]*) – The index (or indices) of the input ndarray[s] to pick channel pairs from. Default is -1. * **c_index** (*int* *|* *Iterable* *[**int* *]*) – The index (or indices) of the input ndarray[s] to pick channels from. Default is []. * **x_index** (*int* *|* *Iterable* *[**int* *]*) – The index (or indices) of the input ndarray[s] to pick pair-first channels from. Default is []. * **y_index** (*int* *|* *Iterable* *[**int* *]*) – The index (or indices) of the input ndarray[s] to pick pair-second channels from. Default is []. * **axis** (*int*) – The channel pairs axis of the input batch at index `index`. Default is -2. * **c_axis** (*int*) – The channels axis of the input batch at index `c_index` or `x_index` or `y_index`. Default is -2. * **Returns:** * *\*ndarray* – Sliced input batch containing only the picked channels. * **\_metadata** (*dict*) – Updated metadata dictionary. #### NOTE Picking by index pair, e.g., `x[i, j]`, is not directly supported because the result may not be a numpy.ndarray. It is preferred to use a pair axis. 
It is possible, however, to pick just by `x_index` with `c_axis=0`, then pick again just by `y_index` with `c_index=1` (or vice versa) to effectively pick the indices intersection of such a numpy.ndarray. ### eegdash.features.feature_bank.pick.pick_channels_preprocessor(\*x, channels: Iterable[str], \_metadata: dict, index: int | Iterable[int] = -1, axis: int = -2) Pick a subset of channels for further processing steps. * **Parameters:** * **\*x** (*tuple* *[**ndarray* *]*) – Input batch. * **channels** (*Iterable* *[**str* *]*) – A list of channels to pick. * **index** (*int* *|* *Iterable* *[**int* *]*) – The index (or indices) of the input ndarray[s] to pick channels from. Default is -1. * **axis** (*int*) – The channels axis of the input batch. Default is -2. * **Returns:** * *\*ndarray* – Sliced input batch containing only the picked channels. * **\_metadata** (*dict*) – Updated metadata dictionary. # eegdash.features.feature_bank.signal ## Signal-Level Feature Extraction This module provides temporal and statistical features computed directly from time-series data. ### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Module Attributes | `signal_hjorth_activity`(x, /, \*\*kwargs) | Calculate the Hjorth Activity of the signal. | |----------------------------------------------|------------------------------------------------| ### Functions | `signal_hilbert_preprocessor`(x, /) | Compute the amplitude envelope of the analytic signal. | |----------------------------------------------------|-------------------------------------------------------------| | `signal_filter_preprocessor`(x, /, \*, ...[, ...]) | Linear-phase FIR band-pass filter. | | `signal_decorrelation_time`(x, /, \*, \_metadata) | Calculate the Decorrelation Time of the signal. 
| | `signal_hjorth_activity`(x, /, \*\*kwargs) | Calculate the Hjorth Activity of the signal. | | `signal_hjorth_complexity`(x, /) | Calculate the Hjorth Complexity of the signal. | | `signal_hjorth_mobility`(x, /) | Calculate the Hjorth Mobility of the signal. | | `signal_kurtosis`(x, /, \*\*kwargs) | Compute the temporal kurtosis of the signal. | | `signal_line_length`(x, /) | Calculate the Mean Signal Line Length. | | `signal_mean`(x, /) | Compute the temporal mean of the signal. | | `signal_peak_to_peak`(x, /, \*\*kwargs) | Calculate the peak-to-peak (maximum range) of the signal. | | `signal_quantile`(x, /[, q]) | Compute the q-th quantile of the signal. | | `signal_root_mean_square`(x, /) | Calculate the Root Mean Square (RMS) magnitude. | | `signal_skewness`(x, /, \*\*kwargs) | Compute the temporal skewness of the signal. | | `signal_std`(x, /, \*\*kwargs) | Compute the temporal standard deviation of the signal. | | `signal_variance`(x, /, \*\*kwargs) | Compute the temporal variance of the signal. | | `signal_zero_crossings`(x, /[, threshold]) | Count the number of times the signal crosses the zero axis. | ### eegdash.features.feature_bank.signal.signal_hilbert_preprocessor(x, /) Compute the amplitude envelope of the analytic signal. * **Parameters:** **x** (*ndarray*) – Input signal * **Returns:** The signal envelope, with the same shape as the input. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_filter_preprocessor(x, /, \*, \_metadata, f_min, f_max, num_taps=None) Linear-phase FIR band-pass filter. * **Parameters:** * **x** (*ndarray*) – Input signal * **f_min** (*float*) – Low cutoff frequency (Hz) * **f_max** (*float*) – High cutoff frequency (Hz) * **num_taps** (*int*) – Number of filter taps (must be odd for exact linear phase) * **Returns:** The band-pass filtered signal, with the same shape as the input. 
* **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_decorrelation_time(x, /, \*, \_metadata) Calculate the Decorrelation Time of the signal. This function computes the time it takes for the signal to decorrelate, defined as the first time lag where the autocorrelation function drops to zero. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The time (in seconds or samples) until the signal decorrelates. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes This function uses the [Wiener-Khinchin Theorem](https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem) to compute the autocorrelation via the inverse FFT of the power spectrum. ### eegdash.features.feature_bank.signal.signal_hjorth_activity(x, /, \*\*kwargs) Calculate the Hjorth Activity of the signal. Activity is defined as the variance of the signal itself. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The Hjorth Activity value. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes The activity is calculated using the following formula: $$ \text{Activity}\left(x\left(t\right)\right) = \operatorname{Var}\left(x\left(t\right)\right) $$ ### References - Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. For more details, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Hjorth_parameters#Hjorth_Activity). ### eegdash.features.feature_bank.signal.signal_hjorth_complexity(x, /) Calculate the Hjorth Complexity of the signal. Complexity represents the change in frequency. The parameter compares the signal’s similarity to a pure sine wave, where a value of 1 indicates a perfect sine wave. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The complexity value. Shape is `x.shape[:-1]`. 
* **Return type:** ndarray #### SEE ALSO `signal_hjorth_mobility` ### Notes The complexity is calculated using the following formula: $$ \text{Complexity}\left(x\left(t\right)\right) = \frac{\text{Mobility}\left(\frac{\mathrm{d}x\left(t\right)}{\mathrm{dt}}\right)}{\text{Mobility}\left(x\left(t\right)\right)} $$ ### References - Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. For more details, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Hjorth_parameters#Hjorth_Complexity). ### eegdash.features.feature_bank.signal.signal_hjorth_mobility(x, /) Calculate the Hjorth Mobility of the signal. Mobility is defined as the standard deviation of the signal’s first derivative normalized by the standard deviation of the signal itself. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The Hjorth Mobility value. Shape is `x.shape[:-1]`. * **Return type:** ndarray #### SEE ALSO `signal_hjorth_activity` ### Notes The mobility is calculated using the following formula: $$ \text{Mobility}\left(x\left(t\right)\right) = \sqrt{\frac{\text{Var}\left(\frac{\mathrm{d}x\left(t\right)}{\mathrm{dt}}\right)}{\text{Var}\left(x\left(t\right)\right)}} $$ ### References - Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. For more details, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Hjorth_parameters#Hjorth_Mobility). ### eegdash.features.feature_bank.signal.signal_kurtosis(x, /, \*\*kwargs) Compute the temporal kurtosis of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `scipy.stats.kurtosis()`. * **Returns:** The kurtosis of the signal along the temporal axis. 
Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_line_length(x, /) Calculate the Mean Signal Line Length. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The mean absolute vertical distance between consecutive samples. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_mean(x, /) Compute the temporal mean of the signal. * **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The mean of the signal along the temporal axis. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_peak_to_peak(x, /, \*\*kwargs) Calculate the peak-to-peak (maximum range) of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `numpy.ptp()`. * **Returns:** The peak-to-peak amplitude. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes This function wraps `numpy.ptp()`; see the NumPy documentation for details on additional keyword arguments. For a theoretical overview of Peak-To-Peak amplitude in signal analysis, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Peak-to-peak). ### eegdash.features.feature_bank.signal.signal_quantile(x, /, q: Number = 0.5, \*\*kwargs) Compute the q-th quantile of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **q** (*float* *or* *array_like* *,* *optional*) – The quantile to compute. 0.5 (default) is the median. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `numpy.quantile()`. * **Returns:** The quantile values. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes This function wraps `numpy.quantile()`; see the NumPy documentation for details on additional keyword arguments. ### eegdash.features.feature_bank.signal.signal_root_mean_square(x, /) Calculate the Root Mean Square (RMS) magnitude. 
* **Parameters:** **x** (*ndarray*) – The input signal. * **Returns:** The RMS amplitude of the signal. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### Notes For the RMS definition, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Root_mean_square). ### eegdash.features.feature_bank.signal.signal_skewness(x, /, \*\*kwargs) Compute the temporal skewness of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `scipy.stats.skew()`. * **Returns:** The skewness of the signal along the temporal axis. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_std(x, /, \*\*kwargs) Compute the temporal standard deviation of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `np.std()`. * **Returns:** The standard deviation of the signal along the temporal axis. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_variance(x, /, \*\*kwargs) Compute the temporal variance of the signal. * **Parameters:** * **x** (*ndarray*) – The input signal. * **\*\*kwargs** (*dict*) – Additional keyword arguments passed to `np.var()`. * **Returns:** The variance of the signal along the temporal axis. Shape is `x.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.signal.signal_zero_crossings(x, /, threshold=1e-15) Count the number of times the signal crosses the zero axis. This function identifies points where the signal changes sign or enters/leaves a defined noise floor (threshold). * **Parameters:** * **x** (*ndarray*) – The input signal. * **threshold** (*float* *,* *optional*) – A small epsilon value to treat values near zero as exactly zero, preventing false counts due to floating-point noise. * **Returns:** The count of zero crossings. Shape is `x.shape[:-1]`.
* **Return type:** ndarray ### Notes For a theoretical overview of zero-crossing rate in signal analysis, see the [Wikipedia entry](https://en.wikipedia.org/wiki/Zero_crossing). # eegdash.features.feature_bank.spectral ## Spectral Feature Extraction This module provides functions to compute various spectral features from signals. ### Data Shape Convention This module follows a **Time-Last** convention: * **Input:** `(..., time)` * **Output:** `(...,)` All functions collapse the last dimension (time), returning an ndarray of features corresponding to the leading dimensions (e.g., subjects, channels). ### Functions | `spectral_preprocessor`(x, /, \*, \_metadata[, ...]) | Compute the Power Spectral Density (PSD) using Welch's method. | |--------------------------------------------------------|-------------------------------------------------------------------| | `spectral_normalized_preprocessor`(f, p, /) | Normalize the PSD so that the total power equals 1. | | `spectral_db_preprocessor`(f, p, /[, eps]) | Convert the PSD to decibels. | | `spectral_root_total_power`(f, p, /) | Calculate the square root of the total spectral power. | | `spectral_moment`(f, p, /) | Calculate the first spectral moment ('Weighted' Mean Frequency). | | `spectral_entropy`(f, p, /) | Calculate Spectral Entropy of the power spectrum. | | `spectral_edge`(f, p, /[, edge]) | Calculate the Spectral Edge Frequency (SEF). | | `spectral_slope`(f, p, /) | Estimate the $1/f$ spectral slope using least-squares regression. | | `spectral_bands_power`(f, p, /[, bands]) | Calculate total power within specified frequency bands. | | `spectral_hjorth_activity`(f, p, /) | Calculate Hjorth Activity in the frequency domain. | | `spectral_hjorth_mobility`(f, p, /) | Calculate Hjorth Mobility in the frequency domain. | | `spectral_hjorth_complexity`(f, p, /) | Calculate Hjorth Complexity in the frequency domain.
| ### eegdash.features.feature_bank.spectral.spectral_preprocessor(x, /, \*, \_metadata, f_min: float | None = None, f_max: float | None = None, fs: int | None = None, window_size_in_sec: float | None = 4, overlap_in_sec: float | None = None, \*\*kwargs) Compute the Power Spectral Density (PSD) using Welch’s method. * **Parameters:** * **x** (*ndarray*) – The input signal (shape: …, n_times). * **\*\*kwargs** (*dict*) – Supports any scipy.signal.welch arguments like ‘nperseg’ and ‘noverlap’. * **fs** (*int* *|* *None*) – Sampling frequency. Defaults to sfreq in MNE’s info. Do not use unless you know what you are doing. * **f_min** (*float* *|* *None*) – The minimum frequency. Use None for half the window length. Defaults to the highpass frequency passed to MNE’s `mne.io.Raw.filter()`. * **f_max** (*float* *|* *None*) – The maximum frequency. Use None for Nyquist. Defaults to the lowpass frequency passed to MNE’s `filter()`. * **window_size_in_sec** (*float* *|* *None*) – Window size in seconds, replacing nperseg. Only used if nperseg is not provided. Defaults to 4 seconds. * **overlap_in_sec** (*float* *|* *None*) – Window overlap in seconds, replacing noverlap. Only used if nperseg and noverlap are not provided. Defaults to half of window_size_in_sec. * **Returns:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density. ### eegdash.features.feature_bank.spectral.spectral_normalized_preprocessor(f, p, /) Normalize the PSD so that the total power equals 1. This is equivalent to treating the PSD as a Probability Density Function (PDF), which is required for Spectral Entropy. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density. * **Returns:** * **f** (*ndarray*) – Frequency vector (unchanged). * **p** (*ndarray*) – Normalized Power Spectral Density. ### eegdash.features.feature_bank.spectral.spectral_db_preprocessor(f, p, /, eps=1e-15) Convert the PSD to decibels.
Calculated as: $$ 10 \cdot \log_{10}\left(P\left(f\right) + \epsilon\right). $$ * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density. * **eps** (*float* *,* *optional*) – A small constant to prevent log of zero (default: 1e-15). * **Returns:** * **f** (*ndarray*) – Frequency vector (unchanged). * *ndarray* – Power Spectral Density in decibels. Shape is `p.shape`. ### eegdash.features.feature_bank.spectral.spectral_root_total_power(f, p, /) Calculate the square root of the total spectral power. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density (PSD). * **Returns:** The root total power. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.spectral.spectral_moment(f, p, /) Calculate the first spectral moment (‘Weighted’ Mean Frequency). When applied to a normalized PSD, this represents the “center of mass” of the power spectrum. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Normalized Power Spectral Density. * **Returns:** The mean frequency of the signal. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.spectral.spectral_entropy(f, p, /) Calculate Spectral Entropy of the power spectrum. Spectral Entropy (SE) measures the complexity or “disorder” of a signal. A high SE indicates a flat, broad spectrum (e.g., white noise), while a low SE indicates a spectrum concentrated in a few frequency components. It is calculated as: $$ SE = -\sum_f P\left(f\right) \ln\left(P\left(f\right)\right) $$ * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Normalized Power Spectral Density (treated as a PDF). * **Returns:** The entropy values. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### eegdash.features.feature_bank.spectral.spectral_edge(f, p, /, edge=0.9) Calculate the Spectral Edge Frequency (SEF).
The frequency below which a certain percentage (e.g., 90%) of the total power is contained. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Normalized Power Spectral Density (treated as a PDF). * **edge** (*float* *,* *optional*) – The fraction of total power (default is 0.9 for SEF90). * **Returns:** The spectral edge frequency. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### Notes Optimized with Numba `fastmath` for rapid scanning of cumulative power. ### eegdash.features.feature_bank.spectral.spectral_slope(f, p, /) Estimate the $1/f$ spectral slope using least-squares regression. This measures the slope and intercept of the PSD in log-log space. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density in decibels. * **Returns:** A dictionary containing: - `'exp'`: The slope/exponent (scaling). - `'int'`: The y-intercept (offset). * **Return type:** dict ### eegdash.features.feature_bank.spectral.spectral_bands_power(f, p, /, bands={'alpha': (8, 12), 'beta': (12, 30), 'delta': (1, 4.5), 'theta': (4.5, 8)}) Calculate total power within specified frequency bands. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density (PSD). * **bands** (*dict* *,* *optional*) – Mapping of band names to (min, max) frequency tuples. * **Returns:** The summed power for each band. * **Return type:** dict ### eegdash.features.feature_bank.spectral.spectral_hjorth_activity(f, p, /) Calculate Hjorth Activity in the frequency domain. Activity represents the total power of the signal, calculated here as the integral (sum) of the Power Spectral Density. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Power Spectral Density. * **Returns:** Total spectral power. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### References - Hjorth, B. (1970). EEG analysis based on time domain properties.
Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. ### eegdash.features.feature_bank.spectral.spectral_hjorth_mobility(f, p, /) Calculate Hjorth Mobility in the frequency domain. Mobility is an estimate of the mean frequency. For a normalized PSD, it is calculated as: $$ \sqrt{\sum_f f^2 P(f)}, $$ where $\sum P(f) = 1$ (since the PSD is normalized). * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Normalized Power Spectral Density. * **Returns:** Spectral mobility. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### References - Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. ### eegdash.features.feature_bank.spectral.spectral_hjorth_complexity(f, p, /) Calculate Hjorth Complexity in the frequency domain. Complexity measures the bandwidth or the “irregularity” of the spectrum. For a normalized PSD, it is calculated as: $$ \frac{\sqrt{\sum_f f^4 P(f)}}{\sum_f f^2 P(f)}. $$ * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **p** (*ndarray*) – Normalized Power Spectral Density. * **Returns:** Spectral complexity. Shape is `p.shape[:-1]`. * **Return type:** ndarray ### References - Hjorth, B. (1970). EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology, 29(3), 306-310. # eegdash.features.feature_bank.utils ## Feature Extraction Utilities This module provides the following helper functions: - get_valid_freq_band: Validates and returns frequency boundaries based on Nyquist and resolution. - slice_freq_band: Slices frequency vector and associated data arrays to a specific range. - reduce_freq_bands: Reduces spectral data into discrete frequency bands by aggregating bins. ### Functions | `get_valid_freq_band`(fs, n[, f_min, f_max]) | Validate and return frequency boundaries based on Nyquist and resolution.
| |-------------------------------------------------|-----------------------------------------------------------------------------| | `preprocessor_as_feature`(\*x) | A pass-through feature, returning its preprocessor output as is. | | `reduce_freq_bands`(f, x, bands[, reduce_func]) | Reduce spectral data into discrete frequency bands by aggregating bins. | | `set_spectral_default_kwargs`(kwargs, metadata) | Sets default parameters for spectral preprocessors. | | `slice_freq_band`(f, \*x[, f_min, f_max]) | Slice frequency vector and associated data arrays to a specific range. | | `spectral_kwargs`(func) | A decorator for functions receiving spectral-like parameters. | ### eegdash.features.feature_bank.utils.get_valid_freq_band(fs, n, f_min=None, f_max=None) Validate and return frequency boundaries based on Nyquist and resolution. * **Parameters:** * **fs** (*float*) – The sampling frequency in Hz. * **n** (*int*) – The number of points in the signal/window. * **f_min** (*float* *,* *optional*) – Requested minimum frequency. Defaults to 2 \* resolution (f0). * **f_max** (*float* *,* *optional*) – Requested maximum frequency. Defaults to Nyquist frequency. * **Returns:** **f_min, f_max** – The validated frequency boundaries. * **Return type:** float * **Raises:** * **AssertionError** – If f_min is below the minimum resolvable frequency. * **AssertionError** – If f_max is above the Nyquist frequency. ### Examples ```pycon >>> get_valid_freq_band(fs=100, n=1000) (0.2, 50.0) >>> get_valid_freq_band(fs=200, n=500, f_min=1, f_max=80) (1, 80) ``` ### eegdash.features.feature_bank.utils.preprocessor_as_feature(\*x) A pass-through feature, returning its preprocessor output as is. Use if the preprocessor is a feature by itself, and it should also be treated as a feature. * **Parameters:** **\*x** (*tuple*) – Any preprocessor output. * **Returns:** **\*x** – The input (as is).
* **Return type:** tuple ### eegdash.features.feature_bank.utils.reduce_freq_bands(f, x, bands, reduce_func=np.sum) Reduce spectral data into discrete frequency bands by aggregating bins. This function identifies the frequency indices belonging to specific bands and applies a reduction function (like sum or mean) to collapse the frequency axis. * **Parameters:** * **f** (*ndarray*) – Frequency vector. * **x** (*ndarray*) – Spectral data. Can be multi-dimensional. The last dimension must match the length of f. * **bands** (*dict*) – Mapping of band names to (min, max) frequency tuples. * **reduce_func** (*callable* *,* *optional*) – Function to aggregate the values. Default is np.sum. * **Returns:** **x_bands** – Dictionary where keys are the band names from bands and values are the reduced arrays. The last dimension of the input x is removed. * **Return type:** dict * **Raises:** **AssertionError** – If a band name is not a string. If a band limit tuple does not contain exactly two values or if min > max. If the requested band limits fall outside the range of the available frequency vector f. ### Examples ```pycon >>> f = np.array([0, 2, 4, 6, 8, 10]) >>> x = np.array([ ... [1, 2, 3, 4, 5, 6], ... [60, 50, 40, 30, 20, 10], ... ]) >>> bands = {'low': (0, 5), 'high': (5, 11)} >>> results = reduce_freq_bands(f, x, bands, reduce_func=np.sum) >>> results['low'] array([  6, 150]) >>> results['high'] array([15, 60]) ``` ### eegdash.features.feature_bank.utils.set_spectral_default_kwargs(kwargs, metadata) Sets default parameters for spectral preprocessors. - Set the default frequency limits to the bandpass frequencies (if available). - Set the default sampling frequency to sfreq in MNE’s info. - Use window_size_in_sec if nperseg is not provided. Defaults to 4 seconds. - Use overlap_in_sec if nperseg and noverlap are not provided. Defaults to half the window size. - Set the axis to -1. * **Parameters:** * **kwargs** (*dict*) – A dictionary of keyword arguments.
* **metadata** (*dict*) – A dictionary of record and batch metadata. * **Returns:** * **f_min** (*float*) – Minimum frequency. * **f_max** (*float*) – Maximum frequency. * **kwargs** (*dict*) – A dictionary of keyword arguments. ### eegdash.features.feature_bank.utils.slice_freq_band(f, \*x, f_min=None, f_max=None) Slice frequency vector and associated data arrays to a specific range. * **Parameters:** * **f** (*ndarray*) – The frequency vector. * **\*x** (*ndarray*) – One or more data arrays to be sliced along the frequency axis. The last dimension of each array must match the length of f. * **f_min** (*float* *,* *optional*) – Lower frequency bound. * **f_max** (*float* *,* *optional*) – Upper frequency bound. * **Returns:** * **f** (*ndarray*) – The cropped frequency vector. * **\*xl** (*ndarray*) – The cropped data arrays. ### Examples ```pycon >>> # Create 0-10 Hz frequencies >>> freqs = np.array([0, 2, 4, 6, 8, 10]) ``` ```pycon >>> # Create data: (2 channels, 6 frequency bins) >>> data = np.array([[10, 20, 30, 40, 50, 60], ... [15, 25, 35, 45, 55, 65]]) ``` ```pycon >>> # Keep only the range 4 Hz to 8 Hz >>> f_s, d_s = slice_freq_band(freqs, data, f_min=4, f_max=8) ``` ```pycon >>> f_s array([4, 6, 8]) >>> d_s array([[30, 40, 50], [35, 45, 55]]) ``` ### eegdash.features.feature_bank.utils.spectral_kwargs(func: Callable) A decorator for functions receiving spectral-like parameters. * **Parameters:** **func** (*Callable*) – A function receiving spectral-like parameters. * **Returns:** A wrapped function with extra parameters and a suitable docstring. * **Return type:** Callable # eegdash.features.inspect Feature Bank Inspection and Discovery. This module provides utilities for introspecting the feature extraction registry. It allows users and system components to discover available features, identify their kinds, and traverse the preprocessing dependency graph.
The module provides the following utilities: - `get_all_features()` — Lists all final feature functions. - `get_all_feature_preprocessors()` — Lists all available preprocessing steps. - `get_feature_kind()` — Identifies the dimensionality of a feature. - `get_feature_predecessors()` — Traces the dependency lineage of a feature. - `get_all_feature_kinds()` — Lists all valid feature categories. ### Functions | `get_all_feature_preprocessors`() | Get a list of all available preprocessor functions. | |--------------------------------------------------|------------------------------------------------------------------| | `get_all_feature_kinds`() | Get a list of all available feature 'kind' classes. | | `get_all_features`() | Get a list of all available feature functions. | | `get_all_preprocessor_output_types`() | Get a list of all available preprocessor output type classes. | | `get_feature_kind`(feature) | Get the 'kind' of a feature function. | | `get_feature_predecessors`(feature_or_extractor) | Get the dependency hierarchy for a feature or feature extractor. | ### eegdash.features.inspect.get_all_feature_preprocessors() → list[tuple[str, Callable]] Get a list of all available preprocessor functions. Scans the `feature_bank` module for all functions that participate in the dependency graph but do not produce final features (e.g., lack a feature_kind). * **Returns:** A list of (name, function) tuples for all discovered feature preprocessors. * **Return type:** list of tuple ### eegdash.features.inspect.get_all_feature_kinds() → list[tuple[str, type[MultivariateFeature]]] Get a list of all available feature ‘kind’ classes. Scans the `kinds` module for all classes that subclass `MultivariateFeature`. * **Returns:** A list of (name, class) tuples for all discovered feature kinds.
* **Return type:** list of tuple ### eegdash.features.inspect.get_all_features() → list[tuple[str, Callable]] Get a list of all available feature functions. Scans the `feature_bank` module for functions that have been decorated with a feature_kind. * **Returns:** A list of (name, function) tuples for all discovered feature functions. * **Return type:** list of tuple ### eegdash.features.inspect.get_all_preprocessor_output_types() → list[tuple[str, type[BasePreprocessorOutputType]]] Get a list of all available preprocessor output type classes. Scans the `feature_bank` module for all classes that subclass `BasePreprocessorOutputType`. * **Returns:** A list of (name, class) tuples for all discovered preprocessor output types. * **Return type:** list of tuple ### eegdash.features.inspect.get_feature_kind(feature: Callable) → eegdash.features.extractors.MultivariateFeature Get the ‘kind’ of a feature function. Identifies whether a feature is univariate, bivariate, or multivariate using decorators. * **Parameters:** **feature** (*callable*) – The feature function to inspect. * **Returns:** An instance of the feature kind. * **Return type:** `MultivariateFeature` ### eegdash.features.inspect.get_feature_predecessors(feature_or_extractor: Callable | None) → list Get the dependency hierarchy for a feature or feature extractor. This function recursively traverses the parent_extractor_type attribute of a feature or extractor to build a list representing its dependency lineage. * **Parameters:** **feature_or_extractor** (*callable*) – The feature function or `FeatureExtractor` instance to inspect. * **Returns:** A nested list representing the dependency tree. For a simple linear chain, this will be a flat list from the specific feature up to the base signal input. For multiple dependencies, it contains tuples of sub-dependencies. * **Return type:** list ### Notes The traversal stops when it reaches a predecessor of `None`, which typically represents the raw signal. 
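The traversal described above can be sketched in plain Python. This is a simplified, hypothetical stand-in: real features carry a `parent_extractor_type` attribute that the library walks recursively, while here a tiny `Node` class mimics a linear chain ending at `None` (the raw signal).

```python
# Simplified sketch of the dependency traversal: walk each node's
# parent link until reaching None (the raw signal).
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        # Mirrors the attribute the real traversal follows.
        self.parent_extractor_type = parent

def predecessors(node):
    chain = []
    while node is not None:
        chain.append(node.name)
        node = node.parent_extractor_type
    chain.append(None)  # terminal marker standing in for the raw signal
    return chain

# A linear chain mirroring the spectral-entropy lineage documented above.
psd = Node("spectral_preprocessor")
norm = Node("spectral_normalized_preprocessor", psd)
entropy = Node("spectral_entropy", norm)
print(predecessors(entropy))
# → ['spectral_entropy', 'spectral_normalized_preprocessor', 'spectral_preprocessor', None]
```

The real `get_feature_predecessors()` additionally nests tuples for branching dependencies; this sketch covers only the linear case.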
### Examples ```pycon >>> # Example: Linear dependency with a branching dependency >>> print(get_feature_predecessors(feature_bank.spectral_entropy)) [<function spectral_entropy>, <function spectral_normalized_preprocessor>, <function spectral_preprocessor>, (None, [<function ...>, None])] ``` # eegdash.features.kinds Feature Channel-processing Kinds. This module defines the fundamental feature-processing kinds and the logic to map raw arrays to named features. The module provides the classes: - `UnivariateFeature` - `BivariateFeature` - `MultivariateFeature` ### Classes | `BivariateFeature`(\*args[, channel_pair_format]) | Feature kind for operations on pairs of channels. | |-----------------------------------------------------|----------------------------------------------------------------------| | `MultivariateFeature`() | Logic wrapper for features that operate on one or more EEG channels. | | `UnivariateFeature`() | Feature kind for operations applied to each channel independently. | ### *class* eegdash.features.kinds.BivariateFeature(\*args, channel_pair_format: str | None = None) Bases: `MultivariateFeature` Feature kind for operations on pairs of channels. Designed for undirected relationship measures between two signals. * **Parameters:** **channel_pair_format** (*str*) – A format string used to create feature names from pairs of channel names. Default is “{}<>{}” for undirected bivariate features or “{}->{}” for directed bivariate features. #### feature_channel_names(\_metadata: dict) → list[str] Generate feature names for each unique pair of channels. * **Parameters:** **\_metadata** (*dict*) – A dictionary of record and batch metadata. * **Returns:** Formatted strings representing channel pairs (e.g., ‘F3<>F4’). * **Return type:** list of str ### *class* eegdash.features.kinds.MultivariateFeature Bases: `object` Logic wrapper for features that operate on one or more EEG channels. This class defines the logic for mapping raw numerical results into structured, named dictionaries.
It determines the “kind” of a feature (e.g., univariate, bivariate) and handles the association of feature values with specific channels or channel groupings. ### Notes Subclasses should override `feature_channel_names()` to define specific naming conventions for the extracted features. #### feature_channel_names(\_metadata: dict) → list[str] Generate feature-specific names based on input channels. * **Parameters:** **\_metadata** (*dict*) – A dictionary of record and batch metadata. * **Returns:** A list of strings defining the naming for each output feature. Returns an empty list in the base implementation. * **Return type:** list of str ### *class* eegdash.features.kinds.UnivariateFeature Bases: `MultivariateFeature` Feature kind for operations applied to each channel independently. Used when a single feature value is produced per channel. #### feature_channel_names(\_metadata: dict) → list[str] Return the channel names themselves as feature names. * **Parameters:** **\_metadata** (*dict*) – A dictionary of record and batch metadata. * **Returns:** A list of channel names. * **Return type:** list of str # eegdash.features.output_types Core Output Types. This module defines the fundamental output types for feature preprocessors. The module provides the classes: - `BasePreprocessorOutputType` - The base abstract output type. - `AsInputOutputType` - A “pass through” output type, enforcing the output type to match the input type. ### Classes | `AsInputOutputType`(preprocessor) | A special class for preprocessors where the output type is the same as their input type. | |--------------------------------------------|--------------------------------------------------------------------------------------------| | `BasePreprocessorOutputType`(preprocessor) | An abstract class representing a type of preprocessor output. | | `SignalOutputType`(preprocessor) | A class for preprocessors where the output type is raw-signal-like. 
| ### *class* eegdash.features.output_types.AsInputOutputType(preprocessor: Callable) Bases: `BasePreprocessorOutputType` A special class for preprocessors where the output type is the same as their input type. If used as a preprocessor predecessor, the preprocessor must not have any other predecessors. * **Parameters:** **preprocessor** (*callable*) – The underlying preprocessor callable. ### *class* eegdash.features.output_types.BasePreprocessorOutputType(preprocessor: Callable) Bases: `ABC`, `Callable` An abstract class representing a type of preprocessor output. * **Parameters:** **preprocessor** (*callable*) – The underlying preprocessor callable. ### *class* eegdash.features.output_types.SignalOutputType(preprocessor: Callable) Bases: `BasePreprocessorOutputType` A class for preprocessors where the output type is raw-signal-like. * **Parameters:** **preprocessor** (*callable*) – The underlying preprocessor callable. # eegdash.features.serialization Serialization Utilities for Feature Datasets. This module provides functions for reconstructing feature datasets from disk. It serves as the inverse of the saving logic implemented in `FeaturesConcatDataset` and `FeatureExtractor`, allowing for efficient, parallelized reloading of processed features and their associated metadata. ### Functions | `feature_extractor_from_dict`(fe_dict) | Get a feature extractor from a dictionary. | |---------------------------------------------|---------------------------------------------------------| | `load_feature_extractor_from_hocon`(path) | Reads a feature extractor from a HOCON conf file. | | `load_feature_extractor_from_json`(path) | Reads a feature extractor from a JSON file. | | `load_feature_extractor_from_yaml`(path) | Reads a feature extractor from a YAML file. | | `load_features_concat_dataset`(path[, ...]) | Load a stored `FeaturesConcatDataset` from a directory.
| ### eegdash.features.serialization.feature_extractor_from_dict(fe_dict: dict) → eegdash.features.extractors.FeatureExtractor Get a feature extractor from a dictionary. Get a feature extractor object from a dictionary saved by `FeatureExtractor.to_dict()`. * **Parameters:** **fe_dict** (*dict*) – A dictionary representing the feature extractor, with `"feature_extractors"` and `"preprocessor"` fields (if applicable). * **Returns:** A feature extractor. * **Return type:** FeatureExtractor #### SEE ALSO `FeatureExtractor.to_dict` ### Notes - Only `feature_bank` features and preprocessors are supported. - Feature extractors including non-function callables are not supported. ### eegdash.features.serialization.load_feature_extractor_from_hocon(path: str | Path) → eegdash.features.extractors.FeatureExtractor Reads a feature extractor from a HOCON conf file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the conf file. #### SEE ALSO `FeatureExtractor.to_hocon`, `feature_extractor_from_dict` ### Notes - Only `feature_bank` features and preprocessors are supported. - Feature extractors including non-function callables are not supported. - Requires the pyhocon package. ### eegdash.features.serialization.load_feature_extractor_from_json(path: str | Path) → eegdash.features.extractors.FeatureExtractor Reads a feature extractor from a JSON file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the JSON file. #### SEE ALSO `FeatureExtractor.to_json`, `feature_extractor_from_dict` ### Notes - Only `feature_bank` features and preprocessors are supported. - Feature extractors including non-function callables are not supported. ### eegdash.features.serialization.load_feature_extractor_from_yaml(path: str | Path) → eegdash.features.extractors.FeatureExtractor Reads a feature extractor from a YAML file. * **Parameters:** **path** (*str* *|* *pathlib.Path*) – The path to the YAML file.
### Notes - Only `feature_bank` features and preprocessors are supported. - Feature extractors including non-function callables are not supported. - Requires the yaml package. #### SEE ALSO `FeatureExtractor.to_yaml`, `feature_extractor_from_dict` ### eegdash.features.serialization.load_features_concat_dataset(path: str | Path, ids_to_load: list[int] | None = None, n_jobs: int = 1) → eegdash.features.datasets.FeaturesConcatDataset Load a stored `FeaturesConcatDataset` from a directory. This function reconstructs a concatenated dataset by loading individual `FeaturesDataset` instances from numbered subdirectories. * **Parameters:** * **path** (*str* *or* *pathlib.Path*) – The root directory where the dataset was previously saved. This directory should contain numbered subdirectories. * **ids_to_load** (*list* *of* *int* *,* *optional*) – A list of specific recording IDs (subdirectory names) to load. If **None**, all numbered subdirectories found in the path are loaded in ascending numerical order. * **n_jobs** (*int* *,* *default=1*) – The number of CPU cores to use for parallel loading. Set to -1 to use all available processors. * **Returns:** A unified concatenated dataset containing the loaded recordings. * **Return type:** FeaturesConcatDataset #### SEE ALSO `braindecode.datautil.load_concat_dataset` ### Notes The function expects the directory structure generated by `FeaturesConcatDataset.save()`. It automatically reconstructs the feature DataFrames (safetensors), metadata (Pickle), recording info (FIF), and preprocessing keyword arguments (JSON). # eegdash.features.trainable Core Trainable Feature Interface. This module defines the interface for creating trainable features. The module provides the base class: - `TrainableFeature` - The interface for features requiring a fitting phase. ### Classes | `TrainableFeature`() | Abstract base class for features requiring a training phase.
| |------------------------|----------------------------------------------------------------| ### *class* eegdash.features.trainable.TrainableFeature Bases: `ABC` Abstract base class for features requiring a training phase. This class provides the interface for features that must be fitted on a representative dataset before they can process new samples. #### \_is_trained Internal flag indicating whether the feature has completed its training phase. * **Type:** bool #### *abstractmethod* clear() Reset the internal state of the feature. This method must be implemented by subclasses to clear any learned parameters, statistics, or buffers. #### *abstractmethod* partial_fit(\*x, y=None) Update the extractor’s state using a single batch of data. This method allows for incremental learning, making it possible to train on datasets that are too large to fit into memory at once. * **Parameters:** * **\*x** (*tuple* *of* *ndarray*) – The input data batch. * **y** (*ndarray* *,* *optional*) – Target labels associated with the batch, required for supervised feature extraction methods. #### fit() Finalize the training of the feature extractor. This method should be called after the entire training set has been processed via `partial_fit()`. It transitions the object to a “trained” state, enabling the `__call__()` method. # eegdash.features.utils Feature Extraction Utilities. This module provides the primary entry points for applying feature extraction pipelines to windowed datasets. The module provides the following functions: - `extract_features()` — The main interface for computing features across an entire concatenated dataset. - `fit_feature_extractors()` — Fits trainable features using a representative dataset. ### Functions | `extract_features`(concat_dataset, features, \*) | Extract features from a collection of windowed recordings. 
| |----------------------------------------------------|--------------------------------------------------------------| | `fit_feature_extractors`(concat_dataset, features) | Fit trainable feature extractors on a concatenated dataset. | ### eegdash.features.utils.extract_features(concat_dataset: BaseConcatDataset, features: FeatureExtractor | Dict[str, Callable] | List[Callable], *, batch_size: int = 512, n_jobs: int = 1) → FeaturesConcatDataset Extract features from a collection of windowed recordings. This function applies a feature extraction pipeline to every individual recording in a `BaseConcatDataset`. * **Parameters:** * **concat_dataset** (*BaseConcatDataset*) – A concatenated dataset of `WindowsDataset` or `EEGWindowsDataset` instances. * **features** (*FeatureExtractor* *or* *dict* *or* *list*) – The feature extractor(s) to apply. Can be a `FeatureExtractor` instance, a dictionary of named feature functions, or a list of feature functions. * **batch_size** (*int* *,* *default 512*) – The size of batches used for feature extraction within each recording. * **n_jobs** (*int* *,* *default 1*) – The number of parallel jobs to use for processing different recordings simultaneously. * **Returns:** A unified collection of feature datasets corresponding to the input recordings. * **Return type:** *FeaturesConcatDataset* ### eegdash.features.utils.fit_feature_extractors(concat_dataset: BaseConcatDataset, features: FeatureExtractor | Dict[str, Callable] | List[Callable], batch_size: int = 8192) → FeatureExtractor Fit trainable feature extractors on a concatenated dataset. Scans the provided feature pipeline for components that require training (subclasses of `TrainableFeature`). If found, the function iterates through the dataset in batches to perform partial fitting before finalization. * **Parameters:** * **concat_dataset** (*BaseConcatDataset*) – The dataset used to train the feature extractors.
* **features** (*FeatureExtractor* *or* *dict* *or* *list*) – The feature extractor pipeline(s) to fit. * **batch_size** (*int* *,* *default 8192*) – The batch size to use when streaming data through the `partial_fit()` phase. * **Returns:** The fitted feature extractor instance, ready for feature extraction. * **Return type:** *FeatureExtractor* ### Notes If the provided extractors are not trainable, the function returns the original input without modification. # EEG P3 Transfer Learning with AS-MMD This tutorial demonstrates how to train a domain-adaptive deep learning model for EEG P3 component classification across two different datasets using Adaptive Symmetric Maximum Mean Discrepancy (AS-MMD). **Paper:** Chen, W., Delorme, A. (2025). Adaptive Split-MMD Training for Small-Sample Cross-Dataset P300 EEG Classification. arXiv: [2510.21969](https://arxiv.org/abs/2510.21969) # Key Concepts This tutorial covers: - **Domain Adaptation**: Training on multiple datasets with different recording setups - **Deep Learning**: Using EEGConformer, a transformer-based model for EEG - **AS-MMD**: A technique that aligns feature distributions across datasets - **Cross-Validation**: Robust evaluation using nested stratified folds By the end, you’ll understand how to: 1. Load and preprocess multi-dataset EEG recordings 2. Build a domain-adaptive classifier 3. Evaluate performance across domains 4. Apply the method to your own datasets # Part 1: Loading and Preprocessing Data First, we load two oddball datasets from the EEGDash API. You can override the dataset IDs with EEGDASH_SOURCE_DATASET and EEGDASH_TARGET_DATASET. 
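The two environment-variable overrides mentioned above can also be set from Python before the loading code runs. This is a minimal sketch; the IDs shown are just the tutorial's defaults, and any other OpenNeuro oddball dataset indexed by EEGDash should work in their place:

```Python
import os

# Hypothetical override: point the tutorial at different datasets.
# ds005863 / ds003061 are the tutorial's default source/target IDs.
os.environ["EEGDASH_SOURCE_DATASET"] = "ds005863"
os.environ["EEGDASH_TARGET_DATASET"] = "ds003061"

print(os.environ["EEGDASH_SOURCE_DATASET"])
```

Set these before running the loading cell, since the defaults are read with `os.getenv` when that code executes.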
```Python from pathlib import Path import os from eegdash import EEGDash, EEGDashDataset from eegdash.paths import get_default_cache_dir cache_folder = Path(get_default_cache_dir()).resolve() cache_folder.mkdir(parents=True, exist_ok=True) eegdash = EEGDash() ODDBALL_TASK = os.getenv("EEGDASH_ODDBALL_TASK", "visualoddball") record_limit = int(os.getenv("EEGDASH_RECORD_LIMIT", "60")) def _fetch_records(dataset_id): if ODDBALL_TASK: records = eegdash.find( {"dataset": dataset_id, "task": ODDBALL_TASK}, limit=record_limit ) if records: return records return eegdash.find({"dataset": dataset_id}, limit=record_limit) source_id = os.getenv("EEGDASH_SOURCE_DATASET", "ds005863") target_id = os.getenv("EEGDASH_TARGET_DATASET", "ds003061") source_records = _fetch_records(source_id) target_records = _fetch_records(target_id) if not source_records or not target_records: records = eegdash.find({"task": {"$regex": "oddball", "$options": "i"}}, limit=200) dataset_ids = [] for rec in records: ds_id = rec.get("dataset") if ds_id and ds_id not in dataset_ids: dataset_ids.append(ds_id) if dataset_ids: source_id = dataset_ids[0] target_id = dataset_ids[1] if len(dataset_ids) > 1 else dataset_ids[0] source_records = _fetch_records(source_id) target_records = _fetch_records(target_id) if not source_records or not target_records: raise RuntimeError("Unable to find two oddball datasets from the API.") ds_p3 = EEGDashDataset(cache_dir=cache_folder, records=source_records) ds_avo = EEGDashDataset(cache_dir=cache_folder, records=target_records) print(f"Source ({source_id}): {len(ds_p3)} recordings") print(f"Target ({target_id}): {len(ds_avo)} recordings") ``` ## Data Preprocessing Pipeline Before training, we apply standard EEG preprocessing: - **Event labeling**: Identify oddball vs. 
standard stimuli - **Filtering**: 0.5-30 Hz bandpass to focus on relevant oscillations - **Resampling**: Downsample to 128 Hz to reduce computation - **Channel selection**: Keep Fz, Pz, P3, P4, Oz (standard P3 locations) - **Windowing**: Extract 1.2 sec epochs (-0.1s before to 1.1s after stimulus) - **Normalization**: Z-score normalization per trial ```Python import numpy as np import torch import mne from braindecode.preprocessing import ( preprocess, Preprocessor, create_windows_from_events, ) mne.set_log_level("ERROR") # Preprocessing parameters LOW_FREQ = 0.5 HIGH_FREQ = 30 RESAMPLE_FREQ = 128 TRIAL_START_OFFSET = -0.1 # 100 ms before stimulus TRIAL_DURATION = 1.1 # Total window 1.1 seconds COMMON_CHANNELS = ["Fz", "Pz", "P3", "P4", "Oz"] def preprocess_dataset(dataset, channels, dataset_type="P3"): """Apply preprocessing pipeline to an EEG dataset. Returns numpy arrays: (n_trials, n_channels, n_times) """ print(f"\nPreprocessing {dataset_type} dataset...") # Define preprocessing steps preprocessors = [ Preprocessor("set_eeg_reference", ref_channels="average", projection=True), Preprocessor("resample", sfreq=RESAMPLE_FREQ), Preprocessor("filter", l_freq=LOW_FREQ, h_freq=HIGH_FREQ), Preprocessor( "pick_channels", ch_names=[ch.lower() for ch in channels], ordered=False ), ] # Apply preprocessing preprocess(dataset, preprocessors) # Extract windowed trials around stimulus onset trial_start = int(TRIAL_START_OFFSET * RESAMPLE_FREQ) trial_stop = int((TRIAL_START_OFFSET + TRIAL_DURATION) * RESAMPLE_FREQ) # Define event mapping to handle both datasets # ErpCore uses "Target"/"NonTarget" # ds005863 (AVO) uses "S 11", "S 21", etc. for targets and "S 12", "S 22" for nontargets mapping = { "Target": 1, "NonTarget": 0, "standard": 0, "target": 1, } # Add AVO specific codes if they are missing for i in range(1, 6): mapping[f"S {i}1"] = 1 # Often S 11, S 21, ... are targets mapping[f"S {i}2"] = 0 # Often S 12, S 22, ... 
are nontargets mapping[f"S{i}1"] = 1 mapping[f"S{i}2"] = 0 windows_ds = create_windows_from_events( dataset, trial_start_offset_samples=trial_start, trial_stop_offset_samples=trial_stop, preload=True, drop_bad_windows=True, mapping=mapping, ) X, y = [], [] for i in range(len(windows_ds)): data, label, *_ = windows_ds[i] X.append(data) y.append(label) print(f"Extracted {len(X)} trials from {dataset_type}") return np.array(X), np.array(y) # Preprocess both datasets X_p3, y_p3 = preprocess_dataset(ds_p3, COMMON_CHANNELS, f"Source ({source_id})") X_avo, y_avo = preprocess_dataset(ds_avo, COMMON_CHANNELS, f"Target ({target_id})") # Ensure both have the same number of samples (cropping to the minimum) min_samples = min(X_p3.shape[2], X_avo.shape[2]) if X_p3.shape[2] != X_avo.shape[2]: print(f"\nCropping trials to {min_samples} samples for consistency...") X_p3 = X_p3[:, :, :min_samples] X_avo = X_avo[:, :, :min_samples] # Combine datasets for training X_all = np.vstack([X_p3, X_avo]) y_all = np.hstack([y_p3, y_avo]) src_all = np.array([source_id] * len(X_p3) + [target_id] * len(X_avo)) print(f"\nCombined dataset: {len(X_all)} trials ({X_all.shape})") print(f" Source: {np.sum(src_all == source_id)} trials") print(f" Target: {np.sum(src_all == target_id)} trials") ``` # Part 2: Model Architecture and Training ## Building the Domain-Adaptive Model We use **EEGConformer**, a transformer-based architecture designed for EEG signals. The key idea in AS-MMD is to combine: 1. **Classification loss**: Standard cross-entropy on both domains 2. **Domain alignment**: MMD loss to match feature distributions 3. **Prototype alignment**: Align class centers across domains 4. 
**Data augmentation**: Mixup + Gaussian noise for regularization ```Python from braindecode.models import EEGConformer import torch.nn.functional as F def normalize_data(x, eps=1e-7): """Normalize each trial independently.""" mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) std = torch.clamp(std, min=eps) return (x - mean) / std ``` ## Domain Adaptation Techniques **Mixup**: Interpolates between sample pairs ```Python def mixup_data(x, y, alpha=0.4): """Mix samples from the same batch.""" if alpha > 0: lam = np.random.beta(alpha, alpha) else: lam = 1.0 batch_size = x.size(0) index = torch.randperm(batch_size, device=x.device) mixed_x = lam * x + (1 - lam) * x[index] return mixed_x, y, y[index], lam # **Focal Loss**: Down-weights easy examples def compute_focal_loss(scores, targets, gamma=2.0, alpha=0.25): """Focal loss for class imbalance.""" ce_loss = F.cross_entropy(scores, targets, reduction="none") pt = torch.exp(-ce_loss) focal_loss = alpha * (1 - pt) ** gamma * ce_loss return focal_loss.mean() # **Maximum Mean Discrepancy**: Measures domain distribution mismatch def compute_mmd_rbf(x, y, eps=1e-8): """RBF-kernel MMD for distribution alignment.""" if x.dim() > 2: x = x.view(x.size(0), -1) if y.dim() > 2: y = y.view(y.size(0), -1) z = torch.cat([x, y], dim=0) if z.size(0) > 1: dists = torch.cdist(z, z, p=2.0) sigma = torch.median(dists) sigma = torch.clamp(sigma, min=eps) else: sigma = torch.tensor(1.0, device=z.device) gamma = 1.0 / (2.0 * (sigma**2) + eps) k_xx = torch.exp(-gamma * torch.cdist(x, x, p=2.0) ** 2) k_yy = torch.exp(-gamma * torch.cdist(y, y, p=2.0) ** 2) k_xy = torch.exp(-gamma * torch.cdist(x, y, p=2.0) ** 2) m, n = x.size(0), y.size(0) if m <= 1 or n <= 1: return torch.tensor(0.0, device=x.device) mmd = (k_xx.sum() - torch.trace(k_xx)) / (m * (m - 1) + eps) mmd += (k_yy.sum() - torch.trace(k_yy)) / (n * (n - 1) + eps) mmd -= 2.0 * k_xy.mean() return mmd # **Prototype Alignment**: Align class centers across domains def 
compute_prototypes(features, labels, n_classes=2): """Compute mean feature vector per class.""" if features.dim() > 2: features = features.view(features.size(0), -1) prototypes = [] for c in range(n_classes): mask = labels == c if mask.sum() > 0: proto = features[mask].mean(dim=0) else: proto = torch.zeros(features.size(1), device=features.device) prototypes.append(proto) return torch.stack(prototypes) def compute_prototype_loss(features, labels, prototypes): """Align features to their class prototypes.""" if features.dim() > 2: features = features.view(features.size(0), -1) loss = 0.0 for i, label in enumerate(labels): proto = prototypes[label] loss += F.mse_loss(features[i], proto) return loss / max(1, len(labels)) ``` ## Training Configuration Define hyperparameters for stable cross-domain training ```Python BATCH_SIZE = 22 LEARNING_RATE = 0.001 WEIGHT_DECAY = 2.5e-4 MAX_EPOCHS = 100 EARLY_STOPPING_PATIENCE = 10 DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") ``` # Part 3: Training and Evaluation ## The Training Loop For each batch, we compute four loss components: 1. **Classification loss** (source + target): Standard cross-entropy 2. **Mixup loss** (target domain): Interpolated samples for regularization 3. **MMD loss**: Aligns logit-space feature distributions 4. **Prototype loss**: Pulls small-domain features to large-domain class centers All losses are combined with domain-adaptive weights that increase during training. 
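Before diving into the full loop, the loss combination can be sketched in isolation. This is a simplified standalone sketch using plain floats instead of tensors; the 20-epoch ramp and the 0.3/0.5 weights match the values used in this tutorial's training loop:

```Python
def warmup_weight(epoch: int, ramp_epochs: int = 20) -> float:
    """Linear ramp: domain-adaptation strength grows from 0 to 1."""
    return min(1.0, epoch / ramp_epochs)


def combined_loss(loss_cls, loss_mixup, loss_mmd, loss_proto, epoch):
    # MMD and prototype terms are gated by the warmup weight, so early
    # epochs are dominated by the plain classification/mixup losses.
    w = warmup_weight(epoch)
    return loss_cls + loss_mixup + 0.3 * (w * loss_mmd) + 0.5 * (w * loss_proto)


print(combined_loss(1.0, 0.5, 0.2, 0.2, epoch=10))  # halfway through warmup
```

Gating only the alignment terms (not the classification terms) lets the model first learn discriminative features, then gradually pulls the two domains together.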
```Python from torch.utils.data import TensorDataset, DataLoader from sklearn.metrics import roc_auc_score def evaluate_model(model, data_loader, device): """Evaluate model on a dataset and compute metrics.""" model.eval() all_preds = [] all_targets = [] all_probs = [] with torch.no_grad(): for x, y in data_loader: x = normalize_data(x).to(device) y = y.to(device) scores = model(x) all_preds.append(scores.argmax(1).cpu().numpy()) all_targets.append(y.cpu().numpy()) all_probs.append(torch.softmax(scores, dim=1)[:, 1].cpu().numpy()) preds = np.concatenate(all_preds) targets = np.concatenate(all_targets) probs = np.concatenate(all_probs) accuracy = (preds == targets).mean() auc = roc_auc_score(targets, probs) if len(np.unique(targets)) > 1 else 0.5 return {"accuracy": float(accuracy), "auc": float(auc)} def make_loader(X, y, shuffle=False): dataset = TensorDataset(torch.FloatTensor(X), torch.LongTensor(y)) return DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=shuffle) def train_asmmd_model( Xtr_p3, ytr_p3, Xva_p3, yva_p3, Xtr_avo, ytr_avo, Xva_avo, yva_avo, n_channels, n_times, seed=42, ): """Train a single AS-MMD model. Parameters ---------- Xtr_*, ytr_* : numpy arrays Training data and labels for each domain Xva_*, yva_* : numpy arrays Validation data and labels for each domain """ torch.manual_seed(seed) np.random.seed(seed) # Create data loaders train_p3 = make_loader(Xtr_p3, ytr_p3, shuffle=True) val_p3 = make_loader(Xva_p3, yva_p3, shuffle=False) train_avo = make_loader(Xtr_avo, ytr_avo, shuffle=True) val_avo = make_loader(Xva_avo, yva_avo, shuffle=False) # Initialize model model = EEGConformer( n_chans=n_channels, n_outputs=2, # Binary: oddball vs. 
standard n_times=n_times, n_filters_time=40, filter_time_length=25, pool_time_length=75, pool_time_stride=15, drop_prob=0.5, num_layers=3, # Corrected from att_depth num_heads=4, # Corrected from att_heads att_drop_prob=0.5, ).to(DEVICE) optimizer = torch.optim.Adamax( model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY ) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=MAX_EPOCHS) # Compute domain-specific weights n_p3, n_avo = len(Xtr_p3), len(Xtr_avo) small_domain = "P3" if n_p3 < n_avo else "AVO" large_domain = "AVO" if small_domain == "P3" else "P3" # Training loop best_score = 0.0 best_state = None patience = 0 for epoch in range(1, MAX_EPOCHS + 1): model.train() # Warmup: gradually increase domain adaptation strength warmup_epoch = min(1.0, epoch / 20) loaders = {"P3": train_p3, "AVO": train_avo} itr_small = iter(loaders[small_domain]) for xb_large, yb_large in loaders[large_domain]: # Large domain batch x_large = normalize_data(xb_large).to(DEVICE) y_large = yb_large.to(DEVICE) scores_large = model(x_large) loss_cls = F.cross_entropy(scores_large, y_large) # Small domain batch try: xb_small, yb_small = next(itr_small) except StopIteration: itr_small = iter(loaders[small_domain]) xb_small, yb_small = next(itr_small) x_small = normalize_data(xb_small).to(DEVICE) y_small = yb_small.to(DEVICE) # Mixup on small domain x_mixed, y_a, y_b, lam = mixup_data(x_small, y_small) scores_mixed = model(x_mixed) loss_mixup = lam * compute_focal_loss(scores_mixed, y_a) + ( 1 - lam ) * compute_focal_loss(scores_mixed, y_b) # MMD alignment scores_orig = model(x_small) loss_mmd = warmup_epoch * compute_mmd_rbf( scores_large.detach(), scores_orig.detach() ) # Prototype alignment with torch.no_grad(): proto_large = compute_prototypes( scores_large.detach(), y_large, n_classes=2 ) loss_proto = warmup_epoch * compute_prototype_loss( scores_orig, y_small, proto_large ) # Combined loss loss = loss_cls + loss_mixup + 0.3 * loss_mmd + 0.5 * loss_proto 
optimizer.zero_grad() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0) optimizer.step() scheduler.step() # Validation val_p3_metrics = evaluate_model(model, val_p3, DEVICE) val_avo_metrics = evaluate_model(model, val_avo, DEVICE) # Track best model on small domain small_val = ( val_p3_metrics["accuracy"] if small_domain == "P3" else val_avo_metrics["accuracy"] ) if small_val > best_score: best_score = small_val best_state = model.state_dict() patience = 0 else: patience += 1 if (epoch % 10 == 0) or (epoch == 1): print( f"Epoch {epoch:3d} | P3 val: {val_p3_metrics['accuracy']:.3f} | " f"AVO val: {val_avo_metrics['accuracy']:.3f} | Score: {small_val:.3f}" ) if patience >= EARLY_STOPPING_PATIENCE: print(f"Early stopping at epoch {epoch}") break # Load best model if best_state is not None: model.load_state_dict(best_state) return model ``` ## Nested Cross-Validation We use nested CV to robustly estimate model performance: - **Outer folds (5)**: For test set evaluation - **Inner split**: Train/val split for hyperparameter tuning - **Repeats (5)**: Multiple random seeds for stability ```Python from sklearn.model_selection import StratifiedKFold, train_test_split import pandas as pd import warnings warnings.filterwarnings("ignore") def run_nested_cv(X_all, y_all, src_all, channels): """Run nested cross-validation with AS-MMD.""" n_channels = X_all.shape[1] n_times = X_all.shape[2] results = [] SEEDS = [42, 123, 456, 789, 321] for repeat in range(2): # 2 repeats for quick demo (use 5 for final results) print(f"\n{'=' * 60}") print(f"Repeat {repeat + 1}/2") print("=" * 60) cv = StratifiedKFold( n_splits=3, shuffle=True, random_state=SEEDS[repeat] ) # 3 folds for demo for fold_idx, (train_idx, test_idx) in enumerate(cv.split(X_all, y_all)): print(f"\nFold {fold_idx + 1}/3") X_tr, y_tr, src_tr = X_all[train_idx], y_all[train_idx], src_all[train_idx] X_te, y_te, src_te = X_all[test_idx], y_all[test_idx], src_all[test_idx] # Split train into 
train/val tr_idx, va_idx = train_test_split( np.arange(len(X_tr)), test_size=0.15, stratify=y_tr, random_state=42 ) # Extract per-domain data def get_domain(X, y, src, idx, domain): mask = src == domain indices = np.intersect1d(np.where(mask)[0], idx) return X[indices], y[indices] Xtr_p3, ytr_p3 = get_domain(X_tr, y_tr, src_tr, tr_idx, "P3") Xtr_avo, ytr_avo = get_domain(X_tr, y_tr, src_tr, tr_idx, "AVO") Xva_p3, yva_p3 = get_domain(X_tr, y_tr, src_tr, va_idx, "P3") Xva_avo, yva_avo = get_domain(X_tr, y_tr, src_tr, va_idx, "AVO") if len(Xtr_p3) == 0 or len(Xtr_avo) == 0: print(" Skipping: insufficient training samples") continue print(f" Train: P3={len(Xtr_p3)}, AVO={len(Xtr_avo)}") print(f" Val: P3={len(Xva_p3)}, AVO={len(Xva_avo)}") # Train model model = train_asmmd_model( Xtr_p3, ytr_p3, Xva_p3, yva_p3, Xtr_avo, ytr_avo, Xva_avo, yva_avo, n_channels, n_times, seed=SEEDS[repeat], ) # Evaluate on test set def test_domain(domain_label): mask = src_te == domain_label if not np.any(mask): return {"accuracy": 0.0, "auc": 0.5}, 0 loader = make_loader(X_te[mask], y_te[mask]) metrics = evaluate_model(model, loader, DEVICE) return metrics, np.sum(mask) def make_loader(X, y): return DataLoader( TensorDataset(torch.FloatTensor(X), torch.LongTensor(y)), batch_size=BATCH_SIZE, shuffle=False, ) m_p3, n_p3 = test_domain("P3") m_avo, n_avo = test_domain("AVO") overall_acc = (m_p3["accuracy"] * n_p3 + m_avo["accuracy"] * n_avo) / ( n_p3 + n_avo + 1e-8 ) print( f" Test: P3={m_p3['accuracy']:.3f} (n={n_p3}), AVO={m_avo['accuracy']:.3f} (n={n_avo})" ) results.append( { "repeat": repeat + 1, "fold": fold_idx + 1, "p3_acc": m_p3["accuracy"], "p3_auc": m_p3["auc"], "avo_acc": m_avo["accuracy"], "avo_auc": m_avo["auc"], "overall_acc": overall_acc, } ) return pd.DataFrame(results) ``` ## Execute Training ```Python print("\nStarting AS-MMD Training with Nested Cross-Validation...") print("=" * 60) results_df = run_nested_cv(X_all, y_all, src_all, COMMON_CHANNELS) # Print summary 
print("\n" + "=" * 60) print("RESULTS SUMMARY") print("=" * 60) print( f"\nOverall Accuracy: {results_df['overall_acc'].mean():.4f} ± {results_df['overall_acc'].std():.4f}" ) print("\nP3 Dataset:") print( f" Accuracy: {results_df['p3_acc'].mean():.4f} ± {results_df['p3_acc'].std():.4f}" ) print(f" AUC: {results_df['p3_auc'].mean():.4f} ± {results_df['p3_auc'].std():.4f}") print("\nAVO Dataset:") print( f" Accuracy: {results_df['avo_acc'].mean():.4f} ± {results_df['avo_acc'].std():.4f}" ) print(f" AUC: {results_df['avo_auc'].mean():.4f} ± {results_df['avo_auc'].std():.4f}") print("=" * 60) # Save results results_df.to_csv("asmmd_results.csv", index=False) print("\nResults saved to: asmmd_results.csv") ``` # Key Takeaways **Main Components of AS-MMD:** 1. **Classification Loss**: Standard cross-entropy on both datasets 2. **Mixup Regularization**: Interpolate between samples for better generalization 3. **MMD Alignment**: Match feature distributions across domains 4. **Prototype Alignment**: Pull small-domain features toward large-domain class centers 5. **Warmup Schedule**: Gradually introduce domain adaptation during training **When to Use This Method:** - You have limited data from your target domain - You have access to a related source domain (different equipment/site) - You want a single model that performs well on both domains - You need robust cross-dataset performance **Tips for Your Own Data:** - Verify channel names match between datasets (case-insensitive lowercasing helps) - Adjust BATCH_SIZE if memory is limited (try 16 or 32) - Increase MAX_EPOCHS if curves haven’t plateaued - Tune MMD weight (0.2-0.5) and prototype weight (0.5-0.8) based on domain similarity - Use more CV folds (5-10) for final results **References:** - Chen, W., Delorme, A. (2025). Adaptive Split-MMD Training for Small-Sample Cross-Dataset P300 EEG Classification. - Song et al. (2023). “EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization” - Long et al. (2015).
“Learning Transferable Features with Deep Adaptation Networks” # Next Steps - Try different EEG components (e.g., N1, P2, N2 instead of P3) - Extend to multi-class classification (e.g., oddball paradigm variants) - Apply to other tasks (motor imagery, sleep staging, seizure detection) - Experiment with other backbones (ResNet, LSTM) instead of EEGConformer - Implement subject-independent vs. subject-specific models # Computation times **01:57.318** total execution time for 4 files **from generated/auto_examples/core**: | Example | Time | Mem (MB) | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|------------| | [Minimal Tutorial](tutorial_minimal.md#sphx-glr-generated-auto-examples-core-tutorial-minimal-py) (`tutorial_minimal.py`) | 01:48.298 | 0 | | [Eyes Open vs. Closed Classification](tutorial_eoec.md#sphx-glr-generated-auto-examples-core-tutorial-eoec-py) (`tutorial_eoec.py`) | 00:06.323 | 0 | | [EEGDash Feature Extractor](tutorial_feature_extractor_open_close_eye.md#sphx-glr-generated-auto-examples-core-tutorial-feature-extractor-open-close-eye-py) (`tutorial_feature_extractor_open_close_eye.py`) | 00:02.698 | 0 | | [EEG P3 Transfer Learning with AS-MMD](p300_transfer_learning.md#sphx-glr-generated-auto-examples-core-p300-transfer-learning-py) (`p300_transfer_learning.py`) | 00:00.000 | 0 | # Eyes Open vs. Closed Classification EEGDash example for eyes open vs. closed classification. This example uses the `eegdash` library in combination with PyTorch to develop a deep learning model for analyzing EEG data, specifically for eyes open vs. closed classification in a single subject. 1. **Data Retrieval Using EEGDash**: An instance of `eegdash.api.EEGDashDataset` is created to search and retrieve an EEG dataset. At this step, only the metadata is transferred. 2. 
**Data Preprocessing Using BrainDecode**: This process preprocesses EEG data using Braindecode by reannotating events, selecting specific channels, resampling, filtering, and extracting 2-second epochs, ensuring balanced eyes-open and eyes-closed data for analysis. 3. **Creating train and testing sets**: The dataset is split into training (80%) and testing (20%) sets with balanced labels, converted into PyTorch tensors, and wrapped in DataLoader objects for efficient mini-batch training. 4. **Model Definition**: The model is a shallow convolutional neural network (ShallowFBCSPNet) with 24 input channels (EEG channels), 2 output classes (eyes-open and eyes-closed). 5. **Model Training and Evaluation Process**: This section trains the neural network, normalizes input data, computes cross-entropy loss, updates model parameters, and evaluates classification accuracy over six epochs. ## Data Retrieval Using EEGDash This section instantiates `eegdash.api.EEGDashDataset` to fetch the metadata for the experiment before requesting any recordings. First we find one resting state dataset. This dataset contains both eyes open and eyes closed data. ```Python from pathlib import Path from eegdash.paths import get_default_cache_dir cache_folder = Path(get_default_cache_dir()).resolve() cache_folder.mkdir(parents=True, exist_ok=True) ``` ```Python from eegdash import EEGDashDataset ds_eoec = EEGDashDataset( query={"dataset": "ds005514", "task": "RestingState", "subject": "NDARDB033FW5"}, cache_dir=cache_folder, ) ``` ```none ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This notice is only for users who are participating in the EEG 2025 │ │ Competition. │ │ │ │ EEG 2025 Competition Data Notice! │ │ You are loading one of the datasets that is used in competition, but via │ │ `EEGDashDataset`. 
│ │ │ │ IMPORTANT: │ │ If you download data from `EEGDashDataset`, it is NOT identical to the │ │ official │ │ competition data, which is accessed via `EEGChallengeDataset`. The │ │ competition data has been downsampled and filtered. │ │ │ │ If you are participating in the competition, │ │ you must use the `EEGChallengeDataset` object to ensure consistency. │ │ │ │ If you are not participating in the competition, you can ignore this │ │ message. │ ╰─────────────────────────── Source: EEGDashDataset ───────────────────────────╯ ``` ## Data Preprocessing Using Braindecode [braindecode](https://braindecode.org/stable/install/install.html) is a specialized library for preprocessing EEG and MEG data. In this dataset, there are two key events in the continuous data: **instructed_toCloseEyes**, marking the start of a 40-second eyes-closed period, and **instructed_toOpenEyes**, indicating the start of a 20-second eyes-open period. For the eyes-closed event, we extract 14 seconds of data from 15 to 29 seconds after the event onset. Similarly, for the eyes-open event, we extract data from 5 to 19 seconds after the event onset. This ensures an equal amount of data for both conditions. The event extraction is handled by the custom function `eegdash.hbn.preprocessing.hbn_ec_ec_reannotation()`. Next, we apply four preprocessing steps in Braindecode: 1. **Reannotation** of event markers using `eegdash.hbn.preprocessing.hbn_ec_ec_reannotation()`. 2. **Selection** of 24 specific EEG channels from the original 128. 3. **Resampling** the EEG data to a frequency of 128 Hz. 4. **Filtering** the EEG signals to retain frequencies between 1 Hz and 55 Hz. When calling the preprocess function, the data is retrieved from the remote repository. Finally, we use create_windows_from_events to extract 2-second epochs from the data. These epochs serve as the dataset samples. At this stage, each sample is automatically labeled with the corresponding event type (eyes-open or eyes-closed). 
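The extraction windows described above can be sanity-checked with quick arithmetic (a standalone sketch; the 2-second window length and 128 Hz rate come from the epoching and resampling steps in this tutorial):

```Python
sfreq = 128   # Hz, after resampling
window_sec = 2

# Spans in seconds relative to each event onset, as described above
spans = {"eyes-closed": (15, 29), "eyes-open": (5, 19)}

for name, (start, stop) in spans.items():
    duration = stop - start              # 14 s of usable data per condition
    n_windows = duration // window_sec   # 7 non-overlapping 2 s epochs
    n_samples = window_sec * sfreq       # 256 samples per epoch
    print(f"{name}: {duration} s -> {n_windows} windows x {n_samples} samples")
```

Both conditions yield the same number of epochs per block, which matches the label printout later in this example.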
windows_ds is a PyTorch dataset, and when queried, it returns labels for eyes-open and eyes-closed (assigned as labels 0 and 1, corresponding to their respective event markers). ```Python from braindecode.preprocessing import ( preprocess, Preprocessor, create_windows_from_events, ) import numpy as np from eegdash.hbn.preprocessing import hbn_ec_ec_reannotation import warnings warnings.simplefilter("ignore", category=RuntimeWarning) # BrainDecode preprocessors preprocessors = [ hbn_ec_ec_reannotation(), Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(ds_eoec, preprocessors) # Extract 2-second segments windows_ds = create_windows_from_events( ds_eoec, trial_start_offset_samples=0, trial_stop_offset_samples=256, preload=True, ) ``` ```none /home/runner/work/EEGDash/EEGDash/.venv/lib/python3.11/site-packages/braindecode/preprocessing/preprocess.py:77: UserWarning: apply_on_array can only be True if fn is a callable function. Automatically correcting to apply_on_array=False. warn( [04/19/26 15:18:12] WARNING File not found on S3, skipping: downloader.py:146 s3://openneuro.org/ds005514/sub-N DARDB033FW5/eeg/sub-NDARDB033FW5_ task-RestingState_eeg.fdt Used Annotations descriptions: [np.str_('boundary'), np.str_('break cnt'), np.str_('instructed_toCloseEyes'), np.str_('instructed_toOpenEyes'), np.str_('resting_start')] INFO Original events found with ids: preprocessing.py:66 {np.str_('boundary'): 1, np.str_('break cnt'): 2, np.str_('instructed_toCloseEyes '): 3, np.str_('instructed_toOpenEyes' ): 4, np.str_('resting_start'): 5} NOTE: pick_channels() is a legacy function. New code should use inst.pick(...). 
Filtering raw data in 1 contiguous segment Setting up band-pass filter from 1 - 55 Hz FIR filter parameters --------------------- Designing a one-pass, zero-phase, non-causal bandpass filter: - Windowed time-domain design (firwin) method - Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation - Lower passband edge: 1.00 - Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz) - Upper passband edge: 55.00 Hz - Upper transition bandwidth: 9.00 Hz (-6 dB cutoff frequency: 59.50 Hz) - Filter length: 423 samples (3.305 s) ``` ## Plotting a Single Channel for One Sample It’s always a good practice to verify that the data has been properly loaded and processed. Here, we plot a single channel from one sample to ensure the signal is present and looks as expected. ```Python import matplotlib.pyplot as plt plt.figure() plt.plot(windows_ds[2][0][0, :].transpose()) # first channel of first epoch plt.show() ``` ## Creating training and test sets The code below creates a training and test set. We first split the data into training and test sets using the **train_test_split** function from the **sklearn** library. We then create a **TensorDataset** for the training and test sets. 1. **Set Random Seed** – The random seed is fixed using torch.manual_seed(random_state) to ensure reproducibility in dataset splitting and model training. 2. **Extract Labels from the Dataset** – Labels (eye-open or eye-closed events) are extracted from windows_ds, stored as a NumPy array, and printed for verification. 3. **Split Dataset into Train and Test Sets** – The dataset is split into training (80%) and testing (20%) subsets using train_test_split(), ensuring balanced stratification based on the extracted labels. 4. **Convert Data to PyTorch Tensors** – The selected training and testing samples are converted into FloatTensor for input features and LongTensor for labels, making them compatible with PyTorch models. 5. 
**Create DataLoaders** – The datasets are wrapped in PyTorch DataLoader objects with a batch size of 10, enabling efficient mini-batch training and shuffling. ```Python import torch from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader from torch.utils.data import TensorDataset # Set random seed for reproducibility random_state = 42 torch.manual_seed(random_state) np.random.seed(random_state) # Extract labels from the dataset eo_ec = np.array([ds[1] for ds in windows_ds]).transpose() # check labels print("labels: ", eo_ec) # Split into train/test indices, stratified by the eye-condition labels train_indices, test_indices = train_test_split( range(len(windows_ds)), test_size=0.2, stratify=eo_ec, random_state=random_state ) # Convert the data to tensors X_train = torch.FloatTensor( np.array([windows_ds[i][0] for i in train_indices]) ) # Convert list of arrays to single tensor X_test = torch.FloatTensor( np.array([windows_ds[i][0] for i in test_indices]) ) # Convert list of arrays to single tensor y_train = torch.LongTensor(eo_ec[train_indices]) # Convert targets to tensor y_test = torch.LongTensor(eo_ec[test_indices]) # Convert targets to tensor dataset_train = TensorDataset(X_train, y_train) dataset_test = TensorDataset(X_test, y_test) # Create data loaders for training and testing (batch size 10) train_loader = DataLoader(dataset_train, batch_size=10, shuffle=True) test_loader = DataLoader(dataset_test, batch_size=10, shuffle=True) # Print shapes and sizes to verify split print( f"Shape of data {X_train.shape} number of samples - Train: {len(train_loader)}, Test: {len(test_loader)}" ) print( f"Eyes-Open/Eyes-Closed balance, train: {np.mean(eo_ec[train_indices]):.2f}, test: {np.mean(eo_ec[test_indices]):.2f}" ) ``` ```none labels: [1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0] Shape of data torch.Size([56, 24, 256]) number of samples - Train: 6, Test: 2
Eyes-Open/Eyes-Closed balance, train: 0.50, test: 0.50
```

## Check labels

It is good practice to verify the labels and ensure the random seed is functioning correctly. If all labels are 0s (eyes closed) or 1s (eyes open), it could indicate an issue with data loading or stratification, requiring further investigation.

Visualize a batch of target labels

```Python
dataiter = iter(train_loader)
first_item, label = next(dataiter)
label
```

```none
tensor([0, 1, 1, 1, 1, 0, 1, 1, 0, 0])
```

## Create model

The model is a shallow convolutional neural network (ShallowFBCSPNet) with 24 input channels (EEG channels), 2 output classes (eyes-open and eyes-closed), and an input window size of 256 samples (2 seconds of EEG data).

```Python
import torch
import numpy as np
from torch.nn import functional as F
from braindecode.models import ShallowFBCSPNet
from torchinfo import summary

torch.manual_seed(random_state)
model = ShallowFBCSPNet(24, 2, n_times=256, final_conv_length="auto")
summary(model, input_size=(1, 24, 256))
```

```none
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
ShallowFBCSPNet                          [1, 2]                    --
├─Ensure4d: 1-1                          [1, 24, 256, 1]           --
├─Rearrange: 1-2                         [1, 1, 256, 24]           --
├─CombinedConv: 1-3                      [1, 40, 232, 1]           39,440
├─BatchNorm2d: 1-4                       [1, 40, 232, 1]           80
├─Square: 1-5                            [1, 40, 232, 1]           --
├─AvgPool2d: 1-6                         [1, 40, 11, 1]            --
├─SafeLog: 1-7                           [1, 40, 11, 1]            --
├─Dropout: 1-8                           [1, 40, 11, 1]            --
├─Sequential: 1-9                        [1, 2]                    --
│    └─Conv2d: 2-1                       [1, 2, 1, 1]              882
│    └─SqueezeFinalOutput: 2-2           [1, 2]                    --
│    │    └─Rearrange: 3-1               [1, 2, 1]                 --
==========================================================================================
Total params: 40,402
Trainable params: 40,402
Non-trainable params: 0
Total mult-adds (Units.MEGABYTES): 0.00
==========================================================================================
Input size (MB): 0.02
Forward/backward pass size (MB): 0.07
Params size (MB): 0.00
Estimated Total Size (MB): 0.10
==========================================================================================
```

## Model Training and Evaluation Process

This section trains the neural network using the Adamax optimizer, normalizes input data, computes cross-entropy loss, updates model parameters, and tracks accuracy across six epochs.

1. **Set Up Optimizer and Learning Rate Scheduler** – The Adamax optimizer initializes with a learning rate of 0.002 and weight decay of 0.001 for regularization. An ExponentialLR scheduler with a decay factor of 1 keeps the learning rate constant.
2.
**Allocate Model to Device** – The model moves to the specified device (CPU, GPU, or MPS for Mac silicon) to optimize computation efficiency.
3. **Normalize Input Data** – The `normalize_data` function standardizes input data by subtracting the mean and dividing by the standard deviation along the time dimension before transferring it to the appropriate device.
4. **Evaluate Classification Accuracy Over Six Epochs** – The training loop iterates through data batches with the model in training mode. It normalizes inputs, computes predictions, calculates cross-entropy loss, performs backpropagation, updates model parameters, and steps the learning rate scheduler. It tracks correct predictions to compute accuracy.
5. **Evaluate on Test Data** – After each epoch, the model runs in evaluation mode on the test set. It computes predictions on normalized data and calculates test accuracy by comparing outputs with actual labels.

```Python
optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1)
device = torch.device(
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
model = model.to(device=device)  # move the model parameters to CPU/GPU
epochs = 6


def normalize_data(x):
    mean = x.mean(dim=2, keepdim=True)
    std = x.std(dim=2, keepdim=True) + 1e-7  # add small epsilon for numerical stability
    x = (x - mean) / std
    x = x.to(device=device, dtype=torch.float32)  # move to device, e.g. GPU
    return x


for e in range(epochs):
    # training
    model.train()  # put model to training mode
    correct_train = 0
    for t, (x, y) in enumerate(train_loader):
        scores = model(normalize_data(x))
        y = y.to(device=device, dtype=torch.long)
        _, preds = scores.max(1)
        correct_train += (preds == y).sum() / len(dataset_train)
        loss = F.cross_entropy(scores, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()

    # Validation
    model.eval()  # put model to evaluation mode
    correct_test = 0
    with torch.no_grad():  # no gradients needed for evaluation
        for t, (x, y) in enumerate(test_loader):
            scores = model(normalize_data(x))
            y = y.to(device=device, dtype=torch.long)
            _, preds = scores.max(1)
            correct_test += (preds == y).sum() / len(dataset_test)

    # Reporting
    print(
        f"Epoch {e}, Train accuracy: {correct_train:.2f}, Test accuracy: {correct_test:.2f}"
    )
```

```none
Epoch 0, Train accuracy: 0.66, Test accuracy: 0.50
Epoch 1, Train accuracy: 0.79, Test accuracy: 0.50
Epoch 2, Train accuracy: 0.91, Test accuracy: 0.50
Epoch 3, Train accuracy: 0.88, Test accuracy: 0.57
Epoch 4, Train accuracy: 0.91, Test accuracy: 0.57
Epoch 5, Train accuracy: 0.88, Test accuracy: 0.50
```

**Total running time of the script:** (0 minutes 6.323 seconds)

# EEGDash Feature Extractor

EEGDash example for eyes open vs. closed classification. This example uses the `eegdash` library in combination with PyTorch to develop a deep learning model for analyzing EEG data, specifically for eyes open vs. closed classification in a single subject.

1. **Data Retrieval Using EEGDash**: An instance of `eegdash.api.EEGDashDataset` is created to search and retrieve an EEG dataset. At this step, only the metadata is transferred.
2. **Data Preprocessing Using Braindecode**: This process preprocesses EEG data using Braindecode by reannotating events, selecting specific channels, resampling, filtering, and extracting 2-second epochs, ensuring balanced eyes-open and eyes-closed data for analysis.
3.
**Creating training and test sets**: The dataset is split into training (80%) and testing (20%) sets with balanced labels, converted into PyTorch tensors, and wrapped in DataLoader objects for efficient mini-batch training.
4. **Model Definition**: The model is a shallow convolutional neural network (ShallowFBCSPNet) with 24 input channels (EEG channels) and 2 output classes (eyes-open and eyes-closed).
5. **Model Training and Evaluation Process**: This section trains the neural network, normalizes input data, computes cross-entropy loss, updates model parameters, and evaluates classification accuracy over six epochs.

## Data Retrieval Using EEGDash

We instantiate `eegdash.api.EEGDashDataset` to pull the experiment metadata and build the dataset definition. First we find one resting-state dataset. This dataset contains both eyes-open and eyes-closed data.

```Python
from pathlib import Path

from eegdash import EEGDashDataset
from eegdash.paths import get_default_cache_dir

cache_folder = Path(get_default_cache_dir()).resolve()
cache_folder.mkdir(parents=True, exist_ok=True)

ds_eoec = EEGDashDataset(
    dataset="ds005514",
    task="RestingState",
    subject="NDARDB033FW5",
    cache_dir=cache_folder,
)
```

## Data Preprocessing Using Braindecode

[Braindecode](https://braindecode.org/stable/install/install.html) is a specialized library for preprocessing EEG and MEG data. In this dataset, there are two key events in the continuous data: **instructed_toCloseEyes**, marking the start of a 40-second eyes-closed period, and **instructed_toOpenEyes**, indicating the start of a 20-second eyes-open period.

For the eyes-closed event, we extract 14 seconds of data from 15 to 29 seconds after the event onset. Similarly, for the eyes-open event, we extract data from 5 to 19 seconds after the event onset. This ensures an equal amount of data for both conditions.
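Taking those numbers at face value, both intervals are 14 seconds long, so non-overlapping 2-second windows yield the same number of epochs per condition. A quick stdlib-only sanity check (interval values copied from the text above, not read from the library):

```python
# Extraction intervals in seconds after each event onset (from the text above)
eyes_closed = (15.0, 29.0)  # inside the 40 s eyes-closed period
eyes_open = (5.0, 19.0)     # inside the 20 s eyes-open period
window_s = 2.0              # non-overlapping 2-second windows


def n_windows(interval, window_s):
    """Number of full non-overlapping windows that fit in the interval."""
    start, stop = interval
    return int((stop - start) // window_s)


# Both conditions contribute the same number of 2 s epochs per block
assert n_windows(eyes_closed, window_s) == n_windows(eyes_open, window_s) == 7
```

This is only bookkeeping; the actual windowing is done by `create_windows_from_events` below.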
The event extraction is handled by the custom function `eegdash.hbn.preprocessing.hbn_ec_ec_reannotation()`. Next, we apply four preprocessing steps in Braindecode: 1. **Reannotation** of event markers using `eegdash.hbn.preprocessing.hbn_ec_ec_reannotation()`. 2. **Selection** of 24 specific EEG channels from the original 128. 3. **Resampling** the EEG data to a frequency of 128 Hz. 4. **Filtering** the EEG signals to retain frequencies between 1 Hz and 55 Hz. When calling the preprocess function, the data is retrieved from the remote repository. Finally, we use create_windows_from_events to extract 2-second epochs from the data. These epochs serve as the dataset samples. At this stage, each sample is automatically labeled with the corresponding event type (eyes-open or eyes-closed). windows_ds is a PyTorch dataset, and when queried, it returns labels for eyes-open and eyes-closed (assigned as labels 0 and 1, corresponding to their respective event markers). ```Python from eegdash.hbn.preprocessing import hbn_ec_ec_reannotation from braindecode.preprocessing import ( preprocess, Preprocessor, create_windows_from_events, ) import numpy as np import warnings warnings.simplefilter("ignore", category=RuntimeWarning) # BrainDecode preprocessors preprocessors = [ hbn_ec_ec_reannotation(), Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(ds_eoec, preprocessors) # Extract 2-second segments windows_ds = create_windows_from_events( ds_eoec, trial_start_offset_samples=0, trial_stop_offset_samples=int(2 * ds_eoec.datasets[0].raw.info["sfreq"]), preload=True, ) ``` ## Plotting a Single Channel for One Sample It’s always a good practice to verify that the data has been properly loaded and processed. 
Here, we plot a single channel from one sample to ensure the signal is present and looks as expected. ```Python import matplotlib.pyplot as plt plt.figure() plt.plot(windows_ds[2][0][0, :].transpose()) # first channel of first epoch plt.show() ``` ## Features ```Python from eegdash import features from eegdash.features import extract_features from functools import partial sfreq = windows_ds.datasets[0].raw.info["sfreq"] # Support both old (dict) and new (list) braindecode preproc metadata formats preproc_data = windows_ds.datasets[0].raw_preproc_kwargs if isinstance(preproc_data, list): # Find the 'filter' preprocessor in the list of dicts/objects filter_kwargs = {} for item in preproc_data: if isinstance(item, dict) and ( item.get("fn") == "filter" or item.get("__class_path__") == "filter" ): filter_kwargs = item.get("kwargs", {}) break elif hasattr(item, "fn") and getattr(item.fn, "__name__", "") == "filter": filter_kwargs = getattr(item, "kwargs", {}) break filter_freqs = filter_kwargs else: filter_freqs = preproc_data.get("filter", {}) features_dict = { "sig": features.FeatureExtractor( { "mean": features.signal_mean, "var": features.signal_variance, "std": features.signal_std, "skew": features.signal_skewness, "kurt": features.signal_kurtosis, "rms": features.signal_root_mean_square, "ptp": features.signal_peak_to_peak, "quan.1": partial(features.signal_quantile, q=0.1), "quan.9": partial(features.signal_quantile, q=0.9), "line_len": features.signal_line_length, "zero_x": features.signal_zero_crossings, }, ), "spec": features.FeatureExtractor( preprocessor=partial( features.spectral_preprocessor, fs=sfreq, f_min=filter_freqs["l_freq"], f_max=filter_freqs["h_freq"], nperseg=2 * sfreq, noverlap=int(1.5 * sfreq), ), feature_extractors={ "rtot_power": features.spectral_root_total_power, "band_power": partial( features.spectral_bands_power, bands={ "theta": (4.5, 8), "alpha": (8, 12), "beta": (12, 30), }, ), 0: features.FeatureExtractor( 
preprocessor=features.spectral_normalized_preprocessor, feature_extractors={ "moment": features.spectral_moment, "entropy": features.spectral_entropy, "edge": partial(features.spectral_edge, edge=0.9), }, ), 1: features.FeatureExtractor( preprocessor=features.spectral_db_preprocessor, feature_extractors={ "slope": features.spectral_slope, }, ), }, ), } features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` ```Python features_ds.to_dataframe(include_crop_inds=True) ``` ```Python features_ds.fillna(0) features_ds.zscore(eps=1e-7) ``` ```Python features_ds.to_dataframe(include_target=True) ``` ## Creating training and test sets The code below creates a training and test set. We first split the data into training and test sets using the **train_test_split** function from the **sklearn** library. We then create a **TensorDataset** for the training and test sets. 1. **Set Random Seed** – The random seed is fixed using torch.manual_seed(random_state) to ensure reproducibility in dataset splitting and model training. 2. **Extract Labels from the Dataset** – Labels (eye-open or eye-closed events) are extracted from windows or features, stored as a NumPy array, and printed for verification. 3. **Split Dataset into Train and Test Sets** – The dataset is split into training (80%) and testing (20%) subsets using train_test_split(), ensuring balanced stratification based on the extracted labels. 4. **Convert Data to PyTorch Tensors** – The selected training and testing samples are converted into FloatTensor for input features and LongTensor for labels, making them compatible with PyTorch models. 5. **Create DataLoaders** – The datasets are wrapped in PyTorch DataLoader objects with a batch size of 10, enabling efficient mini-batch training and shuffling. 
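The stratification in step 3 can be pictured with a small stdlib-only sketch; in the example itself, scikit-learn's `train_test_split` handles this, and the helper below is purely illustrative:

```python
import random


def stratified_split(labels, test_frac=0.2, seed=42):
    """Split indices so each class contributes proportionally to the test set."""
    rng = random.Random(seed)
    by_label = {}
    for i, y in enumerate(labels):
        by_label.setdefault(y, []).append(i)
    train, test = [], []
    for idxs in by_label.values():
        idxs = idxs[:]
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_frac))
        test += idxs[:n_test]
        train += idxs[n_test:]
    return sorted(train), sorted(test)


labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
train_idx, test_idx = stratified_split(labels)
# every index is used exactly once, and the test set contains both classes
assert sorted(train_idx + test_idx) == list(range(len(labels)))
assert {labels[i] for i in test_idx} == {0, 1}
```

Without stratification, a small dataset like this one can easily end up with a test set dominated by one class.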
```Python
import torch
from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader, TensorDataset

# Set random seed for reproducibility
random_state = 42
torch.manual_seed(random_state)
np.random.seed(random_state)

# Extract labels from the dataset
eo_ec = np.array([ds[1] for ds in features_ds]).ravel()

# Check labels
print("labels: ", eo_ec)

# Stratified split: keep the eyes-open/eyes-closed ratio in both subsets
train_indices, test_indices = train_test_split(
    range(len(features_ds)), test_size=0.2, stratify=eo_ec, random_state=random_state
)

# Convert the data to tensors
X_train = torch.FloatTensor(
    np.array([features_ds[i][0] for i in train_indices])
)  # Convert list of arrays to single tensor
X_test = torch.FloatTensor(
    np.array([features_ds[i][0] for i in test_indices])
)  # Convert list of arrays to single tensor
y_train = torch.LongTensor(eo_ec[train_indices])  # Convert targets to tensor
y_test = torch.LongTensor(eo_ec[test_indices])  # Convert targets to tensor
dataset_train = TensorDataset(X_train, y_train)
dataset_test = TensorDataset(X_test, y_test)

# Create data loaders for training and testing (batch size 10)
train_loader = DataLoader(dataset_train, batch_size=10, shuffle=True)
test_loader = DataLoader(dataset_test, batch_size=10, shuffle=True)

# Print shapes and sizes to verify the split (len(loader) counts batches, not samples)
print(
    f"Shape of data {X_train.shape} number of batches - Train: {len(train_loader)}, Test: {len(test_loader)}"
)
print(
    f"Eyes-Open/Eyes-Closed balance, train: {np.mean(eo_ec[train_indices]):.2f}, test: {np.mean(eo_ec[test_indices]):.2f}"
)
```

## Check labels

It is good practice to verify the labels and ensure the random seed is functioning correctly. If all labels are 0s (eyes closed) or 1s (eyes open), it could indicate an issue with data loading or stratification, requiring further investigation.
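Beyond eyeballing a single batch, counting every label with the standard library's `collections.Counter` exposes a degenerate split immediately (toy labels shown here, not values read from the dataset):

```python
from collections import Counter

# Hypothetical label vector standing in for y_train
toy_train_labels = [0, 1, 1, 0, 1, 0, 0, 1]

counts = Counter(toy_train_labels)
# a healthy stratified binary split keeps both classes in near-equal numbers
assert set(counts) == {0, 1}
assert abs(counts[0] - counts[1]) <= 1
```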
```Python
# Visualize a batch of target labels
dataiter = iter(train_loader)
first_item, label = next(dataiter)
label
```

## Create model

The model is a small multilayer perceptron (MLP) that takes the flattened feature vector as input and outputs 2 classes (eyes-open and eyes-closed).

```Python
import torch
from torch import nn
from torchinfo import summary

torch.manual_seed(random_state)

# MLP (stacked linear layers, with no nonlinearities between them)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(features_ds.datasets[0].n_features, 100),
    nn.Linear(100, 100),
    nn.Linear(100, 100),
    nn.Linear(100, 2),
)
summary(model, input_size=first_item.shape)
```

## Model Training and Evaluation Process

This section trains the neural network using the Adamax optimizer, computes cross-entropy loss, updates model parameters, and tracks accuracy across six epochs.

1. **Set Up Optimizer and Learning Rate Scheduler** – The Adamax optimizer initializes with a learning rate of 0.002 and weight decay of 0.001 for regularization. An ExponentialLR scheduler with a decay factor of 1 keeps the learning rate constant.
2. **Allocate Model to Device** – The model moves to the specified device (here simply the CPU).
3. **Input Features** – The features were already z-scored with `features_ds.zscore`, so the loop feeds them to the model without additional per-batch normalization.
4. **Evaluate Classification Accuracy Over Six Epochs** – The training loop iterates through data batches with the model in training mode. It computes predictions, calculates cross-entropy loss, performs backpropagation, updates model parameters, and steps the learning rate scheduler. It tracks correct predictions to compute accuracy.
5. **Evaluate on Test Data** – After each epoch, the model runs in evaluation mode on the test set.
It computes predictions on normalized data and calculates test accuracy by comparing outputs with actual labels. ```Python from torch.nn import functional as F optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.001) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1) device = torch.device("cpu") model = model.to(device=device) # move the model parameters to CPU/GPU epochs = 6 for e in range(epochs): # training correct_train = 0 for t, (x, y) in enumerate(train_loader): model.train() # put model to training mode scores = model(x) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_train += (preds == y).sum() / len(dataset_train) loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() scheduler.step() # Validation correct_test = 0 for t, (x, y) in enumerate(test_loader): model.eval() # put model to testing mode scores = model(x) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_test += (preds == y).sum() / len(dataset_test) # Reporting print( f"Epoch {e}, Train accuracy: {correct_train:.2f}, Test accuracy: {correct_test:.2f}" ) ``` ```Python from lightgbm import LGBMClassifier data_df = features_ds.to_dataframe(include_target=True) X_train, y_train = ( data_df.drop("target", axis=1).iloc[train_indices], data_df.loc[train_indices, "target"], ) X_val, y_val = ( data_df.drop("target", axis=1).iloc[test_indices], data_df.loc[test_indices, "target"], ) clf = LGBMClassifier(n_jobs=1) clf.fit(X_train, y_train) y_hat_train = clf.predict(X_train) correct_train = (y_train == y_hat_train).mean() y_hat_val = clf.predict(X_val) correct_val = (y_val == y_hat_val).mean() print(f"Train accuracy: {correct_train:.2f}, Validation accuracy: {correct_val:.2f}\n") ``` ```Python from lightgbm import plot_importance plot_importance(clf, importance_type="split", max_num_features=10) ``` ```Python plot_importance(clf, importance_type="gain", max_num_features=10) ``` # 
Minimal Tutorial

This is a minimal tutorial demonstrating how to use EEGDash with Braindecode.

```none
╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮
│ This object loads the HBN dataset that has been preprocessed for the EEG     │
│ Challenge:                                                                   │
│   * Downsampled from 500Hz to 100Hz                                          │
│   * Bandpass filtered (0.5-50 Hz)                                            │
│                                                                              │
│ For details on the preprocessing applied for the competition, see:           │
│ https://github.com/eeg2025/downsample-datasets                               │
│                                                                              │
│ The HBN dataset has some preprocessing applied by the HBN team:              │
│   * Re-reference (Cz Channel)                                                │
│                                                                              │
│ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to   │
│ what you get from EEGDashDataset directly.                                   │
│ If you are participating in the competition, always use                      │
│ `EEGChallengeDataset` to ensure consistency with the challenge data.         │
╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯
```

```Python
import torch
import torch.nn.functional as F
from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader

from braindecode.models import EEGConformer
from braindecode.preprocessing import (
    Preprocessor,
    create_fixed_length_windows,
    preprocess,
)

from eegdash.dataset import EEGChallengeDataset
from eegdash.paths import get_default_cache_dir

# Load data
dataset = EEGChallengeDataset(
    release="R1",
    task="contrastChangeDetection",
    description_fields=["p_factor"],
    cache_dir=get_default_cache_dir(),
)

# Filter out any non-EEG files (e.g.
.tsv/.json sidecars) valid_extensions = (".vhdr", ".edf", ".bdf", ".set") dataset.datasets = [ ds for ds in dataset.datasets if str(ds.bidspath).endswith(valid_extensions) ] # Preprocess preprocess( dataset, [ Preprocessor("resample", sfreq=100), Preprocessor("filter", l_freq=1, h_freq=35), ], ) # Segment into windows windows_ds = create_fixed_length_windows( dataset, window_size_samples=200, window_stride_samples=200, drop_last_window=True, ) # Split and create loaders (not splitting by subjects, so there is obvious leakage) train_ds, test_ds = train_test_split(windows_ds, test_size=0.2, random_state=42) train_loader = DataLoader(train_ds, batch_size=100) test_loader = DataLoader(test_ds, batch_size=100) # Define model and optimizer model = EEGConformer( n_chans=129, n_outputs=1, n_times=200, num_layers=4, num_heads=8, ) optimizer = torch.optim.Adam(model.parameters(), lr=0.00002, weight_decay=1e-2) # Train for epoch in range(1): model.train() for batch in train_loader: optimizer.zero_grad() loss = F.mse_loss(model(batch[0]), batch[1].float().unsqueeze(1)) loss.backward() optimizer.step() print(f"Epoch {epoch}, Train Loss: {loss.item()}") model.eval() with torch.no_grad(): for batch in test_loader: loss = F.mse_loss(model(batch[0]), batch[1].float().unsqueeze(1)) print(f"Epoch {epoch}, Test Loss: {loss.item()}") ``` **Total running time of the script:** (1 minutes 48.298 seconds) # Exploring Braindecode’s BIDSDataset Tests showing BIDSDataset not able to handle example EEGLAB dataset and slower than pybids ```Python from pathlib import Path import os os.environ.setdefault("NUMBA_DISABLE_JIT", "1") os.environ.setdefault("_MNE_FAKE_HOME_DIR", str(Path.cwd())) (Path(os.environ["_MNE_FAKE_HOME_DIR"]) / ".mne").mkdir(exist_ok=True) from bids import BIDSLayout from braindecode.datasets import BIDSDataset from eegdash import EEGDash, EEGDashDataset CACHE_DIR = Path(os.getenv("EEGDASH_CACHE_DIR", Path.cwd() / "eegdash_cache")).resolve() CACHE_DIR.mkdir(parents=True, 
exist_ok=True) DATASET_ID = os.getenv("EEGDASH_DATASET_ID", "ds002718") eegdash = EEGDash() records = eegdash.find({"dataset": DATASET_ID}, limit=3) if not records: raise RuntimeError(f"No records found for dataset {DATASET_ID}.") dataset = EEGDashDataset(cache_dir=CACHE_DIR, records=records) try: _ = dataset.datasets[0].raw except RuntimeError as exc: print(f"Raw read failed (likely missing coordsystem.json): {exc}") root = CACHE_DIR / DATASET_ID bids = BIDSDataset(root=str(root), preload=False) # Can't import regular EEGLAB dataset ``` Tests showing pybids utilities as well as limitations - Recording files can be retrieved fast - File path can be mapped to BIDS file using simple additional parsing - Needed info such as duration and channel count can be retrieved easily - Not all file level metadata files can be retrieved even though they exist - Top level json associated with a file can’t be retrieved from file level ```Python def get_recordings(layout: BIDSLayout): extensions = { ".set": [".set", ".fdt"], # eeglab ".edf": [".edf"], # european ".vhdr": [".eeg", ".vhdr", ".vmrk", ".dat", ".raw"], # brainvision ".bdf": [".bdf"], # biosemi } files = [] for ext, exts in extensions.items(): files = layout.get(extension=ext, return_type="filename") if files: break return files print(get_recordings(BIDSLayout(str(root)))) ``` ```Python layout = BIDSLayout(str(root)) # get file from path recordings = get_recordings(layout) if not recordings: raise RuntimeError(f"No EEG recordings found under {root}.") example_file = recordings[0] entities = layout.parse_file_entities(example_file) bidsfile = layout.get(**entities)[0] print(bidsfile) ``` ```Python import pprint # get general info of a recording pprint.pprint(bidsfile.get_entities(metadata="all")) ``` get associations doesn’t give us all desired bids dependencies ```Python bidsfile.get_associations() ``` top level events.json can’t be retrieved from a file level ```Python file_entities = bidsfile.get_entities() # remove 
'datatype' file_entities.pop("datatype") file_entities["suffix"] = "events" file_entities["extension"] = ".json" print(file_entities) print(layout.get(**file_entities)) print(layout.get(suffix="events", extension=".json")) # not all file level metadata files can be retrieved even though they exist file_entities["suffix"] = "events" file_entities["extension"] = "tsv" print(file_entities) print(layout.get(**file_entities)) file_entities["suffix"] = "electrodes" file_entities["extension"] = "tsv" print(file_entities) print(layout.get(**file_entities)) file_entities["suffix"] = "coordsystem" file_entities["extension"] = "json" print(file_entities) print(layout.get(**file_entities)) ``` # Computation times **00:00.000** total execution time for 1 file **from generated/auto_examples/dev_scripts**: | Example | Time | Mem (MB) | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|------------| | [Exploring Braindecode’s BIDSDataset](debug_pybids_braindecode.md#sphx-glr-generated-auto-examples-dev-scripts-debug-pybids-braindecode-py) (`debug_pybids_braindecode.py`) | 00:00.000 | 0 | # Computation times **10:29.352** total execution time for 3 files **from generated/auto_examples/eeg2025**: | Example | Time | Mem (MB) | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|------------| | [Challenge 2: Predicting the p-factor from EEG](tutorial_challenge_2.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-challenge-2-py) (`tutorial_challenge_2.py`) | 06:19.254 | 0 | | [Challenge 1: Cross-Task Transfer Learning!](tutorial_challenge_1.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-challenge-1-py) (`tutorial_challenge_1.py`) | 04:00.429 | 0 | | [Working Offline with 
EEGDash](tutorial_eegdash_offline.md#sphx-glr-generated-auto-examples-eeg2025-tutorial-eegdash-offline-py) (`tutorial_eegdash_offline.py`) | 00:09.670 | 0 | # Challenge 1: Cross-Task Transfer Learning! ```Python # ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eeg2025/startkit/blob/main/challenge_1.ipynb) --- > **Preliminary notes** > Before we begin, I just want to make a deal with you, ok? > This is a community competition with a strong open-source foundation. > When I say open-source, I mean volunteer work. > So, if you see something that does not work or could be improved, first, **please be kind**, and > we will fix it together on GitHub, okay? > The entire decoding community will only go further when we stop > solving the same problems over and over again, and it starts working together. --- > **How can we use the knowledge from one EEG Decoding task into another?** > Transfer learning is a widespread technique used in deep learning. It > uses knowledge learned from one source task/domain in another target > task/domain. It has been studied in depth in computer vision, natural > language processing, and speech, but what about EEG brain decoding? > The cross-task transfer learning scenario in EEG decoding is remarkably > underexplored compared to the development of new models, > [Aristimunha et al. (2023)](https://arxiv.org/abs/2308.02408), even > though it can be much more useful for real applications, see > [Wimpff et al. (2025)](https://arxiv.org/abs/2502.06828), > [Wu et al. (2025)](https://arxiv.org/abs/2507.09882). > Our Challenge 1 addresses a key goal in neurotechnology: decoding > cognitive function from EEG using the pre-trained knowledge from another. > In other words, developing models that can effectively > transfer/adapt/adjust/fine-tune knowledge from passive EEG tasks to > active tasks. 
> The ability to generalize and transfer is something critical that we
> believe should be focused on. This goes beyond comparing metric numbers
> that are often not comparable, given the specificities of EEG such as
> pre-processing, inter-subject variability, and many other unique
> components of this type of data.
> This means your submitted model might be trained on a subset of tasks
> and fine-tuned on data from another condition, evaluating its capacity to
> generalize with task-specific fine-tuning.

---

> #### NOTE
> For simplicity purposes, we will only show how to do the decoding
> directly in our target task, and it is up to the teams to think about
> how to use the passive task to perform the pre-training.

---

> **Install dependencies**
> For the challenge, we will need two significant dependencies:
> braindecode and eegdash. These libraries will install PyTorch,
> PyTorch Audio, Scikit-learn, MNE, MNE-BIDS, and many other packages
> needed by the various functions.
> To install the dependencies on Colab or your local machine, since
> eegdash has braindecode as a dependency, you can just run
> `pip install eegdash`.

---

> **Imports and setup**

```Python
from pathlib import Path
from typing import Optional
import copy

import torch
from torch.nn import Module
from torch.optim.lr_scheduler import LRScheduler
from torch.utils.data import DataLoader
from tqdm import tqdm

from sklearn.model_selection import train_test_split
from sklearn.utils import check_random_state

from braindecode.datasets import BaseConcatDataset
from braindecode.preprocessing import (
    preprocess,
    Preprocessor,
    create_windows_from_events,
)
from braindecode.models import EEGNeX
```

**Check GPU availability**

Identify whether a CUDA-enabled GPU is available and set the device accordingly. If using Google Colab, ensure that the runtime is set to use a GPU. This can be done by navigating to Runtime > Change runtime type and selecting GPU as the hardware accelerator.
```Python device = "cuda" if torch.cuda.is_available() else "cpu" if device == "cuda": msg = "CUDA-enabled GPU found. Training should be faster." else: msg = ( "No GPU found. Training will be carried out on CPU, which might be " "slower.\n\nIf running on Google Colab, you can request a GPU runtime by" " clicking\n`Runtime/Change runtime type` in the top bar menu, then " "selecting 'T4 GPU'\nunder 'Hardware accelerator'." ) print(msg) # ``` ```none No GPU found. Training will be carried out on CPU, which might be slower. If running on Google Colab, you can request a GPU runtime by clicking `Runtime/Change runtime type` in the top bar menu, then selecting 'T4 GPU' under 'Hardware accelerator'. ``` **What are we decoding?** > Before discussing what we want to analyse, it is important > to understand some basic concepts. --- > **The brain decoding problem** > Broadly speaking, here *brain decoding* is the following problem: > given brain time-series signals $X \in \mathbb{R}^{C \times T}$ with > labels $y \in \mathcal{Y}$, we implement a neural network $f$ that > **decodes/translates** brain activity into the target label. > We aim to translate recorded brain activity into its originating > stimulus, behavior, or mental state, [King, J-R. et al. (2020)](https://lauragwilliams.github.io/d/m/CognitionAlgorithm.pdf). > The neural network $f$ applies a series of transformation layers > (e.g., `torch.nn.Conv2d`, `torch.nn.Linear`, `torch.nn.ELU`, `torch.nn.BatchNorm2d`) > to the data to filter, extract features, and learn embeddings > relevant to the optimization objective—in other words: > $$ > f_{\theta}: X \to y, > $$ > where $C$ (`n_chans`) is the number of channels/electrodes and $T$ (`n_times`) > is the temporal window length/epoch size over the interval of interest. > Here, $\theta$ denotes the parameters learned by the neural network.
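To make the mapping $f_{\theta}: X \to y$ concrete, here is a minimal, hypothetical decoder built only from the layer types mentioned above. The shapes follow the competition definition (`n_chans = 129`, `n_times = 200`), but `TinyDecoder` itself is purely illustrative and is not the competition baseline.

```python
import torch
from torch import nn


class TinyDecoder(nn.Module):
    """Toy f_theta: X (batch, n_chans, n_times) -> y (batch, 1). Illustrative only."""

    def __init__(self, n_chans: int = 129, n_times: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(n_chans, 1)),  # spatial filter across all channels
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),           # temporal downsampling
            nn.Flatten(),
            nn.Linear(8 * (n_times // 4), 1),           # regression head -> one value
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Add a singleton "image" dimension so Conv2d sees (batch, 1, chans, times).
        return self.net(x.unsqueeze(1))


X = torch.randn(2, 129, 200)  # two fake 2-second windows
print(TinyDecoder()(X).shape)  # torch.Size([2, 1])
```

Any architecture works for the challenge as long as it consumes `(batch, 129, 200)` tensors and emits one value per window, like this sketch does.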
> **Input/Output definition** > For the competition, the HBN-EEG (Healthy Brain Network EEG Datasets) > dataset has `n_chans = 129`, with the last channel as a [reference channel](https://mne.tools/stable/auto_tutorials/preprocessing/55_setting_eeg_reference.html), > and we define the window length as `n_times = 200`, corresponding to 2-second windows. > Your model should follow this definition exactly; any specific selection of channels, > filtering, or domain-adaptation technique must be performed **within the layers of the neural network model**. > In this tutorial, we will use the `EEGNeX` model from `braindecode` as an example. > You can use any model you want, as long as it follows the input/output > definitions above. --- > **Understand the task: Contrast Change Detection (CCD)** > If you are interested in more neuroscience insight, we recommend these two references, [HBN-EEG](https://www.biorxiv.org/content/10.1101/2024.10.03.615261v2.full.pdf) and [Langer, N et al. (2017)](https://www.nature.com/articles/sdata201740#Sec2). > Your task (**label**) is to predict the subject's response time during these windows. > In the video, we have an example of the recorded cognitive activity: > The Contrast Change Detection (CCD) task relates to > [Steady-State Visual Evoked Potentials (SSVEP)](https://en.wikipedia.org/wiki/Steady-state_visually_evoked_potential) > and [Event-Related Potentials (ERP)](https://en.wikipedia.org/wiki/Event-related_potential). > Algorithmically, what the subject sees during recording is: > * Two flickering striped discs: one tilted left, one tilted right. > * After a variable delay, **one disc’s contrast gradually increases** **while the other decreases**. > * They **press left or right** to indicate which disc got stronger. > * They receive **feedback** (🙂 correct / 🙁 incorrect). > **The task parallels SSVEP and ERP:** > * The continuous flicker **tags the EEG at fixed frequencies (and harmonics)** → SSVEP-like signals.
> * The **ramp onset**, the **button press**, and the **feedback** are **time-locked events** that yield ERP-like components. --- > In the figure below, we have the timeline representation of the cognitive task: > ![image](https://eeg2025.github.io/assets/img/image-2.jpg) --- > **Stimulus demonstration**
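The "frequency tagging" idea behind the flickering discs can be sketched on synthetic data: a stimulus flickering at a fixed rate drives an oscillation at that rate (and its harmonics), which stands out as a peak in the power spectrum even in noisy recordings. The 15 Hz rate below is an arbitrary choice for illustration, not the flicker frequency used in the CCD paradigm.

```python
import numpy as np

sfreq, dur_s, f_tag = 100, 10, 15  # sampling rate (Hz), duration (s), hypothetical flicker rate
t = np.arange(dur_s * sfreq) / sfreq
rng = np.random.default_rng(0)

# SSVEP-like oscillation at the tagged frequency, buried in stronger broadband noise.
x = np.sin(2 * np.pi * f_tag * t) + rng.normal(scale=2.0, size=t.size)

# Power spectrum via FFT: the tagged frequency dominates despite the noise.
freqs = np.fft.rfftfreq(x.size, d=1 / sfreq)
power = np.abs(np.fft.rfft(x)) ** 2
print(f"Spectral peak at {freqs[np.argmax(power)]:.1f} Hz")
```

The same logic underlies SSVEP analyses of the real recordings: narrowband power concentrated at the flicker frequency marks stimulus-driven activity.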
--- > **PyTorch Dataset for the competition** > Now, we have a PyTorch Dataset object that contains the set of recordings for the task > contrastChangeDetection. ```Python from eegdash.dataset import EEGChallengeDataset from eegdash.hbn.windows import ( annotate_trials_with_target, add_aux_anchors, keep_only_recordings_with, add_extras_columns, ) from eegdash.paths import get_default_cache_dir # Match tests' cache layout under ~/eegdash_cache/eeg_challenge_cache DATA_DIR = Path(get_default_cache_dir()).resolve() DATA_DIR.mkdir(parents=True, exist_ok=True) dataset_ccd = EEGChallengeDataset( task="contrastChangeDetection", release="R5", cache_dir=DATA_DIR, mini=True ) # The dataset contains 20 subjects in the minirelease, and each subject has multiple recordings # (sessions). Each recording is represented as a dataset object within the `dataset_ccd.datasets` list. print(f"Number of recordings in the dataset: {len(dataset_ccd.datasets)}") print( f"Number of unique subjects in the dataset: {dataset_ccd.description['subject'].nunique()}" ) # # This dataset object has rich Raw-object details that can help you # understand the data better. The framework behind this is braindecode, # and if you want to understand in depth what is happening, we recommend the # braindecode repository itself. # # We can also access the Raw object for visualization purposes; here we will look at just one recording.
raw = dataset_ccd.datasets[0].raw # get the Raw object of the first recording # And to download all the data directly, you can do: dataset_ccd.download_all(n_jobs=-1) # ``` ```none ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This object loads the HBN dataset that has been preprocessed for the EEG │ │ Challenge: │ │ * Downsampled from 500Hz to 100Hz │ │ * Bandpass filtered (0.5-50 Hz) │ │ │ │ For full preprocessing applied for competition details, see: │ │ https://github.com/eeg2025/downsample-datasets │ │ │ │ The HBN dataset have some preprocessing applied by the HBN team: │ │ * Re-reference (Cz Channel) │ │ │ │ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to │ │ what you get from EEGDashDataset directly. │ │ If you are participating in the competition, always use │ │ `EEGChallengeDataset` to ensure consistency with the challenge data. │ ╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯ Number of recordings in the dataset: 60 Number of unique subjects in the dataset: 20 ``` > **Alternatives for downloading the data** You can also perform this operation with wget or the AWS CLI. These options will probably be faster! Please check more details in the HBN data webpage [HBN-EEG](https://neuromechanist.github.io/data/hbn/). You need to download the 100Hz preprocessed data in BDF format. Example of wget for release R1: wget [https://sccn.ucsd.edu/download/eeg2025/R1_L100_bdf.zip](https://sccn.ucsd.edu/download/eeg2025/R1_L100_bdf.zip) -O R1_L100_bdf.zip Example of the AWS CLI for release R1: > aws s3 sync s3://nmdatasets/NeurIPS25/R1_L100_bdf data/R1_L100_bdf --no-sign-request **Create windows of interest** We epoch the data after the stimulus onset, shifting the window start by 500 ms.
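The windowing offsets are easiest to check in samples: at the 100 Hz challenge sampling rate, the 0.5 s post-stimulus shift and 2 s window length translate to the sample counts below. This is just a sanity check mirroring the constants used in this tutorial.

```python
SFREQ = 100             # challenge data sampling rate (Hz)
SHIFT_AFTER_STIM = 0.5  # window starts 0.5 s after the stimulus anchor
WINDOW_LEN = 2.0        # 2-second window -> n_times = 200 samples

start = int(SHIFT_AFTER_STIM * SFREQ)                # samples after the anchor where the window starts
stop = int((SHIFT_AFTER_STIM + WINDOW_LEN) * SFREQ)  # samples after the anchor where the window ends
size = int(WINDOW_LEN * SFREQ)                       # samples per window
print(start, stop, size)  # 50 250 200
```

These are exactly the values passed as `trial_start_offset_samples`, `trial_stop_offset_samples`, and `window_size_samples` in the windowing call that follows.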
```Python EPOCH_LEN_S = 2.0 SFREQ = 100 # fixed by the challenge preprocessing (100 Hz) transformation_offline = [ Preprocessor( annotate_trials_with_target, target_field="rt_from_stimulus", epoch_length=EPOCH_LEN_S, require_stimulus=True, require_response=True, apply_on_array=False, ), Preprocessor(add_aux_anchors, apply_on_array=False), ] preprocess(dataset_ccd, transformation_offline, n_jobs=1) ANCHOR = "stimulus_anchor" SHIFT_AFTER_STIM = 0.5 WINDOW_LEN = 2.0 # Keep only recordings that actually contain stimulus anchors dataset = keep_only_recordings_with(ANCHOR, dataset_ccd) # Create single-interval windows (stim-locked, long enough to include the response) single_windows = create_windows_from_events( dataset, mapping={ANCHOR: 0}, trial_start_offset_samples=int(SHIFT_AFTER_STIM * SFREQ), # +0.5 s trial_stop_offset_samples=int((SHIFT_AFTER_STIM + WINDOW_LEN) * SFREQ), # +2.5 s window_size_samples=int(EPOCH_LEN_S * SFREQ), window_stride_samples=SFREQ, preload=True, ) # Inject trial metadata from the MNE annotations into the window metadata.
single_windows = add_extras_columns( single_windows, dataset, desc=ANCHOR, keys=( "target", "rt_from_stimulus", "rt_from_trialstart", "stimulus_onset", "response_onset", "correct", "response_type", ), ) # ``` ```none /home/runner/work/EEGDash/EEGDash/.venv/lib/python3.11/site-packages/braindecode/preprocessing/windowers.py:793: UserWarning: Dropping extra columns that conflict with windowing metadata: {'target'} warnings.warn( ``` **Inspect the label distribution** ```Python import numpy as np from skorch.helper import SliceDataset y_label = np.array(list(SliceDataset(single_windows, 1))) # Plot histogram of the response times with matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10, 5)) ax.hist(y_label, bins=30) ax.set_title("Response Time Distribution") ax.set_xlabel("Response Time (s)") ax.set_ylabel("Count") plt.tight_layout() plt.show() # ``` **Split the data** Extract meta information ```Python meta_information = single_windows.get_metadata() valid_frac = 0.1 test_frac = 0.1 seed = 2025 subjects = meta_information["subject"].unique() train_subj, valid_test_subject = train_test_split( subjects, test_size=(valid_frac + test_frac), random_state=check_random_state(seed), shuffle=True, ) valid_subj, test_subj = train_test_split( valid_test_subject, test_size=test_frac, random_state=check_random_state(seed + 1), shuffle=True, ) # Sanity check assert (set(valid_subj) | set(test_subj) | set(train_subj)) == set(subjects) # Create train/valid/test splits for the windows subject_split = single_windows.split("subject") train_set = [] valid_set = [] test_set = [] for s in subject_split: if s in train_subj: train_set.append(subject_split[s]) elif s in valid_subj: valid_set.append(subject_split[s]) elif s in test_subj: test_set.append(subject_split[s]) train_set = BaseConcatDataset(train_set) valid_set = BaseConcatDataset(valid_set) test_set = BaseConcatDataset(test_set) print("Number of examples in each split in the minirelease") 
print(f"Train:\t{len(train_set)}") print(f"Valid:\t{len(valid_set)}") print(f"Test:\t{len(test_set)}") # ``` ```none Number of examples in each split in the minirelease Train: 981 Valid: 183 Test: 50 ``` **Create dataloaders** ```Python batch_size = 128 # Set num_workers to 0 to avoid multiprocessing issues in notebooks/tutorials num_workers = 0 train_loader = DataLoader( train_set, batch_size=batch_size, shuffle=True, num_workers=num_workers ) valid_loader = DataLoader( valid_set, batch_size=batch_size, shuffle=False, num_workers=num_workers ) test_loader = DataLoader( test_set, batch_size=batch_size, shuffle=False, num_workers=num_workers ) # ``` **Build the model** For neural network models, **to start**, we suggest using the [braindecode models](https://braindecode.org/1.2/models/models_table.html) zoo. We have implemented several different models for decoding brain time series. Your team’s responsibility is to develop a PyTorch module that receives the three-dimensional (batch, n_chans, n_times) input and outputs the predicted response time. **You can use any model you want**, as long as it follows the input/output definitions above. ```Python model = EEGNeX( n_chans=129, # 129 channels n_outputs=1, # 1 output for regression n_times=200, # 2 seconds sfreq=100, # sample frequency 100 Hz ) print(model) model.to(device) # ``` ```none /home/runner/work/EEGDash/EEGDash/.venv/lib/python3.11/site-packages/torch/nn/modules/conv.py:548: UserWarning: Using padding='same' with even kernel lengths and odd dilation may require a zero-padded copy of the input be created (Triggered internally at /pytorch/aten/src/ATen/native/Convolution.cpp:1024.)
return F.conv2d( ================================================================================================================================================================ Layer (type (var_name):depth-idx) Input Shape Output Shape Param # Kernel Shape ================================================================================================================================================================ EEGNeX (EEGNeX) [1, 129, 200] [1, 1] -- -- ├─Sequential (block_1): 1-1 [1, 129, 200] [1, 8, 129, 200] -- -- │ └─Rearrange (0): 2-1 [1, 129, 200] [1, 1, 129, 200] -- -- │ └─Conv2d (1): 2-2 [1, 1, 129, 200] [1, 8, 129, 200] 512 [1, 64] │ └─BatchNorm2d (2): 2-3 [1, 8, 129, 200] [1, 8, 129, 200] 16 -- ├─Sequential (block_2): 1-2 [1, 8, 129, 200] [1, 32, 129, 200] -- -- │ └─Conv2d (0): 2-4 [1, 8, 129, 200] [1, 32, 129, 200] 16,384 [1, 64] │ └─BatchNorm2d (1): 2-5 [1, 32, 129, 200] [1, 32, 129, 200] 64 -- ├─Sequential (block_3): 1-3 [1, 32, 129, 200] [1, 64, 1, 50] -- -- │ └─ParametrizedConv2dWithConstraint (0): 2-6 [1, 32, 129, 200] [1, 64, 1, 200] -- [129, 1] │ │ └─ModuleDict (parametrizations): 3-1 -- -- 8,256 -- │ └─BatchNorm2d (1): 2-7 [1, 64, 1, 200] [1, 64, 1, 200] 128 -- │ └─ELU (2): 2-8 [1, 64, 1, 200] [1, 64, 1, 200] -- -- │ └─AvgPool2d (3): 2-9 [1, 64, 1, 200] [1, 64, 1, 50] -- [1, 4] │ └─Dropout (4): 2-10 [1, 64, 1, 50] [1, 64, 1, 50] -- -- ├─Sequential (block_4): 1-4 [1, 64, 1, 50] [1, 32, 1, 50] -- -- │ └─Conv2d (0): 2-11 [1, 64, 1, 50] [1, 32, 1, 50] 32,768 [1, 16] │ └─BatchNorm2d (1): 2-12 [1, 32, 1, 50] [1, 32, 1, 50] 64 -- ├─Sequential (block_5): 1-5 [1, 32, 1, 50] [1, 48] -- -- │ └─Conv2d (0): 2-13 [1, 32, 1, 50] [1, 8, 1, 50] 4,096 [1, 16] │ └─BatchNorm2d (1): 2-14 [1, 8, 1, 50] [1, 8, 1, 50] 16 -- │ └─ELU (2): 2-15 [1, 8, 1, 50] [1, 8, 1, 50] -- -- │ └─AvgPool2d (3): 2-16 [1, 8, 1, 50] [1, 8, 1, 6] -- [1, 8] │ └─Dropout (4): 2-17 [1, 8, 1, 6] [1, 8, 1, 6] -- -- │ └─Flatten (5): 2-18 [1, 8, 1, 6] [1, 48] -- -- 
├─ParametrizedLinearWithConstraint (final_layer): 1-6 [1, 48] [1, 1] 1 -- │ └─ModuleDict (parametrizations): 2-19 -- -- -- -- │ │ └─ParametrizationList (weight): 3-2 -- [1, 48] 48 -- ================================================================================================================================================================ Total params: 62,353 Trainable params: 62,353 Non-trainable params: 0 Total mult-adds (Units.MEGABYTES): 437.76 ================================================================================================================================================================ Input size (MB): 0.10 Forward/backward pass size (MB): 16.65 Params size (MB): 0.22 Estimated Total Size (MB): 16.97 ================================================================================================================================================================ EEGNeX( (block_1): Sequential( (0): Rearrange('batch ch time -> batch 1 ch time') (1): Conv2d(1, 8, kernel_size=(1, 64), stride=(1, 1), padding=same, bias=False) (2): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (block_2): Sequential( (0): Conv2d(8, 32, kernel_size=(1, 64), stride=(1, 1), padding=same, bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (block_3): Sequential( (0): ParametrizedConv2dWithConstraint( 32, 64, kernel_size=(129, 1), stride=(1, 1), groups=32, bias=False (parametrizations): ModuleDict( (weight): ParametrizationList( (0): MaxNormParametrize() ) ) ) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ELU(alpha=1.0) (3): AvgPool2d(kernel_size=(1, 4), stride=(1, 4), padding=(0, 1)) (4): Dropout(p=0.5, inplace=False) ) (block_4): Sequential( (0): Conv2d(64, 32, kernel_size=(1, 16), stride=(1, 1), padding=same, dilation=(1, 2), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (block_5): Sequential( 
(0): Conv2d(32, 8, kernel_size=(1, 16), stride=(1, 1), padding=same, dilation=(1, 4), bias=False) (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ELU(alpha=1.0) (3): AvgPool2d(kernel_size=(1, 8), stride=(1, 8), padding=(0, 1)) (4): Dropout(p=0.5, inplace=False) (5): Flatten(start_dim=1, end_dim=-1) ) (final_layer): ParametrizedLinearWithConstraint( in_features=48, out_features=1, bias=True (parametrizations): ModuleDict( (weight): ParametrizationList( (0): MaxNormParametrize() ) ) ) ) ``` **Define training and validation functions** The rest is a classic PyTorch training pipeline (PyTorch Lightning or skorch would work just as well); you can use any training framework you want. We provide a simple training and validation loop below. ```Python def train_one_epoch( dataloader: DataLoader, model: Module, loss_fn, optimizer, scheduler: Optional[LRScheduler], epoch: int, device, print_batch_stats: bool = True, ): model.train() total_loss = 0.0 sum_sq_err = 0.0 n_samples = 0 progress_bar = tqdm( enumerate(dataloader), total=len(dataloader), disable=not print_batch_stats ) for batch_idx, batch in progress_bar: # Support datasets that may return (X, y) or (X, y, ...)
X, y = batch[0], batch[1] X, y = X.to(device).float(), y.to(device).float() optimizer.zero_grad(set_to_none=True) preds = model(X) loss = loss_fn(preds, y) loss.backward() optimizer.step() total_loss += loss.item() # Flatten to 1D for regression metrics and accumulate squared error preds_flat = preds.detach().view(-1) y_flat = y.detach().view(-1) sum_sq_err += torch.sum((preds_flat - y_flat) ** 2).item() n_samples += y_flat.numel() if print_batch_stats: running_rmse = (sum_sq_err / max(n_samples, 1)) ** 0.5 progress_bar.set_description( f"Epoch {epoch}, Batch {batch_idx + 1}/{len(dataloader)}, " f"Loss: {loss.item():.6f}, RMSE: {running_rmse:.6f}" ) if scheduler is not None: scheduler.step() avg_loss = total_loss / len(dataloader) rmse = (sum_sq_err / max(n_samples, 1)) ** 0.5 return avg_loss, rmse @torch.no_grad() def valid_model( dataloader: DataLoader, model: Module, loss_fn, device, print_batch_stats: bool = True, ): model.eval() total_loss = 0.0 sum_sq_err = 0.0 n_batches = len(dataloader) n_samples = 0 iterator = tqdm( enumerate(dataloader), total=n_batches, disable=not print_batch_stats ) for batch_idx, batch in iterator: # Supports (X, y) or (X, y, ...) 
X, y = batch[0], batch[1] X, y = X.to(device).float(), y.to(device).float() preds = model(X) batch_loss = loss_fn(preds, y).item() total_loss += batch_loss preds_flat = preds.detach().view(-1) y_flat = y.detach().view(-1) sum_sq_err += torch.sum((preds_flat - y_flat) ** 2).item() n_samples += y_flat.numel() if print_batch_stats: running_rmse = (sum_sq_err / max(n_samples, 1)) ** 0.5 iterator.set_description( f"Val Batch {batch_idx + 1}/{n_batches}, " f"Loss: {batch_loss:.6f}, RMSE: {running_rmse:.6f}" ) avg_loss = total_loss / n_batches if n_batches else float("nan") rmse = (sum_sq_err / max(n_samples, 1)) ** 0.5 print(f"Val RMSE: {rmse:.6f}, Val Loss: {avg_loss:.6f}\n") return avg_loss, rmse # ``` **Train the model** ```Python lr = 1e-3 weight_decay = 1e-5 n_epochs = ( 5 # For demonstration purposes, we use just 5 epochs here. You can increase this. ) early_stopping_patience = 50 optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=n_epochs - 1) loss_fn = torch.nn.MSELoss() patience = 5 min_delta = 1e-4 best_rmse = float("inf") epochs_no_improve = 0 best_state, best_epoch = None, None for epoch in range(1, n_epochs + 1): print(f"Epoch {epoch}/{n_epochs}: ", end="") train_loss, train_rmse = train_one_epoch( train_loader, model, loss_fn, optimizer, scheduler, epoch, device ) val_loss, val_rmse = valid_model(test_loader, model, loss_fn, device) print( f"Train RMSE: {train_rmse:.6f}, " f"Average Train Loss: {train_loss:.6f}, " f"Val RMSE: {val_rmse:.6f}, " f"Average Val Loss: {val_loss:.6f}" ) if val_rmse < best_rmse - min_delta: best_rmse = val_rmse best_state = copy.deepcopy(model.state_dict()) best_epoch = epoch epochs_no_improve = 0 else: epochs_no_improve += 1 if epochs_no_improve >= patience: print( f"Early stopping at epoch {epoch}. 
Best Val RMSE: {best_rmse:.6f} (epoch {best_epoch})" ) break if best_state is not None: model.load_state_dict(best_state) # ``` ```none Epoch 1/5: 0%| | 0/8 [00:00 ``` **Save the model** ```Python torch.save(model.state_dict(), "weights_challenge_1.pt") print("Model saved as 'weights_challenge_1.pt'") ``` ```none Model saved as 'weights_challenge_1.pt' ``` **Total running time of the script:** (4 minutes 0.429 seconds) # Challenge 2: Predicting the p-factor from EEG [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eeg2025/startkit/blob/main/challenge_2.ipynb) --- > **Preliminary notes** > Before we begin, I just want to make a deal with you, ok? > This is a community competition with a strong open-source foundation. > When I say open-source, I mean volunteer work. > So, if you see something that does not work or could be improved, first, **please be kind**, and > we will fix it together on GitHub, okay? > The entire decoding community will only go further when we stop > solving the same problems over and over again and start working together. **Overview** > The psychopathology factor (P-factor) is a widely recognized construct in mental health research, representing a common underlying dimension of psychopathology across various disorders. > Currently, the P-factor is often assessed using self-report questionnaires or clinician ratings, which can be subjective, prone to bias, and time-consuming. > **Challenge 2** consists of developing a model to predict the P-factor from EEG recordings. > The challenge encourages learning physiologically meaningful signal representations and the discovery of reproducible biomarkers. > Models of any size should emphasize robust, interpretable features that generalize across subjects, > sessions, and acquisition sites.
> Unlike a standard in-distribution classification task, this regression problem stresses out-of-distribution robustness > and extrapolation. The goal is not only to minimize error on seen subjects, but also to transfer effectively to unseen data. > Ensure the dataset is available locally. If not, see the > [dataset download guide](https://eeg2025.github.io/data/#downloading-the-data). --- > **Contents of this start kit** > #### NOTE > If you need additional explanations of the > [EEGChallengeDataset](../../../api/dataset/eegdash.dataset.EEGChallengeDataset.md) class, dataloading, > [braindecode](https://braindecode.org/stable/models/models_table.html)’s > deep learning models, or brain decoding in general, please refer to the > start kit of challenge 1, which delves deeper into these topics. > More content will be released during the competition on the > `eegdash` [examples webpage](https://eeglab.org/EEGDash/generated/auto_examples/index.html). --- > **Install dependencies on Colab** > #### NOTE > These installs are optional; skip on local environments > where you already have the dependencies installed. > ```bash > pip install eegdash > ``` --- > **Imports** ```Python from pathlib import Path import math import os import random import torch from torch.utils.data import DataLoader from torch import optim from torch.nn.functional import l1_loss from braindecode.preprocessing import create_fixed_length_windows from braindecode.datasets.base import EEGWindowsDataset, BaseConcatDataset, BaseDataset from braindecode.models import EEGNeX from eegdash import EEGChallengeDataset from eegdash.paths import get_default_cache_dir # ``` #### WARNING If using Colab, before starting, make sure you’re on a GPU instance for faster training! If running on Google Colab, please request a GPU runtime by clicking Runtime/Change runtime type in the top bar menu, then selecting ‘T4 GPU’ under ‘Hardware accelerator’.
--- > **Identify whether a CUDA-enabled GPU is available** ```Python device = "cuda" if torch.cuda.is_available() else "cpu" if device == "cuda": msg = "CUDA-enabled GPU found. Training should be faster." else: msg = ( "No GPU found. Training will be carried out on CPU, which might be " "slower.\n\nIf running on Google Colab, you can request a GPU runtime by" " clicking\n`Runtime/Change runtime type` in the top bar menu, then " "selecting 'T4 GPU'\nunder 'Hardware accelerator'." ) print(msg) # ``` ```none No GPU found. Training will be carried out on CPU, which might be slower. If running on Google Colab, you can request a GPU runtime by clicking `Runtime/Change runtime type` in the top bar menu, then selecting 'T4 GPU' under 'Hardware accelerator'. ``` **Understanding the P-factor regression task** > The psychopathology factor (P-factor) is a widely recognized construct in mental health research, representing a common underlying dimension of psychopathology across various disorders. > The P-factor is thought to reflect the shared variance among different psychiatric conditions, suggesting that individuals with higher P-factor scores may be more vulnerable to a range of mental health issues. > Currently, the P-factor is often assessed using self-report questionnaires or clinician ratings, which can be subjective, prone to bias, and time-consuming. > In the dataset of this challenge, the P-factor was assessed using the Child > Behavior Checklist (CBCL) [McElroy et al., (2017)](https://doi.org/10.1111/jcpp.12849). > The goal of Challenge 2 is to develop a model to predict the P-factor from EEG recordings. > **The feasibility of using EEG data for this purpose is still an open question**. > The solution may involve finding meaningful representations of the EEG data that correlate with the P-factor scores. > The challenge encourages learning physiologically meaningful signal representations and discovery of reproducible biomarkers. 
> If contestants are successful in this task, it could pave the way for more objective and efficient assessments of the P-factor in clinical settings. --- > **Define local path and (down)load the data** > In this challenge 2 example, we load the EEG 2025 release using > [EEGChallengeDataset](../../../api/dataset/eegdash.dataset.EEGChallengeDataset.md). > **Note:** in this example notebook, we load the contrast change detection task from one mini release only as an example. Naturally, you are encouraged to train your models on all complete releases, using data from all the tasks you deem relevant. --- > The first step is to define the cache folder! > Match tests’ cache layout under ~/eegdash_cache/eeg_challenge_cache ```Python DATA_DIR = Path(get_default_cache_dir()).resolve() # Creating the path if it does not exist DATA_DIR.mkdir(parents=True, exist_ok=True) # We define the list of releases to load. # Here, only release 5 is loaded. release_list = ["R5"] all_datasets_list = [ EEGChallengeDataset( release=release, task="contrastChangeDetection", mini=True, description_fields=[ "subject", "session", "run", "task", "age", "gender", "sex", "p_factor", ], cache_dir=DATA_DIR, ) for release in release_list ] print("Datasets loaded") sub_rm = ["NDARWV769JM7"] # ``` ```none ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This object loads the HBN dataset that has been preprocessed for the EEG │ │ Challenge: │ │ * Downsampled from 500Hz to 100Hz │ │ * Bandpass filtered (0.5-50 Hz) │ │ │ │ For full preprocessing applied for competition details, see: │ │ https://github.com/eeg2025/downsample-datasets │ │ │ │ The HBN dataset have some preprocessing applied by the HBN team: │ │ * Re-reference (Cz Channel) │ │ │ │ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to │ │ what you get from EEGDashDataset directly. 
│ │ If you are participating in the competition, always use │ │ `EEGChallengeDataset` to ensure consistency with the challenge data. │ ╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯ Datasets loaded ``` **Combine the datasets into a single one** Here, we combine the datasets from the different releases into a single `BaseConcatDataset` object. %% ```Python all_datasets = BaseConcatDataset(all_datasets_list) print(all_datasets.description) for ds in all_datasets_list: ds.download_all(n_jobs=os.cpu_count()) # ``` ```none subject run ... seqlearning8target symbolsearch 0 NDARAH793FBF 2 ... available available 1 NDARAH793FBF 3 ... available available 2 NDARAH793FBF 1 ... available available 3 NDARAJ689BVN 3 ... unavailable available 4 NDARAJ689BVN 2 ... unavailable available 5 NDARAJ689BVN 1 ... unavailable available 6 NDARAP785CTE 1 ... available available 7 NDARAP785CTE 3 ... available available 8 NDARAP785CTE 2 ... available available 9 NDARAU708TL8 1 ... available available 10 NDARAU708TL8 2 ... available available 11 NDARAU708TL8 3 ... available available 12 NDARBE091BGD 3 ... unavailable available 13 NDARBE091BGD 2 ... unavailable available 14 NDARBE091BGD 1 ... unavailable available 15 NDARBE103DHM 1 ... available available 16 NDARBE103DHM 2 ... available available 17 NDARBE103DHM 3 ... available available 18 NDARBF851NH6 3 ... available available 19 NDARBF851NH6 2 ... available available 20 NDARBF851NH6 1 ... available available 21 NDARBH228RDW 1 ... available available 22 NDARBH228RDW 2 ... available available 23 NDARBH228RDW 3 ... available available 24 NDARBJ674TVU 1 ... unavailable available 25 NDARBJ674TVU 3 ... unavailable available 26 NDARBJ674TVU 2 ... unavailable available 27 NDARBM433VER 1 ... available available 28 NDARBM433VER 2 ... available available 29 NDARBM433VER 3 ... available available 30 NDARCA740UC8 2 ... available available 31 NDARCA740UC8 3 ... available available 32 NDARCA740UC8 1 ... 
available available 33 NDARCU633GCZ 1 ... available available 34 NDARCU633GCZ 3 ... available available 35 NDARCU633GCZ 2 ... available available 36 NDARCU736GZ1 2 ... unavailable available 37 NDARCU736GZ1 3 ... unavailable available 38 NDARCU736GZ1 1 ... unavailable available 39 NDARCU744XWL 1 ... available available 40 NDARCU744XWL 3 ... available available 41 NDARCU744XWL 2 ... available available 42 NDARDC843HHM 3 ... available available 43 NDARDC843HHM 2 ... available available 44 NDARDC843HHM 1 ... available available 45 NDARDH086ZKK 2 ... available available 46 NDARDH086ZKK 3 ... available available 47 NDARDH086ZKK 1 ... available available 48 NDARDL305BT8 1 ... available available 49 NDARDL305BT8 2 ... available available 50 NDARDL305BT8 3 ... available available 51 NDARDU853XZ6 3 ... unavailable available 52 NDARDU853XZ6 2 ... unavailable available 53 NDARDU853XZ6 1 ... unavailable available 54 NDARDV245WJG 3 ... unavailable available 55 NDARDV245WJG 2 ... unavailable available 56 NDARDV245WJG 1 ... unavailable available 57 NDAREC480KFA 1 ... available available 58 NDAREC480KFA 2 ... available available 59 NDAREC480KFA 3 ... available available [60 rows x 26 columns] ``` **Inspect your data** > We can check what is inside the dataset by consuming the > MNE object inside the Braindecode dataset. > The following snippet will show the first 10 seconds of the raw EEG signal. > We can also inspect the data further by looking at the events and annotations. > We strongly recommend taking a look at the details and checking how the events are structured. --- ```Python raw = all_datasets.datasets[0].raw # mne.io.Raw object print(raw.info) raw.plot(duration=10, scalings="auto", show=True) print(raw.annotations) SFREQ = 100 # ``` ```none > Using matplotlib as 2D backend. ``` **Wrap the data into a PyTorch-compatible dataset** The class below defines a dataset wrapper that will extract 2-second windows, uniformly sampled over the whole signal.
In addition, it will add useful information about the extracted windows, such as the p-factor, the subject or the task. ```Python class DatasetWrapper(BaseDataset): def __init__(self, dataset: EEGWindowsDataset, crop_size_samples: int, seed=None): self.dataset = dataset self.crop_size_samples = crop_size_samples self.rng = random.Random(seed) def __len__(self): return len(self.dataset) def __getitem__(self, index): X, _, crop_inds = self.dataset[index] # P-factor label: p_factor = self.dataset.description["p_factor"] p_factor = float(p_factor) # Additional information: infos = { "subject": self.dataset.description["subject"], "sex": self.dataset.description["sex"], "age": float(self.dataset.description["age"]), "task": self.dataset.description["task"], "session": self.dataset.description.get("session", None) or "", "run": self.dataset.description.get("run", None) or "", } # Randomly crop the signal to the desired length: i_window_in_trial, i_start, i_stop = crop_inds assert i_stop - i_start >= self.crop_size_samples, f"{i_stop=} {i_start=}" start_offset = self.rng.randint(0, i_stop - i_start - self.crop_size_samples) i_start = i_start + start_offset i_stop = i_start + self.crop_size_samples X = X[:, start_offset : start_offset + self.crop_size_samples] return X, p_factor, (i_window_in_trial, i_start, i_stop), infos # We filter out certain recordings, create fixed length windows and finally make use of our `DatasetWrapper`. 
``` Filter out recordings that are too short or missing p_factor ```Python all_datasets = BaseConcatDataset( [ ds for ds in all_datasets.datasets if ds.description.subject not in sub_rm and ds.raw.n_times >= 4 * SFREQ and len(ds.raw.ch_names) == 129 and "p_factor" in ds.description and ds.description["p_factor"] is not None and not math.isnan(ds.description["p_factor"]) ] ) # Create 4-second windows with a 2-second stride windows_ds = create_fixed_length_windows( all_datasets, window_size_samples=4 * SFREQ, window_stride_samples=2 * SFREQ, drop_last_window=True, ) # Wrap each sub-dataset in the windows_ds windows_ds = BaseConcatDataset( [DatasetWrapper(ds, crop_size_samples=2 * SFREQ) for ds in windows_ds.datasets] ) # ``` **Inspect the label distribution** ```Python import numpy as np from skorch.helper import SliceDataset y_label = np.array(list(SliceDataset(windows_ds, 1))) # Plot histogram of the p-factor labels with matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10, 5)) ax.hist(y_label) ax.set_title("P-factor Distribution") ax.set_xlabel("P-factor") ax.set_ylabel("Count") plt.tight_layout() plt.show() # ``` **Define, train and save a model** > Now we have the PyTorch dataset needed for training! > Below, we define a simple EEGNeX model from Braindecode. > All Braindecode models expect input of shape (batch_size, n_channels, n_times) > and come with test coverage of their behavior. > However, you can use any PyTorch model you want. --- > **Initialize model** ```Python model = EEGNeX(n_chans=129, n_outputs=1, n_times=2 * SFREQ).to(device) # Specify optimizer optimizer = optim.Adamax(params=model.parameters(), lr=0.002) print(model) # Finally, we can train our model. Here we define a simple training loop using pure PyTorch. # In this example, we only train for a single epoch. Feel free to increase the number of epochs. 
# Create PyTorch Dataloader num_workers = ( 0 # Set num_workers to 0 to avoid multiprocessing issues in notebooks/tutorials. ) dataloader = DataLoader( windows_ds, batch_size=128, shuffle=True, num_workers=num_workers ) n_epochs = 1 # Train model for 1 epoch for epoch in range(n_epochs): for idx, batch in enumerate(dataloader): # Reset gradients optimizer.zero_grad() # Unpack the batch X, y, crop_inds, infos = batch X = X.to(dtype=torch.float32, device=device) y = y.to(dtype=torch.float32, device=device).unsqueeze(1) # Forward pass y_pred = model(X) # Compute loss loss = l1_loss(y_pred, y) print(f"Epoch {epoch} - step {idx}, loss: {loss.item()}") # Gradient backpropagation loss.backward() optimizer.step() # Finally, we can save the model for later use torch.save(model.state_dict(), "weights_challenge_2.pt") print("Model saved as 'weights_challenge_2.pt'") ``` ```none ================================================================================================================================================================ Layer (type (var_name):depth-idx) Input Shape Output Shape Param # Kernel Shape ================================================================================================================================================================ EEGNeX (EEGNeX) [1, 129, 200] [1, 1] -- -- ├─Sequential (block_1): 1-1 [1, 129, 200] [1, 8, 129, 200] -- -- │ └─Rearrange (0): 2-1 [1, 129, 200] [1, 1, 129, 200] -- -- │ └─Conv2d (1): 2-2 [1, 1, 129, 200] [1, 8, 129, 200] 512 [1, 64] │ └─BatchNorm2d (2): 2-3 [1, 8, 129, 200] [1, 8, 129, 200] 16 -- ├─Sequential (block_2): 1-2 [1, 8, 129, 200] [1, 32, 129, 200] -- -- │ └─Conv2d (0): 2-4 [1, 8, 129, 200] [1, 32, 129, 200] 16,384 [1, 64] │ └─BatchNorm2d (1): 2-5 [1, 32, 129, 200] [1, 32, 129, 200] 64 -- ├─Sequential (block_3): 1-3 [1, 32, 129, 200] [1, 64, 1, 50] -- -- │ └─ParametrizedConv2dWithConstraint (0): 2-6 [1, 32, 129, 200] [1, 64, 1, 200] -- [129, 1] │ │ └─ModuleDict (parametrizations): 3-1 -- -- 8,256 -- │ 
└─BatchNorm2d (1): 2-7 [1, 64, 1, 200] [1, 64, 1, 200] 128 -- │ └─ELU (2): 2-8 [1, 64, 1, 200] [1, 64, 1, 200] -- -- │ └─AvgPool2d (3): 2-9 [1, 64, 1, 200] [1, 64, 1, 50] -- [1, 4] │ └─Dropout (4): 2-10 [1, 64, 1, 50] [1, 64, 1, 50] -- -- ├─Sequential (block_4): 1-4 [1, 64, 1, 50] [1, 32, 1, 50] -- -- │ └─Conv2d (0): 2-11 [1, 64, 1, 50] [1, 32, 1, 50] 32,768 [1, 16] │ └─BatchNorm2d (1): 2-12 [1, 32, 1, 50] [1, 32, 1, 50] 64 -- ├─Sequential (block_5): 1-5 [1, 32, 1, 50] [1, 48] -- -- │ └─Conv2d (0): 2-13 [1, 32, 1, 50] [1, 8, 1, 50] 4,096 [1, 16] │ └─BatchNorm2d (1): 2-14 [1, 8, 1, 50] [1, 8, 1, 50] 16 -- │ └─ELU (2): 2-15 [1, 8, 1, 50] [1, 8, 1, 50] -- -- │ └─AvgPool2d (3): 2-16 [1, 8, 1, 50] [1, 8, 1, 6] -- [1, 8] │ └─Dropout (4): 2-17 [1, 8, 1, 6] [1, 8, 1, 6] -- -- │ └─Flatten (5): 2-18 [1, 8, 1, 6] [1, 48] -- -- ├─ParametrizedLinearWithConstraint (final_layer): 1-6 [1, 48] [1, 1] 1 -- │ └─ModuleDict (parametrizations): 2-19 -- -- -- -- │ │ └─ParametrizationList (weight): 3-2 -- [1, 48] 48 -- ================================================================================================================================================================ Total params: 62,353 Trainable params: 62,353 Non-trainable params: 0 Total mult-adds (Units.MEGABYTES): 437.76 ================================================================================================================================================================ Input size (MB): 0.10 Forward/backward pass size (MB): 16.65 Params size (MB): 0.22 Estimated Total Size (MB): 16.97 ================================================================================================================================================================ Epoch 0 - step 0, loss: 0.6052998304367065 Epoch 0 - step 1, loss: 0.6569179892539978 Epoch 0 - step 2, loss: 0.7196495532989502 Epoch 0 - step 3, loss: 0.6781036257743835 Epoch 0 - step 4, loss: 0.651820957660675 Epoch 0 - step 5, loss: 0.6353812217712402 Epoch 0 - step 6, 
loss: 0.6646864414215088 Epoch 0 - step 7, loss: 0.6568052768707275 Epoch 0 - step 8, loss: 0.6118667125701904 Epoch 0 - step 9, loss: 0.6582334637641907 Epoch 0 - step 10, loss: 0.6524134278297424 Epoch 0 - step 11, loss: 0.6618609428405762 Epoch 0 - step 12, loss: 0.6417728066444397 Epoch 0 - step 13, loss: 0.6418073177337646 Epoch 0 - step 14, loss: 0.6577914953231812 Epoch 0 - step 15, loss: 0.6444278359413147 Epoch 0 - step 16, loss: 0.6307916045188904 Epoch 0 - step 17, loss: 0.6393419504165649 Epoch 0 - step 18, loss: 0.5603690147399902 Epoch 0 - step 19, loss: 0.5925142168998718 Epoch 0 - step 20, loss: 0.594791054725647 Epoch 0 - step 21, loss: 0.6844034194946289 Epoch 0 - step 22, loss: 0.6384214758872986 Epoch 0 - step 23, loss: 0.6886588335037231 Epoch 0 - step 24, loss: 0.6380999684333801 Epoch 0 - step 25, loss: 0.6195749044418335 Epoch 0 - step 26, loss: 0.7809057831764221 Epoch 0 - step 27, loss: 0.6068577766418457 Epoch 0 - step 28, loss: 0.7507789731025696 Epoch 0 - step 29, loss: 0.7139544486999512 Epoch 0 - step 30, loss: 0.5961211919784546 Epoch 0 - step 31, loss: 0.5796927213668823 Epoch 0 - step 32, loss: 0.6313275098800659 Epoch 0 - step 33, loss: 0.6277403235435486 Epoch 0 - step 34, loss: 0.657975971698761 Epoch 0 - step 35, loss: 0.6875464916229248 Epoch 0 - step 36, loss: 0.6521888375282288 Epoch 0 - step 37, loss: 0.6558427810668945 Epoch 0 - step 38, loss: 0.612235426902771 Epoch 0 - step 39, loss: 0.5443032383918762 Epoch 0 - step 40, loss: 0.6460886597633362 Epoch 0 - step 41, loss: 0.7183793187141418 Epoch 0 - step 42, loss: 0.6537125110626221 Epoch 0 - step 43, loss: 0.5364158153533936 Epoch 0 - step 44, loss: 0.6794991493225098 Epoch 0 - step 45, loss: 0.6527145504951477 Epoch 0 - step 46, loss: 0.6814470291137695 Epoch 0 - step 47, loss: 0.6257495880126953 Epoch 0 - step 48, loss: 0.6734403967857361 Epoch 0 - step 49, loss: 0.634363055229187 Epoch 0 - step 50, loss: 0.6878324747085571 Epoch 0 - step 51, loss: 0.5632811188697815 
Epoch 0 - step 52, loss: 0.6585205793380737 Epoch 0 - step 53, loss: 0.6109294891357422 Epoch 0 - step 54, loss: 0.6400525569915771 Epoch 0 - step 55, loss: 0.6375831365585327 Epoch 0 - step 56, loss: 0.6906240582466125 Epoch 0 - step 57, loss: 0.7184688448905945 Epoch 0 - step 58, loss: 0.6907398104667664 Epoch 0 - step 59, loss: 0.6325060129165649 Epoch 0 - step 60, loss: 0.6276141405105591 Epoch 0 - step 61, loss: 0.5794558525085449 Epoch 0 - step 62, loss: 0.707737147808075 Epoch 0 - step 63, loss: 0.700897216796875 Epoch 0 - step 64, loss: 0.6948139071464539 Model saved as 'weights_challenge_2.pt' ``` **Total running time of the script:** (6 minutes 19.254 seconds) # Working Offline with EEGDash Many HPC clusters restrict or block network access. It’s common to have dedicated queues for internet-enabled jobs that differ from GPU queues. This tutorial shows how to use [EEGChallengeDataset](../../../api/dataset/eegdash.dataset.EEGChallengeDataset.md) offline once a dataset is present on disk. ```Python from pathlib import Path from eegdash.const import RELEASE_TO_OPENNEURO_DATASET_MAP from eegdash.paths import get_default_cache_dir from eegdash import EEGChallengeDataset # We'll use Release R2 as an example (HBN subset). # :doc:`EEGChallengeDataset ` # uses a suffixed cache folder for the competition data (e.g., "-bdf-mini"). release = "R2" dataset_id = RELEASE_TO_OPENNEURO_DATASET_MAP[release] task = "RestingState" # Choose a cache directory. This should be on a fast local filesystem. cache_dir = Path(get_default_cache_dir()).resolve() cache_dir.mkdir(parents=True, exist_ok=True) # ``` ## Step 1: Populate the local cache (Online) This block downloads the dataset from S3 to your local cache directory. Run this part on a machine with internet access. If the dataset is already on your disk at the specified `cache_dir`, you can comment out or skip this section. To keep this example self-contained, we prefetch the data here. 
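Before running the online step, it helps to know where the data will land on disk. The sketch below uses illustrative stand-in values (in the tutorial, the cache directory comes from `get_default_cache_dir()` and `ds005506` is the OpenNeuro id that `RELEASE_TO_OPENNEURO_DATASET_MAP` yields for release "R2"); it shows how the suffixed cache folder checked in Step 2 is composed:

```python
from pathlib import Path

# Illustrative stand-ins; in the tutorial these values come from
# get_default_cache_dir() and RELEASE_TO_OPENNEURO_DATASET_MAP["R2"].
cache_dir = Path("/tmp/eegdash_cache")
dataset_id = "ds005506"

# The mini challenge data lives in a suffixed folder under the cache dir:
offline_root = cache_dir / f"{dataset_id}-bdf-mini"
print(offline_root.as_posix())  # /tmp/eegdash_cache/ds005506-bdf-mini
```

Knowing this layout also makes it straightforward to copy or `rsync` a populated cache onto an offline GPU node.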
```Python ds_online = EEGChallengeDataset( release=release, cache_dir=cache_dir, task=task, mini=True, ) # Optional prefetch of all recordings (downloads everything to cache). ds_online.download_all(n_jobs=-1) # ``` ```none ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This object loads the HBN dataset that has been preprocessed for the EEG │ │ Challenge: │ │ * Downsampled from 500Hz to 100Hz │ │ * Bandpass filtered (0.5-50 Hz) │ │ │ │ For full preprocessing applied for competition details, see: │ │ https://github.com/eeg2025/downsample-datasets │ │ │ │ The HBN dataset have some preprocessing applied by the HBN team: │ │ * Re-reference (Cz Channel) │ │ │ │ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to │ │ what you get from EEGDashDataset directly. │ │ If you are participating in the competition, always use │ │ `EEGChallengeDataset` to ensure consistency with the challenge data. │ ╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯ Downloading sub-NDARAM675UR8_task-RestingState_channels.tsv: 0%| | 0.00/1.00 [00:00 ## Step 2: Basic Offline Usage Once the data is cached locally, you can interact with it without needing an internet connection. The key is to instantiate your dataset object with the `download=False` flag. This tells [EEGChallengeDataset](../../../api/dataset/eegdash.dataset.EEGChallengeDataset.md) to look for data in the `cache_dir` instead of trying to connect to the database or S3. 
```Python # Here we check that the local cache folder exists offline_root = cache_dir / f"{dataset_id}-bdf-mini" print(f"Local dataset folder exists: {offline_root.exists()}\n{offline_root}") ds_offline = EEGChallengeDataset( release=release, cache_dir=cache_dir, task=task, download=False, ) print(f"Found {len(ds_offline.datasets)} recording(s) offline.") if ds_offline.datasets: print("First record bidspath:", ds_offline.datasets[0].record["bidspath"]) # ``` ```none Local dataset folder exists: False /home/runner/eegdash_cache/ds005506-bdf-mini ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This object loads the HBN dataset that has been preprocessed for the EEG │ │ Challenge: │ │ * Downsampled from 500Hz to 100Hz │ │ * Bandpass filtered (0.5-50 Hz) │ │ │ │ For full preprocessing applied for competition details, see: │ │ https://github.com/eeg2025/downsample-datasets │ │ │ │ The HBN dataset have some preprocessing applied by the HBN team: │ │ * Re-reference (Cz Channel) │ │ │ │ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to │ │ what you get from EEGDashDataset directly. │ │ If you are participating in the competition, always use │ │ `EEGChallengeDataset` to ensure consistency with the challenge data. │ ╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯ Found 20 recording(s) offline. First record bidspath: EEG2025r2mini/sub-NDARAB793GL3/eeg/sub-NDARAB793GL3_task-RestingState_eeg.bdf ``` ## Step 3: Filtering Entities Offline Even without a database connection, you can still filter your dataset by BIDS entities like subject, session, or task. When `download=False`, [EEGChallengeDataset](../../../api/dataset/eegdash.dataset.EEGChallengeDataset.md) uses the BIDS directory structure and filenames to apply these filters. This example shows how to load data for a specific subject from the local cache. 
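To give an intuition for how filename-based filtering can work, here is a simplified sketch (not the library's internal parser) that recovers BIDS entities from a recording filename with a regular expression:

```python
import re

def parse_bids_entities(filename: str) -> dict:
    """Collect key-value BIDS entities (sub-..., task-..., run-...) from a filename."""
    return dict(re.findall(r"([a-z]+)-([A-Za-z0-9]+)", filename))

# Filename pattern as seen in the offline records above:
entities = parse_bids_entities("sub-NDARAB793GL3_task-RestingState_eeg.bdf")
print(entities)  # {'sub': 'NDARAB793GL3', 'task': 'RestingState'}

# Entity filtering then reduces to simple comparisons:
print(entities.get("sub") == "NDARAB793GL3")  # True
```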
```Python ds_offline_sub = EEGChallengeDataset( cache_dir=cache_dir, release=release, download=False, subject="NDARAB793GL3", ) print(f"Filtered by subject=NDARAB793GL3: {len(ds_offline_sub.datasets)} recording(s).") if ds_offline_sub.datasets: keys = ("dataset", "subject", "task", "run") print("Records (dataset, subject, task, run):") for idx, base_ds in enumerate(ds_offline_sub.datasets, start=1): rec = base_ds.record summary = ", ".join(f"{k}={rec.get(k)}" for k in keys) print(f" {idx:03d}: {summary}") # ``` ```none ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This object loads the HBN dataset that has been preprocessed for the EEG │ │ Challenge: │ │ * Downsampled from 500Hz to 100Hz │ │ * Bandpass filtered (0.5-50 Hz) │ │ │ │ For full preprocessing applied for competition details, see: │ │ https://github.com/eeg2025/downsample-datasets │ │ │ │ The HBN dataset have some preprocessing applied by the HBN team: │ │ * Re-reference (Cz Channel) │ │ │ │ IMPORTANT: The data accessed via `EEGChallengeDataset` is NOT identical to │ │ what you get from EEGDashDataset directly. │ │ If you are participating in the competition, always use │ │ `EEGChallengeDataset` to ensure consistency with the challenge data. │ ╰──────────────────────── Source: EEGChallengeDataset ─────────────────────────╯ Filtered by subject=NDARAB793GL3: 1 recording(s). Records (dataset, subject, task, run): 001: dataset=EEG2025r2mini, subject=None, task=None, run=None ``` ## Step 4: Comparing Online vs. Offline Data As a sanity check, you can verify that the data loaded from your local cache is identical to the data fetched from the online sources. This section compares the shape of the raw data from the online and offline datasets to ensure they match. This is a good way to confirm your local cache is complete and correct. If you have network access, you can uncomment the block below to download and compare shapes. 
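Shape equality is only a quick sanity check; for a stricter comparison you could also check the sample values numerically. A sketch with stand-in arrays (the real check would use `raw_online.get_data()` and `raw_offline.get_data()`):

```python
import numpy as np

# Stand-ins for the two get_data() arrays (129 channels x 40800 samples):
rng = np.random.default_rng(0)
online = rng.standard_normal((129, 40800))
offline = online.copy()

print(online.shape == offline.shape)       # True
print(bool(np.allclose(online, offline)))  # True
```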
```Python raw_online = ds_online.datasets[0].raw raw_offline = ds_offline.datasets[0].raw print("online shape:", raw_online.get_data().shape) print("offline shape:", raw_offline.get_data().shape) print("shapes equal:", raw_online.get_data().shape == raw_offline.get_data().shape) # ``` ```none online shape: (129, 40800) offline shape: (129, 40800) shapes equal: True ``` ## Step 4.1: Comparing Descriptions, Online vs. Offline Data If you have network access, you can uncomment the block below to download and compare the dataset descriptions. ```Python description_online = ds_online.description description_offline = ds_offline.description print(description_offline) print(description_online) print("Online description shape:", description_online.shape) print("Offline description shape:", description_offline.shape) print("Descriptions equal:", description_online.equals(description_offline)) # ``` ```none subject task 0 NDARAB793GL3 RestingState 1 NDARAM675UR8 RestingState 2 NDARBM839WR5 RestingState 3 NDARBU730PN8 RestingState 4 NDARCT974NAJ RestingState 5 NDARCW933FD5 RestingState 6 NDARCZ770BRG RestingState 7 NDARDW741HCF RestingState 8 NDARDZ058NZN RestingState 9 NDAREC377AU2 RestingState 10 NDAREM500WWH RestingState 11 NDAREV527ZRF RestingState 12 NDAREV601CE7 RestingState 13 NDARFF070XHV RestingState 14 NDARFR108JNB RestingState 15 NDARFT305CG1 RestingState 16 NDARGA056TMW RestingState 17 NDARGH775KF5 RestingState 18 NDARGJ878ZP4 RestingState 19 NDARHA387FPM RestingState subject task ... seqlearning8target symbolsearch 0 NDARAB793GL3 RestingState ... available available 1 NDARAM675UR8 RestingState ... unavailable available 2 NDARBM839WR5 RestingState ... available available 3 NDARBU730PN8 RestingState ... available available 4 NDARCT974NAJ RestingState ... available available 5 NDARCW933FD5 RestingState ... available available 6 NDARCZ770BRG RestingState ... available available 7 NDARDW741HCF RestingState ... unavailable available 8 NDARDZ058NZN RestingState ... 
unavailable available 9 NDAREC377AU2 RestingState ... available available 10 NDAREM500WWH RestingState ... unavailable available 11 NDAREV527ZRF RestingState ... available available 12 NDAREV601CE7 RestingState ... available available 13 NDARFF070XHV RestingState ... available available 14 NDARFR108JNB RestingState ... unavailable available 15 NDARFT305CG1 RestingState ... unavailable available 16 NDARGA056TMW RestingState ... available available 17 NDARGH775KF5 RestingState ... available available 18 NDARGJ878ZP4 RestingState ... unavailable available 19 NDARHA387FPM RestingState ... available available [20 rows x 25 columns] Online description shape: (20, 25) Offline description shape: (20, 2) Descriptions equal: False ``` ## Notes and troubleshooting - Working offline selects recordings by parsing BIDS filenames and directory structure. Some DB-only fields are unavailable; entity filters (subject, session, task, run) usually suffice. - If you encounter issues, please open a GitHub issue so we can discuss. **Total running time of the script:** (0 minutes 9.670 seconds) # Computation times **00:31.978** total execution time for 1 file **from generated/auto_examples/hpc**: | Example | Time | Mem (MB) | |------------------------------------------------------------------------------------------------------------------------------------|-----------|------------| | [Eyes Open vs. Closed Classification](tutorial_eoec.md#sphx-glr-generated-auto-examples-hpc-tutorial-eoec-py) (`tutorial_eoec.py`) | 00:31.978 | 0 | # Eyes Open vs. Closed Classification EEGDash example for eyes open vs. closed classification. 
CHANGES: - Uses the EEGDash API (no local OpenNeuro mirror required) - Multi-subject run: auto-discover subjects from API and use ~10 valid subjects - Skip subjects that produce empty EEGDashDataset (no recordings) - Skip subjects that produce 0 windows after preprocessing/windowing - Subject-wise train/test split (no leakage) - Robust windowing: one 2s window per event (avoids braindecode trial overlap errors) - Save plot to file (no GUI needed on compute nodes) ```none /home/runner/work/EEGDash/EEGDash/.venv/lib/python3.11/site-packages/braindecode/preprocessing/preprocess.py:77: UserWarning: apply_on_array can only be True if fn is a callable function. Automatically correcting to apply_on_array=False. warn( Discovered subjects (first 20): ['NDARAC589YMB', 'NDARAC853CR6', 'NDARAE710YWG', 'NDARAH239PGG', 'NDARAL897CYV', 'NDARAN160GUF', 'NDARAP049KXJ', 'NDARAP457WB5', 'NDARAU939WUK', 'NDARAW216PM7', 'NDARAW298ZA9', 'NDARAX075WL9', 'NDARAX722PKY', 'NDARAZ068TNJ', 'NDARBA004KBT', 'NDARBD328NUQ', 'NDARBD992CH7', 'NDARBE719PMB', 'NDARBF042LDM', 'NDARBH019KPD'] === Subject NDARAC589YMB === ╭────────────────────── EEG 2025 Competition Data Notice ──────────────────────╮ │ This notice is only for users who are participating in the EEG 2025 │ │ Competition. │ │ │ │ EEG 2025 Competition Data Notice! │ │ You are loading one of the datasets that is used in competition, but via │ │ `EEGDashDataset`. │ │ │ │ IMPORTANT: │ │ If you download data from `EEGDashDataset`, it is NOT identical to the │ │ official │ │ competition data, which is accessed via `EEGChallengeDataset`. The │ │ competition data has been downsampled and filtered. │ │ │ │ If you are participating in the competition, │ │ you must use the `EEGChallengeDataset` object to ensure consistency. │ │ │ │ If you are not participating in the competition, you can ignore this │ │ message. 
│ ╰─────────────────────────── Source: EEGDashDataset ───────────────────────────╯ Downloading sub-NDARAC589YMB_task-RestingState_channels.tsv: 0%| | 0.00/1.00 [00:00 ```Python from pathlib import Path import os import warnings import numpy as np import torch warnings.simplefilter("ignore", category=RuntimeWarning) os.environ.setdefault("NUMBA_DISABLE_JIT", "1") os.environ.setdefault("MNE_USE_NUMBA", "false") os.environ.setdefault("_MNE_FAKE_HOME_DIR", str(Path.cwd())) (Path(os.environ["_MNE_FAKE_HOME_DIR"]) / ".mne").mkdir(exist_ok=True) from eegdash import EEGDash, EEGDashDataset from eegdash.paths import get_default_cache_dir from braindecode.preprocessing import ( preprocess, Preprocessor, create_windows_from_events, ) from eegdash.hbn.preprocessing import hbn_ec_ec_reannotation # ----------------------------- # Config # ----------------------------- cache_folder = get_default_cache_dir() cache_folder.mkdir(parents=True, exist_ok=True) dataset_id = "ds005514" task = "RestingState" # number of *valid* subjects to use num_subjects = int(os.environ.get("NUM_SUBJECTS", "10")) num_test_subjects = int(os.environ.get("NUM_TEST_SUBJECTS", "2")) random_state = int(os.environ.get("SEED", "42")) # 2 seconds at 128 Hz window_size_samples = 256 # training params epochs = int(os.environ.get("EPOCHS", "6")) batch_size = int(os.environ.get("BATCH_SIZE", "32")) # ----------------------------- # Preprocessors # ----------------------------- preprocessors = [ hbn_ec_ec_reannotation(), Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] # ----------------------------- # Build multi-subject windows (skip empties) # ----------------------------- eegdash = EEGDash() records = eegdash.find(dataset=dataset_id, task=task, limit=500) 
subjects_all = sorted({rec.get("subject") for rec in records if rec.get("subject")}) if len(subjects_all) == 0: raise RuntimeError( f"No subjects returned by API for dataset={dataset_id}, task={task}" ) print("Discovered subjects (first 20):", subjects_all[:20]) all_windows = [] all_subject_ids = [] valid_subjects = [] for subj in subjects_all: if len(valid_subjects) >= num_subjects: break print(f"\n=== Subject {subj} ===") try: ds_eoec = EEGDashDataset( dataset=dataset_id, task=task, subject=subj, cache_dir=cache_folder, ) except AssertionError as e: # This happens when EEGDashDataset finds 0 recordings (empty iterable) print(f"[SKIP] EEGDashDataset empty for subject {subj}: {e}") continue except Exception as e: print( f"[SKIP] Failed to construct EEGDashDataset for subject {subj}: {type(e).__name__}: {e}" ) continue try: preprocess(ds_eoec, preprocessors) windows_ds = create_windows_from_events( ds_eoec, trial_start_offset_samples=0, trial_stop_offset_samples=window_size_samples, # one 2s window per event preload=True, ) except Exception as e: print( f"[SKIP] Preprocess/windowing failed for subject {subj}: {type(e).__name__}: {e}" ) continue n_win = len(windows_ds) if n_win == 0: print(f"[SKIP] 0 windows for subject {subj}") continue print("Windows for subject:", n_win) all_windows.append(windows_ds) all_subject_ids.extend([subj] * n_win) valid_subjects.append(subj) if len(valid_subjects) < 2: raise RuntimeError( f"Only {len(valid_subjects)} valid subject(s) collected; need >=2." 
) if num_test_subjects >= len(valid_subjects): raise ValueError("NUM_TEST_SUBJECTS must be < number of valid subjects found.") print("\nUsing valid subjects:", valid_subjects) print("Total subjects requested:", num_subjects, " | collected:", len(valid_subjects)) # Concatenate from braindecode.datasets import BaseConcatDataset concat_ds = BaseConcatDataset(all_windows) print("Total windows across valid subjects:", len(concat_ds)) # ----------------------------- # Save a sanity plot (no GUI) # ----------------------------- import matplotlib matplotlib.use("Agg") import matplotlib.pyplot as plt if len(concat_ds) > 2: plt.figure() plt.plot(concat_ds[2][0][0, :].transpose()) plt.savefig("sample_epoch.png", dpi=150, bbox_inches="tight") print("Saved plot to sample_epoch.png") # ----------------------------- # Subject-wise train/test split # ----------------------------- rng = np.random.RandomState(random_state) subjects_shuffled = valid_subjects.copy() rng.shuffle(subjects_shuffled) test_subjects = set(subjects_shuffled[:num_test_subjects]) train_subjects = set(subjects_shuffled[num_test_subjects:]) print("\nTrain subjects:", sorted(train_subjects)) print("Test subjects :", sorted(test_subjects)) indices = np.arange(len(concat_ds)) subj_arr = np.array(all_subject_ids) train_indices = indices[np.isin(subj_arr, list(train_subjects))] test_indices = indices[np.isin(subj_arr, list(test_subjects))] print("Train windows:", len(train_indices), "Test windows:", len(test_indices)) # ----------------------------- # Tensors + loaders # ----------------------------- # ----------------------------- torch.manual_seed(random_state) np.random.seed(random_state) X_train = torch.FloatTensor(np.array([concat_ds[i][0] for i in train_indices])) X_test = torch.FloatTensor(np.array([concat_ds[i][0] for i in test_indices])) y_train = torch.LongTensor(np.array([concat_ds[i][1] for i in train_indices])) y_test = torch.LongTensor(np.array([concat_ds[i][1] for i in test_indices])) from 
torch.utils.data import DataLoader, TensorDataset dataset_train = TensorDataset(X_train, y_train) dataset_test = TensorDataset(X_test, y_test) train_loader = DataLoader(dataset_train, batch_size=batch_size, shuffle=True) test_loader = DataLoader(dataset_test, batch_size=batch_size, shuffle=False) print( f"X_train {X_train.shape} | Train batches: {len(train_loader)} | Test batches: {len(test_loader)}" ) print( f"Label balance train: {float(y_train.float().mean()):.2f} | test: {float(y_test.float().mean()):.2f}" ) # ----------------------------- # Model # ----------------------------- from torch.nn import functional as F from braindecode.models import ShallowFBCSPNet from torchinfo import summary model = ShallowFBCSPNet(24, 2, n_times=256, final_conv_length="auto") summary(model, input_size=(1, 24, 256)) # ----------------------------- # Train # ----------------------------- optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.001) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device=device) print("Using epochs =", epochs, "| device =", device, "| batch_size =", batch_size) def normalize_data(x): mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) + 1e-7 x = (x - mean) / std return x.to(device=device, dtype=torch.float32) for e in range(epochs): model.train() correct_train = 0.0 for x, y in train_loader: scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() scheduler.step() preds = scores.argmax(dim=1) correct_train += (preds == y).sum().item() model.eval() correct_test = 0.0 with torch.no_grad(): for x, y in test_loader: scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) preds = scores.argmax(dim=1) correct_test += (preds == y).sum().item() train_acc = correct_train / 
len(dataset_train) test_acc = correct_test / len(dataset_test) print(f"Epoch {e}, Train accuracy: {train_acc:.2f}, Test accuracy: {test_acc:.2f}") ``` **Total running time of the script:** (0 minutes 31.978 seconds) # Tutorials! More tutorials are on the way! In the meantime, check out the EEG2025 Competition lessons and our EEGDash basics guide.
# EEG Dash Playing with eegdash!
EEG P3 Transfer Learning with AS-MMD
Eyes Open vs. Closed Classification
EEGDash Feature Extractor
Minimal Tutorial
# Dev scripts for EEGDash
Exploring Braindecode's BIDSDataset
# General tutorials
Age Prediction from EEG
Oddball Classification
EEG Features for Sex Classification
Eyes Open vs. Closed Features
P3 Visual Oddball Classification
Predicting p-factor from EEG
P-Factor Regression Tutorial
Sex Classification Tutorial
Clinical Dataset Summary
EEGDash API Tutorial
Transfer Learning with EEGDash
# EEG 2025 Foundation Challenge

1. Cross-Task Transfer Learning: developing models that can effectively transfer knowledge from passive EEG tasks to active tasks
2. Subject-Invariant Representation: creating robust representations that generalize across different subjects while predicting clinical factors
Challenge 1: Cross-Task Transfer Learning!
Challenge 2: Predicting the p-factor from EEG
Working Offline with EEGDash
# HPC tutorials
Eyes Open vs. Closed Classification
# Age Prediction from EEG **Objective**: Learn how to predict a continuous variable (Subject Age) from raw EEG data using a Convolutional Neural Network (Conformer). **What you will learn**: 1. **Data Retrieval**: How to fetch specific datasets (e.g., Healthy Brain Network) using EEGDash. 2. **Preprocessing**: Applying standard EEG cleaning techniques (filtering, resampling) with BrainDecode. 3. **Windowing**: cutting continuous EEG into fixed-length training windows. 4. **Modeling**: Training a Conformer model (Transformer-based) using PyTorch. 5. **Interpretation**: Visualizing training progress (MAE/RMSE). ```Python from pathlib import Path import os import numpy as np import torch import torch.nn.functional as F from torch.utils.data import DataLoader from braindecode.datautil import load_concat_dataset from braindecode.models import EEGConformer from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from eegdash import EEGDash, EEGDashDataset ``` ## Configuration & Setup We start by defining our hyperparameters and caching paths. Using a centralized `CACHE_DIR` ensures we don’t re-download data unnecessarily.
```Python # ============================================================================ # Configuration # ============================================================================ CACHE_DIR_BASE = Path( os.getenv("EEGDASH_CACHE_DIR", Path.cwd() / "eegdash_cache") ).resolve() CACHE_DIR_BASE.mkdir(parents=True, exist_ok=True) DATASET_NAME = "ds005505" TARGET_NAME = "age" CACHE_DIR = CACHE_DIR_BASE / f"reg_{DATASET_NAME}_all_{TARGET_NAME}" SFREQ = 100 BATCH_SIZE = 64 LEARNING_RATE = 0.00002 WEIGHT_DECAY = 1e-2 NUM_EPOCHS = 5 RANDOM_SEED = 41 RECORD_LIMIT = 60 # Set random seeds for reproducibility torch.manual_seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) ``` ## Data Preparation We need to fetch metadata from the EEGDash API and then download the corresponding raw files. The `EEGDash` client handles the metadata query, allowing us to find subjects with valid “age” labels. #### NOTE For this tutorial, we limit the dataset to just 10 subjects to ensure quick execution. In a real scenario, you would use the full dataset. #### NOTE The `PREPARE_DATA` flag is a safety switch. In a real workflow, you usually process raw data once and then load the processed windows from disk for all subsequent experiments. 
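Stripped of the EEG specifics, the `PREPARE_DATA` pattern described above is just compute-once, cache-to-disk, load-thereafter. A minimal sketch of that pattern with a made-up payload (the file name and dictionary contents are illustrative, not part of EEGDash):

```python
import json
import tempfile
from pathlib import Path

def load_or_prepare(cache_dir: Path, prepare):
    """Run `prepare()` once, cache its result as JSON, reuse it afterwards."""
    cache_file = cache_dir / "windows_meta.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())
    result = prepare()
    cache_dir.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps(result))
    return result

calls = []

def expensive_prepare():
    calls.append(1)  # track how often the costly step actually runs
    return {"n_windows": 1200, "sfreq": 128}

with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp) / "reg_ds005505_all_age"
    first = load_or_prepare(cache, expensive_prepare)
    second = load_or_prepare(cache, expensive_prepare)  # served from disk
    assert first == second and len(calls) == 1
```

The real workflow does the same with `windows_ds.save(...)` and `load_concat_dataset(...)` instead of JSON.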
```Python PREPARE_DATA = True # Set to True to prepare data from scratch if PREPARE_DATA or not CACHE_DIR.exists(): eegdash = EEGDash() from braindecode.preprocessing import ( Preprocessor, create_fixed_length_windows, preprocess, ) from braindecode.datasets.base import BaseConcatDataset print(f"Preparing data for {DATASET_NAME} - {TARGET_NAME}...") # Load raw dataset from API records, requesting age ds_data = EEGDashDataset( dataset=DATASET_NAME, cache_dir=CACHE_DIR_BASE, description_fields=["subject", "session", "run", "task", "age", "sex"], ) # Filter subjects: remove problematic subjects sub_rm = { "NDARWV769JM7", "NDARME789TD2", "NDARUA442ZVF", "NDARJP304NK1", "NDARTY128YLU", "NDARDW550GU6", "NDARLD243KRE", "NDARUJ292JXV", "NDARBA381JGH", "041", } filtered_datasets = [] # Reconstruct datasets with valid description for ds in ds_data.datasets: subj = ds.description.get("subject", "") if subj is None: continue subj = str(subj).replace("sub-", "") # Check exclusion list if subj in sub_rm: continue # Check age validity age_val = ds.description.get("age") if age_val is None: continue try: age = float(age_val) except (ValueError, TypeError): continue if np.isnan(age): continue # Update description with clean values ds.description["age"] = age ds.description["subject"] = subj # Check data is not empty if len(ds) == 0: continue # Data quality checks (moved inside loop to ensure we get 10 VALID subjects) # Note: accessing ds.raw triggers download if not cached try: if ds.raw.n_times < 4 * SFREQ: print(f"Skipping {subj}: duration {ds.raw.n_times / SFREQ:.2f}s < 4s") continue if len(ds.raw.ch_names) != 64: print(f"Skipping {subj}: channel count {len(ds.raw.ch_names)} != 64") continue except Exception as e: print(f"Skipping {subj}: failed to load raw data ({e})") continue filtered_datasets.append(ds) # LIMIT FOR TUTORIAL: Stop after 10 valid subjects if len(filtered_datasets) >= 10: print("Reached limit of 10 valid subjects for tutorial demonstration.") break if 
len(filtered_datasets) == 0: raise RuntimeError( "No valid datasets found (checked for metadata and data quality)." ) all_datasets = BaseConcatDataset(filtered_datasets) if len(all_datasets.datasets) == 0: raise RuntimeError("No datasets remaining after quality checks.") # Define preprocessing pipeline - select a subset of standard 10-20 channels # We downsample to 128Hz to reduce computational load while keeping relevant brain frequencies. # We filter between 1-55Hz to remove DC drift (<1Hz) and line noise/high-freq artifacts (>55Hz). ch_names = [ "Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2", "F7", "F8", "T7", "T8", "P7", "P8", "Fz", "Cz", "Pz", "Oz", "FC1", "FC2", "CP1", "CP2", ] preprocessors = [ Preprocessor("pick_channels", ch_names=ch_names), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55, picks=ch_names), ] # Apply preprocessing print(f"Preprocessing {len(all_datasets.datasets)} datasets...") preprocess(all_datasets, preprocessors, n_jobs=16) print("Preprocessing completed!") # Create fixed-length windows windows_ds = create_fixed_length_windows( all_datasets, start_offset_samples=0, stop_offset_samples=None, window_size_samples=256, window_stride_samples=256, drop_last_window=True, preload=False, ) for ds in windows_ds.datasets: ds.target_name = "age" # Save processed data os.makedirs(CACHE_DIR, exist_ok=True) windows_ds.save(str(CACHE_DIR), overwrite=True) print(f"Data saved to {CACHE_DIR}") # ============================================================================ # Load Dataset # ============================================================================ print(f"Loading data from {CACHE_DIR}...") windows_ds = load_concat_dataset(path=str(CACHE_DIR), preload=False) print(f"Loaded {len(windows_ds.datasets)} subjects, {len(windows_ds)} windows total") def _to_float(val): try: return float(val) except (TypeError, ValueError): return np.nan windows_ds.description["age"] = 
windows_ds.description["age"].apply(_to_float) for ds in windows_ds.datasets: ds.target_name = "age" # ============================================================================ # Train/Validation Split (80/20) # ============================================================================ unique_subjects = np.unique(windows_ds.description["subject"]) train_subj, val_subj = train_test_split( unique_subjects, train_size=0.8, random_state=RANDOM_SEED ) # Filter valid age values and create train/val datasets train_ds = [ ds for ds in windows_ds.datasets if ds.description.subject in train_subj and ds.description.age is not None and float(ds.description.age) > 0.5 ] val_ds = [ ds for ds in windows_ds.datasets if ds.description.subject in val_subj and ds.description.age is not None and float(ds.description.age) > 0.5 ] from braindecode.datasets.base import BaseConcatDataset train_ds = BaseConcatDataset(train_ds) val_ds = BaseConcatDataset(val_ds) train_loader = DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True, num_workers=0) val_loader = DataLoader(val_ds, batch_size=BATCH_SIZE, num_workers=0) print(f"Train: {len(train_ds)} windows, Val: {len(val_ds)} windows") ``` ## Model Architecture: EEGConformer We use the **Conformer** architecture, which combines Convolutional Neural Networks (CNNs) for local feature extraction with Transformers for capturing long-range global dependencies. * `n_times=256`: Matches our window length (2 seconds @ 128Hz) * `n_outputs=1`: We are doing regression (predicting a single float value: age). 
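The next block also reports naive baselines: the median is the constant prediction that minimizes MAE, and the mean is the constant that minimizes RMSE. That fact can be checked with the standard library (the ages below are made up):

```python
import statistics

ages = [6.2, 7.8, 9.1, 10.4, 11.0, 12.5, 14.3, 15.8]  # made-up subject ages

def mae(const, ys):
    return sum(abs(y - const) for y in ys) / len(ys)

def rmse(const, ys):
    return (sum((y - const) ** 2 for y in ys) / len(ys)) ** 0.5

med, mean = statistics.median(ages), statistics.fmean(ages)

# no other constant beats the median on MAE, or the mean on RMSE
for c in (med - 1, med + 1, mean - 1, mean + 1, med, mean):
    assert mae(med, ages) <= mae(c, ages) + 1e-12
    assert rmse(mean, ages) <= rmse(c, ages) + 1e-12
print(f"median={med}, MAE={mae(med, ages):.3f} | mean={mean}, RMSE={rmse(mean, ages):.3f}")
```

A model that cannot beat these baselines has learned nothing age-related from the EEG.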
```Python # ============================================================================ # Initialize Model (EEGConformerSimplified) # ============================================================================ device = torch.device( "mps" if torch.backends.mps.is_available() else "cuda" if torch.cuda.is_available() else "cpu" ) print(f"Using device: {device}") model = EEGConformer( n_chans=24, n_outputs=1, n_times=256, sfreq=128, drop_prob=0.7, n_filters_time=32, filter_time_length=20, num_layers=4, num_heads=8, pool_time_stride=12, pool_time_length=64, ).to(device) optimizer = torch.optim.AdamW( model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY ) # ============================================================================ # Helper Function: Normalize EEG Data # ============================================================================ def normalize_data(x): x = x.reshape(x.shape[0], 24, 256).float() mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) + 1e-7 return (x - mean) / std # ============================================================================ # Calculate Baseline Metrics # ============================================================================ # Collect all training and validation ages for baseline calculation train_ages = np.array([ds.description.age for ds in train_ds.datasets]) val_ages = np.array([ds.description.age for ds in val_ds.datasets]) # Baseline MAE: predict median age for all samples train_median = np.median(train_ages) baseline_train_mae = np.mean(np.abs(train_ages - train_median)) baseline_val_mae = np.mean(np.abs(val_ages - train_median)) # Baseline RMSE: predict mean age for all samples train_mean = np.mean(train_ages) baseline_train_rmse = np.sqrt(np.mean((train_ages - train_mean) ** 2)) baseline_val_rmse = np.sqrt(np.mean((val_ages - train_mean) ** 2)) print( f"Baseline (predict median={train_median:.1f}): Train MAE={baseline_train_mae:.4f}, Val MAE={baseline_val_mae:.4f}" ) print( 
f"Baseline (predict mean={train_mean:.1f}): Train RMSE={baseline_train_rmse:.4f}, Val RMSE={baseline_val_rmse:.4f}" ) # ============================================================================ # Training Loop # ============================================================================ history = {"train_loss": [], "val_loss": [], "train_rmse": [], "val_rmse": []} for epoch in range(NUM_EPOCHS): # --- Training Phase --- model.train() train_mae_sum, train_mse_sum, train_count = 0.0, 0.0, 0 for x, y, _ in train_loader: x = normalize_data(x).to(device, dtype=torch.float32) y = y.to(device, dtype=torch.float32) optimizer.zero_grad() preds = model(x).squeeze() loss = F.l1_loss(preds, y) loss.backward() optimizer.step() train_mae_sum += loss.item() * len(y) train_mse_sum += F.mse_loss(preds, y).item() * len(y) train_count += len(y) train_mae = train_mae_sum / train_count train_rmse = np.sqrt(train_mse_sum / train_count) # --- Validation Phase --- model.eval() val_mae_sum, val_mse_sum, val_count = 0.0, 0.0, 0 with torch.no_grad(): for x, y, _ in val_loader: x = normalize_data(x).to(device, dtype=torch.float32) y = y.to(device, dtype=torch.float32) preds = model(x).squeeze() val_mae_sum += F.l1_loss(preds, y).item() * len(y) val_mse_sum += F.mse_loss(preds, y).item() * len(y) val_count += len(y) val_mae = val_mae_sum / val_count val_rmse = np.sqrt(val_mse_sum / val_count) # Store history history["train_loss"].append(train_mae) history["val_loss"].append(val_mae) history["train_rmse"].append(train_rmse) history["val_rmse"].append(val_rmse) print( f"Epoch {epoch + 1}/{NUM_EPOCHS} | Train MAE: {train_mae:.4f} RMSE: {train_rmse:.4f} | Val MAE: {val_mae:.4f} RMSE: {val_rmse:.4f}" ) # ============================================================================ # Plot Results # ============================================================================ fig, axes = plt.subplots(1, 2, figsize=(12, 4)) # Plot 1: Train vs Val Loss (MAE) axes[0].plot(history["train_loss"], 
label="Train MAE", marker="o") axes[0].plot(history["val_loss"], label="Val MAE", marker="s") axes[0].axhline( y=baseline_train_mae, color="blue", linestyle="--", alpha=0.5, label=f"Train Baseline (median) = {baseline_train_mae:.2f}", ) axes[0].axhline( y=baseline_val_mae, color="orange", linestyle="--", alpha=0.5, label=f"Val Baseline (median) = {baseline_val_mae:.2f}", ) axes[0].set_xlabel("Epoch") axes[0].set_ylabel("MAE") axes[0].set_title("Train vs Validation Loss (MAE)") axes[0].legend() axes[0].grid(True) # Plot 2: Train vs Val RMSE axes[1].plot(history["train_rmse"], label="Train RMSE", marker="o") axes[1].plot(history["val_rmse"], label="Val RMSE", marker="s") axes[1].axhline( y=baseline_train_rmse, color="blue", linestyle="--", alpha=0.5, label=f"Train Baseline (mean) = {baseline_train_rmse:.2f}", ) axes[1].axhline( y=baseline_val_rmse, color="orange", linestyle="--", alpha=0.5, label=f"Val Baseline (mean) = {baseline_val_rmse:.2f}", ) axes[1].set_xlabel("Epoch") axes[1].set_ylabel("RMSE") axes[1].set_title("Train vs Validation RMSE") axes[1].legend() axes[1].grid(True) plt.tight_layout() plt.savefig("training_results.png", dpi=150) print("\nTraining complete! Results saved to 'training_results.png'") plt.show() ``` # Oddball Classification This tutorial demonstrates using the *EEGDash* library with PyTorch to classify EEG responses in an oddball paradigm. 1. **Data Description**: Dataset contains EEG recordings during an oddball task with two stimulus types: - Standard (non-target) - Oddball (target) 2. **Data Preprocessing**: - Applies bandpass filtering (1-55 Hz) - Selects all 64 EEG channels - Creates event-based windows - Processes data in batches for memory efficiency 3. **Dataset Preparation**: - Remaps events into two classes: oddball, standard - Splits into training (80%) and test (20%) sets - Creates PyTorch DataLoaders 4. **Model**: - ShallowFBCSPNet architecture - 64 input channels, 2 output classes - 256-sample input windows 5. 
**Training**: - Adamax optimizer with learning rate decay - A few training epochs (configurable) - Reports accuracy on train and test sets ## Data Retrieval Using EEGDash Data is retrieved via the EEGDash API; use EEGDASH_DATASET_ID/EEGDASH_TASK to override the defaults. ```Python from pathlib import Path import os from eegdash import EEGDash, EEGDashDataset from eegdash.paths import get_default_cache_dir CACHE_DIR = Path(get_default_cache_dir()).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) DATASET_ID = "ds005863" TASK = "visualoddball" RECORD_LIMIT = 20 eegdash = EEGDash() records = eegdash.find({"dataset": DATASET_ID, "task": TASK}, limit=RECORD_LIMIT) if not records: records = eegdash.find( {"task": {"$regex": "oddball", "$options": "i"}}, limit=RECORD_LIMIT ) if records: dataset_id = records[0].get("dataset") if dataset_id: records = [rec for rec in records if rec.get("dataset") == dataset_id] if not records: raise RuntimeError("No oddball task records found from the API.") dataset_concat = EEGDashDataset(cache_dir=CACHE_DIR, records=records) dataset_concat.download_all() ``` ## Data Preprocessing Using Braindecode [Braindecode](https://braindecode.org/) provides a powerful framework for EEG data preprocessing and analysis. We apply three preprocessing steps in Braindecode: selecting the first 64 EEG channels, resampling to 128 Hz, and bandpass filtering between 1 Hz and 55 Hz. When calling the **preprocess** function, the data is retrieved from the files. Finally, we use **create_windows_from_events** to extract windows centered on events (-128 to +128 samples around each event), remapping the event markers to two classes: 3, 4 → oddball (1) and 6, 7 → standard (0).
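The event-locked windowing just described is simple index arithmetic; a small sketch (the event onsets are made up) shows how each marker maps to a 256-sample window:

```python
PRE, POST = -128, 128  # samples relative to each event onset, as in this tutorial

def event_window(onset_sample: int):
    """Half-open [start, stop) sample window around one event marker."""
    return onset_sample + PRE, onset_sample + POST

events = [512, 1024, 2048]  # made-up event onsets, in samples
windows = [event_window(s) for s in events]

for (start, stop), onset in zip(windows, events):
    assert stop - start == 256    # 2 s of data at 128 Hz
    assert start <= onset < stop  # the onset falls inside its window
print(windows)  # → [(384, 640), (896, 1152), (1920, 2176)]
```

Note that the real implementation must also handle windows that would run past the recording boundaries, which this sketch omits.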
```Python import logging import warnings import numpy as np import mne ``` ```Python from braindecode.preprocessing import ( Preprocessor, create_windows_from_events, preprocess, ) mne.set_log_level("ERROR") logging.getLogger("joblib").setLevel(logging.ERROR) warnings.filterwarnings("ignore") # BrainDecode preprocessors preprocessors = [ Preprocessor( "pick_channels", ch_names=dataset_concat.datasets[0].raw.ch_names[:64] ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(dataset_concat, preprocessors) # Extract windows event_mapping = { "Target": 1, "NonTarget": 0, "target": 1, "standard": 0, "oddball": 1, "3": 1, "4": 1, "6": 0, "7": 0, } windows_ds = create_windows_from_events( dataset_concat, trial_start_offset_samples=-128, trial_stop_offset_samples=128, preload=False, drop_bad_windows=True, mapping=event_mapping, ) print(f"\nAll files processed, total number of windows: {len(windows_ds)}") print(f"Window shape: {windows_ds[0][0].shape}") ``` ## Creating training and test sets The data preparation pipeline consists of these key steps: 1. **Dataset Creation** - The processed windows are automatically labeled (1=oddball, 0=standard) via the `mapping` passed to **create_windows_from_events**. 2. **Train-Test Split** - Using sklearn’s train_test_split with an 80-20 split and stratified sampling. 3. **PyTorch Data Preparation** - Converting to tensors and creating DataLoader objects for mini-batch training.
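Stratified sampling, as used in step 2, keeps the label ratio (nearly) identical in both splits even when classes are imbalanced. A standard-library sketch of the idea, on toy labels rather than the tutorial's windows:

```python
import random

random.seed(42)
labels = [0] * 40 + [1] * 10  # imbalanced toy labels

def stratified_split(labels, test_frac=0.2):
    """80/20 split that preserves the label ratio on each side."""
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        random.shuffle(idxs)
        n_test = round(len(idxs) * test_frac)
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

train_idx, test_idx = stratified_split(labels)
train_ratio = sum(labels[i] for i in train_idx) / len(train_idx)
test_ratio = sum(labels[i] for i in test_idx) / len(test_idx)
print(f"train ratio: {train_ratio:.2f}, test ratio: {test_ratio:.2f}")
```

sklearn's `train_test_split(..., stratify=labels)` does the same bookkeeping internally.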
```Python import torch import torch.nn.functional as F from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader, TensorDataset # Set random seed for reproducibility random_state = 42 torch.manual_seed(random_state) np.random.seed(random_state) # Extract data and labels using array operations data = np.stack([windows_ds[i][0] for i in range(len(windows_ds))]) labels = np.array([windows_ds[i][1] for i in range(len(windows_ds))]) # Print dataset information print(f"Dataset size: {len(data)}") print(f"Data shape: {data.shape}") print("Distribution of labels:", np.unique(labels, return_counts=True)) print("Label meanings: 0=standard, 1=oddball") # Split into train and test sets train_indices, test_indices = train_test_split( range(len(data)), test_size=0.2, stratify=labels, random_state=random_state ) # Convert to PyTorch tensors X_train = torch.FloatTensor(data[train_indices]) X_test = torch.FloatTensor(data[test_indices]) y_train = torch.LongTensor(labels[train_indices]) y_test = torch.LongTensor(labels[test_indices]) # Create data loaders (no need to shuffle the held-out test set) dataset_train = TensorDataset(X_train, y_train) dataset_test = TensorDataset(X_test, y_test) train_loader = DataLoader(dataset_train, batch_size=10, shuffle=True) test_loader = DataLoader(dataset_test, batch_size=10, shuffle=False) # Print dataset information print("\nDataset size:") print(f"Training set: {X_train.shape}, labels: {y_train.shape}") print(f"Test set: {X_test.shape}, labels: {y_test.shape}") print("\nProportion of samples of each class in training set:") for label in np.unique(labels): ratio = np.mean(y_train.numpy() == label) print(f"Category {label}: {ratio:.3f}") ``` # Create model The model is a shallow convolutional neural network (ShallowFBCSPNet) with 64 input channels (EEG channels), 2 output classes (standard, oddball), and an input window size of 256 samples (2 seconds of EEG data at 128 Hz).
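The training loop further on standardizes each window per channel over the time axis before the forward pass. A numpy sketch of that z-scoring, with random data standing in for EEG windows:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(10, 64, 256))  # (batch, channels, time)

def normalize(x, eps=1e-7):
    """Z-score every channel of every window over the time axis."""
    mean = x.mean(axis=2, keepdims=True)
    std = x.std(axis=2, keepdims=True) + eps
    return (x - mean) / std

xn = normalize(x)
print(xn.shape, float(xn.mean()), float(xn.std()))
```

After normalization, every channel of every window has mean ≈ 0 and standard deviation ≈ 1, so channels with large raw amplitudes cannot dominate the loss.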
```Python from torchinfo import summary ``` ```Python from braindecode.models import ShallowFBCSPNet model = ShallowFBCSPNet( in_chans=64, n_classes=2, input_window_samples=256, final_conv_length="auto" ) summary(model, input_size=(1, 64, 256)) ``` ## Model Training and Evaluation Process The training and evaluation pipeline runs for 5 epochs using Adamax optimization. Key components include: 1. **Hardware Setup** - Model allocation to CPU/GPU for optimal computation. 2. **Data Processing** - Channel-wise normalization of input data using mean and standard deviation. 3. **Training Process** - Each epoch performs forward passes, computes cross-entropy loss, updates parameters, and tracks accuracy. 4. **Evaluation** - Model performance is assessed on the test set after each training epoch. The process monitors both training and test accuracy to track model learning progress. Set up device, optimizer, and learning rate scheduler ```Python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) optimizer = torch.optim.Adamax(model.parameters(), lr=0.001, weight_decay=0.0005) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1) def normalize_data(x): mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) + 1e-7 x = (x - mean) / std x = x.to(device=device, dtype=torch.float32) return x print("\nStart training...") epochs = int(os.getenv("EEGDASH_EPOCHS", "2")) for e in range(epochs): model.train() correct_train = 0 for t, (x, y) in enumerate(train_loader): scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_train += (preds == y).sum() / len(dataset_train) loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() scheduler.step() model.eval() correct_test = 0 with torch.no_grad(): for t, (x, y) in enumerate(test_loader): scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = 
scores.max(1) correct_test += (preds == y).sum() / len(dataset_test) print( f"epoch {e + 1}, training accuracy: {correct_train:.3f}, test accuracy: {correct_test:.3f}" ) ``` # EEG Features for Sex Classification The code below provides an example of using the *EEGDash* library in combination with PyTorch to develop a machine learning model that classifies subject sex from resting-state EEG. 1. **Data Retrieval Using EEGDash**: An instance of *EEGDashDataset* is created to search and retrieve resting state data for 136 subjects (dataset ds005505). At this step, only the metadata is transferred. 2. **Data Preprocessing Using BrainDecode**: This step preprocesses the EEG data using Braindecode by selecting specific channels, resampling, filtering, and extracting 2-second epochs. This takes about 2 minutes. 3. **Creating training and test sets**: The dataset is split into training (90%) and testing (10%) sets at the subject level with balanced labels (ensuring as many male as female subjects), converted into PyTorch tensors, and wrapped in DataLoader objects for efficient mini-batch training. 4. **Model Definition**: Two models are trained on the extracted features: a gradient-boosted tree classifier (LightGBM) and a small multilayer perceptron with 2 output classes (male and female). 5. **Model Training and Evaluation Process**: This section trains the models, normalizes input features, computes cross-entropy loss, updates model parameters, and evaluates classification accuracy over several epochs. This takes less than 10 seconds to a couple of minutes, depending on the device you use. **Optimization Note**: Using `num_workers>0` in PyTorch DataLoaders and `n_jobs=-1` for feature extraction significantly speeds up the pipeline by utilizing parallel processing.
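Splitting at the subject level, as described above, is what prevents leakage: every window of a given subject lands on exactly one side of the split. A standard-library sketch with made-up subject IDs and window counts:

```python
import random

random.seed(0)
# made-up bookkeeping: each subject contributes five 2-second windows
windows = [(f"sub-{s:03d}", w) for s in range(10) for w in range(5)]

subjects = sorted({s for s, _ in windows})
random.shuffle(subjects)
n_train = int(0.8 * len(subjects))
train_subj, test_subj = set(subjects[:n_train]), set(subjects[n_train:])

train = [w for w in windows if w[0] in train_subj]
test = [w for w in windows if w[0] in test_subj]

# no subject appears on both sides, so windows from one recording
# can never leak across the split
assert {s for s, _ in train}.isdisjoint({s for s, _ in test})
print(len(train), len(test))
```

Splitting windows directly (instead of subjects) would put windows from the same recording in both sets and inflate test accuracy.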
```Python import time from functools import wraps def timeit(func): @wraps(func) def wrapper(*args, **kwargs): start = time.time() result = func(*args, **kwargs) end = time.time() print(f"Reference '{func.__name__}' took {end - start:.2f} seconds") return result return wrapper ``` ## Data Retrieval Using EEGDash First we find one resting state dataset for a collection of subjects. ```Python from pathlib import Path import os import numpy as np from braindecode.datautil import load_concat_dataset from braindecode.preprocessing import ( preprocess, Preprocessor, create_fixed_length_windows, ) from eegdash import EEGDashDataset from eegdash.paths import get_default_cache_dir CACHE_DIR = Path(get_default_cache_dir()).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) DATASET_ID = "ds005505" TASK = "RestingState" RECORD_LIMIT = 80 PREPARED_DIR = CACHE_DIR / "restingstate_windows" # Fetch dataset directly ds_sexdata = EEGDashDataset( dataset=DATASET_ID, task=TASK, cache_dir=CACHE_DIR, description_fields=["subject", "session", "run", "task", "sex", "gender", "age"], ) # Filter datasets that have sex/gender info valid_datasets = [] from braindecode.datasets import BaseConcatDataset for ds in ds_sexdata.datasets: # Check if sex is present (populated from DB) if ds.description.get("sex") is not None: valid_datasets.append(ds) if not valid_datasets: raise RuntimeError("No records with sex/gender metadata found (API).") # Reconstitute BaseConcatDataset with valid datasets ds_sexdata = BaseConcatDataset(valid_datasets) if not PREPARED_DIR.exists(): preprocessors = [ Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(ds_sexdata, preprocessors, n_jobs=-1) windows_ds = create_fixed_length_windows( ds_sexdata, 
start_offset_samples=0, stop_offset_samples=None, window_size_samples=256, window_stride_samples=256, drop_last_window=True, preload=False, ) PREPARED_DIR.mkdir(parents=True, exist_ok=True) windows_ds.save(str(PREPARED_DIR), overwrite=True) print("Loading data from disk") windows_ds = load_concat_dataset(path=str(PREPARED_DIR), preload=False) ``` ## Feature Extraction ```Python from functools import partial ``` ```Python from eegdash import features from eegdash.features import extract_features, fit_feature_extractors sfreq = windows_ds.datasets[0].raw.info["sfreq"] def _get_filter_freqs(raw_preproc_kwargs): if isinstance(raw_preproc_kwargs, list): for item in raw_preproc_kwargs: if isinstance(item, dict) and item.get("fn") == "filter": return item.get("kwargs", {}) if hasattr(item, "fn") and getattr(item.fn, "__name__", "") == "filter": return getattr(item, "kwargs", {}) return {} return raw_preproc_kwargs.get("filter", {}) filter_freqs = _get_filter_freqs(windows_ds.datasets[0].raw_preproc_kwargs) features_dict = { "sig": features.FeatureExtractor( { "mean": features.signal_mean, "var": features.signal_variance, "std": features.signal_std, "skew": features.signal_skewness, "kurt": features.signal_kurtosis, "rms": features.signal_root_mean_square, "ptp": features.signal_peak_to_peak, "quan.1": partial(features.signal_quantile, q=0.1), "quan.9": partial(features.signal_quantile, q=0.9), "line_len": features.signal_line_length, "zero_x": features.signal_zero_crossings, "hjorth_mob": features.signal_hjorth_mobility, "hjorth_comp": features.signal_hjorth_complexity, "dcorr_t": partial(features.signal_decorrelation_time, fs=sfreq), }, ), "dim": features.FeatureExtractor( { "higuchi": partial(features.dimensionality_higuchi_fractal_dim, k_max=5), "katz": partial(features.dimensionality_katz_fractal_dim), "pet": features.dimensionality_petrosian_fractal_dim, "hurst": features.dimensionality_hurst_exp, "": features.HilbertFeatureExtractor( { "dfa": 
features.dimensionality_detrended_fluctuation_analysis, } ), }, ), "comp": features.FeatureExtractor( { "ent": features.EntropyFeatureExtractor( { "app": features.complexity_approx_entropy, "samp": features.complexity_sample_entropy, }, m=2, r=0.2, l=1, ), "ent_svd": partial(features.complexity_svd_entropy, m=20), "lzc": features.complexity_lempel_ziv, }, ), "spec": features.SpectralFeatureExtractor( { "rtot_power": features.spectral_root_total_power, "band_power": features.spectral_bands_power, "hjorth_act": features.spectral_hjorth_activity, 0: features.NormalizedSpectralFeatureExtractor( { "moment": features.spectral_moment, "entropy": features.spectral_entropy, "edge": partial(features.spectral_edge, edge=0.9), "hjorth_mob": features.spectral_hjorth_mobility, "hjorth_comp": features.spectral_hjorth_complexity, }, ), 1: features.DBSpectralFeatureExtractor( { "slope": features.spectral_slope, }, ), }, fs=sfreq, f_min=filter_freqs.get("l_freq", 1.0), f_max=filter_freqs.get("h_freq", sfreq / 2.0), nperseg=4 * sfreq, noverlap=3 * sfreq, ), "coher": features.CoherenceFeatureExtractor( { "msc": features.connectivity_magnitude_square_coherence, "imag": features.connectivity_imaginary_coherence, "lag": features.connectivity_lagged_coherence, }, fs=sfreq, f_min=filter_freqs["l_freq"], f_max=filter_freqs["h_freq"], nperseg=4 * sfreq, noverlap=3 * sfreq, ), "csp": partial(features.CommonSpatialPattern(), n_select=5), } # TODO: fit on train, extract on train/validation feature_ext = fit_feature_extractors(windows_ds, features_dict, batch_size=1024) features_ds = extract_features(windows_ds, feature_ext, batch_size=64, n_jobs=-1) ``` ```Python features_dir = CACHE_DIR / "hbn_features_restingstate" os.makedirs(features_dir, exist_ok=True) features_ds.save(str(features_dir), overwrite=True) ``` ```Python from eegdash.features import load_features_concat_dataset print("Loading features from disk") features_ds = load_features_concat_dataset(path=str(features_dir), n_jobs=-1) ``` 
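A few of the signal features registered above have simple textbook definitions. The numpy sketch below is illustrative only; it is not necessarily identical to eegdash's implementations:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)  # one 2-second channel at 128 Hz, white noise

def line_length(x):
    # total absolute first difference of the signal
    return float(np.sum(np.abs(np.diff(x))))

def zero_crossings(x):
    # number of sign changes between consecutive samples
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

def hjorth_mobility(x):
    # sqrt of the variance ratio between the derivative and the signal
    return float(np.sqrt(np.var(np.diff(x)) / np.var(x)))

print(line_length(x), zero_crossings(x), round(hjorth_mobility(x), 3))
```

For white noise, Hjorth mobility sits near √2; smoother, lower-frequency signals push it toward 0.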
```Python features_ds.to_dataframe(include_crop_inds=True) ``` ```Python features_ds.replace([-np.inf, +np.inf], np.nan) mean = features_ds.mean(n_jobs=-1) features_ds.fillna(mean) features_ds.fillna(0) features_ds.zscore(eps=1e-7, n_jobs=-1) ``` ```Python features_ds.to_dataframe(include_target=True) ``` ## Creating a Training and Test Set The code below creates a training and test set. We first split the data using the **train_test_split** function and then create a **TensorDataset** for both sets. 1. **Set Random Seed** – The random seed is fixed using torch.manual_seed(random_state) to ensure reproducibility in dataset splitting and model training. 2. **Get Balanced Indices for Male and Female Subjects** – We ensure a 50/50 split of male and female subjects in both the training and test sets. Additionally, we prevent subject leakage, meaning the same subjects do not appear in both sets. The dataset is split into training (90%) and testing (10%) subsets using train_test_split(), ensuring balanced stratification based on gender. 3. **Convert Data to PyTorch Tensors** – The selected training and testing samples are converted into FloatTensor for input features and LongTensor for labels, making them compatible with PyTorch models. 4. **Create DataLoaders** – The datasets are wrapped in PyTorch DataLoader objects with a batch size of 100, allowing efficient mini-batch training and shuffling. Although there are only 136 subjects, the dataset contains more than 10,000 2-second samples. 
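The balanced selection in step 2 simply truncates the majority class to the size of the minority class before splitting. A numpy sketch with made-up subject metadata:

```python
import numpy as np

subjects = np.array([f"sub-{i:03d}" for i in range(12)])
sex = np.array(["M"] * 8 + ["F"] * 4)  # made-up, imbalanced metadata

male, female = subjects[sex == "M"], subjects[sex == "F"]
n = min(len(male), len(female))  # keep as many males as females
balanced = np.concatenate([male[:n], female[:n]])
balanced_sex = ["M"] * n + ["F"] * n

assert len(balanced) == 2 * n
assert balanced_sex.count("M") == balanced_sex.count("F")
print(balanced)
```

With balanced classes, chance accuracy is exactly 50%, which makes the classifier's performance easy to interpret.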
```Python import numpy as np import torch from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader ``` ```Python from eegdash.features import FeaturesConcatDataset # random seed for reproducibility random_state = 0 np.random.seed(random_state) torch.manual_seed(random_state) # Get balanced indices for male and female subjects and create a balanced dataset male_subjects = features_ds.description["subject"][ features_ds.description["sex"] == "M" ] female_subjects = features_ds.description["subject"][ features_ds.description["sex"] == "F" ] n_samples = min(len(male_subjects), len(female_subjects)) balanced_subjects = np.concatenate( [male_subjects[:n_samples], female_subjects[:n_samples]] ) balanced_gender = ["M"] * n_samples + ["F"] * n_samples train_subj, val_subj, train_gender, val_gender = train_test_split( balanced_subjects, balanced_gender, train_size=0.9, stratify=balanced_gender, random_state=random_state, ) # Create datasets train_ds = FeaturesConcatDataset( [ds for ds in features_ds.datasets if ds.description.subject in train_subj] ) val_ds = FeaturesConcatDataset( [ds for ds in features_ds.datasets if ds.description.subject in val_subj] ) # Check the balance of the dataset assert len(balanced_subjects) == len(balanced_gender) print(f"Number of subjects in balanced dataset: {len(balanced_subjects)}") print( f"Gender distribution in balanced dataset: {np.unique(balanced_gender, return_counts=True)}" ) ``` ```Python from lightgbm import LGBMClassifier train_df = train_ds.to_dataframe(include_target=True) X_train, y_train = train_df.drop("target", axis=1), train_df["target"] val_df = val_ds.to_dataframe(include_target=True) X_val, y_val = val_df.drop("target", axis=1), val_df["target"] clf = LGBMClassifier(n_jobs=1) clf.fit(X_train, y_train) y_hat_train = clf.predict(X_train) correct_train = (y_train == y_hat_train).mean() y_hat_val = clf.predict(X_val) correct_val = (y_val == y_hat_val).mean() print(f"Train accuracy: 
{correct_train:.2f}, Validation accuracy: {correct_val:.2f}\n") ``` ```Python from lightgbm import plot_importance plot_importance(clf, importance_type="split", max_num_features=10) ``` ```Python plot_importance(clf, importance_type="gain", max_num_features=10) ``` Create dataloaders. Optimization: use num_workers > 0 to prefetch data in parallel. ```Python train_loader = DataLoader( train_ds, batch_size=100, shuffle=True, num_workers=2, pin_memory=True ) val_loader = DataLoader( val_ds, batch_size=100, shuffle=True, num_workers=2, pin_memory=True ) ``` # Check labels It is good practice to verify the labels and ensure the random seed is functioning correctly. If all labels are ‘M’ (male) or ‘F’ (female), it could indicate an issue with data loading or stratification, requiring further investigation. Get the first batch to check the labels. ```Python dataiter = iter(train_loader) first_item, label, _ = next(dataiter) np.array(label).T ``` # Create model The model is a small multilayer perceptron (MLP) with one input per extracted feature and 2 output classes (male vs. female). See the reference below for background on deep learning applied to EEG features. [1] Truong, D., Milham, M., Makeig, S., & Delorme, A. (2021). Deep Convolutional Neural Network Applied to Electroencephalography: Raw Data vs Spectral Features. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2021, 1039–1042.
[https://doi.org/10.1109/EMBC46164.2021.9630708](https://doi.org/10.1109/EMBC46164.2021.9630708) ```Python from torch import nn ``` Create model ```Python from torchinfo import summary # MLP model = nn.Sequential( nn.Flatten(), nn.Linear(features_ds.datasets[0].n_features, 100), nn.Linear(100, 100), nn.Linear(100, 100), nn.Linear(100, 2), ) print(summary(model, input_size=first_item.shape)) ``` # Model Training and Evaluation Process This section trains the neural network using the Adamax optimizer, computes the cross-entropy loss, updates model parameters, and tracks accuracy across two epochs. 1. **Set Up Optimizer** – The Adamax optimizer is initialized with a learning rate of 0.0005 and a weight decay of 0.001 for regularization. 2. **Allocate Model to Device** – The model is moved to the available device (CPU, CUDA GPU, or MPS on Apple silicon) for efficient computation. 3. **Normalize Input Data** – Inputs can be standardized by subtracting the mean and dividing by the standard deviation of the training set; the loop below simply casts each batch to float32 and moves it to the device. 4. **Train the Model for Two Epochs** – The training loop iterates through data batches with the model in training mode. It computes predictions, calculates the cross-entropy loss, performs backpropagation, updates model parameters, and tracks correct predictions to compute accuracy. 5. **Evaluate on Validation Data** – After each epoch, the model runs in evaluation mode on the validation set and computes accuracy by comparing predictions with the actual labels.
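When standardizing features before the forward pass, the mean and standard deviation should come from the training set only, and be reused unchanged at evaluation time. A minimal standalone sketch (tensor sizes are illustrative, not taken from the tutorial data):

```python
import torch

torch.manual_seed(0)

# Illustrative training features: (n_samples, n_features).
x_train = torch.randn(100, 24)

# Statistics computed on the training set only, reused at eval time.
x_mean = x_train.mean(dim=0, keepdim=True)
x_std = x_train.std(dim=0, keepdim=True) + 1e-7  # eps avoids division by zero

def normalize(x: torch.Tensor) -> torch.Tensor:
    # Z-score each feature with the stored training statistics.
    return (x - x_mean) / x_std

z = normalize(x_train)
```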
```Python from torch.nn import functional as F optimizer = torch.optim.Adamax(model.parameters(), lr=0.0005, weight_decay=0.001) device = torch.device( "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" ) model.to(device=device) # dictionary of genders for converting sample labels to numerical values gender_dict = {"M": 0, "F": 1} epochs = 2 for e in range(epochs): # training correct_train = 0 for t, (x, y, _) in enumerate(train_loader): model.train() # put model to training mode x = x.to(device=device, dtype=torch.float32) scores = model(x) _, preds = scores.max(1) y = torch.tensor( [gender_dict[gender] for gender in y], device=device, dtype=torch.long ) correct_train += (preds == y).sum() / len(train_ds) # Calculates the cross-entropy loss and performs backpropagation loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() if t % 50 == 0: print("Epoch %d, Iteration %d, loss = %.4f" % (e, t, loss.item())) # validation correct_test = 0 for t, (x, y, _) in enumerate(val_loader): model.eval() # put model to testing mode x = x.to(device=device, dtype=torch.float32) scores = model(x) _, preds = scores.max(1) y = torch.tensor( [gender_dict[gender] for gender in y], device=device, dtype=torch.long ) correct_test += (preds == y).sum() / len(val_ds) print( f"Epoch {e}, Train accuracy: {correct_train:.2f}, Test accuracy: {correct_test:.2f}\n" ) ``` # Eyes Open vs. Closed Features EEGDash example for eyes open vs. closed classification. The code below provides an example of using the *EEGDash* library in combination with PyTorch to develop a deep learning model for analyzing EEG data, specifically for eyes open vs. closed classification in a single subject. 1. **Data Retrieval Using EEGDash**: An instance of *EEGDashDataset* is created to search and retrieve an EEG dataset. At this step, only the metadata is transferred. 2. 
**Data Preprocessing Using Braindecode**: This process preprocesses EEG data using Braindecode by reannotating events, selecting specific channels, resampling, filtering, and extracting 5-second epochs, ensuring balanced eyes-open and eyes-closed data for analysis. 3. **Extracting EEG Features Using EEGDash.features**: Building a feature extraction tree using existing and new features. 4. **Creating training and test sets**: The dataset is split into training (80%) and testing (20%) sets with balanced labels, converted into PyTorch tensors, and wrapped in DataLoader objects for efficient mini-batch training. 5. **Model Definition**: The model is an MLP with n_features input features and 2 output classes (eyes-open and eyes-closed). 6. **Model Training and Evaluation Process**: This section trains the neural network, computes cross-entropy loss, updates model parameters, and evaluates classification accuracy over six epochs. ## Data Retrieval Using EEGDash First we find one resting-state dataset. This dataset contains both eyes-open and eyes-closed data.
```Python from pathlib import Path import os os.environ.setdefault("NUMBA_DISABLE_JIT", "1") os.environ.setdefault("_MNE_FAKE_HOME_DIR", str(Path.cwd())) (Path(os.environ["_MNE_FAKE_HOME_DIR"]) / ".mne").mkdir(exist_ok=True) os.environ.setdefault("MPLCONFIGDIR", str(Path.cwd() / ".matplotlib")) Path(os.environ["MPLCONFIGDIR"]).mkdir(exist_ok=True) from eegdash import EEGDash, EEGDashDataset CACHE_DIR = Path(os.getenv("EEGDASH_CACHE_DIR", Path.cwd() / "eegdash_cache")).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) eegdash = EEGDash() records = eegdash.find({"dataset": "ds005514", "task": "RestingState"}, limit=20) if not records: records = eegdash.find({"task": "RestingState"}, limit=20) subject_id = records[0]["subject"] if records else "NDARDB033FW5" ds_eoec = EEGDashDataset( dataset="ds005514", task="RestingState", subject=subject_id, cache_dir=CACHE_DIR, ) ``` ## Data Preprocessing Using Braindecode [Braindecode](https://braindecode.org/stable/install/install.html) is a specialized library for preprocessing EEG and MEG data. In this dataset, there are two key events in the continuous data: **instructed_toCloseEyes**, marking the start of a 40-second eyes-closed period, and **instructed_toOpenEyes**, indicating the start of a 20-second eyes-open period. For the eyes-closed event, we extract 14 seconds of data from 15 to 29 seconds after the event onset. Similarly, for the eyes-open event, we extract data from 5 to 19 seconds after the event onset. This ensures an equal amount of data for both conditions. The event extraction is handled by the custom function **hbn_ec_ec_reannotation**. Next, we apply four preprocessing steps in Braindecode: 1. **Reannotation** of event markers using hbn_ec_ec_reannotation(). 2. **Selection** of 24 specific EEG channels from the original 128. 3. **Resampling** the EEG data to a frequency of 128 Hz. 4.
**Filtering** the EEG signals to retain frequencies between 1 Hz and 55 Hz. When calling the **preprocess** function, the data is retrieved from the remote repository. Finally, we use **create_windows_from_events** to extract 5-second epochs from the data. These epochs serve as the dataset samples. At this stage, each sample is automatically labeled with the corresponding event type (eyes-open or eyes-closed). windows_ds is a PyTorch dataset, and when queried, it returns labels for eyes-open and eyes-closed (assigned as labels 0 and 1, corresponding to their respective event markers). ```Python import warnings import numpy as np ``` ```Python from braindecode.preprocessing import ( Preprocessor, create_windows_from_events, preprocess, ) from eegdash.hbn.preprocessing import hbn_ec_ec_reannotation warnings.simplefilter("ignore", category=RuntimeWarning) # BrainDecode preprocessors preprocessors = [ hbn_ec_ec_reannotation(), Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(ds_eoec, preprocessors) # Extract 5-second epochs windows_ds = create_windows_from_events( ds_eoec, trial_start_offset_samples=0, trial_stop_offset_samples=int(5 * ds_eoec.datasets[0].raw.info["sfreq"]), preload=True, ) ``` ## Plotting a Single Channel for One Sample It’s always good practice to verify that the data has been properly loaded and processed. Here, we plot a single channel from one sample to ensure the signal is present and looks as expected.
```Python import matplotlib.pyplot as plt plt.figure() plt.plot(windows_ds[2][0][0, :].transpose()) # first channel of the third window plt.show() ``` ## Features * We start by extracting the signal variance from each channel (EEG electrode) as a feature, so we get 24 features (one per channel). * The function signal_variance_feature receives a *batch* of samples, represented by a numpy array of shape (batch_size × num_channels × time_points_per_window). * The function returns a numpy array of shape (batch_size × num_channels). * To automatically match the channel name to each feature, we use the univariate_feature decorator. * Feature extraction is performed by the extract_features function, which takes a Braindecode windows dataset and a features dictionary mapping feature names to feature extraction functions. ```Python from eegdash import features from eegdash.features import extract_features @features.univariate_feature def signal_variance_feature(x): return x.var(axis=-1) features_dict = {"sig_var": signal_variance_feature} features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` Let us have a look at the feature values. In this example, the first three columns represent the window crop indices, and are optional. ```Python features_ds.to_dataframe(include_crop_inds=True) ``` * Now we add two spectral features: the root of the total power, and the power in different frequency bands. * Keyword parameters can be passed to each feature using the functools.partial function. * Multiple similar features can be returned from a feature extraction function by passing a dictionary of numpy arrays.
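The shape contract of a univariate feature described above can be checked with plain NumPy, independent of EEGDash (array sizes are arbitrary):

```python
import numpy as np

# A univariate feature maps a batch of windows with shape
# (batch_size, num_channels, time_points) to one value per channel,
# i.e. an array of shape (batch_size, num_channels).
def signal_variance(x: np.ndarray) -> np.ndarray:
    return x.var(axis=-1)

batch = np.random.default_rng(0).normal(size=(8, 24, 256))
out = signal_variance(batch)
```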
```Python from functools import partial from scipy.signal import welch sfreq = windows_ds.datasets[0].raw.info["sfreq"] @features.univariate_feature def spectral_root_total_power_feature(x, **kwargs): f, p = welch(x, **kwargs) return p.sum(axis=-1) DEFAULT_FREQ_BANDS = { "delta": (1, 4.5), "theta": (4.5, 8), "alpha": (8, 12), "beta": (12, 30), } @features.univariate_feature def spectral_power_bands_feature(x, bands=DEFAULT_FREQ_BANDS, **kwargs): f, p = welch(x, **kwargs) power_bands = dict() for band_name, band_lims in bands.items(): ind = np.logical_and(f > band_lims[0], f < band_lims[1]) power_bands[band_name] = p[..., ind].sum(axis=-1) return power_bands features_dict = { "sig_var": signal_variance_feature, "spec_rtotpow": partial(spectral_root_total_power_feature, fs=sfreq), "sig_pband": partial(spectral_power_bands_feature, fs=sfreq), } features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` Again, let us have a look at the feature values (this time without the window crop indices). ```Python features_ds.to_dataframe() ``` You might have noticed that both spectral feature extraction functions call the welch function with the exact same parameters, so the computation happens twice. As we add more spectral features, this repeated computation will slow down feature extraction. This can be solved by creating a mid-step computation of the power spectrum, then reusing its result to compute the different spectral features. * Mid-step computations are implemented by inheriting from the FeatureExtractor class and overriding its preprocess method. * The output of the preprocess method is passed as-is to downstream feature extraction functions. * The FeaturePredecessor decorator is used to make sure each feature extraction function receives a properly preprocessed input. * The new processing step is included as a new feature, getting its own descendants in a new features dictionary.
The feature names will be a concatenation of the processing steps. ```Python sfreq = windows_ds.datasets[0].raw.info["sfreq"] class WelchFeatureExtractor(features.FeatureExtractor): def __init__(self, feature_extractors, fs=None, **kwargs): super().__init__(feature_extractors) self.fs = fs self.kwargs = kwargs def preprocess(self, x, **kwargs): # use self.kwargs if needed, or just pass through f, p = welch(x, fs=self.fs if self.fs else 128, **kwargs) return f, p @features.univariate_feature def spectral_root_total_power_feature(f, p, **kwargs): return p.sum(axis=-1) @features.univariate_feature def spectral_power_bands_feature(f, p, bands=DEFAULT_FREQ_BANDS, **kwargs): power_bands = dict() for band_name, band_lims in bands.items(): ind = np.logical_and(f > band_lims[0], f < band_lims[1]) power_bands[band_name] = p[..., ind].sum(axis=-1) return power_bands features_dict = { "sig_var": signal_variance_feature, "spec": WelchFeatureExtractor( { "rtotpow": spectral_root_total_power_feature, "pband": spectral_power_bands_feature, }, fs=sfreq, ), } features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` Again, let us have a look at the feature values. ```Python features_ds.to_dataframe() ``` Finally, let us extract the same features using features already implemented in the EEGDash.features package. 
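The benefit of the shared mid-step can be illustrated outside of EEGDash with plain SciPy: the PSD is computed once and reused for several spectral features (the signal and band limits here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch

sfreq = 128
# Illustrative batch of windows: (batch, channels, time).
x = np.random.default_rng(0).normal(size=(8, 24, 2 * sfreq))

# Shared mid-step: one Welch PSD per channel and window.
f, p = welch(x, fs=sfreq)

# Two features derived from the same PSD, with no recomputation.
total_power = p.sum(axis=-1)
alpha_band = np.logical_and(f > 8, f < 12)
alpha_power = p[..., alpha_band].sum(axis=-1)
```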
```Python sfreq = windows_ds.datasets[0].raw.info["sfreq"] features_dict = { "sig_var": features.signal_variance, "spec": features.SpectralFeatureExtractor( { "rtotpow": features.spectral_root_total_power, "pband": features.spectral_bands_power, }, fs=sfreq, ), } features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` ```Python features_ds.to_dataframe() ``` The function get_all_features returns a list of all currently implemented features: ```Python features.get_all_features() ``` The function get_all_feature_extractors returns a list of all currently implemented feature extractors: ```Python features.get_all_feature_extractors() ``` Now we can add some new features. ```Python sfreq = windows_ds.datasets[0].raw.info["sfreq"] def _get_filter_freqs(raw_preproc_kwargs): if isinstance(raw_preproc_kwargs, list): for item in raw_preproc_kwargs: if isinstance(item, dict) and item.get("fn") == "filter": return item.get("kwargs", {}) if hasattr(item, "fn") and getattr(item.fn, "__name__", "") == "filter": return getattr(item, "kwargs", {}) return {} return raw_preproc_kwargs.get("filter", {}) filter_freqs = _get_filter_freqs(windows_ds.datasets[0].raw_preproc_kwargs) features_dict = { "sig_var": features.signal_variance, "spec": features.SpectralFeatureExtractor( { "rtotpow": features.spectral_root_total_power, "pband": features.spectral_bands_power, 0: features.NormalizedSpectralFeatureExtractor( { "entropy": features.spectral_entropy, "moment": features.spectral_moment, "edge": partial(features.spectral_edge, edge=0.9), } ), 1: features.DBSpectralFeatureExtractor( { "slope": features.spectral_slope, } ), }, fs=sfreq, nperseg=2 * sfreq, noverlap=int(1.5 * sfreq), f_min=filter_freqs.get("l_freq", 1.0), f_max=filter_freqs.get("h_freq", sfreq / 2.0), ), } features_ds = extract_features(windows_ds, features_dict, batch_size=512) ``` ```Python features_ds.to_dataframe() ``` Note that the signal of Cz electrode is always zero, so some of its features(e.g., 
‘spec_moment_Cz’) are *NaN*. To avoid future problems, let us replace them with zeros. ```Python features_ds.fillna(0) ``` ```Python features_ds.to_dataframe() ``` #### Advanced usage * The feature extraction process can be controlled via the batch_size and n_jobs parameters, allowing for efficient parallel and batched processing. * The resulting FeaturesConcatDataset (in this example, features_ds) can be saved to disk using the save method, then loaded using the load_features_concat_dataset function. * A FeaturesConcatDataset object also supports a subset of pandas-dataframe-like operations, such as mean, var, zscore, fillna, join and more. * A feature extraction function may be any callable object. If necessary, the relevant decorators can be applied directly to the class definition. > - Feature extraction functions decorated by a numba.jit decorator are explicitly supported. * By default, any new feature assumes its predecessor preprocessing step is a simple FeatureExtractor; otherwise, the FeaturePredecessor decorator is used to enforce a specific type of a preprocessing step. If relevant, multiple possible preprocessing steps can be passed to the decorator (for example, spectral power bands may be computed for different types of power normalizations, each performed by a different preprocessing step). > - Each object inheriting from FeatureExtractor may be decorated with a FeaturePredecessor to create a tree of processing steps. > - The function get_feature_predecessors returns a list of all possible predecessors for a given feature. > - The feature name is derived by concatenating the names of its processing steps. To ignore a certain step (such as a simple normalization), replace its key in the dictionary by an empty string or a non-string value. * Just like the univariate_feature decorator one may use the bivariate_feature and multivariate_feature decorators. 
In each case, the second dimension returned by the feature extraction function should match the feature kind (i.e., for a bivariate_feature, the second dimension should be equal to num_channels × (num_channels - 1)/2, and their order should match the one computed via BivariateFeature.get_pair_iterators). For a multivariate_feature this dimension should be omitted completely. * If necessary, one may create new feature kinds (e.g., triplet features) by inheriting from MultivariateFeature and overriding its feature_channel_names method. The new feature kind can be enforced using the FeatureKind decorator (e.g., univariate_feature is just a shorthand for FeatureKind(UnivariateFeature())). > - The function get_feature_kind returns the FeatureKind of a given feature. > - The function get_all_feature_kinds returns a list of all currently implemented `FeatureKind`s. * Trainable features (e.g., Common Spatial Pattern features) can be implemented by inheriting from the TrainableFeature class and overriding its partial_fit, fit and \_\_call\_\_ methods, then calling the fit_feature_extractors function before extract_features. For an example, see the built-in CSP implementation. ## Creating training and test sets The code below creates a training and test set. We first split the data into training and test sets using the **train_test_split** function from the **sklearn** library. We then create a **TensorDataset** for the training and test sets. 1. **Set Random Seed** – The random seed is fixed using torch.manual_seed(random_state) to ensure reproducibility in dataset splitting and model training. 2. **Extract Labels from the Dataset** – Labels (eyes-open or eyes-closed events) are extracted from windows_ds, stored as a NumPy array, and printed for verification. 3.
**Split Dataset into Train and Test Sets** – The dataset is split into training (80%) and testing (20%) subsets using train_test_split(), with stratification based on the extracted labels. Stratification keeps the proportions of eyes-open and eyes-closed samples the same in the training and testing sets. 4. **Convert Data to PyTorch Tensors** – The selected training and testing samples are converted into FloatTensor for input features and LongTensor for labels, making them compatible with PyTorch models. 5. **Create DataLoaders** – The datasets are wrapped in PyTorch DataLoader objects with a batch size of 10, enabling efficient mini-batch training and shuffling. ```Python import torch from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader, TensorDataset # Set random seed for reproducibility random_state = 42 torch.manual_seed(random_state) np.random.seed(random_state) # Extract labels from the dataset eo_ec = np.array([ds[1] for ds in features_ds]).ravel() # check labels print("labels: ", eo_ec) # Stratified split of window indices into train and test sets train_indices, test_indices = train_test_split( range(len(features_ds)), test_size=0.2, stratify=eo_ec, random_state=random_state ) # Convert the data to tensors X_train = torch.FloatTensor( np.array([features_ds[i][0] for i in train_indices]) ) # Convert list of arrays to single tensor X_test = torch.FloatTensor( np.array([features_ds[i][0] for i in test_indices]) ) # Convert list of arrays to single tensor y_train = torch.LongTensor(eo_ec[train_indices]) # Convert targets to tensor y_test = torch.LongTensor(eo_ec[test_indices]) # Convert targets to tensor dataset_train = TensorDataset(X_train, y_train) dataset_test = TensorDataset(X_test, y_test) # Create data loaders for training and testing (batch size 10) train_loader = DataLoader(dataset_train, batch_size=10, shuffle=True) test_loader = DataLoader(dataset_test, batch_size=10, shuffle=True) # Print shapes and sizes to
verify split print( f"Shape of data {X_train.shape} number of samples - Train: {len(train_loader)}, Test: {len(test_loader)}" ) print( f"Eyes-Open/Eyes-Closed balance, train: {np.mean(eo_ec[train_indices]):.2f}, test: {np.mean(eo_ec[test_indices]):.2f}" ) ``` ## Check labels It is good practice to verify the labels and ensure the random seed is functioning correctly. If all labels are 0s (eyes closed) or 1s (eyes open), it could indicate an issue with data loading or stratification, requiring further investigation. Visualize a batch of target labels ```Python dataiter = iter(train_loader) first_item, label = dataiter.__next__() label ``` ## Create model The model is a MLP with n_features input channels, and 2 output classes (eyes-open and eyes-closed). ```Python import torch from torch import nn from torchinfo import summary torch.manual_seed(random_state) # MLP model = nn.Sequential( nn.Flatten(), nn.Linear(features_ds.datasets[0].n_features, 100), nn.Linear(100, 100), nn.Linear(100, 100), nn.Linear(100, 2), ) summary(model, input_size=first_item.shape) ``` ## Model Training and Evaluation Process This section trains the neural network using the Adamax optimizer, normalizes input data, computes cross-entropy loss, updates model parameters, and tracks accuracy across six epochs. 1. **Set Up Optimizer and Learning Rate Scheduler** – The Adamax optimizer initializes with a learning rate of 0.002 and weight decay of 0.001 for regularization. An ExponentialLR scheduler with a decay factor of 1 keeps the learning rate constant. 2. **Allocate Model to Device** – The model moves to the specified device (CPU, GPU, or MPS for Mac silicon) to optimize computation efficiency. 3. **Normalize Input Data** – The normalize_data function standardizes input data by subtracting the mean and dividing by the standard deviation along the time dimension before transferring it to the appropriate device. 4. 
**Train the Model Over Six Epochs** – The training loop iterates through data batches with the model in training mode. It normalizes inputs, computes predictions, calculates cross-entropy loss, performs backpropagation, updates model parameters, and steps the learning rate scheduler. It tracks correct predictions to compute accuracy. 5. **Evaluate on Test Data** – After each epoch, the model runs in evaluation mode on the test set. It computes predictions on normalized data and calculates test accuracy by comparing outputs with actual labels. ```Python from torch.nn import functional as F optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.001) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1) device = torch.device( "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" ) model = model.to(device=device) # move the model parameters to CPU/GPU epochs = 6 x_mean = X_train.mean(dim=0, keepdim=True) x_std = X_train.std(dim=0, keepdim=True) + 1e-7 def normalize_data(x): x = (x - x_mean) / x_std x = x.to(device=device, dtype=torch.float32) # move to device, e.g.
GPU return x for e in range(epochs): # training correct_train = 0 for t, (x, y) in enumerate(train_loader): model.train() # put model to training mode scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_train += (preds == y).sum() / len(dataset_train) loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() scheduler.step() # Validation correct_test = 0 for t, (x, y) in enumerate(test_loader): model.eval() # put model to testing mode scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_test += (preds == y).sum() / len(dataset_test) # Reporting print( f"Epoch {e}, Train accuracy: {correct_train:.2f}, Test accuracy: {correct_test:.2f}" ) ``` # P3 Visual Oddball Classification This tutorial demonstrates using the *EEGDash* library with PyTorch to classify EEG responses from a visual P3 oddball paradigm. 1. **Data Description**: Dataset contains EEG recordings during a visual oddball task where: - Letters A, B, C, D, and E were presented randomly (p = .2 for each) - One letter was designated as target (oddball) for each block - Other letters served as non-targets (standard) - Participants responded whether each letter was target or non-target 2. **Data Preprocessing**: - Applies bandpass filtering (1-55 Hz) - Selects first 30 EEG channels - Downsamples to 256Hz - Creates event-based windows (0.1s to 0.6s post-stimulus) 3. **Dataset Preparation**: - Maps events into two classes (target vs. standard) using annotation names - Splits into training (80%) and test (20%) sets - Creates PyTorch DataLoaders 4. **Model**: - ShallowFBCSPNet architecture - 30 input channels, 2 output classes - 128-sample input windows (0.5s at 256Hz) 5. 
**Training**: - Adamax optimizer with a learning rate scheduler - A few training epochs (configurable) - Reports accuracy on train and test sets ## Data Retrieval Using EEGDash The P3 oddball dataset is fetched from the EEGDash API. ```Python from pathlib import Path import os from eegdash import EEGDash, EEGDashDataset from eegdash.paths import get_default_cache_dir CACHE_DIR = Path(get_default_cache_dir()).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) DATASET_ID = "ds005863" TASK = "visualoddball" RECORD_LIMIT = 20 eegdash = EEGDash() records = eegdash.find({"dataset": DATASET_ID, "task": TASK}, limit=RECORD_LIMIT) if not records: records = eegdash.find( {"task": {"$regex": "oddball", "$options": "i"}}, limit=RECORD_LIMIT ) if records: dataset_id = records[0].get("dataset") if dataset_id: records = [rec for rec in records if rec.get("dataset") == dataset_id] if not records: raise RuntimeError("No oddball task records found from the API.") dataset_concat = EEGDashDataset(cache_dir=CACHE_DIR, records=records) dataset_concat.download_all() ``` ## Data Preprocessing Using Braindecode [Braindecode](https://braindecode.org/) provides powerful tools for EEG data preprocessing and analysis. Our implementation processes EEG data with these key steps: 1. **Channel Selection & Signal Processing**: - Selecting the first 30 EEG channels - Bandpass filtering between 1-55 Hz - Downsampling from 1024 Hz to 256 Hz 2. **Event Processing**: - Map target vs. standard events based on annotation labels (e.g., Target/NonTarget). - Response-only events are ignored by the mapping. 3.
**Window Creation**: - Window duration: 0.5 s (samples 26 to 154 at 256 Hz, i.e. 0.1 s to 0.6 s post-stimulus) - Efficient memory usage with on-demand loading ```Python import logging import warnings import mne import numpy as np ``` ```Python from braindecode.preprocessing import ( Preprocessor, create_windows_from_events, preprocess, ) mne.set_log_level("ERROR") logging.getLogger("joblib").setLevel(logging.ERROR) warnings.filterwarnings("ignore") # BrainDecode preprocessors preprocessors = [ Preprocessor( "pick_channels", ch_names=dataset_concat.datasets[0].raw.ch_names[:30] ), Preprocessor("resample", sfreq=256), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(dataset_concat, preprocessors) # Extract windows event_mapping = { "Target": 1, "NonTarget": 0, "target": 1, "standard": 0, "oddball": 1, "3": 1, "4": 1, "6": 0, "7": 0, } windows_ds = create_windows_from_events( dataset_concat, trial_start_offset_samples=26, trial_stop_offset_samples=154, preload=False, window_size_samples=None, window_stride_samples=None, drop_bad_windows=True, mapping=event_mapping, ) print(f"\nAll files processed, total number of windows: {len(windows_ds)}") print(f"Window shape: {windows_ds[0][0].shape}") ``` ## Creating Training and Test Sets The data preparation pipeline consists of these key steps: 1. **Data Extraction** - Windows are automatically labeled (0=standard, 1=oddball) by the event mapping defined above. 2. **Train-Test Split** - Using sklearn’s train_test_split with: - 80-20 split ratio - Stratified sampling to maintain class proportions - Fixed random seed for reproducibility 3. **PyTorch Data Preparation** - Converting to tensors and creating DataLoader objects for mini-batch training.
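As a standalone check that stratified sampling preserves the class ratio in an 80-20 split, with synthetic binary labels standing in for the oddball targets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic binary labels standing in for the oddball targets.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=200)

train_idx, test_idx = train_test_split(
    range(len(labels)), test_size=0.2, stratify=labels, random_state=42
)

# Stratification keeps the class ratio nearly identical in both sets.
train_ratio = labels[train_idx].mean()
test_ratio = labels[test_idx].mean()
```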
```Python import torch import torch.nn.functional as F from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader, TensorDataset # Set random seed for reproducibility random_state = 42 torch.manual_seed(random_state) np.random.seed(random_state) # Extract data and labels using array operations data = np.stack([windows_ds[i][0] for i in range(len(windows_ds))]) labels = np.array([windows_ds[i][1] for i in range(len(windows_ds))]) # Print dataset information print(f"Dataset size: {len(data)}") print(f"Data shape: {data.shape}") print("Distribution of labels:", np.unique(labels, return_counts=True)) print("Label meanings: 0=standard, 1=oddball") # Split into train and test sets train_indices, test_indices = train_test_split( range(len(data)), test_size=0.2, stratify=labels, random_state=random_state ) # Convert to PyTorch tensors X_train = torch.FloatTensor(data[train_indices]) X_test = torch.FloatTensor(data[test_indices]) y_train = torch.LongTensor(labels[train_indices]) y_test = torch.LongTensor(labels[test_indices]) # Create data loaders dataset_train = TensorDataset(X_train, y_train) dataset_test = TensorDataset(X_test, y_test) train_loader = DataLoader(dataset_train, batch_size=8, shuffle=True) test_loader = DataLoader(dataset_test, batch_size=8, shuffle=True) # Print dataset information print("\nDataset size:") print(f"Training set: {X_train.shape}, labels: {y_train.shape}") print(f"Test set: {X_test.shape}, labels: {y_test.shape}") print("\nProportion of samples of each class in training set:") for label in np.unique(labels): ratio = np.mean(y_train.numpy() == label) print(f"Category {label}: {ratio:.3f}") ``` ## Create Model The model is a shallow convolutional neural network (ShallowFBCSPNet) with: - 30 input channels (EEG channels) - 2 output classes (oddball, standard) - 128-sample input windows (0.5s at 256Hz) This architecture is particularly effective for EEG classification tasks, incorporating frequency-band specific 
spatial patterns. ```Python from torchinfo import summary ``` ```Python from braindecode.models import ShallowFBCSPNet model = ShallowFBCSPNet( in_chans=30, n_classes=2, input_window_samples=128, # 0.5s at 256Hz final_conv_length="auto", ) summary(model, input_size=(1, 30, 128)) ``` ## Model Training and Evaluation The training pipeline consists of: 1. **Optimization Setup**: - Adamax optimizer with learning rate 0.002 - Weight decay for regularization - Learning rate scheduler 2. **Training Process**: - 2 epochs of training by default (configurable via the EEGDASH_EPOCHS environment variable) - Mini-batch processing - Cross-entropy loss function 3. **Evaluation**: - Accuracy tracking for both training and test sets - Per-channel standardization applied to input data ```Python import os device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.005) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1) def normalize_data(x): mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) + 1e-7 x = (x - mean) / std x = x.to(device=device, dtype=torch.float32) return x print("\nStart training...") epochs = int(os.getenv("EEGDASH_EPOCHS", "2")) for e in range(epochs): model.train() correct_train = 0 for t, (x, y) in enumerate(train_loader): scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_train += (preds == y).sum() / len(dataset_train) loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() scheduler.step() model.eval() correct_test = 0 with torch.no_grad(): for t, (x, y) in enumerate(test_loader): scores = model(normalize_data(x)) y = y.to(device=device, dtype=torch.long) _, preds = scores.max(1) correct_test += (preds == y).sum() / len(dataset_test) print( f"epoch {e + 1}, training accuracy: {correct_train:.3f}, test accuracy: {correct_test:.3f}" ) ``` # Predicting p-factor from EEG The code below provides an example of using
the *braindecode* and *EEGDash* libraries in combination with LightGBM to predict a subject’s p-factor. 1. **Data Retrieval Using EEGDash**: An instance of *EEGDashDataset* is created to search and retrieve resting state data. At this step, only the metadata is transferred. 2. **Data Preprocessing Using BrainDecode**: This process preprocesses EEG data using Braindecode by selecting specific channels, resampling, filtering, and extracting 10-second epochs. 3. **Extracting EEG Features Using EEGDash.features**: Building a feature extraction tree and extracting features per EEG window. 4. **Model Training and Evaluation Process**: This section normalizes input data, trains a LightGBM model, and evaluates regression MSE. ## Data Retrieval Using EEGDash ```Python from pathlib import Path import pandas as pd from eegdash import EEGDashDataset from sklearn.model_selection import train_test_split from eegdash.paths import get_default_cache_dir CACHE_DIR = Path(get_default_cache_dir()).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) DATASET_ID = "EEG2025r5" target_name = "p_factor" desc_fields = ["subject", "session", "run", "task", "age", "gender", "sex", "p_factor"] RECORD_LIMIT = 100 # Fetch dataset directly raw_all = EEGDashDataset( dataset=DATASET_ID, cache_dir=CACHE_DIR, description_fields=desc_fields, ) # Filter datasets that have p_factor and valid raw data filtered_datasets = [] from braindecode.datasets import BaseConcatDataset for ds in raw_all.datasets: # Skip datasets whose raw data could not be loaded (e.g. 
Access Denied) try: if ds.raw is None or len(ds.raw.times) == 0: continue except Exception: continue # Check if p_factor is present in description (populated from DB) p_val = ds.description.get("p_factor") if p_val is not None: try: # Convert to float if not already val = float(p_val) if not pd.isna(val): ds.description["p_factor"] = val filtered_datasets.append(ds) except (ValueError, TypeError): pass if not filtered_datasets: raise RuntimeError( "No records with valid raw data and p_factor metadata found. " "Ensure the EEG2025r5 dataset is accessible (may require S3 credentials)." ) # Limit to requested number and reconstitute filtered_datasets = filtered_datasets[:RECORD_LIMIT] raw_all = BaseConcatDataset(filtered_datasets) ``` ```Python from braindecode.datasets import BaseConcatDataset subjects = raw_all.description["subject"].unique() train_subj, valid_subj = train_test_split( subjects, train_size=0.8, random_state=42, shuffle=True ) raw_train = BaseConcatDataset( [ds for ds in raw_all.datasets if ds.description.subject in train_subj] ) raw_valid = BaseConcatDataset( [ds for ds in raw_all.datasets if ds.description.subject in valid_subj] ) ``` ## Data Preprocessing Using Braindecode [BrainDecode](https://braindecode.org/stable/install/install.html) is a specialized library for preprocessing EEG and MEG data. We apply three preprocessing steps in Braindecode: 1. **Selection** of 24 specific EEG channels from the original 128. 2. **Resampling** the EEG data to a frequency of 128 Hz. 3. **Filtering** the EEG signals to retain frequencies between 1 Hz and 55 Hz. When calling the **preprocess** function, the data is retrieved from the remote repository. Finally, we use **create_fixed_length_windows** to extract 10-second epochs (with a 5-second stride) from the data. These epochs serve as the dataset samples.
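Since the 10-second windows are extracted with a 5-second stride (50% overlap), the number of windows per recording is easy to predict; a small sketch with a hypothetical 60-second recording at 128 Hz:

```Python
sfreq = 128                 # resampled rate from the preprocessing step
win = int(10 * sfreq)       # 1280 samples per 10 s window
stride = int(5 * sfreq)     # 640-sample hop (50% overlap)
n_times = 60 * sfreq        # hypothetical 60 s recording

n_windows = (n_times - win) // stride + 1
print(n_windows)  # 11 overlapping windows
```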
```Python from braindecode.preprocessing import ( Preprocessor, create_fixed_length_windows, preprocess, ) def preprocess_and_window(raw_ds): # preprocessing using a Braindecode pipeline: preprocessors = [ Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess(raw_ds, preprocessors, n_jobs=1) for ds in raw_ds.datasets: ds.target_name = target_name # extract windows and save to disk sfreq = raw_ds.datasets[0].raw.info["sfreq"] windows_ds = create_fixed_length_windows( raw_ds, start_offset_samples=0, stop_offset_samples=None, window_size_samples=int(10 * sfreq), window_stride_samples=int(5 * sfreq), drop_last_window=True, preload=False, ) return windows_ds windows_train = preprocess_and_window(raw_train) train_dir = CACHE_DIR / "pfactor_all_train" train_dir.mkdir(parents=True, exist_ok=True) windows_train.save(str(train_dir), overwrite=True) windows_valid = preprocess_and_window(raw_valid) valid_dir = CACHE_DIR / "pfactor_all_valid" valid_dir.mkdir(parents=True, exist_ok=True) windows_valid.save(str(valid_dir), overwrite=True) ``` ## Extracting EEG Features Using EEGDash.features ```Python from functools import partial ``` ```Python from eegdash import features from eegdash.features import extract_features sfreq = windows_train.datasets[0].raw.info["sfreq"] def _get_filter_freqs(raw_preproc_kwargs): if isinstance(raw_preproc_kwargs, list): for item in raw_preproc_kwargs: if isinstance(item, dict) and item.get("fn") == "filter": return item.get("kwargs", {}) if hasattr(item, "fn") and getattr(item.fn, "__name__", "") == "filter": return getattr(item, "kwargs", {}) return {} return raw_preproc_kwargs.get("filter", {}) filter_freqs = _get_filter_freqs(windows_train.datasets[0].raw_preproc_kwargs) features_dict = { 
"sig": features.FeatureExtractor( { "std": features.signal_std, "line_len": features.signal_line_length, "zero_x": features.signal_zero_crossings, }, ), "spec": features.SpectralFeatureExtractor( { "rtot_power": features.spectral_root_total_power, "band_power": features.spectral_bands_power, 0: features.NormalizedSpectralFeatureExtractor( { "moment": features.spectral_moment, "entropy": features.spectral_entropy, "edge": partial(features.spectral_edge, edge=0.9), }, ), 1: features.DBSpectralFeatureExtractor( { "slope": features.spectral_slope, }, ), }, fs=sfreq, f_min=filter_freqs.get("l_freq", 1.0), f_max=filter_freqs.get("h_freq", sfreq / 2.0), nperseg=4 * sfreq, noverlap=3 * sfreq, ), } features_train = extract_features( windows_train, features_dict, batch_size=64, n_jobs=-1 ) train_feat_dir = CACHE_DIR / "pfactor_features_all_train" train_feat_dir.mkdir(parents=True, exist_ok=True) features_train.save(str(train_feat_dir), overwrite=True) features_valid = extract_features( windows_valid, features_dict, batch_size=64, n_jobs=-1 ) valid_feat_dir = CACHE_DIR / "pfactor_features_all_valid" valid_feat_dir.mkdir(parents=True, exist_ok=True) features_valid.save(str(valid_feat_dir), overwrite=True) ``` ```Python features_train.to_dataframe() ``` Replace Inf and NaN values: ```Python import numpy as np features_train.replace([-np.inf, +np.inf], np.nan) features_train.fillna(0) features_valid.replace([-np.inf, +np.inf], np.nan) features_valid.fillna(0) ``` ```Python features_train.to_dataframe() ``` ## Model Training and Evaluation Convert to pandas dataframes and normalize: ```Python mean_train = features_train.mean(n_jobs=-1) std_train = features_train.std(eps=1e-14, n_jobs=-1) X_train = features_train.to_dataframe() X_train = (X_train - mean_train) / std_train y_train = features_train.get_metadata()["target"] X_valid = features_valid.to_dataframe() X_valid = (X_valid - mean_train) / std_train y_valid = features_valid.get_metadata()["target"] ``` ### Train ```Python 
from lightgbm import LGBMRegressor, record_evaluation random_seed = 137 model = LGBMRegressor( random_state=random_seed, n_jobs=1, n_estimators=10000, num_leaves=5, max_depth=2, min_data_in_leaf=4, learning_rate=0.1, early_stopping_round=5, first_metric_only=True, ) eval_results = dict() model.fit( X_train, y_train, eval_set=[(X_train, y_train), (X_valid, y_valid)], eval_names=["train", "validation"], eval_metric="l2", callbacks=[record_evaluation(eval_results)], ) y_hat_train = model.predict(X_train) mse_train = ((y_train - y_hat_train) ** 2).mean() y_hat_valid = model.predict(X_valid) mse_valid = ((y_valid - y_hat_valid) ** 2).mean() print(f"Train MSE: {mse_train:.2f}, Validation MSE: {mse_valid:.2f}\n") ``` ### Plot Results ```Python from lightgbm import plot_metric plot_metric(model, "l2") ``` ```Python from lightgbm import plot_importance plot_importance(model, importance_type="split", max_num_features=10) ``` ```Python plot_importance(model, importance_type="gain", max_num_features=10) ``` # P-Factor Regression Tutorial A tutorial for training an EEG Conformer model to predict the “p-factor” (a psychometric score) from EEG data.
```Python from pathlib import Path import numpy as np import torch import torch.nn.functional as F from torch.utils.data import DataLoader from braindecode.models import EEGConformer from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from eegdash import EEGDash, EEGDashDataset from braindecode.preprocessing import ( Preprocessor, create_fixed_length_windows, preprocess, ) from braindecode.datasets.base import BaseConcatDataset from eegdash.paths import get_default_cache_dir # ============================================================================ # Configuration # ============================================================================ CACHE_DIR_BASE = Path(get_default_cache_dir()).resolve() CACHE_DIR_BASE.mkdir(parents=True, exist_ok=True) DATASET_NAME = "ds005505" TARGET_NAME = "p_factor" CACHE_DIR = CACHE_DIR_BASE / f"reg_{DATASET_NAME}_all_{TARGET_NAME}" SFREQ = 100 BATCH_SIZE = 64 LEARNING_RATE = 0.0001 WEIGHT_DECAY = 1e-3 NUM_EPOCHS = 5 RANDOM_SEED = 42 RECORD_LIMIT = 200 # Set random seeds torch.manual_seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) # ============================================================================ # Data Preparation # ============================================================================ if not CACHE_DIR.exists(): eegdash = EEGDash() print(f"Preparing data for {DATASET_NAME} - {TARGET_NAME}...") ds_data = EEGDashDataset( dataset=DATASET_NAME, cache_dir=CACHE_DIR_BASE, description_fields=["subject", "session", "run", "task", TARGET_NAME], ) filtered_datasets = [] for ds in ds_data.datasets: # Skip datasets whose raw data could not be loaded try: if ds.raw is None or len(ds.raw.times) == 0: continue except Exception: continue subj = ds.description.get("subject", "") if not subj: continue subj = str(subj).replace("sub-", "") # Check target validity target_val = ds.description.get(TARGET_NAME) if target_val is None: continue try: target_val = float(target_val) except (ValueError, 
TypeError): continue if np.isnan(target_val): continue if len(ds) == 0: continue # Update description with clean values ds.description[TARGET_NAME] = target_val ds.description["subject"] = subj filtered_datasets.append(ds) print(f"Retained {len(filtered_datasets)} datasets with valid {TARGET_NAME}.") if not filtered_datasets: raise RuntimeError(f"No datasets remained after filtering for {TARGET_NAME}.") all_datasets = BaseConcatDataset(filtered_datasets) # Preprocessing # HydroCel GSN 128 equivalents: Fz=E11, Cz=Cz, Pz=E62, Oz=E75, C3=E36, C4=E104, P3=E52, P4=E92 ch_names = ["E11", "Cz", "E62", "E75", "E36", "E104", "E52", "E92"] preprocessors = [ Preprocessor("pick_channels", ch_names=ch_names, ordered=True), Preprocessor("resample", sfreq=SFREQ), Preprocessor("filter", l_freq=1, h_freq=30), ] print("Preprocessing...") preprocess(all_datasets, preprocessors, n_jobs=2) # Windowing windows_ds = create_fixed_length_windows( all_datasets, start_offset_samples=0, stop_offset_samples=None, window_size_samples=SFREQ * 2, # 2 seconds window_stride_samples=SFREQ * 2, drop_last_window=True, preload=False, ) for ds in windows_ds.datasets: ds.target_name = TARGET_NAME # Save windows_ds.save(str(CACHE_DIR), overwrite=True) print(f"Data saved to {CACHE_DIR}") else: print(f"Loading data from {CACHE_DIR}...") from braindecode.datautil import load_concat_dataset windows_ds = load_concat_dataset(path=str(CACHE_DIR), preload=False) # ============================================================================ # Splitting and Loading # ============================================================================ # Basic split by subject subjects = np.array([ds.description["subject"] for ds in windows_ds.datasets]) unique_subs = np.unique(subjects) train_subs, val_subs = train_test_split( unique_subs, test_size=0.2, random_state=RANDOM_SEED ) train_ds = [ds for ds in windows_ds.datasets if ds.description["subject"] in train_subs] val_ds = [ds for ds in windows_ds.datasets if 
ds.description["subject"] in val_subs] train_ds = BaseConcatDataset(train_ds) val_ds = BaseConcatDataset(val_ds) train_loader = DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True, num_workers=0) val_loader = DataLoader(val_ds, batch_size=BATCH_SIZE, shuffle=False, num_workers=0) print(f"Train windows: {len(train_ds)}, Val windows: {len(val_ds)}") # ============================================================================ # Model # ============================================================================ device = "cuda" if torch.cuda.is_available() else "cpu" # simple check for mps if ( not torch.cuda.is_available() and hasattr(torch.backends, "mps") and torch.backends.mps.is_available() ): device = "mps" print(f"Using device: {device}") # Assuming 8 channels from our preprocessing n_chans = 8 n_times = SFREQ * 2 model = EEGConformer( n_chans=n_chans, n_outputs=1, # Regression n_times=n_times, sfreq=SFREQ, num_layers=3, # Simplified for tutorial num_heads=4, final_fc_length="auto", ).to(device) optimizer = torch.optim.AdamW( model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY ) def normalize_batch(x): # (B, C, T) mean = x.mean(dim=2, keepdim=True) std = x.std(dim=2, keepdim=True) + 1e-6 return (x - mean) / std # ============================================================================ # Training # ============================================================================ history = {"train_loss": [], "val_loss": []} for epoch in range(NUM_EPOCHS): model.train() train_loss_accum = 0 count = 0 for x, y, _ in train_loader: x = x.to(device).float() y = y.to(device).float() x = normalize_batch(x) optimizer.zero_grad() preds = model(x).squeeze() loss = F.mse_loss(preds, y) loss.backward() optimizer.step() train_loss_accum += loss.item() * len(x) count += len(x) avg_train_loss = train_loss_accum / count model.eval() val_loss_accum = 0 val_count = 0 with torch.no_grad(): for x, y, _ in val_loader: x = x.to(device).float() y = y.to(device).float() 
x = normalize_batch(x) preds = model(x).squeeze() loss = F.mse_loss(preds, y) val_loss_accum += loss.item() * len(x) val_count += len(x) avg_val_loss = val_loss_accum / val_count history["train_loss"].append(avg_train_loss) history["val_loss"].append(avg_val_loss) print( f"Epoch {epoch + 1}/{NUM_EPOCHS} | Train MSE: {avg_train_loss:.4f} | Val MSE: {avg_val_loss:.4f}" ) # ============================================================================ # Visualization # ============================================================================ plt.figure(figsize=(10, 5)) plt.plot(history["train_loss"], label="Train MSE") plt.plot(history["val_loss"], label="Val MSE") plt.xlabel("Epoch") plt.ylabel("MSE Loss") plt.title("P-Factor Regression Training") plt.legend() plt.tight_layout() plt.show() # In a real script, we might save the figure instead of showing it ``` # Sex Classification Tutorial The code below provides an example of using the *EEGDash* library in combination with PyTorch to develop a deep learning model for classifying the sex of subjects in a collection. 1. **Data Retrieval Using EEGDash**: An instance of *EEGDashDataset* is created to search and retrieve resting state data for 136 subjects (dataset ds005505). At this step, only the metadata is transferred. 2. **Data Preprocessing Using BrainDecode**: This process preprocesses EEG data using Braindecode by selecting specific channels, resampling, filtering, and extracting 2-second epochs. This takes about 2 minutes. 3. **Creating training and test sets**: The dataset is split into training (90%) and testing (10%) sets with balanced labels (ensuring equal numbers of males and females), converted into PyTorch tensors, and wrapped in DataLoader objects for efficient mini-batch training. 4. **Model Definition**: The model is a custom convolutional neural network with 24 input channels (EEG channels), 2 output classes (male and female). 5.
**Model Training and Evaluation Process**: This section trains the neural network, normalizes input data, computes cross-entropy loss, updates model parameters, and evaluates classification accuracy over a few epochs. This takes less than 10 seconds to a couple of minutes, depending on the device you use. ## Data Retrieval Using EEGDash First we find one resting state dataset for a collection of subjects. The API returns candidate subjects with sex/gender metadata. ```Python from pathlib import Path import os import numpy as np from eegdash import EEGDashDataset CACHE_DIR = Path(os.getenv("EEGDASH_CACHE_DIR", Path.cwd() / "eegdash_cache")).resolve() CACHE_DIR.mkdir(parents=True, exist_ok=True) DATASET_ID = os.getenv("EEGDASH_DATASET_ID", "ds005505") TASK = os.getenv("EEGDASH_TASK", "RestingState") RECORD_LIMIT = 80 # Fetch dataset directly, requesting sex/gender in description ds_sexdata = EEGDashDataset( dataset=DATASET_ID, task=TASK, cache_dir=CACHE_DIR, description_fields=["subject", "session", "run", "task", "sex", "gender"], ) # Filter datasets that have sex/gender info valid_datasets = [] for ds in ds_sexdata.datasets: if ds.description.get("sex") or ds.description.get("gender"): valid_datasets.append(ds) if not valid_datasets: raise RuntimeError("No records with sex/gender metadata found.") # Update the concat dataset with filtered list from braindecode.datasets import BaseConcatDataset ds_sexdata = BaseConcatDataset(valid_datasets) PREPARED_DIR = CACHE_DIR / "preprocessed_sex" def _normalize_sex(value): if value is None: return None value = str(value).strip().lower() if value in {"m", "male"}: return "M" if value in {"f", "female"}: return "F" return None def _apply_sex_label(windows): sex_series = windows.description.get("sex") gender_series = windows.description.get("gender") if sex_series is None and gender_series is None: raise RuntimeError("No sex/gender metadata available for labeling.") merged = sex_series if sex_series is not None else gender_series 
if gender_series is not None: merged = merged.fillna(gender_series) windows.description["sex_label"] = merged.apply(_normalize_sex) for ds in windows.datasets: ds.target_name = "sex_label" return windows ``` ## Data Preprocessing Using Braindecode [BrainDecode](https://braindecode.org/stable/install/install.html) is a specialized library for preprocessing EEG and MEG data. We apply three preprocessing steps in Braindecode: 1. **Selection** of 24 specific EEG channels from the original 128. 2. **Resampling** the EEG data to a frequency of 128 Hz. 3. **Filtering** the EEG signals to retain frequencies between 1 Hz and 55 Hz. When calling the **preprocess** function, the data is retrieved from the remote repository. Finally, we use **create_fixed_length_windows** to extract 2-second epochs from the data. These epochs serve as the dataset samples. ```Python from braindecode.preprocessing import ( Preprocessor, create_fixed_length_windows, preprocess, ) # Preprocessing using a Braindecode pipeline: preprocessors = [ Preprocessor( "pick_channels", ch_names=[ "E22", "E9", "E33", "E24", "E11", "E124", "E122", "E29", "E6", "E111", "E45", "E36", "E104", "E108", "E42", "E55", "E93", "E58", "E52", "E62", "E92", "E96", "E70", "Cz", ], ), Preprocessor("resample", sfreq=128), Preprocessor("filter", l_freq=1, h_freq=55), ] preprocess( ds_sexdata, preprocessors, n_jobs=1 ) # passing save_dir='xxxx' would save to disk and set preload to False # extract windows and save to disk windows_ds = create_fixed_length_windows( ds_sexdata, start_offset_samples=0, stop_offset_samples=None, window_size_samples=256, window_stride_samples=256, drop_last_window=True, preload=False, ) windows_ds = _apply_sex_label(windows_ds) os.makedirs(PREPARED_DIR, exist_ok=True) windows_ds.save(str(PREPARED_DIR), overwrite=True) ``` ## Plotting a Single Channel for One Sample It’s always a good practice to verify that the
data has been properly loaded and processed. Here, we plot a single channel from one sample to ensure the signal is present and looks as expected. ```Python import matplotlib.pyplot as plt plt.figure() plt.plot(windows_ds[150][0][0, :].transpose()) # first channel of first epoch plt.savefig(CACHE_DIR / "sample_channel.png") plt.show() ``` ## Load pre-saved data If you have run the previous steps before, the data should be saved and may be reloaded here. If you are simply running this notebook for the first time, there is no need to reload the data, and this step may be skipped. However, it is quick, so you might as well execute the cell; it will have no consequences and will allow you to check that the data was saved properly. ```Python from braindecode.datautil import load_concat_dataset print("Loading data from disk") windows_ds = load_concat_dataset(path=str(PREPARED_DIR), preload=False) windows_ds = _apply_sex_label(windows_ds) ``` ## Creating a Training and Test Set The code below creates a training and test set. We first split the data using the **train_test_split** function and then create a **TensorDataset** for both sets. 1. **Set Random Seed** – The random seed is fixed using torch.manual_seed(random_state) to ensure reproducibility in dataset splitting and model training. 2. **Get Balanced Indices for Male and Female Subjects** – We ensure a 50/50 split of male and female subjects in both the training and test sets. Additionally, we prevent subject leakage, meaning the same subjects do not appear in both sets. The dataset is split into training (90%) and testing (10%) subsets using train_test_split(), ensuring balanced stratification based on gender. 3. **Convert Data to PyTorch Tensors** – The selected training and testing samples are converted into FloatTensor for input features and LongTensor for labels, making them compatible with PyTorch models. 4. 
**Create DataLoaders** – The datasets are wrapped in PyTorch DataLoader objects with a batch size of 100, allowing efficient mini-batch training and shuffling. Although there are only 136 subjects, the dataset contains more than 10,000 2-second samples. ```Python import torch from sklearn.model_selection import train_test_split from torch.utils.data import DataLoader ``` ```Python from braindecode.datasets import BaseConcatDataset # random seed for reproducibility random_state = 0 np.random.seed(random_state) torch.manual_seed(random_state) # Get balanced indices for male and female subjects and create a balanced dataset male_subjects = windows_ds.description["subject"][ windows_ds.description["sex_label"] == "M" ] female_subjects = windows_ds.description["subject"][ windows_ds.description["sex_label"] == "F" ] n_samples = min(len(male_subjects), len(female_subjects)) balanced_subjects = np.concatenate( [male_subjects[:n_samples], female_subjects[:n_samples]] ) balanced_gender = ["M"] * n_samples + ["F"] * n_samples train_subj, val_subj, train_gender, val_gender = train_test_split( balanced_subjects, balanced_gender, train_size=0.9, stratify=balanced_gender, random_state=random_state, ) # Create datasets train_ds = BaseConcatDataset( [ds for ds in windows_ds.datasets if ds.description.subject in train_subj] ) val_ds = BaseConcatDataset( [ds for ds in windows_ds.datasets if ds.description.subject in val_subj] ) # Create dataloaders train_loader = DataLoader(train_ds, batch_size=100, shuffle=True) val_loader = DataLoader(val_ds, batch_size=100, shuffle=True) # Check the balance of the dataset assert len(balanced_subjects) == len(balanced_gender) print(f"Number of subjects in balanced dataset: {len(balanced_subjects)}") print( f"Gender distribution in balanced dataset: {np.unique(balanced_gender, return_counts=True)}" ) ``` ## Check labels It is good practice to verify the labels and ensure the random seed is functioning correctly.
If all labels are ‘M’ (male) or ‘F’ (female), it could indicate an issue with data loading or stratification, requiring further investigation. Get the first batch to check the labels: ```Python dataiter = iter(train_loader) first_item, label, sz = next(dataiter) np.array(label).T ``` ## Create model The model is a custom convolutional neural network with 24 input channels (EEG channels), 2 output classes (male vs. female), and an input window size of 256 samples (2 seconds of EEG data). See the reference below for more information. [1] Truong, D., Milham, M., Makeig, S., & Delorme, A. (2021). Deep Convolutional Neural Network Applied to Electroencephalography: Raw Data vs Spectral Features. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2021, 1039–1042. [https://doi.org/10.1109/EMBC46164.2021.9630708](https://doi.org/10.1109/EMBC46164.2021.9630708) ```Python from torch import nn ``` Create the model: ```Python from torchinfo import summary model = nn.Sequential( # First VGG block nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # Second VGG block nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # Third VGG block nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2), # Flatten and FC layers nn.Flatten(), nn.Linear(64 * 3 * 32, 1024), nn.ReLU(), nn.Dropout(0.5), nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5), nn.Linear(1024, 2), ) print(summary(model, input_size=(1, 1, 24, 256))) ``` ## Model Training and Evaluation Process This section trains the neural network using the Adamax optimizer, normalizes input data, computes cross-entropy loss, updates model parameters, and tracks accuracy across a few epochs. 1.
**Set Up Optimizer** – The Adamax optimizer is initialized with a learning rate of 0.002 and a weight decay of 0.001 for regularization. 2. **Allocate Model to Device** – The model is moved to the specified device (CPU, GPU, or MPS for Apple silicon) to optimize computation efficiency. 3. **Normalize Input Data** – The normalize_data function standardizes input data by subtracting the mean and dividing by the standard deviation along the time dimension before transferring it to the appropriate device. 4. **Train the Model for Two Epochs** – The training loop iterates through data batches with the model in training mode. It normalizes inputs, computes predictions, calculates cross-entropy loss, performs backpropagation, and updates model parameters. It tracks correct predictions to compute accuracy. 5. **Evaluate on Test Data** – After each epoch, the model runs in evaluation mode on the test set. It computes predictions on normalized data and calculates test accuracy by comparing outputs with actual labels. ```Python from torch.nn import functional as F optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, weight_decay=0.001) device = torch.device( "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" ) model.to(device=device) def normalize_data(x): x = x.reshape(x.shape[0], 1, 24, 256) mean = x.mean(dim=3, keepdim=True) std = x.std(dim=3, keepdim=True) + 1e-7 # add small epsilon for numerical stability x = (x - mean) / std x = x.to(device=device, dtype=torch.float32) # move to device, e.g.
GPU return x # dictionary of genders for converting sample labels to numerical values gender_dict = {"M": 0, "F": 1} epochs = 2 for e in range(epochs): # training model.train() # put model to training mode correct_train = 0 for t, (x, y, sz) in enumerate(train_loader): scores = model(normalize_data(x)) _, preds = scores.max(1) y = torch.tensor( [gender_dict[gender] for gender in y], device=device, dtype=torch.long ) correct_train += (preds == y).sum() / len(train_ds) # Calculates the cross-entropy loss and performs backpropagation loss = F.cross_entropy(scores, y) optimizer.zero_grad() loss.backward() optimizer.step() if t % 50 == 0: print("Epoch %d, Iteration %d, loss = %.4f" % (e, t, loss.item())) # validation model.eval() # put model to evaluation mode correct_test = 0 with torch.no_grad(): for t, (x, y, sz) in enumerate(val_loader): scores = model(normalize_data(x)) _, preds = scores.max(1) y = torch.tensor( [gender_dict[gender] for gender in y], device=device, dtype=torch.long ) correct_test += (preds == y).sum() / len(val_ds) print( f"Epoch {e}, Train accuracy: {correct_train:.2f}, Test accuracy: {correct_test:.2f}\n" ) ``` # Clinical Dataset Summary This example demonstrates how to summarize and visualize the distribution of clinical vs. healthy datasets across different recording modalities. ## Loading the data We use the `eegdash.EEGDash` client to fetch all available datasets. For this example, we will also load local curation data if fields are missing, to demonstrate the visualization capabilities with the latest categorized metadata.
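The summary this example builds boils down to counting studies per (modality, subject type) pair; a minimal sketch with made-up records (not real EEGDash metadata) shows the core operation:

```Python
import pandas as pd

# hypothetical study records, for illustration only
df = pd.DataFrame({
    "Modality": ["EEG", "EEG", "MEG", "iEEG", "EEG"],
    "Subject Type": ["Healthy", "Epilepsy", "Healthy", "Epilepsy", "Healthy"],
})

# one row per modality, one column per subject type
counts = pd.crosstab(df["Modality"], df["Subject Type"])
print(counts)
```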
```Python import json import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from pathlib import Path from eegdash import EEGDash # Initialize client (public read access) client = EEGDash() # Try fetching from API try: datasets = client.find_datasets(limit=1000) if isinstance(datasets, dict) and "data" in datasets: datasets = datasets["data"] except Exception: datasets = [] print(f"Fetched {len(datasets)} datasets from API.") # Fallback/Augment with local JSON if API returns few results (e.g. dev environment) json_path = Path("consolidated/openneuro_datasets.json") if len(datasets) < 10 and json_path.exists(): print(f"API returned few results; augmenting with {json_path}...") with open(json_path) as f: local_datasets = json.load(f) # Simpler to just use the local JSON when the API returns little datasets = local_datasets print(f"Using {len(datasets)} datasets from local JSON.") # Convert to DataFrame df = pd.DataFrame(datasets) # Ensure dataset_id key consistency (JSON uses dataset_id, API might vary) if "dataset_id" not in df.columns and "dataset" in df.columns: df["dataset_id"] = df["dataset"] ``` ## Augmenting with Local Metadata (For Demonstration) We will use the local CSV as the primary source if available so we can verify the categorization results, merging with API data for any missing fields.
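The merge-and-resolve pattern used in the next cell (a right join on `dataset_id`, then `combine_first` across suffixed columns) can be sketched on toy frames; the dataset IDs here are made up:

```Python
import pandas as pd

# toy stand-ins for the API frame and the curation CSV
api = pd.DataFrame({"dataset_id": ["ds1", "ds2"], "is_clinical": [True, None]})
csv = pd.DataFrame({"dataset_id": ["ds2", "ds3"], "is_clinical": [False, True]})

# right join keeps only curated rows; suffixes disambiguate the shared column
merged = pd.merge(api, csv, on="dataset_id", how="right", suffixes=("_api", "_csv"))

# CSV value wins; fall back to the API value where the CSV is missing
merged["is_clinical"] = merged["is_clinical_csv"].combine_first(merged["is_clinical_api"])
print(merged[["dataset_id", "is_clinical"]])
```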
```Python csv_path = Path("scripts/metadata_curation.csv") if csv_path.exists(): print(f"Loading local curation from {csv_path}...") curation_df = pd.read_csv(csv_path) # Merge with a right join to prioritize categorized datasets; # the API/JSON data provides 'recording_modality', which is missing from the CSV df_merged = pd.merge( df, curation_df, on="dataset_id", how="right", suffixes=("_api", "_csv") ) # Resolve suffixed columns: CSV values win, API values fill gaps for col in ["is_clinical", "clinical_purpose", "paradigm_modality"]: if f"{col}_csv" in df_merged.columns: df_merged[col] = df_merged[f"{col}_csv"].combine_first( df_merged.get(f"{col}_api") ) # Defaults if "is_clinical" not in df_merged.columns: df_merged["is_clinical"] = False # Backfill fields missing in the CSV but present in the API's nested struct if "clinical" in df_merged.columns: def safe_extract(value, key, default): # `value` is a cell holding a nested dict (or NaN) if isinstance(value, dict): return value.get(key, default) return default df_merged["is_clinical"] = df_merged["is_clinical"].fillna( df_merged["clinical"].apply(lambda x: safe_extract(x, "is_clinical", False)) ) else: df_merged = df.copy() # Mock extraction if no CSV df_merged["is_clinical"] = False df_merged["clinical_purpose"] = "Healthy" if "paradigm" in df_merged.columns: df_merged["paradigm_modality"] = df_merged["paradigm"].apply( lambda x: x.get("modality") if isinstance(x, dict) else "Other" ) ``` ## Data Cleaning ```Python # Normalize Modality (from recording_modality, NOT paradigm_modality) # recording_modality is usually a list e.g.
["eeg"] def normalize_recording_modality(val): if isinstance(val, list): # Flatten: prioritize iEEG > MEG > EEG val_str = " ".join([str(v).lower() for v in val]) elif isinstance(val, str): val_str = val.lower() else: return "Other" if "ieeg" in val_str or "intracranial" in val_str: return "iEEG" if "meg" in val_str: return "MEG" if "eeg" in val_str: return "EEG" return "Other" # Use recording_modality if available, else paradigm (imperfect proxy) if "recording_modality" in df_merged.columns: df_merged["Modality"] = df_merged["recording_modality"].apply( normalize_recording_modality ) else: # Fallback if recording_modality was lost in the merge (e.g. the CSV had IDs not in the JSON) df_merged["Modality"] = "Unknown" # Normalize Subject Type (Clinical Purpose) def normalize_purpose(row): # Check the is_clinical flag is_clin = row.get("is_clinical") if is_clin is False or str(is_clin).lower() == "false": return "Healthy" purpose = row.get("clinical_purpose") if ( not isinstance(purpose, str) or not purpose.strip() or purpose.lower() in ["unspecified clinical", "nan", "none"] ): # is_clinical is True but the purpose is unspecified -> "Unspecified Clinical" if is_clin: return "Unspecified Clinical" return "Healthy" return purpose.title() df_merged["Subject Type"] = df_merged.apply(normalize_purpose, axis=1) # Filter for main modalities plot_df = df_merged[df_merged["Modality"].isin(["EEG", "MEG", "iEEG"])] print(f"Plotting {len(plot_df)} studies.") print("Subject Types:", plot_df["Subject Type"].unique()) ``` ## Plotting ```Python plt.figure(figsize=(10, 6)) sns.set_theme(style="whitegrid") # Create a stacked bar chart using a histogram if not plot_df.empty: ax = sns.histplot( data=plot_df, x="Modality", hue="Subject Type", multiple="stack", shrink=0.8, palette="tab20", edgecolor="white", ) plt.title("Number of Studies by Modality and Subject Type") plt.ylabel("Number of Studies") plt.xlabel("Electrophysiology Modality") handles, labels = ax.get_legend_handles_labels() if handles: ax.legend(handles,
labels, loc="upper left", bbox_to_anchor=(1, 1)) plt.tight_layout() plt.savefig("clinical_breakdown.png") # Save for verification plt.show() else: print("No data to plot.") ``` # Computation times **00:00.863** total execution time for 11 files **from generated/auto_examples/tutorials**: | Example | Time | Mem (MB) | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|------------| | [EEGDash API Tutorial](tutorial_api.md#sphx-glr-generated-auto-examples-tutorials-tutorial-api-py) (`tutorial_api.py`) | 00:00.543 | 0 | | [Transfer Learning with EEGDash](tutorial_transfer_learning.md#sphx-glr-generated-auto-examples-tutorials-tutorial-transfer-learning-py) (`tutorial_transfer_learning.py`) | 00:00.318 | 0 | | [Clinical Dataset Summary](plot_clinical_summary.md#sphx-glr-generated-auto-examples-tutorials-plot-clinical-summary-py) (`plot_clinical_summary.py`) | 00:00.002 | 0 | | [Age Prediction from EEG](noplot_tutorial_age_prediction.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-age-prediction-py) (`noplot_tutorial_age_prediction.py`) | 00:00.000 | 0 | | [Oddball Classification](noplot_tutorial_audi_oddball.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-audi-oddball-py) (`noplot_tutorial_audi_oddball.py`) | 00:00.000 | 0 | | [EEG Features for Sex Classification](noplot_tutorial_feature_extraction.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-feature-extraction-py) (`noplot_tutorial_feature_extraction.py`) | 00:00.000 | 0 | | [Eyes Open vs. 
Closed Features](noplot_tutorial_features_eoec.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-features-eoec-py) (`noplot_tutorial_features_eoec.py`) | 00:00.000 | 0 | | [P3 Visual Oddball Classification](noplot_tutorial_p3_oddball.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-p3-oddball-py) (`noplot_tutorial_p3_oddball.py`) | 00:00.000 | 0 | | [Predicting p-factor from EEG](noplot_tutorial_pfactor_features.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-pfactor-features-py) (`noplot_tutorial_pfactor_features.py`) | 00:00.000 | 0 | | [P-Factor Regression Tutorial](noplot_tutorial_pfactor_regression.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-pfactor-regression-py) (`noplot_tutorial_pfactor_regression.py`) | 00:00.000 | 0 | | [Sex Classification Tutorial](noplot_tutorial_sex_classification_cnn.md#sphx-glr-generated-auto-examples-tutorials-noplot-tutorial-sex-classification-cnn-py) (`noplot_tutorial_sex_classification_cnn.py`) | 00:00.000 | 0 | # EEGDash API Tutorial This tutorial demonstrates how to use the *EEGDash* API to query and explore EEG recording metadata without downloading any data files. 1. **Initializing EEGDash**: Create an `EEGDash` client to connect to the metadata database. 2. **Finding Records**: Use `find()` to retrieve recording metadata for a specific dataset. 3. **Exploring Record Keys**: Inspect the fields available in each record (e.g., `subject`, `task`, `sampling_frequency`, `ntimes`). 4. **Filtering Records**: Narrow down results by applying additional query filters such as task, subject, or session. 5. **Basic Statistics**: Compute summary statistics such as the number of subjects, recordings, and the total duration of a dataset. ## Initializing EEGDash Creating an `EEGDash` instance opens a connection to the EEGDash metadata database. No credentials are required for read-only access. 
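Throughout this tutorial, queries are plain Python dictionaries using MongoDB-style operators. For instance, a filter combining a dataset, a task, and an `$in` list of subjects (identifiers borrowed from the sections below) is just:

```python
# A MongoDB-style query dictionary; no client is needed to build one.
DATASET_ID = "ds003039"
query = {
    "dataset": DATASET_ID,
    "task": "neurCorrYoung",
    "subject": {"$in": ["019", "010", "017"]},
}
print(sorted(query))
```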
```Python from eegdash import EEGDash eegdash = EEGDash() ``` ## Finding Records Use `find()` to retrieve metadata records for all recordings in a dataset. The method accepts a MongoDB-style query dictionary. Only metadata is transferred at this stage — no EEG data is downloaded. #### NOTE Passing `limit` avoids unbounded pagination and keeps the query fast. ```Python DATASET_ID = "ds003039" try: records = eegdash.find({"dataset": DATASET_ID}, limit=50) except Exception as exc: print(f"API unavailable ({exc}); using empty result set.") records = [] print(f"Found {len(records)} records for dataset {DATASET_ID}.") ``` ```none Found 19 records for dataset ds003039. ``` ## Exploring Record Keys Each record is a dictionary containing metadata fields such as the subject identifier, task name, sampling frequency, and number of time points. Printing the keys of the first record gives an overview of available fields. ```Python if records: print("Keys available in a record:") for key in sorted(records[0].keys()): print(f" {key}: {records[0][key]!r}") ``` ```none Keys available in a record: _has_missing_files: False _id: '695936adb52d41d9f98c408e' bids_relpath: 'sub-019/eeg/sub-019_task-neurCorrYoung_eeg.set' bidspath: 'ds003039/sub-019/eeg/sub-019_task-neurCorrYoung_eeg.set' ch_names: ['Fp1', 'Fz', 'F3', 'F7', 'F9', 'FT9', 'FC1', 'C3', 'T7', 'TP9', 'CP5', 'CP1', 'Pz', 'P3', 'P7', 'O1', 'Oz', 'O2', 'P4', 'P8', 'TP10', 'CP6', 'CP2', 'Cz', 'C4', 'T8', 'FT10', 'FC2', 'F4', 'F8', 'F10', 'Fp2', 'Fpz', 'AF3', 'AF7', 'F5', 'FT7', 'FC3', 'C1', 'C5', 'TP7', 'CP3', 'P5', 'P9', 'PO9', 'PO7', 'PO3', 'I1', 'I2', 'PO4', 'PO8', 'PO10', 'P10', 'P6', 'CPz', 'CP4', 'TP8', 'C6', 'C2', 'FC4', 'FT8', 'F6', 'AF4', 'AF8', 'x_dir', 'y_dir', 'z_dir'] data_name: 'ds003039_sub-019_task-neurCorrYoung_eeg.set' dataset: 'ds003039' datatype: 'eeg' digested_at: '2026-04-04T19:41:28.872782+00:00' entities_mne: {'subject': '019', 'session': None, 'task': 'neurCorrYoung', 'run': None, 'acquisition': None} 
extension: '.set' nchans: 67 ntimes: 1777354 participant_tsv: {'gender': 'M', 'age': 20.0, 'handedness': 'R'} recording_modality: ['eeg'] run: None sampling_frequency: 500.0 session: None storage: {'backend': 's3', 'base': 's3://openneuro.org/ds003039', 'raw_key': 'sub-019/eeg/sub-019_task-neurCorrYoung_eeg.set', 'dep_keys': ['sub-019/eeg/sub-019_task-neurCorrYoung_channels.tsv', 'sub-019/eeg/sub-019_task-neurCorrYoung_events.tsv', 'sub-019/eeg/sub-019_task-neurCorrYoung_events.json', 'sub-019/eeg/sub-019_task-neurCorrYoung_electrodes.tsv', 'sub-019/eeg/sub-019_task-neurCorrYoung_coordsystem.json', 'sub-019/eeg/sub-019_task-neurCorrYoung_eeg.json', 'sub-019/eeg/sub-019_task-neurCorrYoung_eeg.fdt']} subject: '019' suffix: 'eeg' task: 'neurCorrYoung' ``` ## Filtering Records `find()` supports a rich set of query operators. You can pass keyword arguments as a shorthand for simple equality filters, or combine a query dictionary with keyword filters. The examples below show how to select recordings by task or by a list of subject identifiers. Filter by task using keyword argument ```Python try: task_records = eegdash.find({"dataset": DATASET_ID}, task="rest", limit=50) except Exception: task_records = [] print(f"Records with task='rest': {len(task_records)}") # Filter using the $in operator to select specific subjects if records: subjects_of_interest = [r["subject"] for r in records[:3]] try: subject_records = eegdash.find( {"dataset": DATASET_ID, "subject": {"$in": subjects_of_interest}}, limit=50, ) except Exception: subject_records = [] print(f"Records for subjects {subjects_of_interest}: {len(subject_records)}") ``` ```none Records with task='rest': 0 Records for subjects ['019', '010', '017']: 3 ``` ## Computing Dataset Statistics Because each record contains `ntimes` (number of samples) and `sampling_frequency` (Hz), it is straightforward to compute the duration of every recording and derive summary statistics for the whole dataset. 
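Since a recording's duration is simply `ntimes / sampling_frequency`, the arithmetic can be isolated in a tiny helper; the helper name and record values below are made up for illustration:

```python
def total_hours(records):
    """Sum per-recording durations (ntimes / sampling_frequency) in hours."""
    seconds = sum(r["ntimes"] / r["sampling_frequency"] for r in records)
    return seconds / 3600.0


# Made-up records mirroring the metadata fields shown above:
fake = [
    {"ntimes": 1_777_354, "sampling_frequency": 500.0},  # ~59 minutes
    {"ntimes": 900_000, "sampling_frequency": 500.0},    # 30 minutes
]
print(f"{total_hours(fake):.2f} hours")  # 1.49 hours
```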
```Python if records: durations = [r["ntimes"] / r["sampling_frequency"] for r in records] subjects = set(r["subject"] for r in records) print( f"{len(subjects)} subjects. " f"{len(records)} recordings. " f"{sum(durations) / 3600:.2f} hours." ) else: print("No records available — skipping statistics.") ``` ```none 19 subjects. 19 recordings. 17.51 hours. ``` **Total running time of the script:** (0 minutes 0.543 seconds) # Transfer Learning with EEGDash **Objective**: Learn how to fine-tune a pre-trained EEG model on a new small dataset. **Scenario**: You have a model pre-trained on a large dataset (Dataset A). You want to adapt it to a new, smaller dataset (Dataset B) without training from scratch. This is useful when you have limited data for your specific task. ## Setup ```Python import copy import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader, TensorDataset from braindecode.models import EEGConformer import matplotlib.pyplot as plt # Configuration RANDOM_SEED = 42 torch.manual_seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") ``` ## Simulating Data For this tutorial, we will simulate two datasets: 1. **Source Dataset (Large)**: Represents the ample data we used for pre-training. 2. **Target Dataset (Small)**: Represents our new, limited dataset. ```Python def generate_dummy_data(n_samples, n_classes=2): # Simulate (Samples, Channels, Time) X = torch.randn(n_samples, 24, 256) # Generate labels: 0 or 1 y = torch.randint(0, n_classes, (n_samples,)) return TensorDataset(X, y) source_dataset = generate_dummy_data(n_samples=200) # "Large" pre-training set target_dataset = generate_dummy_data(n_samples=40) # "Small" target set ``` ## Pre-training the Model First, we define our base model and train it on the “Source” dataset. In a real scenario, you might load a saved checkpoint here. 
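In code, that checkpoint round-trip might look like the following sketch; the path is a placeholder, and `nn.Linear` stands in for the real EEG model:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Placeholder checkpoint path (use your own in practice).
ckpt_path = os.path.join(tempfile.mkdtemp(), "pretrained_checkpoint.pt")

model = nn.Linear(4, 2)
torch.save(model.state_dict(), ckpt_path)  # done once, after pre-training

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(ckpt_path))  # done before fine-tuning
```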
```Python model = EEGConformer( n_chans=24, n_outputs=2, n_times=256, sfreq=128, ).to(DEVICE) print("Pre-training on Source Dataset (simulated)...") # (Skipping actual heavy training loop for tutorial brevity, assuming model is trained) # Let's save the initial state to compare later pretrained_state = copy.deepcopy(model.state_dict()) print("Pre-training complete.") ``` ```none Pre-training on Source Dataset (simulated)... Pre-training complete. ``` ## Transfer Learning Strategy To perform transfer learning, we typically: 1. **Freeze** the feature extractor (the early layers) so their weights don’t change. 2. **Replace** the classification head (the final layers) to match our new task (or just re-initialize it). 3. **Fine-tune** the model on the new dataset. ```Python # 1. Freeze Feature Extractor # For Conformer, we freeze everything except the final fully connected layer. for param in model.parameters(): param.requires_grad = False # 2. Replace/Unfreeze Classification Head # The Conformer's final layer is usually named `final_layer`. # We re-initialize it. This implicitly sets requires_grad=True for these new weights. model.final_layer = nn.Linear(model.final_layer.in_features, 2).to(DEVICE) print("\nModel Parameters Status:") for name, param in model.named_parameters(): if param.requires_grad: print(f" {name}: Trainable") # else: print(f" {name}: Frozen") ``` ```none Model Parameters Status: final_layer.weight: Trainable final_layer.bias: Trainable ``` ## Fine-tuning Now we train only the head on the Target Dataset. 
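Because frozen parameters receive no gradients, passing the full `model.parameters()` to the optimizer works, but it is equally valid (and slightly leaner) to hand the optimizer only the trainable parameters. A sketch with stand-in modules rather than the Conformer:

```python
import torch
import torch.nn as nn

# Stand-ins: a frozen "backbone" and a freshly initialized trainable head.
backbone = nn.Linear(8, 8)
for p in backbone.parameters():
    p.requires_grad = False
head = nn.Linear(8, 2)
model = nn.Sequential(backbone, head)

# Collect only the parameters that will actually be updated.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
print(sum(p.numel() for p in trainable))  # 18: just the head (8*2 + 2)
```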
```Python optimizer = torch.optim.AdamW(model.parameters(), lr=0.001) train_loader = DataLoader(target_dataset, batch_size=10, shuffle=True) losses = [] model.train() print("\nFine-tuning on Target Dataset...") for epoch in range(5): epoch_loss = 0 for x_batch, y_batch in train_loader: x_batch, y_batch = x_batch.to(DEVICE), y_batch.to(DEVICE) optimizer.zero_grad() output = model(x_batch) loss = F.cross_entropy(output, y_batch) loss.backward() optimizer.step() epoch_loss += loss.item() avg_loss = epoch_loss / len(train_loader) losses.append(avg_loss) print(f"Epoch {epoch + 1}/5 | Loss: {avg_loss:.4f}") ``` ```none Fine-tuning on Target Dataset... Epoch 1/5 | Loss: 0.7222 Epoch 2/5 | Loss: 0.7596 Epoch 3/5 | Loss: 0.7417 Epoch 4/5 | Loss: 0.6992 Epoch 5/5 | Loss: 0.7110 ``` ## Results ```Python plt.figure(figsize=(6, 4)) plt.plot(losses, marker="o") plt.title("Fine-tuning Loss Curve") plt.xlabel("Epoch") plt.ylabel("Loss") plt.grid(True) plt.show() print("\nTransfer Learning Complete!") print( "The model effectively adapted to the new domain by updating only the classification head." ) ``` ```none Transfer Learning Complete! The model effectively adapted to the new domain by updating only the classification head. ``` **Total running time of the script:** (0 minutes 0.318 seconds)
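One sanity check worth running after fine-tuning is to diff the current weights against the saved `pretrained_state` and confirm that only the head moved. The idea, sketched here with stand-in modules rather than the Conformer:

```python
import copy

import torch
import torch.nn as nn

# Stand-ins: index 0 plays the frozen backbone, index 1 the head.
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))
before = copy.deepcopy(model.state_dict())  # analogous to pretrained_state

for p in model[0].parameters():
    p.requires_grad = False
with torch.no_grad():
    model[1].weight.add_(0.1)  # pretend fine-tuning moved the head weights

after = model.state_dict()
changed = [k for k in before if not torch.equal(before[k], after[k])]
print(changed)  # ['1.weight'] — only a head parameter moved
```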